WO2022118939A1 - Camera, method for generating trained model pertaining to respiratory infection, trained model pertaining to respiratory infection, automatic diagnosis method pertaining to respiratory infection, and computer program - Google Patents


Info

Publication number
WO2022118939A1
Authority
WO
WIPO (PCT)
Prior art keywords
image
image data
light source
light
infected
Prior art date
Application number
PCT/JP2021/044366
Other languages
French (fr)
Japanese (ja)
Inventor
山本 正男 (Masao Yamamoto)
Original Assignee
山本 正男 (Masao Yamamoto)
株式会社Eggs (Eggs Co., Ltd.)
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 山本 正男 (Masao Yamamoto) and 株式会社Eggs (Eggs Co., Ltd.)
Publication of WO2022118939A1 publication Critical patent/WO2022118939A1/en

Classifications

    • A HUMAN NECESSITIES
    • A61 MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61B DIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B 1/00 Instruments for performing medical examinations of the interior of cavities or tubes of the body by visual or photographical inspection, e.g. endoscopes; illuminating arrangements therefor
    • A61B 1/045 Control of such instruments combined with photographic or television appliances
    • A61B 1/06 Such instruments with illuminating arrangements
    • A61B 1/07 Illuminating arrangements using light-conductive means, e.g. optical fibres
    • A61B 1/24 Instruments for the mouth, i.e. stomatoscopes, e.g. with tongue depressors; instruments for opening or keeping open the mouth
    • G PHYSICS
    • G01 MEASURING; TESTING
    • G01N INVESTIGATING OR ANALYSING MATERIALS BY DETERMINING THEIR CHEMICAL OR PHYSICAL PROPERTIES
    • G01N 21/00 Investigating or analysing materials by the use of optical means, i.e. using sub-millimetre waves, infrared, visible or ultraviolet light
    • G01N 21/21 Systems in which incident light is modified in accordance with the polarisation-affecting properties of the material investigated
    • G01N 21/27 Colour or spectral properties, i.e. comparison of the effect of the material on light at two or more wavelengths or wavelength bands, using photo-electric detection; circuits for computing concentration
    • G01N 33/483 Physical analysis of biological material, e.g. blood, urine

Definitions

  • the present invention relates to a diagnostic imaging technique for respiratory infections.
  • The term "respiratory tract infection" means an infectious disease whose symptoms appear in the respiratory tract in general, whether the cause is a virus or a bacterium.
  • Various respiratory infections are known, but the most problematic one in the world today is clearly infection with the novel coronavirus (COVID-19).
  • Early detection of infected persons is an important issue to prevent the spread of the new coronavirus.
  • The PCR test detects the gene of the novel coronavirus in a sample collected from a subject; if the gene is detected, the subject who provided the sample is determined to be infected with the novel coronavirus. There is also an antibody test.
  • However, the PCR test cannot sufficiently serve the purpose of early detection of infected persons. The antibody test likewise cannot detect infected persons early, because a certain amount of time passes after infection before antibodies are generated in the infected person's body.
  • The above is not limited to the novel coronavirus infection: it is self-evident that the spread of other respiratory infections can also be suppressed if infected persons are detected early. For example, even when a new respiratory infection succeeding the current novel coronavirus infection appears, early detection of infected persons is strongly expected to be important.
  • An object of the present invention is to provide a simple and inexpensive technique for early detection of a person infected with a respiratory tract infection.
  • the inventor of the present application continued research to solve the above-mentioned problems. As a result, the following findings were obtained.
  • The novel coronavirus infects the epithelial cells of the upper respiratory tract (mainly the oral cavity, pharynx, and nasal cavity) and vascular endothelial cells.
  • Because its functional receptors are strongly expressed there, the oral cavity, pharynx, and nasal cavity become important routes of entry of the novel coronavirus into the body. Changes therefore occur at these sites in a person infected with the novel coronavirus.
  • These changes occur earlier than symptoms appear, and extremely early compared with the time at which a subject can be identified as infected by the diagnostic techniques described in the background section. Considering these points, it can be concluded that observing the epithelial cells and blood vessels of the upper respiratory tract is useful for early detection of persons infected with respiratory infections such as coronavirus infection. Such observation (or observation and diagnosis) can of course be performed by a physician directly examining the subject's upper respiratory tract; however, if simplicity and low cost are pursued, it is better to use diagnostic imaging of that region.
  • For example, a person in charge of a school, restaurant, department store, supermarket, or movie theater could have subjects who use the facility undergo diagnostic imaging.
  • Diagnostic imaging could then be used for screening, making it possible to isolate infected persons from non-infected persons at an early stage. Moreover, non-infected persons could continue their economic activities, reducing the damage that respiratory infections cause to the economy.
  • If diagnostic imaging is performed on the visitors of a facility, it is highly probable that everyone in the facility is non-infected. For example, restaurant users would be freed from the conventional expectation that a mask must be worn during a meal.
  • Diagnostic imaging can also be used in combination with other tests; for example, a definitive diagnosis may be made later by a PCR test.
  • The present invention has been made based on the above findings, and provides diagnostic imaging techniques useful for early detection of respiratory infections.
  • The first problem (Problem 1) is which range of the upper respiratory tract of the subject should appear in the image used for diagnosis. Unless an appropriate area of the upper respiratory tract is imaged, subjects cannot be distinguished among persons infected with the novel coronavirus, persons infected with other viruses or bacteria, and non-infected persons. Further, the research of the inventor of the present application has made clear that the accuracy of image diagnosis is affected not only by the imaged range of the upper respiratory tract but also by the nature of the image that captures its state.
  • The second problem (Problem 2) is at what timing the image of the upper respiratory tract used for image diagnosis should be captured.
  • The posterior wall of the pharynx is hidden and invisible under normal conditions. For example, when the subject breathes in while vocalizing, the posterior pharyngeal wall becomes momentarily visible from outside the oral cavity, but it is difficult to capture an image from outside the oral cavity with a camera at exactly that instant. This issue can also be viewed more universally.
  • A camera usually has a shutter means (in the present application, the term "shutter means" includes not only a physical shutter button but also a non-physical one, such as a button displayed on a smartphone screen).
  • the invention for solving the problem 1 is referred to as the first invention for convenience.
  • In order to distinguish, by diagnostic imaging, a subject among persons infected with the novel coronavirus, persons infected with other viruses or bacteria, and non-infected persons, the subject is imaged by an imaging device.
  • The image should include the posterior wall of the pharynx, the left and right anterior palatine arches, and the uvula, for the following reason.
  • In persons not infected with viruses or bacteria related to respiratory infections, the posterior pharyngeal wall, the left and right anterior palatine arches, and the uvula are all of a normal, light pink color.
  • In infected persons, the mucous membrane of the posterior pharyngeal wall, the left and right anterior palatine arches, and the uvula becomes reddish. This is because, when a virus or bacterium adheres to the mucous membrane of these sites, capillaries proliferate or dilate due to the immune reaction.
  • In persons infected with the novel coronavirus, however, while the posterior pharyngeal wall and the left and right anterior palatine arches become reddish, the uvula remains as light pink as in an uninfected healthy person, or in some cases an even lighter pink closer to white.
  • Since capillary proliferation and dilation occur in the reddened parts, a subject can thus be distinguished among a person infected with the novel coronavirus, a person infected with another virus or bacterium, and a non-infected person.
  • the camera according to the first invention is for capturing an image to enable such a distinction.
  • The camera has a light source that irradiates the oral cavity with illumination light, a lens that passes the image light produced by reflection of the illumination light within the oral cavity, and an image sensor that captures the image light passing through the lens and generates image data of the resulting image; it images the inside of the oral cavity from outside the oral cavity.
  • The lens and the image sensor of this camera are adapted so that the image captured by the image sensor from the image light includes the posterior wall of the pharynx, the left and right anterior palatine arches, and the uvula.
  • the camera in the first invention includes a light source.
  • The light source irradiates the oral cavity with illumination light from outside the oral cavity. Since the oral cavity is a closed space that outside light barely enters, having the camera image under the illumination light emitted from the light source stabilizes the properties of the captured image (for example, the color of what appears in it), which in turn increases the accuracy of diagnostic imaging.
  • The camera of the first invention also has a lens that passes the image light produced when the illumination light from the light source is reflected in the oral cavity, and an image sensor that captures the image light passing through the lens to generate image data.
  • The lens and the image sensor are adapted so that the captured image includes the posterior wall of the pharynx, the left and right anterior palatine arches, and the uvula; that is, the lens forms an image on the image sensor such that these sites are included in the captured image.
  • The number of lenses may be one, or the lens may be composed of a plurality of lenses.
  • Another optical element (for example, a mirror or a prism) may also be present in the optical path. As long as the image light forms an image on the image sensor such that the captured image includes the posterior wall of the pharynx, the left and right anterior palatine arches, and the uvula, the requirement that "the lens and the image sensor are adapted so that the image captured by the image sensor includes the posterior pharyngeal wall, the left and right anterior palatine arches, and the uvula" is satisfied.
  • Such a camera can capture, from outside the oral cavity and without inserting any part of it into the oral cavity, the images necessary for diagnostic imaging that distinguishes persons infected with the novel coronavirus, persons infected with other viruses or bacteria related to respiratory infections, and non-infected persons. Because imaging is possible from outside the oral cavity, even a person in charge of a facility such as a school, restaurant, department store, supermarket, or movie theater, with no specialized medical knowledge or skill, can capture the necessary intraoral images of subjects using the facility far more easily than would be the case with an endoscope or the like.
  • the camera according to the first invention includes a light source that emits illumination light.
  • the light source may be a single light source or a plurality of light sources.
  • The camera may be provided with a first polarizing plate, which passes the illumination light and linearly polarizes it in a predetermined polarization direction, and a second polarizing plate, which passes the image light and whose polarization direction is orthogonal to that of the first polarizing plate.
  • In that case, the illumination light emitted into the oral cavity is linearly polarized, and this linearly polarized illumination light is reflected in the oral cavity as described above to become the image light.
  • The image light comprises two types of light: one is surface-reflected light, reflected at the surface of fluids such as saliva covering the intraoral mucosa; the other is internally reflected light, which passes through the saliva and is reflected at the surface of the mucous membrane itself.
  • Surface-reflected light causes glare in the captured image and may cause overexposure, but on the other hand it expresses well the unevenness and shape of the imaged object. Such an image is referred to in the present application as a reflection image.
  • Internally reflected light is suitable for obtaining an image without glare. It is not suitable for grasping the unevenness and shape of the imaged object, but it reproduces the color of the imaged object well.
  • Ideally, the surface-reflected light completely maintains the linear polarization originally given to the illumination light, whereas the internally reflected light loses that linear polarization and becomes natural light. Therefore, if a second polarizing plate whose polarization direction is orthogonal to that of the first polarizing plate (to be exact, to the vibration direction of the linearly polarized light passing through the first polarizing plate) is arranged in the optical path of the image light, the surface-reflected light is blocked.
  • The image sensor of a camera using the first and second polarizing plates as described above therefore images only the internally reflected light that passes through the second polarizing plate.
  • The image obtained from the internally reflected light, a non-reflective image, is free of glare from body fluids such as saliva and is therefore suitable for observing color and the blood vessel image, i.e. the image of the blood vessels existing inside the mucous membrane.
  • Such images make it possible to identify, from the color or blood vessel image of the posterior pharyngeal wall, the left and right anterior palatine arches, and the uvula, subjects with the novel coronavirus infection and with other viral or bacterial infections related to respiratory infections.
  • In the non-reflective image, the blood vessel image of the vessels inside the mucous membrane of the posterior pharyngeal wall, the left and right anterior palatine arches, and the uvula, which in a reflection image tends to be obscured by glare from body fluids reflecting the illumination light, can be seen well. The non-reflective image is therefore extremely suitable for observing the blood vessel image, and such a camera can improve the accuracy of image diagnosis.
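  The cross-polarization principle described above can be sketched numerically. The function below is a hypothetical illustration, not part of the patent: it applies Malus's law (I = I0·cos²θ) to the polarized surface reflection and passes half of the depolarized internal reflection, showing why a crossed (90°) analyzer suppresses glare while keeping the mucosal signal.

```python
import math

def through_analyzer(i0: float, theta_deg: float, polarized: bool) -> float:
    """Intensity after the second polarizing plate (the analyzer).

    Linearly polarized light obeys Malus's law, I = I0 * cos^2(theta);
    depolarized (natural) light ideally passes at half intensity.
    """
    if polarized:
        return i0 * math.cos(math.radians(theta_deg)) ** 2
    return i0 / 2.0

# Crossed plates (theta = 90 degrees), as in the non-reflective imaging mode:
surface = through_analyzer(1.0, 90.0, polarized=True)    # specular glare: blocked (~0)
internal = through_analyzer(1.0, 90.0, polarized=False)  # mucosal light: half passes (0.5)
```

  With the plates parallel (θ = 0°) the surface reflection would instead pass fully, which is the situation exploited by the reflection image described next.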
  • The light source in the camera of the first invention may include a first light source, whose illumination light passes through the first polarizing plate before entering the oral cavity, and a second light source, whose illumination light enters the oral cavity without passing through the first polarizing plate; the two light sources emit light selectively.
  • The image captured by the image sensor from illumination light that was emitted by the first light source and passed through the first polarizing plate is a non-reflective image, according to the principle described in the previous paragraphs.
  • The illumination light from the second light source may enter the oral cavity without passing through any polarizing plate, or it may pass through a third polarizing plate whose polarization direction is the same as that of the second polarizing plate.
  • If there is no third polarizing plate, the image light is natural light: both its surface-reflected component and its internally reflected component are natural light.
  • Since both components are natural light, ideally half of the light amount of each passes through the second polarizing plate, and the image captured by the image sensor is one in which the surface-reflected light and the internally reflected light are combined.
  • Such an image is essentially the same as an ordinary image captured by an ordinary camera under natural illumination: it contains glare and reflections, which make it easy to grasp the unevenness and shape of the imaged object. By alternately turning on the first light source and the second light source, this camera can therefore capture both the non-reflective image and the reflection image.
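  A minimal sketch of this alternating-illumination scheme (the names below are hypothetical, not from the patent): a controller toggles between the polarized first light source and the unpolarized second light source on successive exposures, so the frame stream interleaves non-reflective and reflection images.

```python
from dataclasses import dataclass

@dataclass
class Frame:
    index: int
    mode: str  # "non_reflective" (first light source) or "reflection" (second)

def capture_alternating(n_frames: int) -> list[Frame]:
    """Toggle the two light sources on successive exposures and tag each frame."""
    modes = ("non_reflective", "reflection")
    return [Frame(index=i, mode=modes[i % 2]) for i in range(n_frames)]

frames = capture_alternating(4)
# The stream interleaves: non_reflective, reflection, non_reflective, reflection
```

  Pairing each non-reflective frame with the adjacent reflection frame would then give both color/blood-vessel information and unevenness/shape information for nearly the same instant.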
  • To distinguish infected from non-infected subjects, it is advisable to observe at least one of the color and the blood vessel image of the posterior wall of the pharynx, the left and right anterior palatine arches, and the uvula.
  • lymphoid follicles occur in the posterior wall of the pharynx in a person infected with influenza virus. These lymphoid follicles create irregularities on the posterior wall of the pharynx, which is normally smooth.
  • By also observing such unevenness and shape, it may be possible to distinguish, with higher accuracy, a person infected with the novel coronavirus, a person infected with other viruses or bacteria related to respiratory infections, and a person not infected with any virus or bacterium related to respiratory infections.
  • A camera having the first and second light sources described above is advantageous from this viewpoint, because it enables both observation of the posterior pharyngeal wall, the left and right anterior palatine arches, and the uvula by non-reflective images centered on color or the blood vessel image, and observation of the same sites by reflection images centered on unevenness or shape.
  • the image data of the image captured by the camera of the first invention is used for image diagnosis.
  • The camera of the first invention may have, for example, a transmission/reception mechanism and transmit the image data of captured images, via that mechanism, to the place where image diagnosis is performed.
  • The destination of the image data is, for example, a computer device that the doctor can access if a doctor executes the image diagnosis, or the automatic diagnosis device, or a computer device connected to it, if an automatic diagnosis device executes the automatic diagnosis.
  • the transmission / reception mechanism may be known or well-known, and may be a standardized existing one.
  • The communication executed by the transmission/reception mechanism may be, for example, wireless communication via an Internet line, and may use a 5th-generation (5G) mobile communication system.
  • In that case, the camera itself transmits the image data of captured images to the place where the image diagnosis is performed.
  • the communication executed by the transmission / reception mechanism may be short-range wireless communication.
  • An example is Wi-Fi™, one of the wireless LAN standards.
  • In that case, the camera typically sends image data to a predetermined wireless router by Wi-Fi communication, and the wireless router transmits the image data of the captured images to the place where image diagnosis is performed.
  • Another example of short-range wireless communication is Bluetooth™.
  • In that case, the camera typically sends image data to a predetermined smartphone or tablet by Bluetooth communication, and the smartphone or tablet transmits the image data of the captured images to the place where image diagnosis is performed.
  • The image captured by the camera of the first invention may be a still image or a moving image: still images may also be captured continuously, and if the time interval between them is shortened, the result eventually becomes equivalent to a moving image. Accordingly, the image data sent from the camera, directly or indirectly, to the place where image diagnosis is performed may concern a single still image, continuous still images, or a moving image. When it concerns a still image, the transmission load is lighter and the transmission time can be shortened.
  • The camera may have a shutter means which, when operated by the user, causes the image sensor to generate image data of a still image.
  • The camera may generate, from a single operation of the shutter means by the user, at least one item each of image data of a still non-reflective image and image data of a still reflection image.
  • In that case, at least one item of image data for each of the still non-reflective image and the still reflection image is generated by the camera and sent to the place where the image diagnosis is performed.
  • the invention for solving the problem 2 will be referred to as a second invention for convenience. Similar to the first invention, the second invention can be applied to a camera for capturing an image used for diagnostic imaging, but its application range is wider.
  • The camera of the second invention has a lens that passes image light from an object and an image sensor that generates image data of images captured from the image light passing through the lens; it is further equipped with the overwrite recording unit, shutter means, and selection means described below.
  • the image pickup device in the camera of the second invention continuously generates image data of a still image at predetermined time intervals.
  • the time interval for generating image data for a still image may or may not be constant, but is generally constant. If the interval becomes short, the image sensor will generate image data for the moving image.
  • The camera of the second invention has an overwrite recording unit that records the image data for a predetermined time while overwriting it in order from the oldest, a shutter means that, when operated by the user, stops the overwriting of image data in the overwrite recording unit, and a selection means for selecting, after the shutter means has been operated, any one of the items of image data then held in the overwrite recording unit.
  • the overwrite recording unit is typically a ring buffer, but it is also possible to adopt a configuration in which image data is recorded while overwriting in a general memory.
  • The camera of the second invention records the image data generated one after another by the image sensor in the overwrite recording unit for a predetermined time, overwriting the oldest data in order. That is, the overwrite recording unit always holds the image data of the most recent predetermined time.
  • the camera of the second invention has a shutter means.
  • To the user, the shutter means may appear to function, as in a typical camera, as the means that determines when a still image is captured; in the camera of the second invention, however, it in fact functions to stop, by the user's operation, the overwriting of image data in the overwrite recording unit.
  • The camera of the second invention also has a selection means for selecting any item of image data held in the overwrite recording unit after the user operates the shutter means. When the user operates the shutter means to take a picture, it therefore becomes possible to select appropriate image data from among the image data captured by the image sensor during the predetermined time ending at the moment the shutter means was operated, which is what the overwrite recording unit holds. If the camera of the second invention includes a transmission means for sending image data to the outside, the image selected by the selection means is what should be transmitted.
  • The transmission means is equivalent to the transmission/reception mechanism described for the first invention. Thanks to the shutter means, image sensor, overwrite recording unit, and selection means, the camera of the second invention can capture (or rather, select) a still image taken at the moment the object was in the desired state, without missing the shutter chance. In addition, because an image from before the shutter means was operated can be selected, and operating the shutter is one cause of camera shake, the images obtained with this camera are less affected by camera shake.
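  The interplay of the overwrite recording unit, shutter means, and selection means can be sketched as follows. This is a hypothetical illustration (the class and method names are not from the patent) using a fixed-size ring buffer that always holds the most recent frames, freezes when the shutter is operated, and then lets any held frame be selected.

```python
from collections import deque

class OverwriteRecorder:
    """Ring buffer holding the most recent `capacity` frames; overwriting
    stops when the shutter is operated, and any held frame can be selected."""

    def __init__(self, capacity: int):
        self._frames = deque(maxlen=capacity)  # oldest entries are overwritten
        self._frozen = False

    def record(self, frame) -> None:
        if not self._frozen:  # shutter operation stops further overwriting
            self._frames.append(frame)

    def shutter(self) -> None:
        self._frozen = True   # buffer now holds the last frames before the press

    def select(self, index: int):
        return self._frames[index]  # selection means: 0 = oldest held frame

recorder = OverwriteRecorder(capacity=3)
for frame in range(6):        # frames 0..5 arrive from the image sensor
    recorder.record(frame)
recorder.shutter()
recorder.record(6)            # ignored: overwriting has stopped
# recorder now holds the three frames captured just before the shutter: 3, 4, 5
```

  Because the buffer is frozen rather than emptied by the shutter, a frame captured slightly before the press, including the instant the posterior pharyngeal wall was briefly exposed, remains available for selection.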
  • The camera of the second invention may be configured to image the inside of the oral cavity from outside the oral cavity, and may include a light source for irradiating the inside of the oral cavity with illumination light; its lens and image sensor may be adapted so that the image captured by the image sensor from the image light includes the posterior wall of the pharynx.
  • The effect of imaging the inside of the oral cavity from outside it is the same as described for the first invention, and the effect of providing a light source that irradiates the oral cavity with illumination light is likewise the same as for the camera of the first invention.
  • The lens and the image sensor in the camera of the second invention need only ensure that the captured image includes the posterior wall of the pharynx (or a part of it), whereas those in the camera of the first invention must ensure that the posterior pharyngeal wall, the left and right anterior palatine arches, and the uvula all appear in the captured image. In this respect, the required imaging range of the camera of the second invention is narrower than that of the camera of the first invention.
	• the applicant's finding regarding respiratory infections is that "by observing the colors of the posterior wall of the pharynx, the left and right anterior palatine arches, and the uvula, it becomes possible to classify subjects into persons infected with the novel coronavirus, persons infected with other viruses or bacteria related to respiratory infections, and persons not infected with any virus or bacterium related to respiratory infections." An image captured by the camera of the second invention that includes only the posterior wall of the pharynx may therefore not show the left and right anterior palatine arches and the uvula; in that case, the camera of the second invention
	• may not be able to classify subjects into those three categories. However, it may still be possible to diagnose some respiratory infections from an image of the posterior wall of the pharynx alone, and at least to distinguish persons infected with a particular virus or bacterium from non-infected persons (for example, a person infected with influenza virus may be identified from the unevenness of the surface of the posterior wall of the pharynx), so the camera of the second invention described above remains sufficiently meaningful.
	• it may also be discovered in the future that features related to color, blood vessel image, or shape such as surface unevenness make it possible, even from an image showing only the posterior wall of the pharynx without the left and right anterior palatine arches and the uvula, to classify subjects into persons infected with the novel coronavirus, persons infected with other viruses or bacteria related to respiratory infections, and non-infected persons. If such features are discovered, even the camera of the second invention, whose imaging range covers only the posterior wall of the pharynx, will allow that three-way classification of subjects.
	• if it is desired from the outset to be able to classify subjects into persons infected with the novel coronavirus, persons infected with other viruses or bacteria related to respiratory infections, and non-infected persons, the lens and the image pickup element in the camera of the second invention may be configured so that the still image captured by the image pickup element on the basis of the image light includes the posterior wall of the pharynx, the left and right anterior palatine arches, and the uvula.
	• under normal conditions the posterior wall of the pharynx is hidden by the tongue and cannot be observed or imaged from outside the oral cavity.
	• when the base of the tongue is lowered, the posterior wall of the pharynx becomes visible from outside the oral cavity.
	• when the posterior wall of the pharynx becomes visible from outside the oral cavity, the left and right anterior palatine arches and the uvula also become visible from outside the oral cavity at the same time.
	• when the posterior wall of the pharynx is imaged using the camera of the second invention, it must be captured by the image pickup element at the moment it becomes visible from outside the oral cavity.
	• however, human reaction time usually entails a delay of about 0.2 seconds, and at least about 0.1 seconds.
	• moreover, the shutter means is usually provided on the camera body, so operating it often causes camera shake and leaves the captured image blurred.
	• such a captured still image is often useless for diagnostic imaging purposes.
	• the camera of the second invention, which allows a still image captured at the moment the shutter means is operated, or at an earlier timing, to be selected as the captured still image, is therefore also well suited to capturing still images for such diagnostic imaging purposes.
	• the camera of the second invention, when used to photograph the inside of the oral cavity, may include a first polarizing plate, through which the illumination light passes to become linearly polarized light having a predetermined polarization direction, and a second polarizing plate, through which the image light passes, whose polarization direction is orthogonal to that of the first polarizing plate.
  • the camera of the second invention in this case obtains the same effect as that of the first invention having the same configuration.
	• the light source in the camera of the second invention, when used to photograph the inside of the oral cavity, may include a first light source whose illumination light irradiates the inside of the oral cavity after passing through the first polarizing plate, and a second light source whose illumination light irradiates the inside of the oral cavity without passing through it.
	• in that case, the image pickup element may capture both a non-reflective image, captured from the image light generated by the illumination light emitted into the oral cavity from the first light source, and a reflected image, captured from the image light generated by the illumination light emitted into the oral cavity from the second light source.
  • the camera of the second invention in this case obtains the same effect as that of the first invention having the same configuration.
	• the first light source and the second light source may be lit alternately, at timings such that the still images based on the image data generated by the image pickup element are obtained alternately under the illumination light from the first light source and under the illumination light from the second light source. That is, the timing at which the image pickup element performs imaging to generate image data and the timings at which the first light source and the second light source are lit may be controlled in synchronization.
	• in that case, the image pickup element alternately generates image data of still images that are non-reflective images and image data of still images that are reflected images.
	• a non-reflective image and a reflected image captured at close timings are thus generated one after another as a pair. Because such a pair consists of a non-reflective image and a reflected image, both still images, obtained by imaging almost the same position in the oral cavity at almost the same time, it is well suited to observation or diagnostic imaging. The benefits of performing observation or diagnostic imaging using both non-reflective and reflected images are as already described for the first invention.
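The synchronized alternation of the two light sources described above can be sketched as follows. This is an illustrative simulation of the capture sequence only, not real camera control; every name in it is invented for the example:

```python
def alternating_capture(num_pairs):
    """Sketch of synchronizing the image pickup element with the two light
    sources: the first light source (cross-polarized illumination, yielding a
    non-reflective image) and the second light source (unpolarized
    illumination, yielding a reflected image) are lit alternately, so
    consecutive frames form non-reflective/reflected pairs captured at nearly
    the same time and position."""
    pairs = []
    for i in range(num_pairs):
        # Light the first light source, then expose the sensor.
        non_reflective = {"frame": 2 * i, "source": "first", "type": "non-reflective"}
        # Switch to the second light source, then expose the sensor again.
        reflected = {"frame": 2 * i + 1, "source": "second", "type": "reflected"}
        pairs.append((non_reflective, reflected))
    return pairs

pairs = alternating_capture(3)
print(len(pairs))
print(pairs[0][0]["type"], pairs[0][1]["type"])
```

Each tuple models one pair of still images suitable for joint observation or diagnostic imaging.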
	• the camera of the second invention may further include a display for displaying still images based on the image data on the overwrite recording unit, and the selection means may include an operation unit operated by the user to accept input for selecting and specifying at least one of the image data on the overwrite recording unit. With such a configuration, the selection of image data by the selection means, for example selecting at least one image whose imaging range is correct and which is in focus (for example, free of the influence of camera shake), can be performed manually by the user.
	• the display and the operation unit need not be integrated with the camera of the second invention; they may instead be provided in another device used in combination with it (for example, a smartphone or tablet).
	• the selection means may include an automatic selection means that automatically selects at least one of the image data on the overwrite recording unit whose imaging range is correct and which is in focus (for example, free of the influence of camera shake). This makes it possible for the selection means to automatically select at least one image datum of a still image whose imaging range is correct and which is in focus, which reduces the burden on the user and keeps the quality of the selected still images within a certain range.
	• the automatic selection means need not be integrated with the camera of the second invention; it may be provided in, for example, another device (e.g. a smartphone or tablet) used in combination with it.
	• the automatic selection means may use artificial intelligence, or a mechanism employing artificial intelligence, to automatically extract image data of a still image whose imaging range is correct and which is in focus from the plural image data recorded on the overwrite recording unit.
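One simple, non-AI mechanism such an automatic selection means might use is a focus score such as the variance of a Laplacian filter response, picking the sharpest frame from the overwrite recording unit. The publication does not specify the selection algorithm; the sketch below is a pure-Python illustration on toy 2-D grayscale arrays:

```python
def sharpness(image):
    """Variance of a discrete Laplacian response: a common proxy for focus.
    'image' is a 2-D list of grayscale values; blurred or uniform frames
    score low, frames with strong edges score high."""
    h, w = len(image), len(image[0])
    responses = []
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            lap = (image[y - 1][x] + image[y + 1][x] + image[y][x - 1]
                   + image[y][x + 1] - 4 * image[y][x])
            responses.append(lap)
    mean = sum(responses) / len(responses)
    return sum((r - mean) ** 2 for r in responses) / len(responses)

def auto_select(frames):
    # Pick the frame with the highest focus score, as the automatic
    # selection means might do before the image data is used or transmitted.
    return max(frames, key=sharpness)

blurry = [[50] * 5 for _ in range(5)]             # uniform: no edges at all
sharp = [[0, 0, 255, 0, 0] for _ in range(5)]     # strong vertical edge
best = auto_select([blurry, sharp])
print(best is sharp)
```

A real implementation would also check that the imaging range is correct (e.g. that the pharynx region is present), which this focus-only sketch omits.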
  • the third invention is a technique for performing image diagnosis by automatic diagnosis.
	• the third invention relates to an automatic diagnostic device; the image or image data used by the automatic diagnostic device in the automatic diagnosis may, but need not, be generated by the camera according to the first invention or the second invention.
  • the inventor of the present application proposes a method of generating a trained model as one aspect of the third invention.
	• the method for generating this trained model is as follows: image data, which is data about a still image capturing an imaging site including the subject's posterior pharyngeal wall, left and right anterior palatine arches, and uvula,
	• is machine-learned as teacher data together with data including information on whether the subject is a person infected with the novel coronavirus, a person infected with another virus or bacterium related to respiratory infections, or a person not infected with any virus or bacterium related to respiratory infections.
	• this produces a trained model for automatic diagnosis of respiratory infections that, using the colors of the posterior pharyngeal wall, the left and right anterior palatine arches, and the uvula in a still image based on the image data, estimates whether the subject whose imaging site appears in the image data is a person infected with the novel coronavirus, a person infected with another virus or bacterium related to respiratory infections, or a non-infected person. Further, the inventor of the present application proposes a trained model as another aspect of the third invention. This trained model is generated by, for example, the above-mentioned generation method, and is the core of the automatic diagnostic apparatus described later.
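The generation method above pairs image data with three-way infection labels as teacher data. As an illustration only, the sketch below trains a nearest-centroid classifier on mean-color features of the three named regions; the publication does not specify the model architecture, and every name, threshold, and data value here is invented:

```python
LABELS = ["covid", "other-infection", "non-infected"]

def extract_features(image_regions):
    """Mean RGB of each of the three regions named in the text (posterior
    pharyngeal wall, anterior palatine arches, uvula). Each region is a
    list of (r, g, b) pixel tuples."""
    feats = []
    for region in image_regions:
        n = len(region)
        feats.extend(sum(px[i] for px in region) / n for i in range(3))
    return feats

def train(teacher_data):
    # teacher_data: list of (image_regions, label); learn one feature
    # centroid per label, the simplest possible "trained model".
    centroids = {}
    for label in LABELS:
        feats = [extract_features(x) for x, y in teacher_data if y == label]
        centroids[label] = [sum(f[i] for f in feats) / len(feats)
                            for i in range(len(feats[0]))]
    return centroids

def estimate(centroids, image_regions):
    # Assign the label whose centroid is nearest in feature space.
    f = extract_features(image_regions)
    return min(centroids, key=lambda lab: sum(
        (a - b) ** 2 for a, b in zip(f, centroids[lab])))

# Toy teacher data: redder mucosa for infected classes, pale pink otherwise.
red = [[(200, 60, 60)] * 4] * 3
pale = [[(250, 200, 200)] * 4] * 3
model = train([(red, "covid"), (pale, "non-infected"),
               ([[(180, 80, 80)] * 4] * 3, "other-infection")])
print(estimate(model, [[(195, 65, 65)] * 4] * 3))
```

A production system would of course use far richer features (blood vessel images, surface unevenness) and a stronger learner, as the surrounding text suggests.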
	• the trained model according to the third invention is generated by machine learning using, as teacher data, image data, which is data about a still image capturing an imaging site including the subject's posterior pharyngeal wall, left and right anterior palatine arches, and uvula,
	• together with data including information on whether the subject is a person infected with the novel coronavirus, a person infected with another virus or bacterium related to respiratory infections, or a non-infected person.
	• from the feature quantities of the color or blood vessel image of each of the posterior pharyngeal wall, the left and right anterior palatine arches, and the uvula in a still image based on the input image data,
	• it estimates whether the subject whose imaging site appears in the image data is a person infected with the novel coronavirus, a person infected with another virus or bacterium related to respiratory infections, or a non-infected person.
	• it is a trained model for use in automatic diagnosis of respiratory infections (sometimes referred to simply as a "trained model").
	• in a person not infected with viruses or bacteria related to respiratory infections, the posterior pharyngeal wall, the left and right anterior palatine arches, and the uvula are usually a light pink.
	• in a person infected with a virus or bacterium related to respiratory infections, the mucous membranes of the posterior pharyngeal wall, the left and right anterior palatine arches, and the uvula become inflamed and show red.
	• in a person infected with the novel coronavirus, however, although the posterior wall of the pharynx and the left and right anterior palatine arches are both reddish, the uvula has the same light pink color as in a healthy person not infected with any virus or bacterium, or an even lighter, whitish pink. These differences between whitish light pink, pink, and inflamed red are caused by the proliferation and dilation of capillaries.
	• by using these feature quantities, the subject can be distinguished as a person infected with the novel coronavirus, a person infected with another virus or bacterium related to respiratory infections, or a person not infected with any virus or bacterium related to respiratory infections. That is, with the trained model described above, when image data of a still image capturing the subject's imaging site is input, the subject can be classified, from the feature quantities of the color or blood vessel image of the imaging site in the still image based on the image data,
	• into one of three categories: a person infected with the novel coronavirus, a person infected with another virus or bacterium related to respiratory infections, and a person not infected with any virus or bacterium related to respiratory infections.
	• the image data may be data about a non-reflective image in which the imaging site appears.
	• the method of capturing a non-reflective image is as described above.
	• a non-reflective image is well suited to accurately grasping color and the blood vessel image, i.e. the image of blood vessels located slightly behind the surface of the mucous membrane. Therefore, a trained model generated by such a generation method can make the above three-way classification more accurately.
	• in that case, what should be input to this trained model is image data of a non-reflective image in which the imaging site appears.
	• the image data may also be a pair consisting of image data of a non-reflective image in which the imaging site appears and image data of a reflected image in which the imaging site appears.
	• the methods of capturing non-reflective and reflected images are as described above.
	• a non-reflective image is well suited to accurately grasping color and the blood vessel image.
	• a reflected image is well suited to accurately grasping the shape of the surface of the imaged object, such as its unevenness.
	• therefore, a trained model generated by such a generation method can make the above three-way classification still more accurately.
	• the feature quantities may include both the color and the blood vessel image of each of the posterior pharyngeal wall, the left and right anterior palatine arches, and the uvula, and may further include the surface unevenness of the posterior pharyngeal wall, the left and right anterior palatine arches, and the uvula.
  • the inventor of the present application also proposes an automatic diagnostic device for respiratory infections using the above-mentioned trained model.
	• the automatic diagnostic device is an automatic diagnostic device for respiratory infections using any of the trained models described so far, comprising: a receiving means for receiving image data, which is data about a still image in which the imaging site of a subject to be diagnosed for a respiratory infection appears; an extraction means for extracting, as feature quantities, the color or blood vessel image of each of the posterior pharyngeal wall, the left and right anterior palatine arches, and the uvula in a still image based on the image data; and an output means for outputting an estimation result of whether the subject is a person infected with the novel coronavirus, a person infected with another virus or bacterium related to respiratory infections, or a non-infected person.
	• with this automatic diagnostic device, by inputting into the receiving means image data equivalent to that used when generating the trained model, it is possible to determine, without human judgment, which of the above three categories the subject whose imaging site appears in the still image based on the image data belongs to.
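The receiving, extraction, and output means of the automatic diagnostic device can be sketched as a small pipeline. The stand-in "trained model" below uses an invented color threshold purely to make the example runnable; it is not the claimed diagnostic logic, and all names are illustrative:

```python
class AutoDiagnosisDevice:
    """Sketch of the three means the text names: a receiving means, an
    extraction means, and an output means. The trained model is injected."""

    def __init__(self, trained_model):
        self.model = trained_model

    def receive(self, image_data):
        # Receiving means: accept image data of the still image.
        self.image_data = image_data
        return self

    def extract(self):
        # Extraction means: pull out per-region color feature quantities.
        self.features = {region: data["mean_color"]
                         for region, data in self.image_data.items()}
        return self

    def output(self):
        # Output means: feed the features to the trained model and return
        # the estimated category.
        return self.model(self.features)

# Stand-in "trained model": a toy red-dominance threshold on the uvula color,
# loosely echoing the color observations in the text. Purely illustrative.
def toy_model(features):
    r, g, _ = features["uvula"]
    return "other-infection" if r - g > 100 else "non-infected"

device = AutoDiagnosisDevice(toy_model)
result = device.receive({"uvula": {"mean_color": (230, 90, 90)}}).extract().output()
print(result)
```

In the deployment the text envisions, `receive` would be fed image data arriving over the Internet from a smartphone, and `output` would send the estimation result back the same way.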
	• this automatic diagnostic device can be used, for example, as follows: the subject, or the person in charge of one of the facilities mentioned above, sends image data over the Internet using, for example, a smartphone or tablet (if the camera built into the smartphone is used in the manner of the cameras of the first and second inventions, the smartphone or the like may itself serve as the camera of the first and second inventions of the present application); automatic diagnosis is performed on the basis of the image data thus sent, and the estimation result data is immediately sent back, for example over the Internet, to the smartphone that was the source of the image data.
	• the inventor of the present application also proposes an automatic diagnostic method for respiratory infections performed by a computer having a recording medium on which the trained model described above is recorded.
	• the effect of this method is equal to the effect of the automatic diagnostic device for respiratory infections.
  • An exemplary method is an automated diagnostic method for respiratory infections performed by a computer with a recording medium recording any of the trained models described above.
	• this method includes: a reception process, executed by the computer, of receiving image data, which is data about a still image in which the imaging site of a subject to be diagnosed with a respiratory infection appears;
	• an extraction process of extracting, as feature quantities, the color or blood vessel image of each of the posterior pharyngeal wall, the left and right anterior palatine arches, and the uvula in a still image based on the image data; and an output process of inputting the feature quantities extracted in the extraction process into the trained model
	• and outputting an estimation result of whether the subject whose imaging site appears in the image data is a person infected with the novel coronavirus, a person infected with another virus or bacterium related to respiratory infections, or a non-infected person.
  • the inventor of the present application also proposes, as still another aspect of the third invention, a computer program for operating a predetermined computer as an automatic diagnostic device for respiratory infections using the above-mentioned trained model.
	• the effect of this computer program is equal to that of the automatic diagnostic device for respiratory infections, in that it allows a predetermined, for example general-purpose, computer to function as an automatic diagnostic device for respiratory infections using the trained model.
  • An example computer program is a computer program for operating a predetermined computer as an automatic diagnostic device for respiratory infections using any of the trained models described above.
	• the computer program causes the computer to execute: a reception process of receiving image data, which is data about a still image in which the imaging site of a subject to be diagnosed with a respiratory infection appears;
	• an extraction process of extracting, as feature quantities, the color or blood vessel image of each of the posterior pharyngeal wall, the left and right anterior palatine arches, and the uvula in a still image based on the image data; and
	• an output process of inputting the feature quantities extracted in the extraction process into the trained model and outputting an estimation result of whether the subject whose imaging site appears in the image data is a person infected with the novel coronavirus, a person infected with another virus or bacterium related to respiratory infections, or a non-infected person.
	• FIG. 1 is a perspective view of a camera according to the first embodiment and a computer device used in combination with the camera.
	• FIG. 2 is a horizontal cross-sectional view of the head of the camera shown in FIG. 1.
	• FIG. 3 is a vertical cross-sectional view of the camera shown in FIG. 1.
	• a block diagram showing the functional blocks generated inside the computer device shown in FIG. 1; a diagram for explaining the behavior of the illumination light and the reflected light when imaging with the camera shown in FIG. 1 using the first light source; and a diagram for explaining the behavior of the illumination light and the reflected light when imaging with the camera shown in FIG. 1 using the second light source.
	• a functional block diagram showing the functional blocks generated in the computer device constituting the learning device of the third embodiment.
	• a diagram conceptually showing the contents of the data recorded in the learning data recording unit included in the preceding figure; and a diagram showing the overall structure of the automatic diagnosis system including the automatic diagnostic device of the third embodiment.
	• a functional block diagram showing the functional blocks generated in the computer device constituting the automatic diagnostic device shown in the preceding figure.
  • FIG. 1 shows an overview of the camera 1 and its accessory computer device 100 in this embodiment.
  • the camera 1 is used in combination with the computer device 100.
  • the computer device 100 has a function of sending image data captured by the camera 1 to a place where image diagnosis is performed, for example, via the Internet. This function may be implemented in the camera 1 itself as in the case of the modification 1 described later.
	• the place where the image diagnosis is performed is, if the image diagnosis is performed by a doctor or another person, a device (for example, a computer device) that the doctor or other person can access; if the image diagnosis is performed by a machine, it is
	• the automatic diagnostic device that performs the diagnosis, or a device (for example, a computer device) that the automatic diagnostic device can access.
  • the camera 1 in this embodiment is for capturing an image used for diagnosing a human respiratory tract infection. That is, the subject in this embodiment is a human being.
	• the camera 1 in this embodiment is capable of imaging, in a human subject, a part of the upper respiratory tract, more specifically a range including the posterior wall of the pharynx, the left and right anterior palatine arches, and the uvula.
  • the camera 1 in this embodiment includes a grip portion 10 that can be held by hand, and a head portion 20 provided on the upper front side of the grip portion 10.
	• the grip portion 10 and the head portion 20 are made of, for example, an opaque resin, though not limited to this; at least the head 20 is usually made of an opaque material. Their interiors are hollow, and various parts are built into or attached inside them, as described later.
	• since the grip portion 10 and the head 20 contain these parts, they function as a de facto case in which the parts are housed.
	• the grip portion 10 has a shape that can be held in one hand; though not limited to this, in this embodiment it has a rod or columnar shape.
	• the head 20 is, though not limited to this, a tube that could be called a hood, with a substantially rectangular cross section, that widens slightly toward its tip (it may instead be cylindrical, or narrow slightly toward the tip), and is made of an opaque material such as an opaque resin.
  • An opening 21 is provided at the tip of the head 20 (the side facing the face of the subject at the time of use, the front side in FIG. 1).
  • the opening 21 in this embodiment is a rectangle whose four corners are rounded.
  • the opening 21 may be square or circular.
	• the camera is used with the subject himself, or a person in charge at a facility other than the subject (for example, a school, restaurant, department store, supermarket, or movie theater), gripping the grip portion 10 and pointing the edge of the opening 21 at the tip of the head 20 toward the subject's mouth.
  • a switch 15 is provided at an appropriate position of the grip portion 10 or the head 20, for example, on the front side of the grip portion 10.
	• the switch 15 is an operator for performing an input that triggers the start of imaging by the image pickup element described later; in this embodiment it is a push button that is pressed into the grip portion 10 to perform the input. However, as long as an input signal that triggers the start of imaging by the image pickup element can be generated, the switch 15 need not be of a push-button type. The function of the switch 15 may even be implemented in the computer device 100.
  • FIG. 2 shows a horizontal cross-sectional view of the head 20 of the camera 1
  • FIG. 3 shows a vertical cross-sectional view of the entire camera 1.
	• a lens 11, a light source 31 including a first light source 31a and a second light source 31b, and a first polarizing plate 32 are provided on the front side of the inside of the head 20.
	• the first light source 31a and the second light source 31b may have the same configuration.
	• they nevertheless play different roles in this embodiment: they are lit alternately, and, as described later, the images captured by the image pickup element 12 when each is lit differ from each other.
	• the illumination light from the first light source 31a passes through the first polarizing plate 32; that is, all of the light from the first light source 31a that contributes to imaging may pass through the first polarizing plate 32. Further, a second polarizing plate 33 and an image pickup element 12 are provided deeper inside the head 20.
	• there are, though not limited to this, a plurality of each of the first light sources 31a and the second light sources 31b. Both the first light source 31a and the second light source 31b emit natural light as illumination light. The first light source 31a may be any known or well-known light source, such as a light bulb or an LED, and the same applies to the second light source 31b.
	• in this embodiment the first light source 31a is an LED, and so is the second light source 31b.
	• viewed as hardware, the first light source 31a and the second light source 31b can be identical, and in this embodiment they are.
	• both the first light source 31a and the second light source 31b emit light toward the oral cavity of the subject with a certain degree of directivity.
	• the orientations of the first light source 31a and the second light source 31b are adjusted so that the light they emit travels in an appropriate direction.
	• both the first light source 31a and the second light source 31b are fixed, by an appropriate method, to a substrate fixed inside the head 20; illustration of the substrate is omitted.
	• the wavelength of the illumination light emitted by the first light source 31a is not particularly limited, but is preferably in the visible region; in this embodiment the first light source 31a emits ordinary white light.
	• the wavelength of the illumination light is preferably limited to a range in which, when natural light strikes the two polarizing plates arranged along the optical axis with their polarization directions orthogonal to each other, namely the first polarizing plate 32 and the second polarizing plate 34 described later, almost all of the natural light is extinguished (for example, 90% or more disappears). A filter that limits the wavelength of the illumination light may also be arranged between the first light source 31a and second light source 31b and the oral cavity to be imaged.
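The extinction condition for crossed polarizing plates follows from standard optics (Malus's law), not from anything specific to this publication: unpolarized light loses half its intensity at the first polarizing plate, and the second, orthogonal plate ideally removes the rest:

```latex
I_1 = \tfrac{1}{2}\,I_0, \qquad I_2 = I_1 \cos^2\theta, \qquad \theta = 90^\circ \;\Rightarrow\; I_2 = 0
```

Real polarizing plates extinguish light only over a limited wavelength band, which is why the text asks only that the illumination wavelength be restricted to a range where 90% or more of the light disappears.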
	• the first light source 31a and the second light source 31b in this embodiment are, though not limited to this, lit alternately after the switch 15 is pressed, as described later.
  • the number of the first light sources 31a in this embodiment is not limited to this, but is plural.
	• each first light source 31a is, though not limited to this, located near the opening 21 of the head 20; in this embodiment they are located slightly inside the walls of the head 20 on both lateral sides of the opening 21 in the horizontal direction.
	• in this embodiment the plurality of first light sources 31a located on the right side and the left side of the opening 21 are arranged linearly, more precisely in the vertical direction.
	• the number of first light sources 31a arranged vertically on the right side of the opening 21 is four, and the same applies to the left side of the opening 21.
	• the number of first light sources 31a arranged vertically on the right side and the left side of the opening 21 need not be four, and indeed need not be plural.
	• as described above, second light sources 31b are provided above and below the first light sources 31a, which are arranged vertically, four each, on the right side and the left side of the opening 21.
	• the positions and number of the second light sources 31b are not limited to this.
	• in front of the first light sources 31a, first polarizing plates 32, which are polarizing plates for linearly polarizing the illumination light, i.e. the natural light, emitted from the first light sources 31a, are arranged one on each side.
	• the first polarizing plate 32 in this embodiment is a vertically long rectangle, as shown in FIGS. 1 and 3.
	• as described later, all the illumination light that contributes to the imaging performed by the image pickup element 12 passes through the first polarizing plate 32, and the vertical length and horizontal width of the first polarizing plate 32 are designed from that viewpoint.
  • a partition wall 19 which is a donut-shaped wall that does not allow light to pass through is provided in the head 20 so that the outer edge thereof is in contact with the inner peripheral surface of the head 20 without a gap.
  • the partition wall 19 divides the space inside the head 20 into a space on the front side and a space on the rear side of the lens 11.
  • The light emitted from the first light sources 31a and the second light sources 31b therefore does not directly reach the space behind the lens 11 in which the image pickup element 12 exists without first being reflected by the object to be imaged.
  • The second light sources 31b are located above and below the first polarizing plates 32, and the illumination light emitted from them irradiates the subject's oral cavity as natural light without passing through the first polarizing plates 32.
  • It suffices that the illumination light emitted from the first light sources 31a is linearly polarized and directed toward the subject's oral cavity, while the illumination light emitted from the second light sources 31b either remains natural light or passes through a polarizing plate (not shown), which may be called a third polarizing plate, whose vibration direction is orthogonal to that of the first polarizing plate 32, so that it becomes linearly polarized light orthogonal to the linearly polarized light generated by passing through the first polarizing plate 32.
  • the design of the installation position of the first light source 31a and the second light source 31b, the shape and position of the first polarizing plate 32, etc. Can be changed as appropriate from the above example.
  • the illumination light that has passed through the first polarizing plate 32 becomes linearly polarized light having a plane of polarization in a predetermined direction.
  • In this embodiment, the plane of polarization of the illumination light, which is the linearly polarized light that has passed through the first polarizing plate 32, is, although not limited to this, the horizontal direction in FIG. 1.
  • The lens 11 captures the reflected light generated when the illumination light is reflected by an object to be imaged in the oral cavity (in this embodiment, one including the posterior wall of the pharynx, the left and right anterior palatine arches, and the uvula) and forms an image on the image pickup element 12.
  • the lens 11 does not have to be a single lens, and may include necessary optical components other than the lens 11, such as a mirror and a prism. Further, the lens 11 may have a function of magnifying an image, or may have a function other than that.
  • the lens 11 in this embodiment is not limited to this, but is a magnifying lens.
  • the image pickup device 12 captures the reflected light and performs an image pickup.
  • the image pickup device 12 of this embodiment may be a known or well-known image sensor 12 as long as it can perform color imaging, and may be a commercially available one.
  • the image pickup device 12 can be configured by, for example, a CCD (Charge Coupled Device) or a CMOS (Complementary Metal Oxide Semiconductor).
  • the image sensor 12 generates image data obtained by image pickup.
  • The image captured by the image pickup element 12 may or may not be a moving image, but in this embodiment, image data of still images is continuously generated at a predetermined time interval. The predetermined interval is, for example, 20 milliseconds to 50 milliseconds, and thus image data of 20 to 50 still images per second is generated by the image pickup element 12.
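The relationship between the stated capture interval and the resulting frame count is simple arithmetic; the following is a minimal sketch (the 20–50 millisecond figures are the ones given above, and the function name is illustrative, not from the patent):

```python
# Number of still images generated per second for a given capture interval,
# matching the embodiment's figures: 20 ms -> 50 images/s, 50 ms -> 20 images/s.
def frames_per_second(interval_ms: float) -> float:
    """Return how many still images are generated per second."""
    return 1000.0 / interval_ms

print(frames_per_second(20))  # 50.0
print(frames_per_second(50))  # 20.0
```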
  • In effect, therefore, the image pickup element 12 generates image data of a moving image.
  • the image pickup device 12 is connected to the circuit 13 by a connection line 12a.
  • the circuit 13 is also connected to the first light source 31a and the second light source 31b by a connection line (not shown).
  • the circuit 13 receives the image data generated by the image pickup element 12 from the image pickup element 12 via the connection line 12a.
  • the circuit 13 performs necessary processing such as brightness adjustment and analog / digital conversion if necessary prior to the output of the video signal to the outside.
  • the circuit 13 is also configured to control the timing of turning on and off the first light source 31a and the second light source 31b connected by a connection line (not shown). The timing of turning on and off the first light source 31a and the second light source 31b will be described later.
  • the circuit 13 is connected to the output terminal 14 via the connection line 13a.
  • the output terminal 14 is connected to the computer device 100 via a cable 16 (not shown).
  • the connection between the cable 16 and the output terminal 14 may be performed in any way, but it may be convenient to use, for example, USB or other standardized connection method.
  • the output of the moving image data generated by the camera 1 to the computer device 100 does not need to be performed by wire as in this embodiment.
  • In that case, the camera 1 is provided with, for example, a known or well-known transmission/reception mechanism for communicating with the computer device 100, for example by Bluetooth™, instead of the output terminal 14.
  • the circuit 13 is also connected to the switch 15 described above by a connection line 15a.
  • The circuit 13, upon receiving an input signal from the switch 15, causes the image pickup element 12 to start imaging and turns the first light sources 31a and the second light sources 31b on and off at the timings described later.
  • The second polarizing plate 33 is a polarizing plate made of the same material as the first polarizing plate 32, but its function differs from that of the first polarizing plate 32, which converts the illumination light (the natural light emitted from the first light sources 31a) into linearly polarized light.
  • The second polarizing plate 33 has a function, described in detail later, of blocking the linearly polarized component contained in the surface-reflected light, that is, the light that is reflected at the surface of the object when the illumination light from the first light sources 31a, linearly polarized by the first polarizing plate 32, strikes the object.
  • the first polarizing plate 32 and the second polarizing plate 33 are oriented so that the planes of polarization of the linearly polarized light passing through them are orthogonal to each other. That is, the polarization directions of the first polarizing plate 32 and the second polarizing plate 33 are orthogonal to each other.
  • In this embodiment, the plane of polarization of linearly polarized light generated by passing through the second polarizing plate 33 is the vertical direction in FIG.
  • the image light which is the reflected light from the object, can reach the image pickup device 12 only after passing through the lens 11 and further through the second polarizing plate 33. In other words, the image light that cannot pass through the second polarizing plate 33 is not captured by the image pickup device 12.
  • the computer device 100 is a general computer and may be a commercially available one.
  • the computer device 100 in this embodiment is a commercially available tablet, but not limited to this.
  • the computer device 100 does not necessarily have to be a tablet as long as it has the configurations and functions described below, and may be a smartphone, a notebook personal computer, a desktop personal computer, or the like. Even in the case of a smartphone or a personal computer, the computer device 100 may be commercially available. Examples of tablets include the iPad (trademark) series manufactured and sold by Apple Japan LLC. Examples of smartphones include the iPhone (trademark) series manufactured and sold by the company.
  • the appearance of the computer device 100 is shown in FIG.
  • the computer device 100 includes a display 101.
  • the display 101 is for displaying a still image or a moving image, generally both of them, and a known or well-known display 101 can be used.
  • the display 101 is, for example, a liquid crystal display.
  • the computer device 100 also includes an input device 102.
  • the input device 102 is for the user to make a desired input to the computer device 100.
  • a known or well-known input device 102 can be used as the input device 102.
  • The input device 102 of the computer device 100 in this embodiment is a button type, but the input device 102 is not limited to this; a numeric keypad, a keyboard, a trackball, a mouse, or the like can also be used.
  • If the display 101 is a touch panel, the display 101 also functions as the input device 102, which is the case in this embodiment.
  • the hardware configuration of the computer device 100 is shown in FIG.
  • the hardware includes a CPU (central processing unit) 111, a ROM (read only memory) 112, a RAM (random access memory) 113, and an interface 114, which are connected to each other by a bus 116.
  • the CPU 111 is an arithmetic unit that performs arithmetic operations.
  • the CPU 111 executes a process described later, for example, by executing a computer program recorded in the ROM 112 or the RAM 113.
  • the hardware may be equipped with an HDD (hard disk drive) or other large-capacity recording device, and the computer program may be recorded on the large-capacity recording device.
  • the computer program referred to here is for causing a computer device 100 that operates in cooperation with the camera 1 to execute a process of transmitting image data generated as described later to a place where image diagnosis is executed.
  • This computer program may be pre-installed in the computer device 100, or may be installed after the computer device 100 is shipped.
  • the computer program may be installed in the computer device 100 via a predetermined recording medium such as a memory card (not shown), or may be installed via a network such as a LAN or the Internet.
  • the ROM 112 records computer programs and data necessary for the CPU 111 to execute a process described later.
  • The computer programs recorded in the ROM 112 are not limited to this; if the computer device 100 is a tablet, computer programs and data necessary for the computer device 100 to function as a tablet, for example for executing e-mail, are also recorded.
  • the computer device 100 is also capable of browsing a home page on the Internet, and may be equipped with a publicly known or well-known web browser for making it possible.
  • the RAM 113 provides a work area required for the CPU 111 to perform processing. In some cases, for example, at least a part of the above-mentioned computer program or data may be recorded.
  • the interface 114 exchanges data between the CPU 111, the RAM 113, and the like connected by the bus 116 and the outside.
  • the display 101 described above and the input device 102 are connected to the interface 114.
  • the data about the operation content input from the input device 102 is input to the bus 116 from the interface 114.
  • image data for displaying an image on the display 101 is output from the interface 114 to the display 101.
  • the interface 114 also receives image data from the cable 16 described above (more precisely, from an input terminal (not shown) included in the computer device 100 connected to the cable 16).
  • the image data input from the cable 16 is sent from the interface 114 to the bus 116.
  • a transmission / reception mechanism is connected to the interface 114.
  • the transmission / reception mechanism is capable of performing short-range wireless communication, for example, when the computer device 100 wirelessly communicates with the camera 1. Further, the transmission / reception mechanism is capable of performing Internet communication, and can transmit image data received from the camera 1 to a place where image diagnosis is performed via an Internet line.
  • a functional block as shown in FIG. 5 is generated inside the computer device 100.
  • The functional blocks described below may be generated by the function of the above-mentioned computer program alone for making the computer device 100 function as described above, or may be generated by that computer program in cooperation with the OS and other computer programs installed in the computer device 100.
  • an input unit 121, a control unit 122, an image data recording unit 123, and an output unit 124 are generated in relation to the functions of the present invention.
  • the input unit 121 receives data from the interface 114.
  • the data received by the input unit 121 is the processing selection data input from the input device 102 and the image data input from the cable 16.
  • the processing selection data is data for selecting whether to record the image data in the computer device 100 or to transmit the image data from the computer device 100 to a place where the image diagnosis is performed.
  • The control unit 122 receives the above-mentioned processing selection data and image data. When the processing selection data specifies recording the image data in the computer device 100, the control unit 122 records the image data in the image data recording unit 123; when it specifies transmitting the image data to the place where image diagnosis is performed, the control unit 122 sends the image data to the output unit 124.
  • the image data recording unit 123 is a recording area that is normally a part of the RAM 113 for recording image data as described above.
  • the image data is recorded in the image data recording unit 123 together with the identification information for identifying which subject the image data belongs to.
  • the output unit 124 has a function of outputting data including image data to the outside via the interface 114. For example, when the processing selection data selects to transmit the image data from the computer device 100 to the place where the image diagnosis is performed, the output unit 124 transmits the image data received from the control unit 122 via the interface 114.
  • the transmission mechanism sends the image data to the place where the image diagnosis is performed via the Internet. Further, the output unit 124 sends the image data to the display 101 via the interface 114 as needed. In this case, the image based on the image data is displayed on the display 101.
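The routing performed by the control unit, recording image data locally or handing it to the output unit for transmission, can be sketched as a simple dispatch. This is an illustrative sketch only; the function and variable names are assumptions, not names from the patent:

```python
def route_image_data(image_data: bytes, selection: str,
                     storage: list, outbox: list) -> None:
    """Dispatch image data according to the processing selection data:
    either record it locally (cf. image data recording unit 123) or
    queue it for transmission to the place where image diagnosis is
    performed (cf. output unit 124 and the transmission mechanism)."""
    if selection == "record":
        storage.append(image_data)
    elif selection == "transmit":
        outbox.append(image_data)
    else:
        raise ValueError(f"unknown processing selection: {selection}")

storage, outbox = [], []
route_image_data(b"frame-1", "record", storage, outbox)
route_image_data(b"frame-2", "transmit", storage, outbox)
print(len(storage), len(outbox))  # 1 1
```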
  • When the camera 1 and the computer device 100 are used, first, as described above, the camera 1 and the computer device 100 are connected by the cable 16. Then the user (who may be the subject himself or herself) launches the above-mentioned computer program recorded in the computer device 100, operates the input device 102, and inputs the processing selection data.
  • The processing selection data is data for selecting whether to record the image data in the computer device 100 or to send the image data from the computer device 100 to the place where image diagnosis is performed.
  • the display 101 of the computer device 100 displays an image prompting the user to input the processing selection data, and the user inputs the processing selection data according to the instruction by the image.
  • the display of such an image on the display 101 is performed by, for example, data generated by the control unit 122 and sent from the control unit 122 to the display 101 via the output unit 124 and the interface 114.
  • the processing selection data input from the input device 102 is input to the control unit 122 via the interface 114 and the input unit 121.
  • the processing selection data input by the user selects to transmit the image data from the computer device 100 to the place where the image diagnosis is performed.
  • the subject himself, a doctor, or a user who is in charge of the facility grips the grip portion 10 of the camera 1, and the opening 21 in the head 20 thereof is directed toward the mouth of the subject. Then, the user presses the switch 15. Then, the circuit 13 causes the image pickup device 12 to start imaging, and starts lighting the first light source 31a and the second light source 31b alternately. Either the first light source 31a or the second light source 31b may be turned on first, but in this embodiment, it is assumed that the first light source 31a is turned on first.
  • The timing of imaging by the image pickup element 12 in this embodiment is, although not limited to this, at intervals of 50 milliseconds.
  • The timing of imaging by the image pickup element 12 and the timing of turning the first light sources 31a and the second light sources 31b on and off are synchronized with each other. More specifically, if the first light source 31a is on and the second light source 31b is off at the timing when the image pickup element 12 takes an image and the image data of a certain still image is generated, then at the timing when the next image data is generated, the second light source 31b is turned on and the first light source 31a is turned off. At the timing when the next image data after that is generated, the first light source 31a is turned on and the second light source 31b is turned off, and at the timing when the image data after that is generated, the second light source 31b is again turned on.
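The frame-by-frame alternation described above can be sketched as a simple per-frame schedule. This is a minimal illustration of the synchronization logic, not the circuit 13's actual control program; all names are illustrative:

```python
def light_source_schedule(num_frames: int, first_on_first: bool = True):
    """Return, for each captured frame, which light source is lit:
    'first' (polarized illumination, yielding a non-reflective image) or
    'second' (natural light, yielding a reflective image).
    The two sources alternate frame by frame, in sync with capture."""
    order = ("first", "second") if first_on_first else ("second", "first")
    return [order[i % 2] for i in range(num_frames)]

print(light_source_schedule(4))  # ['first', 'second', 'first', 'second']
```

In this embodiment the first light source is lit first, so `first_on_first=True` matches the text above.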
  • The posterior pharyngeal wall, the left and right anterior palatine arches, and the uvula in the subject's oral cavity are hidden behind the tongue and cannot be seen under normal conditions. Therefore, the subject toward whose oral cavity the opening 21 of the camera 1 is directed inhales while vocalizing. It is preferable to inhale as strongly as possible while vocalizing as loudly as possible. Then, for a short time, for example around 0.5 seconds, the base of the tongue goes down, and the posterior wall of the pharynx, the left and right anterior palatine arches, and the uvula become visible.
  • the image sensor 12 of the camera 1 continuously generates image data of a still image including the posterior wall of the pharynx, the left and right anterior palatine arches, and the uvula.
  • The image data continuously generated every 50 milliseconds thus alternate between data based on the illumination light emitted from the first light sources 31a and data based on the illumination light emitted from the second light sources 31b.
  • FIG. 6 conceptually shows what kind of reflected light is captured by the image pickup device 12 by the illumination light from the first light source 31a.
  • FIG. 6A shows the behavior of surface-reflected light, which is reflected at the outermost surface of an object wet with body fluid such as saliva (the mucous membranes of the above-mentioned four parts), and FIG. 6B shows the behavior of internally reflected light, which enters slightly inside from the surface of the object wet with body fluid and is effectively reflected at the surface of the object itself, excluding the body fluid.
  • The straight line drawn in a thick-lined circle mark conceptually indicates the direction of the plane of polarization of the illumination light or reflected light at the relevant location, and the lines drawn radially in a circle mark indicate that the linear polarization has been disturbed.
  • the illumination light emitted from the first light source 31a passes through the first polarizing plate 32.
  • the illumination light that has passed through the first polarizing plate 32 becomes linearly polarized light.
  • The plane of polarization of the linearly polarized light that is the illumination light in that case is the horizontal direction in FIG. 6. Up to this point, the description is common to FIGS. 6A and 6B.
  • the illumination light that is linearly polarized light that has passed through the first polarizing plate 32 hits the object X and becomes the reflected light from the object X.
  • the surface reflected light (reflected light generated by being reflected on the surface of the body fluid) ideally maintains its polarized state.
  • The surface-reflected light, which is linearly polarized, is therefore blocked by the second polarizing plate 33, whose plane of polarization for linearly polarized light generated from natural light is orthogonal to that of the first polarizing plate 32, and does not reach the image pickup element 12 (FIG. 6A).
  • the polarized light of the internally reflected light (light that has passed through the body fluid and is reflected on the surface of the mucous membrane or slightly behind it) is disturbed.
  • Of the internally reflected light, the component vibrating in the direction orthogonal to the polarization plane of the linearly polarized light contained in the surface-reflected light passes through the second polarizing plate 33, so about half of its light amount reaches the image pickup element 12 (FIG. 6B).
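The "blocked" and "about half" behavior described above is consistent with Malus's law (transmitted intensity I = I0·cos²θ): linearly polarized surface-reflected light meets the crossed second polarizing plate at 90° and is extinguished, while depolarized internally reflected light, behaving like unpolarized light, averages to about half transmission. A small numerical sketch, assuming ideal polarizers:

```python
import math

def malus_transmission(angle_deg: float) -> float:
    """Transmitted fraction of linearly polarized light through a polarizer
    whose axis is rotated by angle_deg from the light's polarization plane
    (Malus's law: I = I0 * cos^2(theta))."""
    return math.cos(math.radians(angle_deg)) ** 2

# Surface-reflected light keeps its polarization; crossed plates (90 deg) block it.
print(round(malus_transmission(90), 12))  # 0.0

# Depolarized (effectively unpolarized) light averages cos^2 over all angles.
unpolarized = sum(malus_transmission(a) for a in range(360)) / 360
print(round(unpolarized, 6))  # 0.5
```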
  • the light used for the image pickup device 12 to take an image by using the illumination light from the first light source 31a is only the internally reflected light.
  • What this means is that the image generated by the image pickup element 12 using illumination light derived from the first light sources 31a is a matte, non-reflective image without glare.
  • FIG. 7 conceptually shows what the reflected light captured by the image pickup device 12 is by the illumination light from the second light source 31b.
  • FIGS. 7A and 7B show the behavior of the surface-reflected light and the behavior of the internally reflected light, respectively, as in FIGS. 6A and 6B. The notation for the orientation of the plane of polarization of the illumination light or reflected light, and the symbol indicating that the linear polarization has been disturbed, follow those in FIG. 6.
  • the illumination light emitted from the second light source 31b does not pass through the first polarizing plate 32. Therefore, the illumination light emitted from the second light source 31b heads toward the object X as natural light. Illumination light, which is natural light, hits the object X and becomes reflected light from the object X.
  • both the surface reflected light and the internally reflected light remain natural light (FIGS. 7A and 7B).
  • About half of the light amount of both the surface-reflected light and the internally reflected light therefore passes through the second polarizing plate 33 and reaches the image pickup element 12 (FIGS. 7A and 7B).
  • The light used by the image pickup element 12 to capture an image using the illumination light from the second light sources 31b thus includes both surface-reflected light and internally reflected light. What this means is that the image generated by the image pickup element 12 using illumination light derived from the second light sources 31b becomes a glaring, glossy reflection image.
  • the data about the still image continuously generated by the image sensor 12 is sent to the circuit 13 via the connection line 12a, and after the circuit 13 performs appropriate processing (brightness adjustment, etc.) as necessary. It reaches the output terminal 14 via the connection line 13a. Then, it reaches the computer device 100 from the output terminal 14 via the cable 16.
  • the image data is a series of still image data.
  • the still image data is sent from the interface 114 to the control unit 122 via the input unit 121, and further reaches the output unit 124.
  • The output unit 124 sends the image data to the transmission/reception mechanism via the interface 114, and the image data is sent from the transmission/reception mechanism to the place where image diagnosis is performed via the Internet. Every other item of image data is for a non-reflective image, and the remaining items are for a reflected image.
  • The place where the image diagnosis is performed is, for example, a computer device that the doctor can access if a doctor performs the image diagnosis, or an automatic diagnosis device, or a computer device that the automatic diagnosis device can access, if an automatic diagnosis device performs the diagnosis.
  • the image data sent to the place where the image diagnosis is performed may be one image data or several image data of the still image of the reflected image and the non-reflective image among the still image data. Alternatively, it may be a large number of image data about such a still image in which the still images of the reflected image and the non-reflective image are arranged alternately in a large number which can be called moving image data.
  • The doctor or the automatic diagnostic device can appropriately extract only the image data of the non-reflective images from the received image data to generate a moving image of non-reflective images, extract only the image data of the reflected images to generate a moving image of reflected images, and perform image diagnosis by selecting one or several still image data suitable for image diagnosis from among the image data of the non-reflective and reflected images.
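Because the frames arrive strictly alternating, splitting them back into a non-reflective stream and a reflective stream reduces to taking every other frame, once it is known which type came first. A minimal sketch (the function name and the assumption that the non-reflective frame comes first are illustrative):

```python
def split_interleaved_frames(frames, first_is_non_reflective=True):
    """Separate an alternating sequence of still images into the
    non-reflective stream (first light source, polarized illumination)
    and the reflective stream (second light source, natural light),
    from which two separate moving images can be assembled."""
    non_reflective = frames[0::2] if first_is_non_reflective else frames[1::2]
    reflective = frames[1::2] if first_is_non_reflective else frames[0::2]
    return non_reflective, reflective

frames = ["n0", "r0", "n1", "r1", "n2"]  # stand-ins for still-image data
nr, r = split_interleaved_frames(frames)
print(nr)  # ['n0', 'n1', 'n2']
print(r)   # ['r0', 'r1']
```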
  • the selection of a still image suitable for such image diagnosis may be performed by the camera 1 or the computer device 100, and as a technique for making such a selection, a technique as described in the second embodiment is used. It can also be applied.
  • It is also possible for the result of the diagnosis performed by the doctor or the automatic diagnostic device to be returned from the doctor or the automatic diagnostic device to the computer device 100 via the Internet.
  • When the processing selection data specifies recording the image data in the computer device 100, the image data is recorded in the image data recording unit 123 in the computer device 100. This image data is sent to the place where image diagnosis is performed at an appropriate time and used for image diagnosis.
  • The camera 1 of the first embodiment was not used alone, but in combination with a computer device 100 that transmits the image data generated by the camera 1 via the Internet to the place where image diagnosis is performed.
  • a smartphone or tablet which is an example of the current computer device 100, is generally equipped with a camera. Therefore, it is also possible to make the computer device 100 in the first embodiment also have the function of the camera 1 in the first embodiment. Modification 1 is such an example.
  • the computer device 100 of the first modification may be the same as that described in the first embodiment.
  • a lens 104 and a light source 105 are provided on the back side of the computer device 100 shown in FIG. 1 as shown in FIG. 8 (A).
  • the lens 104 is exposed from the housing of the computer device 100.
  • the lens 104 forms a part of the camera included in the computer device 100.
  • the image pickup element is a CCD or CMOS, and can capture a moving image, that is, a continuous still image.
  • the image pickup interval of the still image can be appropriately adjusted by the function of the computer program installed in the computer device 100, and of course, the image pickup interval can be the same as that of the image pickup device of the first embodiment.
  • the image captured by the image pickup device can be displayed on the display 102 in substantially real time by a known or well-known mechanism.
  • the light source 105 is exposed on the surface of the housing of the computer device 100.
  • The illumination light emitted by the light source 105 may be the same as in the case of the first embodiment. However, the first modification has only one light source 105, and it remains lit continuously while imaging is performed by the image pickup element.
  • the computer device 100 of the first modification is used in combination with the polarizing plate 140 as shown in FIG. 8 (B).
  • the polarizing plate 140 is configured to include, but is not limited to, a first polarizing plate 141 and a second polarizing plate 142, both of which are rectangular, and when both are combined, a horizontally long rectangular shape is formed.
  • the first polarizing plate 141 and the second polarizing plate 142 may or may not be integrated, but in this embodiment, they are integrated.
  • the first polarizing plate 141 corresponds to the first polarizing plate 32 in the first embodiment
  • the second polarizing plate 142 corresponds to the second polarizing plate 33 in the first embodiment.
  • the polarization directions of the first polarizing plate 141 and the second polarizing plate 142 are orthogonal to each other.
  • The polarization direction of the first polarizing plate 141 is along the upper side of the housing, and the polarization direction of the second polarizing plate 142 is along the lateral side of the housing.
  • the horizontal and vertical lines attached to the first polarizing plate 141 and the second polarizing plate 142 in FIG. 8A indicate their polarization directions.
  • the polarizing plate 140 can be detachably fixed to the back surface of the housing of the computer device 100.
  • The polarizing plate 140 in the first modification is fixed to the computer device 100 using a known or well-known adhesive applied to its back surface in FIG.
  • When the polarizing plate 140 is fixed to the housing of the computer device 100, the first polarizing plate 141 is located on the front side of the light source 105, like the first polarizing plate 32 of the first embodiment.
  • the second polarizing plate 142 is on the front side of the lens 104, that is, an image pickup element (not shown), like the second polarizing plate 33 of the first embodiment.
  • The size and shape of the first polarizing plate 141 and the second polarizing plate 142 in the polarizing plate 140 are designed so that they can be positioned in front of the light source 105 and the lens 104 of the computer device 100 while maintaining this positional relationship.
  • the hardware configuration of the computer device 100 of the first modification is the same as that of the computer device 100 of the first embodiment shown in FIG.
  • the image data input to the interface 114 is sent from an image pickup element (not shown) in the computer device 100, unlike the case of the first embodiment sent from the camera 1 outside the computer device 100.
  • the functional block generated in the computer device 100 of the first modification is the same as in the case of the first embodiment, and is as shown in FIG.
  • The function of each functional block in the first modification may be substantially the same as the corresponding function in the first embodiment, and it is so in this modification.
  • the continuous still image being captured by the image sensor is displayed as a de facto moving image on the display 101 of the computer device 100 in substantially real time.
  • This can be realized by configuring the control unit 122, which receives image data about the still images continuously generated by the image pickup element via the interface 114 and the input unit 121, to send the image data to the display 101 via the output unit 124 and the interface 114. Also in the first embodiment, it is possible to adopt a configuration in which the image being captured by the camera 1 is displayed on the display 101 of the computer device 100 in substantially real time.
  • the method of using the computer device 100 of the first modification is as follows. First, the processing selection data is input using the input device 102 of the computer device 100 in the same manner as described in the first embodiment.
  • the content of the processing selection data can be the same two ways as in the first embodiment, which is the case in this embodiment.
  • the subject himself or a doctor or the like who takes an image of the subject points the lens 104, which forms a part of the camera of the computer device 100, into the oral cavity of the subject.
  • The subject himself, or a doctor or the like, operates a button (not shown) displayed, for example, on the screen of the computer device 100 to indicate the intention to start image pickup by the image pickup element, whereby imaging is started.
  • the light source 105 is turned on at the same time as the imaging is started.
• the image sensor can continuously capture still images, which together amount to a moving image, including the posterior wall of the pharynx of the subject, the left and right anterior palatine arches, and the uvula.
• the image sensor and the lens 104 included in the computer device 100 can capture an image of the above range.
• the image sensor continuously generates image data of still images including the posterior wall of the pharynx, the left and right anterior palatine arches, and the uvula. The image data continuously generated, for example every 50 milliseconds, becomes the image data of non-reflective images as described above.
• the data of the still images continuously generated by the image sensor is sent to the control unit 122 via the interface 114 and the input unit 121, and, as described above, is sent from the control unit 122 to the display 101 via the output unit 124 and the interface 114 in substantially real time.
• the display 101 displays, in substantially real time, what is effectively a moving image of the non-reflective images while the image sensor is capturing. As shown in FIG. 8, if, in the housing of the computer device 100, the display 101 is on the side opposite the lens 104, that is, if the camera in the computer device 100 is a so-called out-camera, a third party other than the subject can always confirm whether the subject's posterior pharyngeal wall, left and right anterior palatine arches, and uvula are captured in the image.
• if the camera in the computer device 100 adopts a so-called in-camera configuration, in which the lens 104 and the display 101 are on the same surface of the housing, the subject himself or herself, holding the computer device 100 and performing the imaging, can confirm the moving image being captured on the display 101.
• the image data of the non-reflective images is also sent from the output unit 124 to the transmission / reception mechanism and, depending on the content of the processing selection data, is sent from there via the Internet to the place where the image diagnosis is performed, as in the first embodiment.
• if the processing selection data specifies recording the image data in the computer device 100, the image data is recorded in the image data recording unit 123 in the computer device 100.
• the method of image diagnosis based on the image data is as described in the first embodiment, but the image diagnosis in modification 1 uses only non-reflective images.
• the camera 1 of the second embodiment is for capturing images for image diagnosis, like the camera 1 of the first embodiment.
• the image pickup target of the camera 1 of the second embodiment is not necessarily inside the oral cavity; it may be inside the nasal cavity, and indeed it need not be a part of the body at all.
  • the image pickup target of the camera 1 of the second embodiment may be, for example, a landscape, a sports scene, a car, a train, an animal, or the like, like a general single-lens reflex camera or a mirrorless camera.
  • the camera 1 of the second embodiment is suitable for imaging when the subject moves quickly.
  • the camera 1 of the second embodiment is assumed to be a camera for performing image diagnosis for respiratory infections as in the case of the first embodiment.
  • the camera 1 of the second embodiment is configured as shown in FIG. 1 as in the case of the first embodiment, and is used together with the computer device 100 as in the case of the first embodiment.
  • the configuration of the camera 1 of the second embodiment can be the same as that of the first embodiment except for the circuit 13, and this is the case in this embodiment.
• the differences are that the circuit 13 of the camera 1 of the second embodiment has a built-in overwrite recording unit that did not exist in the first embodiment, and that the operation of the camera 1 when the switch 15 is operated differs from that of the first embodiment.
• the image sensor 12 and the lens 11 of the camera 1 of the second embodiment need only be capable of capturing at least the posterior wall of the pharynx or a part thereof, but in this embodiment, as in the first embodiment, they can capture the posterior wall of the pharynx, the left and right anterior palatine arches, and the uvula.
• the overwrite recording unit is a recording medium for recording the image data continuously generated by the image sensor 12. Specifically, it records the image data of the still images continuously generated by the image sensor 12 for a predetermined time, overwriting the oldest data in order.
• the overwrite recording unit is a ring buffer in this embodiment, but the function of recording the image data of still images continuously generated by the image sensor 12 for a predetermined time while overwriting the oldest data in order can also be achieved without a ring buffer, for example by RAM, which is a general-purpose memory.
  • the time interval at which the image pickup device 12 takes an image and generates image data of a still image can follow that of the first embodiment.
• the image data recorded in the overwrite recording unit covers a period of 0.3 seconds to 1 second, preferably 0.5 seconds ± 0.1 seconds, going back from the present. If image data going back that far can be selected as described later by operating the switch 15 described later, there is a high probability that the still image the person operating the switch 15 wanted to capture will exist in the overwrite recording unit.
• the image data of still images for the past 0.5 seconds is always kept recorded in the overwrite recording unit. If the image sensor 12 captures an image and generates still-image data every 50 milliseconds, the overwrite recording unit always holds 10 items of image data. Although not limited to this, in this embodiment it is assumed that image data for the past 0.5 seconds is recorded in the overwrite recording unit and that the image-data generation interval of the image sensor is 50 milliseconds.
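The overwrite recording unit described above can be sketched as a small ring buffer. The sketch below is illustrative, not from the patent: the class and frame names are assumptions, and `collections.deque` with `maxlen` stands in for the ring buffer, silently discarding the oldest frame when a new one arrives.

```python
from collections import deque

FRAME_INTERVAL_MS = 50   # one still image every 50 ms, per this embodiment
BUFFER_SECONDS = 0.5     # keep the last 0.5 s of frames
BUFFER_SIZE = int(BUFFER_SECONDS * 1000) // FRAME_INTERVAL_MS  # = 10

class OverwriteRecorder:
    """Hypothetical model of the overwrite recording unit in circuit 13."""

    def __init__(self, size=BUFFER_SIZE):
        self._frames = deque(maxlen=size)  # deque drops the oldest item
        self._frozen = False               # set when switch 15 is pressed

    def record(self, frame):
        """Store a frame unless overwriting has been stopped."""
        if not self._frozen:
            self._frames.append(frame)

    def stop_overwriting(self):
        """Called when switch 15 is operated: freeze the buffer contents."""
        self._frozen = True

    def read_all(self):
        """Return the retained frames, oldest first."""
        return list(self._frames)

recorder = OverwriteRecorder()
for i in range(25):                 # simulate 1.25 s of capture
    recorder.record(f"frame-{i}")
recorder.stop_overwriting()
recorder.record("frame-late")       # arrives after the switch; ignored
print(recorder.read_all())          # the last 10 frames: frame-15 .. frame-24
```

Freezing rather than clearing the buffer is what lets the operator select, after the fact, a frame captured shortly before the switch was pressed.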
• the switch 15 of the first embodiment has the function of causing the image sensor 12 to start imaging and causing the first light source 31a and the second light source 31b to start turning on and off.
  • the switch 15 in the second embodiment has a function of stopping the overwriting of the image data recorded in the overwrite recording unit at the moment when the switch 15 is operated.
• the image data generated at the image pickup timing of the image sensor 12 immediately after the moment the switch 15 is operated may still be recorded in the overwrite recording unit, overwriting the oldest image data.
  • the computer device 100 of the second embodiment is basically the same as the computer device 100 of the first embodiment.
  • the hardware configuration is the same as that of the computer device 100 of the first embodiment shown in FIG.
  • the functional block generated in the computer device 100 of the second embodiment is the same as that of the first embodiment, and is as shown in FIG.
  • the function of each functional block in the second embodiment may be substantially the same as those functions in the first embodiment, and is so in this embodiment.
• the continuous still images being captured by the image sensor 12 of the camera are displayed on the display 101 of the computer device 100 in substantially real time as what is effectively a moving image.
  • the image data of the still images continuously generated by the image sensor 12 of the camera 1 is sent to the computer device 100 via the cable 16.
  • the image data sent to the computer device 100 is received by the control unit 122 via the interface 114 and the input unit 121, and sent to the display 101 via the output unit 124 and the interface 114.
• as a result, the continuous still images being captured by the image sensor 12 of the camera are displayed as what is effectively a moving image on the display 101 of the computer device 100 in substantially real time.
• the control unit 122 of the computer device 100 also has a function as the selection means in the second invention of the present application; that function will be described later. To realize it, as will be described later, the control unit 122 of the computer device 100 can read out all the image data recorded in the overwrite recording unit of the camera 1 via the cable 16, the interface 114, and the input unit 121.
  • the camera 1 and the computer device 100 of the second embodiment are connected by the cable 16 as in the case of the first embodiment.
  • the image sensor 12 of the camera 1 starts image pickup, and the first light source 31a and the second light source 31b start to light alternately.
• this is triggered by operating the switch 15 provided on the camera 1, or a switch different from the switch 15.
  • the same processing selection data as in the case of the first embodiment is input.
• the image sensor 12 of the camera 1 in the second embodiment generates image data of still images that are non-reflective images and image data of still images that are reflected images alternately, every 50 milliseconds. Image data is generated one after another and recorded in the overwrite recording unit. Once 0.5 seconds have passed since image data was first generated, the overwrite recording unit, as described above, always holds the 10 items of data for the past 0.5 seconds. Further, the image data generated one after another is sent from the camera 1 to the computer device 100, and the still images captured by the image sensor 12 of the camera 1 are displayed one after another on the display 101 in substantially real time. This means that what is effectively a moving image captured by the image sensor 12 is displayed on the display 101 in substantially real time.
• the subject himself or a third party such as a doctor who images the subject grips the camera 1 with one hand and the computer device 100 with the other hand, directs the opening 21 of the camera 1 into the oral cavity of the subject, and views the display 101 of the computer device 100.
  • the subject to be imaged or a third party such as a doctor can take an image with the camera 1 while confirming the moving image at that time displayed on the display 101.
  • the subject inhales aloud.
• the posterior wall of the pharynx and its surroundings in the oral cavity then become observable from outside the oral cavity.
• after confirming that moment, the subject performing the imaging, or a third party such as a doctor, operates the switch 15.
  • the timing of operating the switch 15, that is, pressing the switch 15, may be the moment when the person performing the imaging determines that it is suitable for imaging the posterior wall of the pharynx, the left and right anterior palatine arches, and the uvula. That is, the person performing the imaging may operate the switch 15 with the same feeling as operating the shutter means of a general camera.
  • the input from the switch 15 is transmitted to the circuit 13, and the overwrite recording unit stops overwriting the image data.
• as a result, the image data of the still images captured by the image sensor 12 in the 0.5 seconds before the moment the switch 15 was pressed remains in the overwrite recording unit.
• the image data of the still images remaining in the overwrite recording unit has a high probability of including in-focus image data that captures the posterior wall of the pharynx, the left and right anterior palatine arches, and the uvula. This is because the data recorded here, except at least the last item, is the image data of still images captured by the camera 1 before the switch 15 was pressed, in a state where camera shake is unlikely to occur. In general, human reaction speed has a delay of about 0.2 seconds (about 0.1 seconds even for the exceptionally quick), whereas the buffer retains a plurality of images for the 0.5 seconds preceding the moment the switch 15 was operated.
• in view of this, the past image data to be left in the overwrite recording unit should cover at least 0.3 seconds and at most 1 second, preferably about 0.5 seconds ± 0.1 seconds.
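The sizing above can be checked with simple arithmetic; the constants below mirror the figures in the text (0.5-second window, 50-millisecond interval, ~0.2-second reaction delay), and integer milliseconds are used to avoid rounding issues.

```python
WINDOW_MS = 500            # retained window: 0.5 s, per this embodiment
FRAME_INTERVAL_MS = 50     # one still image every 50 ms
REACTION_DELAY_MS = 200    # typical human reaction delay (~0.2 s)

# Total frames held in the overwrite recording unit:
frames_retained = WINDOW_MS // FRAME_INTERVAL_MS
# Frames captured before even a slow operator reacted, i.e. frames very
# likely to show the moment the operator actually saw and aimed for:
frames_before_reaction = (WINDOW_MS - REACTION_DELAY_MS) // FRAME_INTERVAL_MS

print(frames_retained)         # 10 frames held in the buffer
print(frames_before_reaction)  # 6 of them predate the reaction delay
```

With a 0.3-second window the margin shrinks to 2 frames, and with a 1-second window it grows to 16, which is why roughly 0.5 seconds is the stated sweet spot.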
  • a third party such as a subject or a doctor operates the computer device 100.
• the subject, the doctor, or the like operates the input device 102 according to the instructions of the computer device 100 to load all the image data recorded in the overwrite recording unit of the camera 1 into the computer device 100.
• the data for the instruction "read the image data recorded in the overwrite recording unit," input from the input device 102, is sent to the control unit 122 via the interface 114 and the input unit 121, and upon receiving it the control unit 122 reads all the image data recorded in the overwrite recording unit of the camera 1 via the cable 16, that is, 10 items of image data in this embodiment.
• the subject, the doctor, or the like can then display still images based on the ten items of image data on the display 101 of the computer device 100, for example one at a time.
  • the display of the still image on the display 101 can be performed by the control unit 122 sending image data to the display 101 via the output unit 124 and the interface 114.
  • Five of the ten still images displayed on the display 101 are still images that are non-reflective images, and the remaining five are still images that are reflective images.
• the subject, the doctor, or the like selects, from the still images that are non-reflective images and the still images that are reflected images, at least one of each in which the imaging range is correct and the image is in focus. Such a selection can be made by input from the input device 102.
• if the processing selection data is "transmit the image data from the computer device 100 to the place where image diagnosis is performed," the at least one item (at most one to three) of image data for a still image that is a non-reflective image and the at least one item (at most one to three) of image data for a still image that is a reflected image selected in this way are, as in the first embodiment, sent from the control unit 122 to the place where the image diagnosis is performed via the output unit 124, the interface 114, the transmission / reception mechanism, and the Internet.
• if the processing selection data is "record the image data in the image data recording unit 123," the at least one item of image data for the still image that is a non-reflective image and the at least one item of image data for the still image that is a reflected image are sent from the control unit 122 to the image data recording unit 123 and recorded there, as in the first embodiment. The subsequent use of the image data is the same as in the first embodiment.
• in the above example, the selection of the image data to be transmitted to the place where the image diagnosis is performed (or recorded in the image data recording unit 123) is performed by manual input at the discretion of the operator of the camera 1 and the computer device 100.
• however, the reading of the image data from the overwrite recording unit to the control unit 122 after the switch 15 is operated, or the selection of image data from the plurality of image data read into the control unit 122, may be performed automatically by the control unit 122.
• as conditions for the control unit 122 to select image data from the plurality of image data read from the overwrite recording unit into the control unit 122, the conditions that the imaging range is correct and that the image is in focus can be used. Whether the imaging range is correct can be readily determined using a known or well-known image-recognition technique, and whether the image is in focus can be readily determined using a known or well-known edge-detection technique. Artificial intelligence, or a mechanism using artificial intelligence, may also be used for such determinations.
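The patent does not fix a particular focus measure, so the sketch below stands in a common edge-energy heuristic (mean squared image gradient) for the edge-detection step: the control unit could score each candidate frame and keep the sharpest. Function names and the metric are assumptions for illustration.

```python
import numpy as np

def focus_score(image):
    """Edge-energy focus measure: higher = sharper.

    image is a 2-D array of gray levels; the score is the mean squared
    gradient magnitude, which is large when the frame has crisp edges.
    """
    img = np.asarray(image, dtype=float)
    gy, gx = np.gradient(img)
    return float((gx ** 2 + gy ** 2).mean())

def select_sharpest(frames):
    """Pick the candidate frame with the highest focus score."""
    return max(frames, key=focus_score)

# A sharp checkerboard has far more edge energy than a flat gray frame.
sharp = np.indices((8, 8)).sum(axis=0) % 2 * 255.0
blurry = np.full((8, 8), 128.0)
chosen = select_sharpest([blurry, sharp])
print(focus_score(sharp) > focus_score(blurry))  # True
```

In practice this score would be combined with the imaging-range check (e.g. an image-recognition model confirming that the posterior pharyngeal wall is visible) before a frame is accepted.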
• in the above example, the selection, from the plurality of image data recorded in the overwrite recording unit, of the image data to be transmitted to the place where image diagnosis is performed (or recorded in the image data recording unit 123) was performed on the computer device 100. However, this selection can also be made on the camera 1 side. In that case, if the person operating the camera 1 makes the selection manually, the camera 1 itself needs both a display for showing, for example one by one, all of the plurality of image data recorded in the overwrite recording unit immediately after the switch 15 is operated, and an input device for making the input that selects the image data.
• if such a selection is to be made automatically, the above-mentioned display and input device can be omitted, provided that a mechanism for automatically performing the processing executed by the control unit 122 of the computer device 100 is provided in, for example, the circuit 13 of the camera 1. Since the computer device 100 includes both a display and an input device in the first place, however, it is convenient to perform on the computer device 100 the selection, from the plurality of image data recorded in the overwrite recording unit, of the image data to be transmitted to the place where image diagnosis is performed (or recorded in the image data recording unit 123).
  • an automatic diagnostic device for respiratory infections using a trained model (hereinafter, may be simply referred to as an “automatic diagnostic device”) will be described.
  • the automatic diagnostic device includes a trained model, as described below. Therefore, in order to obtain an automatic diagnostic device, it is first necessary to obtain a trained model.
  • the device necessary for obtaining the trained model will be referred to as a learning device for convenience.
  • a learning device is required in addition to the automatic diagnostic device.
• since the required hardware configuration can be the same for the automatic diagnostic device and the learning device, it is possible to combine them into one device by appropriately choosing the computer programs to be installed in them.
  • Both the automatic diagnostic device and the learning device include a computer device.
  • This computer device is different from the computer device 100 in the first embodiment and the second embodiment.
  • the computer devices included in the automatic diagnostic device and the learning device can have the same configuration, and are the same in this embodiment.
  • the configuration of the automatic diagnostic device is the same, but for the time being, the hardware configuration of the learning device will be explained.
• the hardware of the computer device of the third embodiment is common to both devices, and is no different from the hardware configuration of the computer device 100 shown in FIG. 4 and described in the first embodiment.
  • the computer device according to the third embodiment includes an HDD, an SSD (Solid State Drive), and other large-capacity recording devices.
• the hardware of the computer device constituting the learning device will be described with the reference numeral of the large-capacity recording device set to 115 and the other reference numerals kept as shown in FIG. 4.
  • the hardware constituting the learning device includes a CPU 111, a ROM 112, a RAM 113, an interface 114, and a large-capacity recording device 115, which are connected to each other by a bus 116.
• a transmission / reception mechanism is connected to the interface 114 to enable communication via the Internet, but this function is idle when the computer device functions as a learning device; it operates only when the computer device functions as an automatic diagnostic device.
• An input device, a display, or the like may be connected to the interface 114, and in most cases they are, but in a computer device functioning as a learning device or an automatic diagnostic device they have little significance, so they are not described further.
  • the functions of the CPU 111, ROM 112, RAM 113, interface 114, and bus 116 are the same as those in the first embodiment.
  • the large-capacity recording device 115 records a computer program and necessary data for making the computer device function as a learning device. At least a part of this computer program and data may be recorded in the ROM 112 and the RAM 113.
• the computer program for causing the computer device to execute the processing, described later, that is necessary for it to function as the learning device may be pre-installed in the computer device or installed afterwards.
  • the computer program may be installed in a computer device via a predetermined recording medium such as a memory card (not shown), or may be installed via a network such as a LAN or the Internet.
  • the computer program may include an OS and other necessary computer programs in addition to the above computer program.
  • the computer device described above executes the processing necessary for functioning as a learning device by the following functional blocks.
  • a functional block as shown in FIG. 9 is generated inside the computer device.
• the following functional blocks may be generated by the function of the above-mentioned computer program alone, which causes the computer device to execute the processing described below that is necessary for it to function as the learning device, or they may be generated by that computer program in cooperation with an OS or another computer program installed in the computer device.
  • a learning data recording unit 311, a feature amount extraction unit 312, a learning model unit 313, and a correctness determination unit 314 are generated in relation to the functions of the present invention.
  • the learning data recording unit 311 records data for use by the learning model unit 313 for machine learning.
  • the data recorded in the learning data recording unit 311 is conceptually shown in FIG.
  • a large number of image data are recorded in the learning data recording unit 311.
• the image data is image data 400, which is data of a still image of the subject's imaged region, including the posterior wall of the pharynx, the left and right anterior palatine arches, and the uvula.
• image data for the image 400A, a still image that is a non-reflective image, and image data for the image 400B, a still image that is a reflected image, taken at about the same position at about the same time (e.g., generated and selected using the camera 1 and computer device 100 described in the second embodiment), are recorded in a paired state.
• to each image data 400 is attached, in an associated state, data of a tag 410 indicating whether the subject whose imaged region appears in the image data 400 is a person infected with the novel coronavirus, a person infected with another virus or bacterium related to respiratory infections, or a person not infected with any virus or bacterium related to respiratory infections.
  • the data of the tag 410 may include other information such as the subject's gender, age, race, degree of symptoms, and the like. In the example shown in FIG. 10, the gender information is included in the tag 410.
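One record of the learning data described above can be sketched as a small data structure. All class and field names here are assumptions for illustration, not from the patent; the sketch only mirrors the pairing of images 400A and 400B with a tag 410 carrying the infection class and optional attributes such as gender.

```python
from dataclasses import dataclass
from enum import Enum
from typing import Optional

class InfectionClass(Enum):
    """The three classes the tag 410 distinguishes."""
    NOVEL_CORONAVIRUS = "infected with the novel coronavirus"
    OTHER_RESPIRATORY = "infected with another respiratory virus/bacterium"
    NON_INFECTED = "not infected"

@dataclass
class Tag410:
    infection: InfectionClass
    gender: Optional[str] = None   # optional extra information
    age: Optional[int] = None      # optional extra information

@dataclass
class TrainingRecord:
    image_400a: bytes              # non-reflective still image
    image_400b: Optional[bytes]    # reflected still image (may be absent)
    tag: Tag410

record = TrainingRecord(
    image_400a=b"<jpeg bytes>",
    image_400b=b"<jpeg bytes>",
    tag=Tag410(InfectionClass.NOVEL_CORONAVIRUS, gender="male"),
)
print(record.tag.infection.name)   # NOVEL_CORONAVIRUS
```

Making `image_400b` optional reflects the variant mentioned below, in which only the non-reflective image 400A and the tag 410 are recorded.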
• in FIG. 10, the tongue is labeled A, the posterior wall of the pharynx is labeled B, the left and right anterior palatine arches are labeled C, and the uvula is labeled D.
• the image data 400 may consist only of the data of the image 400A, the non-reflective image, and the tag 410. In that case, image data of the reflected image 400B does not exist in the learning data recording unit 311.
• the feature amount extraction unit 312 has the function of reading image data, together with its associated tag, from the learning data recording unit 311 and extracting feature quantities from the non-reflective image 400A and the reflected image 400B.
  • the feature amount to be extracted from the image 400A of the non-reflective image and the feature amount to be extracted from the image 400B of the reflected image may or may not be the same.
• the feature quantities to be extracted from the non-reflective image 400A are the color, the blood-vessel image, or both, in each of the posterior pharyngeal wall, the left and right anterior palatine arches, and the uvula shown in the image 400A; the unevenness of the posterior pharyngeal wall may also be used as a feature quantity.
• likewise, the feature quantities to be extracted from the reflected image 400B can be, as in the case of the non-reflective image 400A, the color and the blood-vessel image in each of the posterior pharyngeal wall, the left and right anterior palatine arches, and the uvula shown in the image 400B, and the unevenness of the posterior pharyngeal wall.
• in this embodiment, both the color and the blood-vessel image in each of the posterior pharyngeal wall, the left and right anterior palatine arches, and the uvula are extracted as feature quantities from the non-reflective image 400A, and the unevenness of the posterior pharyngeal wall is extracted as a feature quantity from the reflected image 400B.
• if only the non-reflective image 400A is available, the feature amount extraction unit 312 extracts from the image 400A alone any of the feature quantities described above for the non-reflective image 400A.
• first, the ranges of the posterior pharyngeal wall, the left and right anterior palatine arches, and the uvula are extracted from the image 400A, which can be done using a known or well-known image-recognition technique.
• then, based on the pixels included in each range, the color of each range of the posterior pharyngeal wall, the left and right anterior palatine arches, and the uvula can be extracted.
  • the technique of Lab color space may be used for color recognition.
  • the blood vessel image of the posterior pharyngeal wall, the left and right anterior palatine arches, and the uvula can be extracted.
  • the contour of the blood vessel is extracted, thereby extracting the blood vessel image as a feature.
• from the blood-vessel image in each range of the posterior pharyngeal wall, the left and right anterior palatine arches, and the uvula of the image 400A, at least one quantity such as the thickness of the blood vessels (for example, the average thickness or the maximum thickness), the number of blood vessels, the number of blood vessels thinner than a certain thickness, or the total length of the blood vessels can be extracted. The unevenness of the posterior pharyngeal wall can likewise be extracted as a feature quantity from the reflected image 400B using a known or well-known technique.
• as in the case of the non-reflective image 400A, the range of the posterior pharyngeal wall is first identified within the image 400B. Then, within that range, portions shining white due to light reflected by body fluid are counted as parts of convex portions, whereby the unevenness of the posterior pharyngeal wall can be detected.
  • the feature amount extraction unit 312 sends the feature amount extracted for the image 400A which is a non-reflective image and the image 400B which is a reflection image to the learning model unit 313 together with the data of the tag 410 attached to the image data.
  • the learning model unit 313 is a general artificial intelligence that performs supervised learning.
  • the learning model unit 313 inputs the feature amount received from the feature amount extraction unit 312 together with the tag 410 attached to the image data, and performs learning.
• the learning model unit 313 receives as input, for example, tens to thousands each of feature quantities tagged "novel coronavirus infected person," feature quantities tagged "person infected with another virus or bacterium related to respiratory infections," and feature quantities tagged "person not infected with any virus or bacterium related to respiratory infections," and thereby learns the feature quantities of the image data for each of these three classes of subjects.
• as a result, when the feature quantities of image data are input without a tag, the learning model unit 313 outputs, based on the parameters configured by the training, an estimation result indicating whether the subject whose imaged region appears in the image data is infected with the novel coronavirus, infected with another virus or bacterium related to respiratory infections, or not infected with any virus or bacterium related to respiratory infections.
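The patent describes the learning model unit 313 only as generic supervised learning, so the sketch below substitutes a deliberately minimal stand-in: a nearest-centroid classifier that "learns" one centroid per tag from labeled feature vectors and assigns an untagged vector to the closest centroid. The class names, feature layout, and toy data are all assumptions; a real implementation would likely use a neural network.

```python
import numpy as np

CLASSES = ["novel coronavirus", "other respiratory pathogen", "non-infected"]

class NearestCentroidModel:
    """Minimal supervised stand-in for the learning model unit 313."""

    def fit(self, features, tags):
        feats = np.asarray(features, dtype=float)
        labels = np.array(tags)
        # One centroid per tag: the mean feature vector of that class.
        self.centroids_ = {t: feats[labels == t].mean(axis=0)
                           for t in set(tags)}
        return self

    def predict(self, feature):
        f = np.asarray(feature, dtype=float)
        # Return the tag whose centroid is nearest in Euclidean distance.
        return min(self.centroids_,
                   key=lambda t: np.linalg.norm(f - self.centroids_[t]))

# Toy 2-D features: (redness of posterior wall, redness of uvula),
# loosely echoing the color pattern the text says the model learns.
X = [[0.9, 0.3], [0.8, 0.2],    # covid-like: red wall, pale uvula
     [0.9, 0.9], [0.8, 0.8],    # other infection: both reddish
     [0.2, 0.2], [0.3, 0.3]]    # non-infected: normal color
y = [CLASSES[0], CLASSES[0], CLASSES[1], CLASSES[1], CLASSES[2], CLASSES[2]]
model = NearestCentroidModel().fit(X, y)
print(model.predict([0.85, 0.25]))   # → "novel coronavirus"
```

The parameters here are just the per-class centroids; in the patent's scheme the analogous parameters are refined further by the correctness determination unit 314 described next.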
  • the correctness determination unit 314 is used to further improve the accuracy of the estimation result of the learning model unit 313.
• a feature quantity is input, without its tag 410, to the learning model unit 313 after it has learned to some extent; the estimation result it outputs is sent to the correctness determination unit 314.
  • the correctness determination unit 314 reads the tag 410 attached to the feature amount sent to the learning model unit 313 from the learning data recording unit 311 and determines whether or not the estimation result is correct.
  • the correctness determination unit 314 modifies the parameters in the learning model unit 313 so that the estimation result becomes more correct according to the correctness of the estimation result or the ratio of the correctness of the estimation result.
  • the correctness determination unit 314 and the above-mentioned processing performed by the unit can be omitted.
• when the accuracy of the estimation results of the learning model unit 313 hardly improves any further, it can be determined that the learning model unit 313 has completed machine learning.
  • The learning model unit 313 learns that, in persons not infected with any virus or bacterium related to respiratory infections, the posterior wall of the pharynx, the left and right anterior palatine arches, and the uvula all have their normal color.
  • It learns that, in persons infected with a virus or bacterium other than the new coronavirus, the mucous membranes of the posterior wall of the pharynx, the left and right anterior palatine arches, and the uvula are all reddened, whereas in persons infected with the new coronavirus the posterior wall of the pharynx and the left and right anterior palatine arches are reddened but the uvula retains a pale pink color similar to that of a healthy person not infected with any virus or bacterium, or in some cases an even paler pink closer to white. It also learns that, in the reddened portions described above, the capillaries are proliferated and dilated.
  • The learning model unit 313 that has completed machine learning constitutes a trained model for the automatic diagnosis of respiratory tract infections.
  • The automatic diagnosis device 300 is composed of a computer device. As shown in FIG. 11, it is used while connected, via the Internet 400 (a network), to the computer device 100 to which the camera 1 is connected.
  • The automatic diagnosis device 300, in cooperation with the camera 1 and the computer device 100, constitutes an automatic diagnosis system.
  • A plurality of computer devices 100, each connected to a camera 1, can be connected to the automatic diagnosis device 300.
  • Alternatively, a computer device 100 configured as described in the first modification, which itself has the function of the camera 1, may be connected to the automatic diagnosis device via the Internet 400. Here the camera 1 is assumed to be the one described in the second embodiment.
  • The image data of the non-reflective image and the reflective image of the subject's imaged portion, captured by the camera 1, are sent to the automatic diagnosis device via the computer device 100 and the Internet 400.
  • the hardware configuration of the computer device constituting the automatic diagnosis device 300 is the same as that of the learning device.
  • However, the computer programs and data recorded mainly in the large-capacity recording device 115 differ between the automatic diagnosis device 300 and the learning device.
  • The computer program that causes the computer device to execute the processing, described later, necessary for it to function as the automatic diagnosis device may be either pre-installed or post-installed in the computer device.
  • the computer program may be installed in a computer device via a predetermined recording medium such as a memory card (not shown), or may be installed via a network such as a LAN or the Internet.
  • The computer device constituting the automatic diagnosis device executes the processing necessary to function as the automatic diagnosis device by means of the following functional blocks.
  • a functional block as shown in FIG. 12 is generated inside the computer device.
  • The following functional blocks may be generated by the above-mentioned computer program alone, which causes the computer device to execute the processing described below necessary for it to function as the automatic diagnosis device, or by cooperation between that computer program and an OS or other computer programs installed in the computer device.
  • an input unit 321, a feature amount extraction unit 322, a trained model unit 323, and an output unit 324 are generated in relation to the functions of the present invention.
  • the input unit 321 is connected to the interface.
  • the interface is further connected to a transmission / reception mechanism (not shown).
  • The transmission / reception mechanism receives the image data, transmitted via the Internet 400, of the pair of still images (the non-reflective image and the reflective image) in which the subject's imaged portion appears, generated by the camera 1 and sent by the computer device 100.
  • the input unit 321 is adapted to receive the image data received by the transmission / reception mechanism from the interface.
  • The feature amount extraction unit 322 extracts feature amounts from the received image data of the pair of still images, the non-reflective image and the reflective image.
  • the feature amount extracted by the feature amount extraction unit 322 is the same as the feature amount extracted by the feature extraction unit 312 of the learning device.
  • the method for extracting features is the same.
  • the feature amount extraction unit 322 sends the generated feature amount data to the trained model unit 323.
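Because the feature amount extraction unit 322 must extract the same feature amounts, by the same method, as the unit 312 of the learning device, both sides would in practice share a single extraction routine, as in the sketch below. The mean-RGB feature is purely illustrative; the patent does not fix a particular feature.

```python
# Illustrative shared feature extraction for the pair of still images
# (non-reflective image and reflective image). The mean-RGB feature is
# a hypothetical example, used identically at training and diagnosis time.

def mean_rgb(pixels):
    """pixels: list of (r, g, b) tuples; returns per-channel means."""
    n = len(pixels)
    return [sum(p[c] for p in pixels) / n for c in range(3)]

def extract_features(non_reflective_pixels, reflective_pixels):
    """Concatenate the features of the non-reflective and reflective images."""
    return mean_rgb(non_reflective_pixels) + mean_rgb(reflective_pixels)
```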
  • The trained model unit 323 is the learning model unit 313 of the learning device after it has completed machine learning, and includes the parameters configured by that learning.
  • Like the learning model unit 313, the trained model unit 323 outputs an estimation result when it receives feature amount data as input.
  • The estimation result indicates whether the subject whose imaged portion appears in the image data of the pair of still images (the non-reflective image and the reflective image) from which the feature amounts were extracted is a person infected with the new coronavirus, a person infected with another virus or bacterium related to respiratory infections, or a person not infected with any virus or bacterium related to respiratory infections.
  • The estimation result is the text data "new coronavirus infected person" when the subject is estimated to be infected with the new coronavirus; the text data "person infected with a virus or bacterium other than the new coronavirus" when the subject is estimated to be infected with a virus or bacterium related to respiratory infections other than the new coronavirus; and the text data "non-infected person" when the subject is estimated not to be infected with any virus or bacterium related to respiratory infections.
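The mapping from the trained model's output to the three text data might look like the sketch below. The class indices and per-class scores are hypothetical; the label strings are English renderings of the patent's Japanese text data.

```python
# Hypothetical mapping from a per-class score vector to the text data
# output by the trained model unit 323.

ESTIMATE_TEXT = {
    0: "new coronavirus infected person",
    1: "person infected with a virus or bacterium other than the new coronavirus",
    2: "non-infected person",
}

def estimation_result(scores):
    """scores: one value per class; returns the text for the top class."""
    top = max(range(len(scores)), key=lambda i: scores[i])
    return ESTIMATE_TEXT[top]
```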
  • the estimation result data output from the trained model unit 323 is sent to the output unit 324.
  • the output unit 324 is connected to the transmission / reception mechanism via an interface.
  • the output unit 324 sends the received estimation result data to the transmission / reception mechanism via the interface.
  • the transmission / reception mechanism transmits the received estimation result data to the computer device 100, which is the source of the image data that triggered the generation of the estimation result, via the Internet 400.
  • the computer device 100 receives the estimation result data by its transmission / reception mechanism.
  • the estimation result data is sent to the input unit 121 via the interface 114, and is sent to the display 101 via the control unit 122, the output unit 124, and the interface 114.
  • The display 101 shows a display based on the estimation result data, that is, one of the three text data described above: "new coronavirus infected person", "person infected with a virus or bacterium other than the new coronavirus", or "non-infected person".
  • This makes it possible to determine whether the subject is a person infected with the new coronavirus, a person infected with a virus or bacterium related to respiratory infections other than the new coronavirus, or a person not infected with any virus or bacterium related to respiratory infections.
  • If such diagnosis is performed on everyone entering a facility, the inside of the facility becomes a safe space.
  • At each home, infection with the new coronavirus can be checked, for example, before going out. Moreover, because the redness and the changes in the vascular image caused by the immune response to infection with the new coronavirus appear earlier than its symptoms, it becomes possible to effectively prevent asymptomatic carriers of the new coronavirus from spreading the infection.

Landscapes

  • Health & Medical Sciences (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Physics & Mathematics (AREA)
  • Engineering & Computer Science (AREA)
  • Surgery (AREA)
  • General Health & Medical Sciences (AREA)
  • Pathology (AREA)
  • Biomedical Technology (AREA)
  • Biophysics (AREA)
  • Molecular Biology (AREA)
  • Heart & Thoracic Surgery (AREA)
  • Nuclear Medicine, Radiotherapy & Molecular Imaging (AREA)
  • Veterinary Medicine (AREA)
  • Public Health (AREA)
  • Animal Behavior & Ethology (AREA)
  • Medical Informatics (AREA)
  • Radiology & Medical Imaging (AREA)
  • Chemical & Material Sciences (AREA)
  • Optics & Photonics (AREA)
  • Immunology (AREA)
  • Analytical Chemistry (AREA)
  • Biochemistry (AREA)
  • General Physics & Mathematics (AREA)
  • Medicinal Chemistry (AREA)
  • Mathematical Physics (AREA)
  • Theoretical Computer Science (AREA)
  • Urology & Nephrology (AREA)
  • Hematology (AREA)
  • Oral & Maxillofacial Surgery (AREA)
  • Food Science & Technology (AREA)
  • Dentistry (AREA)
  • Spectroscopy & Molecular Physics (AREA)
  • Endoscopes (AREA)
  • Investigating Or Analysing Biological Materials (AREA)
  • Investigating Or Analysing Materials By Optical Means (AREA)
  • Measurement Of The Respiration, Hearing Ability, Form, And Blood Characteristics Of Living Organisms (AREA)

Abstract

Provided is a camera that can capture images usable for diagnostic imaging of respiratory infections. The camera comprises a lens 11, an imaging element 12, a first light source 31a, a second light source 31b, a first polarization plate 32, and a second polarization plate 33. Illumination light output from the first light source 31a passes through the first polarization plate 32 and becomes polarized light; the image light generated when it strikes the object to be imaged passes through the lens 11 and the second polarization plate 33 and is captured by the imaging element 12. Illumination light output from the second light source 31b does not pass through the first polarization plate 32 and remains natural light; its image light likewise passes through the lens 11 and the second polarization plate 33 and is captured by the imaging element 12. The imaging range of the imaging element 12 includes the posterior wall of the pharynx, the left and right anterior palatine arches, and the uvula.

Description

Camera, method for generating a trained model pertaining to respiratory infections, trained model pertaining to respiratory infections, automatic diagnosis method pertaining to respiratory infections, and computer program
The present invention relates to a diagnostic imaging technique for respiratory infections.
In the present application, the term "respiratory tract infection" means an infectious disease in which symptoms appear in the respiratory tract in general, and includes cases where the cause is a virus or a bacterium.
Various respiratory infections are known, but the most problematic one in the world today is clearly the infection caused by the new coronavirus (COVID-19).
There are various causes for the new coronavirus to pose a great threat to humankind, but one of the causes is that there are no therapeutic drugs or vaccines whose effects have been confirmed. Therefore, it is of utmost importance at present to prevent the spread of the new coronavirus.
Early detection of infected persons is an important issue to prevent the spread of the new coronavirus. Recent studies have shown that the new coronavirus that infects infected individuals begins to become highly infectious a few days before the onset of symptoms in the infected. Therefore, if the infected person can be detected early and isolated from the non-infected person, preferably before the onset of symptoms, it is considered that the effect of suppressing the spread of the new coronavirus is great.
There are several techniques for detecting infected persons with the new coronavirus or distinguishing between infected persons and non-infected persons, and some of them have already been put into practical use.
The best known is the PCR (Polymerase Chain Reaction) test. The PCR test detects the gene of the new coronavirus in a sample collected from the subject; if the gene is detected, the subject who provided the sample is determined to be infected with the new coronavirus.
There is also an antibody test. In the antibody test, when a specific antibody appearing in a person infected with the new coronavirus is detected in the sample collected from the subject, it is determined that the subject who provided the sample is infected with the new coronavirus.
Furthermore, there are also examinations using CT (Computed Tomography) images. Since CT images of the lungs of persons infected with the new coronavirus are known to show characteristic features, there are attempts to find infected persons using CT images.
However, since the PCR test can detect the new coronavirus only after the virus has multiplied to some extent in the infected person's body, it does not fully serve the purpose of early detection of infected persons.
Even in the case of an antibody test, it takes a certain amount of time from the infection of the new coronavirus to generate an antibody in the body of the infected person, so it is difficult to detect the infected person of the new coronavirus at an early stage even by the antibody test.
Similarly, even with CT images, infected persons cannot be distinguished from non-infected persons until the new coronavirus has multiplied to some extent in the lungs, so this too is difficult to apply as a technique for early detection of the new coronavirus.
Moreover, all of these techniques require a long time to produce a diagnosis, and the cost of diagnosis is not low, so they are unsuitable for screening-style use on persons who may or may not be infected, for example persons with no symptoms.
The above is not limited to the new coronavirus infection. For other respiratory infections as well, it is self-evident that the spread of infection can be suppressed if infected persons can be detected early. For example, should a new respiratory infection emerge after the current new coronavirus infection, early detection of infected persons is strongly expected to be important.
An object of the present invention is to provide a simple and inexpensive technique for the early detection of persons infected with respiratory tract infections.
The inventor of the present application continued research to solve the above-mentioned problems.
As a result, the following findings were obtained. In persons infected with respiratory infections, redness and other changes often occur in the epithelial cells and vascular endothelial cells of the upper respiratory tract (mainly the oral cavity, pharynx, and nasal cavity). For example, in the case of the new coronavirus, because its functional receptor is strongly expressed in the oral cavity, pharynx, and nasal cavity, these regions become important routes of entry of the virus into the body, and changes therefore arise in these regions in infected persons. Moreover, these changes appear earlier than symptoms do, and it is therefore very likely that they appear earlier than the point at which the diagnostic techniques described in the background section can identify a subject as infected with the new coronavirus.
Considering these points, it can be concluded that observing the epithelial cells and blood vessels of the upper respiratory tract is useful for the early detection of persons infected with respiratory infections such as the new coronavirus infection.
Of course, such observation (or observation and diagnosis) can also be performed by a physician directly examining the subject's upper respiratory tract. However, if simplicity and low cost are pursued, such observation is better performed by diagnostic imaging of the relevant region, all the more so nowadays, when it has become difficult for physicians to observe and diagnose patients face to face. If diagnostic imaging makes this possible, and in particular if the observation and diagnosis can be performed automatically using artificial intelligence, this also leads to convenience and low cost.
Moreover, if subjects capture the images needed for diagnostic imaging themselves, it is easy to send those images, using already widespread communication technology such as smartphones, to a physician or to a computer on the Internet that performs automatic diagnosis. This means that ordinary subjects can undergo diagnostic imaging at home, for example daily. It is equally easy for a person other than the subject to capture the images needed for diagnostic imaging and send them to a physician or to a computer performing automatic diagnosis. This means, for example, that staff of a school, restaurant, department store, supermarket, or movie theater can have subjects who use the facility undergo diagnostic imaging.
In any case, if such diagnostic imaging is realized, it becomes possible to use it in a screening manner and to isolate infected persons from non-infected persons at an early stage. Furthermore, non-infected persons can engage in economic activities, which reduces the damage respiratory infections inflict on the economy. In addition, if diagnostic imaging is performed on everyone entering a facility, it is guaranteed with high probability that all persons in the facility are non-infected, so restaurant users, for example, are freed from behavior that would otherwise be required, such as wearing a mask during a meal. Of course, it can also be used in combination with other tests; for example, a definitive diagnosis can later be made by a PCR test.
The present invention has been made on the basis of the above findings, and provides, as a concrete technique, a diagnostic imaging technique useful for the early detection of respiratory infections.
As a result of the inventor of the present application conducting research on the diagnostic imaging technique based on the above-mentioned findings, some further problems have emerged.
The first issue (Problem 1) is which region of the upper respiratory tract of a subject exhibiting symptoms of a respiratory infection should preferably be imaged for use in diagnostic imaging. Unless an appropriate region of the upper respiratory tract is imaged, subjects cannot be classified into persons infected with the new coronavirus, persons infected with other viruses or bacteria, and non-infected persons. The inventor's research has also shown that, in addition to which region is imaged, the nature of the image capturing the state of the upper respiratory tract affects the accuracy of the diagnosis.
The second issue (Problem 2) concerns the timing at which the image of the upper respiratory tract is captured. The posterior wall of the pharynx is normally hidden from view. For example, when the subject inhales while vocalizing, the posterior wall of the pharynx becomes visible from outside the oral cavity, but it is difficult to capture an image from outside the oral cavity at exactly that moment. This issue can also be framed more generally. A camera usually has a shutter means (in the present application, "shutter means" includes not only a physical shutter button but also non-physical buttons such as those displayed on a smartphone screen). With conventional cameras, there is a time lag between the operation of the shutter means and the moment the imaging element captures the still image, so even if the user operates the shutter means at the exact moment the object reaches the state the user truly wants to capture, the object is often already in a different state. This becomes a pronounced problem when imaging the posterior wall of the pharynx, which is visible only momentarily.
The third issue (Problem 3) is that, when diagnostic imaging is to be performed automatically, a device or system for performing the automatic diagnosis is required. No such device or system currently exists.
Hereinafter, inventions that solve the above-mentioned three problems will be described in order.
First, an invention for solving the problem 1 will be described. The invention for solving the problem 1 is referred to as the first invention for convenience.
According to the inventor's research, in order to distinguish, by diagnostic imaging, persons infected with the new coronavirus from persons infected with other viruses or bacteria and from non-infected persons, the image captured by the imaging element must include the posterior wall of the pharynx, the left and right anterior palatine arches, and the uvula, for the following reasons.
According to the inventor's research, in persons not infected with any virus or bacterium related to respiratory infections, the posterior wall of the pharynx, the left and right anterior palatine arches, and the uvula all have their normal color, generally a pale pink. In contrast, in most persons infected with viruses or bacteria related to respiratory infections other than the new coronavirus, the mucous membranes of all three regions are reddened. This is because, when a virus or bacterium adheres to the mucous membrane of these regions, the capillaries proliferate and dilate due to the immune reaction. The inventor's further research found that, in persons infected with the new coronavirus, although the posterior wall of the pharynx and the left and right anterior palatine arches are both reddened, the uvula retains a pale pink color similar to that of a healthy person not infected with any virus or bacterium, or in some cases an even paler pink closer to white.
Furthermore, because capillary proliferation and dilation occur in the reddened portions described above, the images of the posterior wall of the pharynx, the left and right anterior palatine arches, and the uvula differ among persons infected with the new coronavirus, persons infected with other viruses or bacteria, and non-infected persons.
Therefore, by observing the color or the vascular image of the posterior wall of the pharynx, the left and right anterior palatine arches, and the uvula, subjects can be classified into persons infected with the new coronavirus, persons infected with other viruses or bacteria related to respiratory infections, and persons not infected with any virus or bacterium related to respiratory infections.
The camera according to the first invention is for capturing an image to enable such a distinction.
The camera captures images of the inside of the oral cavity from outside the oral cavity. It has a light source that irradiates the oral cavity with illumination light, a lens that passes the image light generated when the illumination light is reflected inside the oral cavity, and an imaging element that captures the image light passing through the lens and generates image data of the resulting image. The lens and the imaging element of this camera are arranged so that the image captured by the imaging element on the basis of the image light includes the posterior wall of the pharynx, the left and right anterior palatine arches, and the uvula.
The camera in the first invention includes a light source. The light source irradiates the oral cavity with illumination light from outside the oral cavity. Since the inside of the oral cavity is a closed space that does not allow outside light to enter, by allowing the camera to take an image with the illumination light emitted from the light source, the properties of the image to be taken (for example, the color of what is reflected in the image). Can be stabilized. Of course, this increases the accuracy of diagnostic imaging.
The camera according to the first invention also has a lens that allows the image light generated by the illumination light emitted from the light source to be reflected in the oral cavity to pass through, and an image pickup device that captures the image light that has passed through the lens to generate image data. And have. The lens and the image sensor in this camera are adapted so that the image captured by the image sensor based on the image light includes the posterior wall of the pharynx, the left and right anterior palatine arches, and the uvula. That is, the lens forms an image on the image sensor so that the image to be imaged includes the posterior wall of the pharynx, the left and right anterior palatine arches, and the uvula. The number of lenses may be one, or may be composed of a plurality of lenses. In addition to the lens, another optical element (for example, a mirror or a prism) may be included in the optical path of the image light in the camera. In the present application (even in the case of the second invention), even if other optical elements in addition to the lens are present in the optical path of the image light, the posterior wall of the pharynx, the front left and right, in the image captured by the image sensor. If the image light is imaged on the image sensor in a state that includes the palatal arch and the palate drop, "the lens and the image captured by the image sensor based on the image light are images taken by the image sensor. , The posterior wall of the pharynx, the left and right anterior palatal arches, and the palatal ptosis are included. "
Such a camera can capture, from outside the oral cavity and without inserting any part of it into the oral cavity, the images needed for diagnostic imaging that distinguishes persons infected with the novel coronavirus, persons infected with other viruses or bacteria that cause respiratory infections, and persons not infected with any virus or bacterium that causes a respiratory infection. Because imaging can be performed from outside the oral cavity, staff at facilities such as schools, restaurants, department stores, supermarkets, and movie theaters can, even without specialized medical knowledge or skill, capture the images needed for this purpose from the oral cavity of a subject using the facility far more easily than with an endoscope or the like. Imaging from outside the oral cavity also prevents any part of the camera from touching the subject and picking up the subject's body fluids, unlike an endoscope. With this camera, the intraoral images needed for this purpose can be captured for one subject after another, so diagnostic imaging becomes practical even for large numbers of subjects, such as every user entering a given facility.
As described above, the camera according to the first invention includes a light source that emits illumination light. The light source may be a single light source or a plurality of light sources.
The camera may also include a first polarizing plate that converts the illumination light passing through it into linearly polarized light with a predetermined polarization direction, and a second polarizing plate through which the image light passes and whose polarization direction is orthogonal to that of the first polarizing plate.
In a camera having such a first polarizing plate in the illumination path, the illumination light entering the oral cavity is linearly polarized. This linearly polarized illumination light is reflected inside the oral cavity, as described above, and becomes the image light. The image light divides into two kinds of light. One is surface-reflected light, reflected at the surface of the saliva or other fluid covering the oral mucosa; the other is internally reflected light, which passes through the saliva and is reflected at the surface of the mucosa itself. Surface-reflected light causes glare in the captured image and can produce blown-out highlights, but it renders the relief and shape of what appears in the image well; in this application an image formed by it is called a reflection image. Internally reflected light, by contrast, is suited to obtaining a glare-free image: it is poor at conveying relief and shape, but it renders the color and internal character of what appears in the image well; in this application such an image is called a non-reflection image. Ideally, surface-reflected light completely retains the linear polarization of the incident light, while internally reflected light loses that linear polarization and becomes natural (unpolarized) light.
Therefore, if a second polarizing plate whose polarization direction is orthogonal to that of the first is placed in the optical path of the image light (strictly, the two plates are positioned so that the plane of vibration of linearly polarized light passing the first polarizing plate is orthogonal to that of linearly polarized light passing the second polarizing plate; because that phrasing is cumbersome, this relationship is hereafter described simply as "the polarization directions of the first and second polarizing plates are orthogonal"), the surface-reflected light, which retains its linear polarization, is blocked by the second polarizing plate, while the internally reflected light, now natural light that has lost its linear polarization, ideally passes the second polarizing plate at half its intensity.
That is, the image sensor of a camera equipped with the first and second polarizing plates described above can capture images using only the internally reflected light that passes the second polarizing plate. As explained, the non-reflection image obtained from internally reflected light is free of the glare caused by saliva and other body fluids, and is well suited to observing color and the vascular pattern, that is, the image of the blood vessels lying inside the mucosa. Such an image is extremely well suited to the diagnostic-imaging goal of distinguishing, by the color or vascular pattern of the posterior pharyngeal wall, the left and right anterior palatine arches, and the uvula, persons infected with the novel coronavirus, persons infected with other viruses or bacteria that cause respiratory infections, and persons not infected with any such virus or bacterium. In addition, in a non-reflection image the vascular pattern inside the mucosa of the posterior pharyngeal wall, the left and right anterior palatine arches, and the uvula, which in a reflection image tends to be hidden by glare from illumination light reflecting off body fluids, is clearly visible, so the non-reflection image is extremely well suited to vascular observation as well.
Therefore, according to such a camera, the accuracy of image diagnosis can be improved.
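The ideal intensity relationships just described follow Malus's law for polarized light and the one-half rule for natural light. The following is a purely illustrative numerical sketch; the function name and angles are assumptions for this example, not part of the claimed apparatus.

```python
import math

def transmitted_fraction(polarized: bool, angle_deg: float) -> float:
    """Ideal fraction of light intensity passing a linear polarizer.

    Linearly polarized input follows Malus's law, cos^2 of the angle
    between its plane of vibration and the polarizer axis; natural
    (unpolarized) light ideally passes at one half regardless of angle.
    """
    if polarized:
        return math.cos(math.radians(angle_deg)) ** 2
    return 0.5

# With the first and second polarizing plates crossed at 90 degrees:
surface = transmitted_fraction(polarized=True, angle_deg=90.0)    # surface-reflected light: ~0 (blocked)
internal = transmitted_fraction(polarized=False, angle_deg=90.0)  # internally reflected light: 0.5
```

Surface-reflected light, which keeps its linear polarization, is thus blocked at the crossed second polarizing plate, while half of the depolarized internally reflected light reaches the image sensor, matching the ideal behavior described above.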
The light source in the camera of the first invention may include a first light source, whose illumination light passes through the first polarizing plate before entering the oral cavity, and a second light source, whose illumination light enters the oral cavity without passing through the first polarizing plate and which is lit alternatively with the first light source. The image sensor may then capture both a non-reflection image, formed by the image light produced by illumination from the first light source, and a reflection image, formed by the image light produced by illumination from the second light source.
An image captured by the image sensor under illumination light that leaves the first light source and passes through the first polarizing plate is a non-reflection image, by the principle described in the preceding paragraphs.
On the other hand, illumination light that leaves the second light source does not pass through the first polarizing plate (it may, for example, pass through a third polarizing plate whose polarization direction matches that of the second polarizing plate). In the image light produced when this light is reflected in the oral cavity (natural light, if no third polarizing plate is used), both the surface-reflected component and the internally reflected component are natural light. In that case, ideally half the intensity of each component passes the second polarizing plate.
The image captured by the image sensor in this case therefore combines surface-reflected and internally reflected light. Such an image is essentially the same as an ordinary image taken by an ordinary camera under natural light: a reflection image with glare, in which the relief and shape of what appears in the image are easy to grasp.
That is, by lighting the first light source and the second light source one at a time, this camera can capture both non-reflection images and reflection images.
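One way to read this alternating control is as a capture loop that fires one light source per frame. The following minimal sketch uses assumed names (`Frame`, `capture_sequence`) purely for illustration and stands in for the actual hardware control, which the invention does not specify at this level.

```python
from dataclasses import dataclass
from itertools import cycle

@dataclass
class Frame:
    index: int
    light_source: str  # "first" or "second"
    kind: str          # "non-reflective" or "reflective"

def capture_sequence(n_frames: int) -> list:
    """Fire the two light sources one at a time, frame by frame, so the
    sensor yields interleaved non-reflective and reflective images."""
    frames = []
    sources = cycle(["first", "second"])
    for i in range(n_frames):
        src = next(sources)
        # A real camera would fire the chosen light source, expose the
        # sensor, and read out image data here (hardware calls omitted).
        kind = "non-reflective" if src == "first" else "reflective"
        frames.append(Frame(index=i, light_source=src, kind=kind))
    return frames

kinds = [f.kind for f in capture_sequence(4)]
# kinds == ['non-reflective', 'reflective', 'non-reflective', 'reflective']
```

Each pass through the loop lights exactly one source, so consecutive frames alternate between the two image types.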
As noted above, to distinguish persons infected with the novel coronavirus, persons infected with other viruses or bacteria that cause respiratory infections, and persons not infected with any such virus or bacterium, it is advisable to observe at least one of the color and the vascular pattern of the posterior pharyngeal wall, the left and right anterior palatine arches, and the uvula. In influenza infection, however, lymphoid follicles are known to appear on the posterior pharyngeal wall. These follicles create irregularities on the normally smooth posterior wall. By observing whether the posterior pharyngeal wall is irregular, persons infected with a respiratory virus or bacterium other than the novel coronavirus can therefore be further divided into influenza cases and cases of viruses or bacteria other than the novel coronavirus and influenza. Moreover, although no firm evidence exists at present, in an image covering the posterior pharyngeal wall, the left and right anterior palatine arches, and the uvula, a range wider than an ordinary endoscope provides, combining the irregularities of these four sites with at least one of color and vascular pattern may allow subjects to be distinguished with still higher accuracy into novel-coronavirus cases, cases of other respiratory viruses or bacteria, and uninfected persons.
A camera having the first and second light sources described above is therefore advantageous from this standpoint: it permits both observation of the posterior pharyngeal wall, the left and right anterior palatine arches, and the uvula in non-reflection images, centered on color or vascular pattern, and observation of the same sites in reflection images, centered on relief and shape.
The image data of images captured by the camera of the first invention is used for diagnostic imaging. The camera may, for example, have a transmission/reception mechanism through which it sends the image data of captured images to the place where diagnostic imaging is performed. The destination of the image data is, for example, a computer that a physician can access if a physician performs the diagnosis, or an automatic diagnosis device, or a computer that such a device can access, if the diagnosis is automated. The transmission/reception mechanism may be a known or well-known one, including an existing standardized one. The communication it performs may be, for example, wireless communication over the Internet, including communication by a fifth-generation (5G) mobile communication system; in that case the camera itself sends the image data to the place where diagnosis is performed. The communication may also be short-range wireless communication. One example is Wi-Fi (trademark), one of the wireless LAN standards; in that case the camera typically sends the image data by Wi-Fi to a wireless router, which forwards it to the place where diagnosis is performed. Another example is Bluetooth (trademark); in that case the camera typically sends the image data by Bluetooth to a smartphone or tablet, which forwards it to the place where diagnosis is performed.
The images captured by the camera of the first invention may or may not be still images. Still images may be captured continuously, and if the interval between them is short enough the result is effectively a moving image; in other words, the camera may capture either still or moving images. The image data sent from the camera, directly or indirectly through another device, to the place where diagnosis is performed may therefore be data for a single still image, for a series of still images, or for a moving image. Sending still-image data makes the transmission processing lighter and the transmission time shorter.
The camera may have a shutter means which, when operated by the user, causes the image sensor to generate image data for a still image. In that case the camera may be configured so that a single operation of the shutter means generates at least one image data set for a still non-reflection image and at least one for a still reflection image, each of which is then generated by the camera and sent to the place where diagnosis is performed.
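A single shutter operation producing one still pair, as described here, might be packaged for transmission as in the following sketch; `on_shutter_press`, the `capture` callable, and the JSON payload format are assumptions for illustration only, not a format defined by the invention.

```python
import json
import time

def on_shutter_press(capture) -> bytes:
    """Capture one still image under each light source on a single
    shutter operation and package the pair for transmission.

    `capture(source)` is a hypothetical callable returning raw image
    bytes for the given light source ("first" or "second").
    """
    payload = {
        "timestamp": time.time(),
        "non_reflective": capture("first").hex(),  # first light source
        "reflective": capture("second").hex(),     # second light source
    }
    return json.dumps(payload).encode("utf-8")

# Stand-in for the sensor readout: four dummy bytes per light source.
fake_capture = lambda src: (b"\x01" if src == "first" else b"\x02") * 4
decoded = json.loads(on_shutter_press(fake_capture))
# decoded["non_reflective"] == "01010101", decoded["reflective"] == "02020202"
```

Packaging the pair in one payload keeps the two stills together, which is convenient when both images are needed at the place where diagnosis is performed.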
Next, an invention solving Problem 2 will be described; for convenience, it is referred to as the second invention.
Similar to the first invention, the second invention can be applied to a camera for capturing an image used for diagnostic imaging, but its application range is wider.
That camera has a lens that passes image light from an object and an image sensor that captures the image light passing through the lens to generate image data. The image sensor continuously generates image data for still images at predetermined intervals, and the camera further comprises an overwrite recording unit that records a predetermined time's worth of that image data, overwriting the oldest data first; a shutter means which, when operated by the user, stops the overwriting of image data in the overwrite recording unit; and a selection means for choosing, after the shutter means has been operated, any of the image data then held in the overwrite recording unit.
The image sensor in the camera of the second invention continuously generates image data for still images at predetermined intervals. The interval may or may not be constant, though it generally is. If the interval is short enough, the image sensor effectively generates image data for a moving image.
The camera of the second invention comprises an overwrite recording unit that records a predetermined time's worth of image data, overwriting the oldest data first; a shutter means which, when operated by the user, stops the overwriting of image data in the overwrite recording unit; and a selection means for choosing, after the shutter means has been operated, any of the image data then held in the overwrite recording unit.
The overwrite recording unit is typically a ring buffer, but a configuration that records image data into ordinary memory while overwriting it is also possible.
The camera of the second invention records the image data generated one frame after another by the image sensor into the overwrite recording unit for a predetermined time, overwriting the oldest data first; the overwrite recording unit thus always holds the image data of the most recent predetermined period. The camera has a shutter means. To the user it may appear to work as the shutter of an ordinary camera, determining the moment a still image is taken, but in the camera of the second invention it actually stops, upon the user's operation, the overwriting of image data in the overwrite recording unit. After the user operates the shutter means, the selection means allows any of the image data then held in the overwrite recording unit to be chosen. Consequently, when the user operates the shutter means intending to take a picture, a suitable image can be selected from the image data captured by the image sensor and recorded during the predetermined period ending at the moment the shutter means was pressed.
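The interplay of the overwrite recording unit, shutter means, and selection means described above can be sketched with a ring buffer. The class and method names below are assumptions for illustration, and a real implementation would store image frames rather than strings.

```python
from collections import deque

class OverwriteRecorder:
    """Ring buffer holding the most recent `capacity` frames.

    New frames overwrite the oldest until shutter() freezes the buffer;
    select(i) then retrieves any retained frame (0 = oldest, -1 = newest).
    """
    def __init__(self, capacity: int):
        self._frames = deque(maxlen=capacity)  # oldest entries drop off
        self._frozen = False

    def record(self, frame) -> None:
        if not self._frozen:  # overwriting stops once the shutter is pressed
            self._frames.append(frame)

    def shutter(self) -> None:
        self._frozen = True

    def select(self, index: int):
        return self._frames[index]

rec = OverwriteRecorder(capacity=3)
for t in range(5):            # frames 0..4 arrive; only 2, 3, 4 survive
    rec.record(f"frame-{t}")
rec.shutter()                 # user presses the shutter: buffer freezes
rec.record("frame-5")         # ignored, so the pre-shutter frames remain
# rec.select(-1) == 'frame-4', rec.select(0) == 'frame-2'
```

Because the buffer freezes at the moment of the shutter operation, every retained frame predates (or coincides with) the press, which is exactly what lets the user pick an image from before any shutter-induced shake.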
If the camera of the second invention has a transmission means for sending image data to the outside, the image selected by the selection means, more specifically a still image, can be the image to be transmitted from the transmission means (that is, sent to the "place where diagnostic imaging is performed" described in the first invention). The transmission means is equivalent to the transmission/reception mechanism described for the first invention.
Owing to this combination of shutter means, image sensor, overwrite recording unit, and selection means, the camera of the second invention can capture (or rather, select from captured data) a still image taken at the moment the object was in the desired state, at the timing the user wants, without missing the decisive moment. In addition, because an image from before the shutter means was operated can be selected, and operating the shutter is itself a common cause of camera shake, images obtained with the camera of the second invention are less affected by camera shake.
The camera of the second invention may be one that images the inside of the oral cavity from outside it and that includes a light source irradiating the oral cavity with illumination light, with the lens and image sensor arranged so that the image captured by the image sensor from the image light includes the posterior wall of the pharynx.
The benefits of imaging the oral cavity from outside it are the same as those of the first invention's capability of imaging from outside the oralcavity, and the benefit of providing a light source that illuminates the oral cavity is likewise the same as the corresponding benefit in the first invention.
When the lens and image sensor of the second invention's camera are arranged so that the image captured by the image sensor from the image light includes the posterior pharyngeal wall (or even only part of it), the imaging range is narrower than that of the first invention's camera, whose lens and image sensor are arranged so that the captured image includes the subject's posterior pharyngeal wall, left and right anterior palatine arches, and uvula.
As already stated in the description of the first invention, the applicant's present knowledge of respiratory infections is that by observing the color of the posterior pharyngeal wall, the left and right anterior palatine arches, and the uvula, subjects can be distinguished into persons infected with the novel coronavirus, persons infected with other viruses or bacteria that cause respiratory infections, and persons not infected with any such virus or bacterium. An image captured by the second invention's camera whose range covers only the posterior pharyngeal wall may therefore not show the left and right anterior palatine arches or the uvula, and in that case the image alone may not support this three-way distinction. Even so, some respiratory infections may be diagnosable from an image of the posterior pharyngeal wall alone, and at least some infected persons can be told apart from uninfected ones (for example, influenza infection can be detected from irregularities on the surface of the posterior pharyngeal wall), so the camera of the second invention as described is still clearly worthwhile. Furthermore, features of color, vascular pattern, or shape such as irregularities may yet be discovered that make the three-way distinction possible from an image showing only the posterior pharyngeal wall, without the anterior palatine arches or the uvula; if such features are found, the second invention's camera, imaging the posterior pharyngeal wall alone, would also enable subjects to be distinguished into novel-coronavirus cases, cases of other respiratory viruses or bacteria, and uninfected persons. If, however, the three-way distinction is to be made by diagnostic imaging of images from the second invention's camera on the basis of the applicant's present knowledge, it suffices to arrange the lens and image sensor of the second invention's camera so that the still image captured by the image sensor from the image light includes the posterior pharyngeal wall, the left and right anterior palatine arches, and the uvula.
As already mentioned, the posterior wall of the pharynx is normally hidden by the tongue and can be neither observed nor imaged from outside the oral cavity. When the subject inhales sharply while vocalizing, however, the base of the tongue drops and the posterior pharyngeal wall becomes visible from outside the oral cavity; at the same moment the left and right anterior palatine arches and the uvula also become visible. To image the posterior pharyngeal wall with the camera of the second invention, the image sensor must therefore capture it at the very moment it becomes visible from outside the oral cavity. In practice, and especially when subjects operate the shutter means themselves, attention to the separate act of inhaling tends to delay the shutter operation; human reaction time is typically about 0.2 seconds and at best about 0.1 seconds. Moreover, because the shutter means is usually provided on the camera body, operating it often causes camera shake, and the resulting still image is frequently out of focus or otherwise useless for diagnostic imaging.
The camera of the second invention, which can select, as the still image to be used, one captured at the moment the shutter means is operated or at a slightly earlier timing, is therefore also well suited to capturing still images for such diagnostic-imaging purposes.
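The selection of a frame captured slightly before the shutter press can be realized with a ring buffer that continuously overwrites the most recent frames. The following is a minimal sketch of that idea only; the class and parameter names are hypothetical and do not come from the patent, and the 0.2-second offset simply mirrors the reaction-time figure quoted above.

```python
from collections import deque

class FrameRingBuffer:
    """Keeps the most recent frames so that, when the shutter fires,
    a frame captured ~0.2 s earlier can be selected instead of the
    (likely too-late) frame at the moment of the button press."""

    def __init__(self, fps=30, seconds_kept=1.0):
        self.fps = fps
        self.frames = deque(maxlen=int(fps * seconds_kept))

    def push(self, frame):
        # Oldest frame is overwritten automatically once the buffer is full,
        # like the overwrite recording unit described in the text.
        self.frames.append(frame)

    def on_shutter(self, reaction_delay_s=0.2):
        """Return the frame captured `reaction_delay_s` before the press."""
        back = int(self.fps * reaction_delay_s)
        index = max(0, len(self.frames) - 1 - back)
        return self.frames[index]

buf = FrameRingBuffer(fps=30, seconds_kept=1.0)
for i in range(30):          # simulate 1 s of video: frames 0..29
    buf.push(i)
picked = buf.on_shutter()    # 6 frames (0.2 s at 30 fps) before the press
```

Because the buffer only ever holds the last second of frames, memory use stays constant no matter how long the preview runs.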
As in the first invention, the camera of the second invention, when used to photograph the inside of the oral cavity, may include a first polarizing plate through which the illumination light passes, turning it into linearly polarized light with a predetermined polarization direction, and a second polarizing plate through which the image light passes, whose polarization direction is orthogonal to that of the first polarizing plate. In this case, the camera of the second invention obtains the same effects as the first invention with the same configuration.
As in the first invention, the light source of the camera of the second invention, when used to photograph the inside of the oral cavity, may include a first light source whose illumination light passes through the first polarizing plate before entering the oral cavity, and a second light source whose illumination light enters the oral cavity without passing through the first polarizing plate, the two light sources emitting light alternatively. In that case, the image sensor may capture both a non-reflective image formed by the image light produced by the illumination light cast into the oral cavity from the first light source, and a reflective image formed by the image light produced by the illumination light cast into the oral cavity from the second light source. In this case, the camera of the second invention obtains the same effects as the first invention with the same configuration.
When the camera of the second invention includes the first light source and the second light source, the two may be lit alternately, at timings such that the images based on the image data generated by the image sensor alternate between a still image under illumination from the first light source and a still image under illumination from the second light source. In other words, the timing at which the image sensor performs capture to generate image data and the lighting timings of the first and second light sources may be controlled in synchronization. With this arrangement, under the alternately lit first and second light sources, the image sensor alternately generates image data of still images that are non-reflective images and image data of still images that are reflective images. The camera thus produces, one after another, pairs of a non-reflective image and a reflective image captured at close timings. Because each pair is obtained by imaging almost the same position in the oral cavity at almost the same time, and both members of the pair are still images, such pairs are well suited to observation or diagnostic imaging. The benefits of performing observation or diagnostic imaging using both non-reflective and reflective images are as already described for the first invention.
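The synchronized alternation of the two light sources with the sensor's frame timing can be pictured as a simple control loop. The sketch below is only an illustration of that pairing logic; the function and light-source names are hypothetical, and `expose` stands in for whatever driver actually fires the sensor while the named source is lit.

```python
def capture_paired_frames(n_pairs, expose):
    """Alternate the two light sources in sync with the sensor so that
    consecutive frames form (non-reflective, reflective) pairs.

    `expose(light)` is a stand-in for triggering one sensor exposure
    while the named light source is lit; here it just labels the frame."""
    pairs = []
    for _ in range(n_pairs):
        nonreflective = expose("light1_polarized")  # crossed polarizers: glare removed
        reflective = expose("light2_unpolarized")   # glare kept: surface relief visible
        pairs.append((nonreflective, reflective))   # a nearly simultaneous pair
    return pairs

frames = capture_paired_frames(3, expose=lambda light: light)
```

Each element of `frames` models one pair imaging almost the same position at almost the same time, matching the paired output described above.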
The camera of the second invention may further include a display that shows still images based on the image data held in the overwrite recording unit, and the selection means may include a user-operated operation unit that accepts input for selecting and specifying at least one item of image data held in the overwrite recording unit.
With such a configuration, the selection of image data by the selection means, for example selecting at least one image whose imaging range is correct and which is in focus (for example, free from the effects of camera shake), can be performed manually based on the user's own judgment. The display and the operation unit need not be integrated with the camera of the second invention; they may be provided in another device used in combination with it (for example, a smartphone or tablet). Even in that case, the requirements that "the camera has a display" and "the selection means includes an operation unit" are regarded as satisfied.
The selection means may include automatic selection means that automatically selects, from the image data held in the overwrite recording unit, at least one item whose imaging range is correct and which is in focus (for example, free from the effects of camera shake). In this way, at least one item of image data for a still image with a correct imaging range and proper focus can be selected automatically, which reduces the burden on the user and keeps the quality of the selected still images within a certain range. The automatic selection means need not be integrated with the camera of the second invention; it may be provided in another device used in combination with it (for example, a smartphone or tablet). The automatic selection means may use artificial intelligence, or a mechanism employing artificial intelligence, to automatically extract, from the multiple items of image data recorded in the overwrite recording unit, the image data of still images whose imaging range is correct and which are in focus.
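One conventional (non-AI) way to realize such automatic focus screening is a sharpness score such as the variance of a Laplacian filter response: blurred frames have weak edges and score low. The pure-Python sketch below, on toy grayscale images, is an assumption of one possible scoring method and is not prescribed by the patent.

```python
def laplacian_variance(img):
    """Sharpness score: variance of the 4-neighbour Laplacian.
    Blurry (camera-shake) frames give low values, in-focus frames high ones."""
    h, w = len(img), len(img[0])
    vals = []
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            lap = (img[y - 1][x] + img[y + 1][x] + img[y][x - 1] + img[y][x + 1]
                   - 4 * img[y][x])
            vals.append(lap)
    mean = sum(vals) / len(vals)
    return sum((v - mean) ** 2 for v in vals) / len(vals)

def select_sharpest(frames):
    """Automatically pick the frame with the strongest edge response."""
    return max(frames, key=laplacian_variance)

blurry = [[5, 5, 5, 5]] * 4   # flat image: no edges, score 0
sharp = [[0, 0, 9, 9]] * 4    # hard vertical edge: strong Laplacian response
best = select_sharpest([blurry, sharp])
```

In practice a threshold on the score could also reject all frames of a burst, prompting the user to retake the image.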
Next, the invention for solving Problem 3 will be described. For convenience, it will be referred to as the third invention.
The third invention is a technique for performing diagnostic imaging automatically. The third invention relates to an automatic diagnostic device; the images or image data that this device uses for automatic diagnosis may, but need not, be generated by the camera of the first or second invention.
As one aspect of the third invention, the inventor of the present application proposes a method of generating a trained model.
In this method, machine learning is performed using, as teacher data, image data of still images in which an imaged region including a subject's posterior pharyngeal wall, left and right anterior palatine arches, and uvula appears, together with data indicating whether that subject is a person infected with the novel coronavirus, a person infected with another virus or bacterium related to respiratory infections, or a person not infected with any virus or bacterium related to respiratory infections. The method thereby generates a trained model for automatic diagnosis of respiratory infections which, using as feature quantities the colors of the posterior pharyngeal wall, the left and right anterior palatine arches, and the uvula in the still image based on the image data, estimates whether the subject whose imaged region appears in that image data is a novel-coronavirus-infected person, a person infected with another virus or bacterium related to respiratory infections, or a non-infected person.
As another aspect of the third invention, the inventor of the present application proposes a trained model.
This trained model is generated, for example, by the generation method described above, and forms the core of the automatic diagnostic device described later. The trained model according to the third invention is a trained model for automatic diagnosis of respiratory infections (sometimes simply called the "trained model") generated by machine learning with teacher data consisting of image data of still images in which an imaged region including a subject's posterior pharyngeal wall, left and right anterior palatine arches, and uvula appears, and data indicating whether that subject is a novel-coronavirus-infected person, a person infected with another virus or bacterium related to respiratory infections, or a non-infected person. Using as feature quantities the color or blood-vessel pattern of each of the posterior pharyngeal wall, the left and right anterior palatine arches, and the uvula in the still image based on the image data, it estimates to which of those three categories the subject whose imaged region appears in the image data belongs.
As already mentioned, according to the inventor's research, in persons not infected with viruses or bacteria related to respiratory infections, the posterior pharyngeal wall, the left and right anterior palatine arches, and the uvula all show a normal color, generally a light pink. In contrast, in persons infected with viruses or bacteria related to respiratory infections other than the novel coronavirus, in almost all cases the mucosa of all three regions is reddened. In persons infected with the novel coronavirus, the posterior pharyngeal wall and the left and right anterior palatine arches are reddened, but the uvula shows the same light pink as in healthy, non-infected persons, or an even paler, near-white pink. These differences among near-white pale pink, pink, and reddened red arise from the proliferation and dilation of capillaries.
Therefore, by observing at least one of the color and the blood-vessel pattern of the posterior pharyngeal wall, the left and right anterior palatine arches, and the uvula, subjects can be distinguished into novel-coronavirus-infected persons, persons infected with other viruses or bacteria related to respiratory infections, and non-infected persons.
In other words, when image data of a still image in which a subject's imaged region appears is input to the trained model described above, the model can, based on the feature quantities of color or blood-vessel pattern in the imaged region of that still image, classify the subject into one of three categories: novel-coronavirus-infected, infected with another virus or bacterium related to respiratory infections, or non-infected.
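The three-way classification described above can be illustrated with a deliberately tiny nearest-centroid learner over per-region redness features. Everything below is a hedged sketch: the feature encoding, the toy training numbers, and all names are invented for illustration and are not the patent's actual model or clinical data; the label pattern simply encodes the color observations stated above (all pale = non-infected, all red = other infection, red wall/arches with pale uvula = novel coronavirus).

```python
# Feature vector: one redness score per region, in the order
# (posterior pharyngeal wall, anterior palatine arches, uvula).
# The numbers are illustrative only, not clinical measurements.
TRAIN = [
    ([0.2, 0.2, 0.2], "non_infected"),    # all regions pale pink
    ([0.8, 0.8, 0.8], "other_infected"),  # all regions reddened
    ([0.8, 0.8, 0.1], "covid19"),         # uvula stays pale, rest reddened
    ([0.3, 0.2, 0.2], "non_infected"),
    ([0.9, 0.7, 0.8], "other_infected"),
    ([0.7, 0.9, 0.2], "covid19"),
]

def train_centroids(samples):
    """Learn one mean feature vector (centroid) per class label."""
    sums, counts = {}, {}
    for features, label in samples:
        acc = sums.setdefault(label, [0.0] * len(features))
        for i, v in enumerate(features):
            acc[i] += v
        counts[label] = counts.get(label, 0) + 1
    return {label: [v / counts[label] for v in acc]
            for label, acc in sums.items()}

def predict(centroids, features):
    """Classify by the nearest class centroid (squared Euclidean distance)."""
    def dist(c):
        return sum((a - b) ** 2 for a, b in zip(features, c))
    return min(centroids, key=lambda label: dist(centroids[label]))

model = train_centroids(TRAIN)
result = predict(model, [0.85, 0.8, 0.15])  # red wall/arches, pale uvula
```

A production model would of course be a deep network trained on real labeled images; the sketch only shows how the color feature quantities map to the three estimates.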
In the method of generating the trained model, the image data may be data of non-reflective images in which the imaged region appears. The method of capturing non-reflective images has already been described. As noted, non-reflective images are suited to accurately capturing color and the blood-vessel pattern, that is, the image of blood vessels lying slightly below the mucosal surface. A trained model generated by this method can therefore make the three-way distinction described above more accurately.
When automatic diagnostic imaging is performed using a trained model generated by this generation method, what should be input to the model is image data of a non-reflective image in which the imaged region appears. Here, as before and hereafter, the image data input to a trained model during automatic diagnostic imaging should naturally be of the same kind as the image data used for training.
In the method of generating the trained model, the image data may also be pairs of image data: image data of a non-reflective image in which the imaged region appears, paired with image data of a reflective image in which the imaged region appears. The method of capturing non-reflective images has already been described. As noted, non-reflective images are suited to accurately capturing color and blood-vessel patterns, while reflective images are suited to accurately capturing the surface shape, such as unevenness, of the imaged object. A trained model generated by this method can therefore make the three-way distinction described above more accurately.
In the method of generating the trained model of the third invention, the feature quantities may include both the colors and the blood-vessel patterns of the posterior pharyngeal wall, the left and right anterior palatine arches, and the uvula, and may further include the surface unevenness of those regions.
Generating a trained model from both color and blood-vessel patterns makes the three-way classification of subjects more accurate. In addition, including the surface unevenness of the posterior pharyngeal wall, the left and right anterior palatine arches, and the uvula among the training targets yields a similar effect.
As yet another aspect of the third invention, the inventor of the present application also proposes an automatic diagnostic device for respiratory infections that uses the trained model described above.
That automatic diagnostic device is one that uses any of the trained models described so far, and comprises: receiving means for receiving image data, that is, data of a still image in which the imaged region of a subject to be diagnosed for a respiratory infection appears; extraction means for extracting, as feature quantities, the color or blood-vessel pattern of each of the posterior pharyngeal wall, the left and right anterior palatine arches, and the uvula in the still image based on that image data; and output means for outputting, by inputting the feature quantities extracted by the extraction means into the trained model, an estimate of whether the subject whose imaged region appears in the image data is a novel-coronavirus-infected person, a person infected with another virus or bacterium related to respiratory infections, or a non-infected person.
With such an automatic diagnostic device, by inputting into the receiving means image data equivalent to that used when the trained model was generated, it becomes possible to determine, without human judgment, to which of the three categories the subject whose imaged region appears in the still image belongs. For example, the device can perform automatic diagnosis based on image data sent over the Internet from, say, a smartphone or tablet by the subject or by the person in charge of one of the facilities mentioned above (if the smartphone's camera is used in the manner of the cameras of the first and second inventions, the smartphone or the like may itself serve as the camera of those inventions), and can send the estimation result back over the Internet, for example immediately, to the smartphone that sent the image data. Although not limited to this, it is preferable that none of the processing performed on the automatic diagnostic device side, from receiving the image data to transmitting the estimation result, requires human intervention, and this is possible at least in principle.
As yet another aspect of the third invention, the inventor also proposes an automatic diagnosis method for respiratory infections executed by a computer having a recording medium on which the trained model described above is recorded. The effects of this method equal those of the automatic diagnostic device for respiratory infections.
One example of the method is an automatic diagnosis method for respiratory infections executed by a computer having a recording medium on which any of the trained models described above is recorded.
This method includes, executed by the computer: a receiving step of receiving image data, that is, data of a still image in which the imaged region of a subject to be diagnosed for a respiratory infection appears; an extraction step of extracting, as feature quantities, the color or blood-vessel pattern of each of the posterior pharyngeal wall, the left and right anterior palatine arches, and the uvula in the still image based on that image data; and an output step of outputting, by inputting the feature quantities extracted in the extraction step into the trained model, an estimate of whether the subject whose imaged region appears in the image data is a novel-coronavirus-infected person, a person infected with another virus or bacterium related to respiratory infections, or a non-infected person.
As still another aspect of the third invention, the inventor also proposes a computer program for causing a given computer to function as an automatic diagnostic device for respiratory infections using the trained model described above. The effects of this computer program equal those of the automatic diagnostic device; moreover, it allows a given computer, for example a general-purpose one, to function as an automatic diagnostic device for respiratory infections using the trained model.
One example is a computer program for causing a given computer to function as an automatic diagnostic device for respiratory infections using any of the trained models described above.
This computer program causes the computer to execute: a receiving step of receiving image data, that is, data of a still image in which the imaged region of a subject to be diagnosed for a respiratory infection appears; an extraction step of extracting, as feature quantities, the color or blood-vessel pattern of each of the posterior pharyngeal wall, the left and right anterior palatine arches, and the uvula in the still image based on that image data; and an output step of outputting, by inputting the feature quantities extracted in the extraction step into the trained model, an estimate of whether the subject whose imaged region appears in the image data is a novel-coronavirus-infected person, a person infected with another virus or bacterium related to respiratory infections, or a non-infected person.
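The receive / extract / output structure of the device, method, and program described above can be sketched as three small functions composed into a pipeline. All names are hypothetical, region segmentation is stubbed out, and the stand-in "model" merely encodes the color rule already stated in this section (pale uvula with reddened wall and arches suggests novel-coronavirus infection); it is not the actual trained model.

```python
def receive_image_data(payload):
    """Receiving step: in a real service this would be an HTTP upload over
    the Internet; here the payload already is the decoded still image."""
    return payload

def extract_features(image):
    """Extraction step: per-region redness for the posterior pharyngeal
    wall, anterior palatine arches, and uvula. Real region segmentation is
    out of scope; this stub assumes the image dict carries mean colors."""
    return [image["wall_redness"], image["arch_redness"], image["uvula_redness"]]

def diagnose(image, model):
    """Output step: run the model on the extracted feature quantities and
    return one of the three estimates."""
    features = extract_features(receive_image_data(image))
    return model(features)

def rule_based_model(f):
    """Stand-in for the trained model, encoding the section's color rule."""
    wall, arch, uvula = f
    if wall < 0.5 and arch < 0.5 and uvula < 0.5:
        return "non_infected"        # all regions pale pink
    if uvula < 0.5:
        return "covid19"             # uvula pale while wall/arches reddened
    return "other_infected"          # all regions reddened

estimate = diagnose({"wall_redness": 0.8, "arch_redness": 0.9,
                     "uvula_redness": 0.2}, rule_based_model)
```

In a deployment, `diagnose` would run server-side and the estimate would be returned to the sender's smartphone, with no human intervention between reception and response.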
FIG. 1 is a perspective view of the camera according to the first embodiment and a computer device used in combination with it.
FIG. 2 is a horizontal cross-sectional view of the head of the camera shown in FIG. 1.
FIG. 3 is a vertical cross-sectional view of the camera shown in FIG. 1.
FIG. 4 is a diagram showing the hardware configuration of the computer device shown in FIG. 1.
FIG. 5 is a block diagram showing the functional blocks generated inside the computer device shown in FIG. 1.
FIG. 6 is a diagram for explaining the behavior of the illumination light and the reflected light when imaging is performed with the camera shown in FIG. 1 using the first light source.
FIG. 7 is a diagram for explaining the behavior of the illumination light and the reflected light when imaging is performed with the camera shown in FIG. 1 using the second light source.
FIG. 8 is a rear perspective view of the camera in Modification 1.
FIG. 9 is a functional block diagram showing the functional blocks generated in the computer device constituting the learning device of the third embodiment.
FIG. 10 is a diagram conceptually showing the content of the data recorded in the learning data recording unit included in FIG. 9.
FIG. 11 is a diagram showing the overall configuration of the automatic diagnosis system including the automatic diagnostic device of the third embodiment.
FIG. 12 is a functional block diagram showing the functional blocks generated in the computer device constituting the automatic diagnostic device shown in FIG. 11.
Hereinafter, preferred first to third embodiments of the present invention, and modifications thereof, will be described with reference to the drawings.
In the description of the embodiments and modifications, identical objects are given identical reference numerals, and duplicate descriptions may be omitted. Unless a contradiction arises, the technical contents described in the embodiments and modifications may be combined with one another.
<< First Embodiment >>
FIG. 1 shows an overview of the camera 1 of this embodiment and the computer device 100 that accompanies it. As shown in FIG. 1, the camera 1 is used in combination with the computer device 100.
The computer device 100 has a function of sending the image data captured by the camera 1 to the place where diagnostic imaging is performed, for example via the Internet. As in Modification 1 described later, this function may instead be implemented in the camera 1 itself. The place where diagnostic imaging is performed is, if the diagnosis is made by a doctor or other person, a device that person can access (for example, a computer device); if the diagnosis is made by a machine, it is the automatic diagnostic device that performs the diagnosis, or a device that the automatic diagnostic device can access (for example, a computer device).
The camera 1 of this embodiment is for capturing images used in diagnosing human respiratory infections. That is, the subjects in this embodiment are human.
 この実施形態におけるカメラ1は、被験者たる人間の上気道の一部、より詳細には、咽頭後壁部、左右の前口蓋弓、及び口蓋垂を含む範囲を撮像することができるようなものとなっている。
 この実施形態におけるカメラ1は、手持ち可能とされた把持部10と、把持部10の上方前側に設けられた頭部20とを備えている。
 把持部10と頭部20は、これには限られないが例えば不透明な樹脂製である。少なくとも頭部20は、不透明な素材で構成するのが通常である。それらの内部は中空であり、後述するようにしてその内部に種々の部品が内蔵され、或いは取付けられる。把持部10及び頭部20には部品が内蔵されるので、それらは部品が内蔵される事実上のケースとして機能する。
 把持部10は、片手で手持ちすることができる形状となっており、これには限られないがこの実施形態では棒状或いは円柱状の形状とされている。
 頭部20は、これには限られないが、先端に向かってやや広がる(筒状でも、先端に向かってやや狭まっても良い。)断面略矩形状のフードと呼べるように構成された筒であり、不透明な素材例えば不透明な樹脂でできている。頭部20の先端(使用時に、被験者の顔に対面される側、図1における手前側)には開口21が設けられている。これには限られないが、この実施形態における開口21は、その四隅が丸められた矩形である。開口21は正方形や円形でも良い。このカメラ1は、被験者自身、或いは被験者以外の例えば、学校、レストラン、百貨店やスーパーマーケット、或いは映画館等の施設の担当者が把持部10を把持して、頭部20の先端の開口21の縁を被験者の口に向ける、という態様で使用される。
 把持部10か頭部20の適当な位置、例えば把持部10の前側には、スイッチ15が設けられている。スイッチ15は、後述する撮像素子による撮像を開始する切っ掛けとなる入力を行うための操作子であり、この実施形態では把持部10に押し込んで入力を行う押し釦とされている。もっとも、撮像素子が撮像を開始する切っ掛けとなる入力信号を生成することが可能であれば、スイッチ15は押し釦式である必要はない。もっといえば、スイッチ15の機能は、コンピュータ装置100に実装されていても構わない。
The camera 1 in this embodiment is capable of capturing a part of the upper respiratory tract of a human being as a subject, more specifically, a range including the posterior wall of the pharynx, the left and right anterior palatine arches, and the uvula. ing.
The camera 1 in this embodiment includes a grip portion 10 that can be held by hand, and a head portion 20 provided on the upper front side of the grip portion 10.
The grip portion 10 and the head portion 20 are made of, for example, an opaque resin, but not limited to this. At least the head 20 is usually made of an opaque material. The inside of them is hollow, and various parts are built in or attached to the inside thereof as described later. Since the grip portion 10 and the head 20 contain the parts, they function as a de facto case in which the parts are built.
The grip portion 10 is shaped to be held in one hand; in this embodiment it is, though not limited to this, rod-shaped or columnar.
The head 20 is, though not limited to this, a tube that could be called a hood, with a substantially rectangular cross section that widens slightly toward the tip (it may instead be straight, or narrow slightly toward the tip), and is made of an opaque material such as an opaque resin. An opening 21 is provided at the tip of the head 20 (the side that faces the subject's face during use, the near side in FIG. 1). In this embodiment the opening 21 is, though not limited to this, a rectangle with rounded corners; the opening 21 may also be square or circular. The camera 1 is used by having the subject, or someone other than the subject such as a staff member of a facility like a school, restaurant, department store, supermarket, or movie theater, grip the grip portion 10 and point the edge of the opening 21 at the tip of the head 20 toward the subject's mouth.
A switch 15 is provided at a suitable position on the grip portion 10 or the head 20, for example on the front side of the grip portion 10. The switch 15 is an operator for making the input that triggers the start of imaging by the image sensor described later; in this embodiment it is a push button that is pressed into the grip portion 10. However, the switch 15 need not be a push button as long as it can generate the input signal that triggers the image sensor to start imaging. Indeed, the function of the switch 15 may even be implemented in the computer device 100.
FIG. 2 shows a horizontal cross-sectional view of the head 20 of the camera 1, and FIG. 3 shows a vertical cross-sectional view of the entire camera 1. Both FIG. 2 and FIG. 3 are schematic, principle diagrams.
Inside the head 20, on the near side in FIG. 1, are provided the lens 11, a light source 31 consisting of a first light source 31a and a second light source 31b, and a first polarizing plate 32. The first light source 31a and the second light source 31b may have the same configuration.
As will be described later, the first light source 31a and the second light source 31b play different roles in this embodiment: they are lit alternately, and the image captured by the image sensor 12 (described later) differs depending on which of them is lit.
However, if it suffices for the camera 1 to capture only the non-reflective image described later, all of the light sources, including the second light source 31b, may be configured as first light sources 31a. In other words, all of the light from the first light sources 31a and the second light sources 31b in the figure may be made to pass through the first polarizing plate 32.
Further, a second polarizing plate 33 and the image sensor 12 are provided on the far side, in FIG. 1, of the interior of the head 20.
In this embodiment there are a plurality of first light sources 31a and a plurality of second light sources 31b, although neither is required to be plural.
Both the first light source 31a and the second light source 31b emit natural light as illumination light. As long as that is possible, the first light source 31a may be any known or well-known light source, such as a light bulb or an LED, and the same applies to the second light source 31b. In this embodiment the first light source 31a is, though not limited to this, an LED, as is the second light source 31b. Viewed as hardware, the first light source 31a and the second light source 31b can be identical, and they are in this embodiment.
Both the first light sources 31a and the second light sources 31b emit light toward the oral cavity of the subject being imaged, with a certain degree of directivity. The orientation of each first light source 31a and each second light source 31b is adjusted so that the light it emits is directed appropriately.
Both the first light sources 31a and the second light sources 31b are fixed to a substrate secured inside the head 20 by an appropriate method; the substrate itself is not shown.
The wavelength of the illumination light emitted by the first light source 31a is not particularly limited, but it is preferably in the visible region; in this embodiment the first light source 31a emits ordinary white light. The wavelength of the illumination light is preferably restricted to a range in which, when natural light is passed through the two polarizing plates arranged along the optical axis with mutually orthogonal polarization directions, namely the first polarizing plate 32 and the second polarizing plate 33 described later, substantially all of that natural light is extinguished (for example, 90% or more disappears). A filter that restricts the wavelength of the illumination light in this way may also be placed between the first light sources 31a and second light sources 31b and the oral cavity to be imaged.
In this embodiment the first light sources 31a and the second light sources 31b are, though not limited to this, lit alternately after the switch 15 is pressed, as described later.
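The wavelength restriction described above amounts to requiring that the crossed pair of polarizing plates extinguish nearly all (for example, 90% or more of) unpolarized light across the illumination band. As a rough sanity check, this can be modeled with Malus's law; the following is a minimal sketch, and the function name, the single extinction-ratio parameter, and the example numbers are our own simplifications, not values from the embodiment.

```python
import math

def crossed_pair_leakage(extinction_ratio: float) -> float:
    """Fraction of unpolarized input light leaking through two polarizers
    whose transmission axes are crossed (90 degrees apart).

    extinction_ratio: fraction of the "blocked" polarization component that a
    real polarizer still transmits (0.0 for a perfect polarizer).
    """
    # An ideal first polarizer passes half of unpolarized light,
    # converting it to linearly polarized light.
    after_first = 0.5
    # A perfect crossed analyzer transmits cos^2(90 deg) = 0 of that light;
    # a real one leaks roughly its extinction ratio on top of that.
    theta = math.radians(90.0)
    ideal = math.cos(theta) ** 2  # effectively 0.0
    return after_first * (ideal + extinction_ratio)
```

A polarizer pair satisfying the embodiment's criterion would keep this leakage at or below about 0.10 of the incident natural light over the wavelengths the first light source 31a emits.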
As described above, there are, though not necessarily, a plurality of first light sources 31a in this embodiment. Each first light source 31a is, though not limited to this, located near the opening 21 of the head 20; in this embodiment they lie slightly inside the walls of the head 20 on both horizontal sides of the opening 21 in FIG. 1. The first light sources 31a on the right side and on the left side of the opening 21 are each arranged in a straight line, more precisely along the vertical direction in FIG. 1, several to a side in this embodiment. As shown in FIG. 3, four first light sources 31a are arranged vertically on the right side of the opening 21, and likewise on the left side. Naturally, the number of first light sources 31a arranged vertically on each side of the opening 21 need not be four, and indeed need not be plural.
The second light sources 31b, on the other hand, are in this embodiment provided one above and one below each of the vertical rows of four first light sources 31a on the right and left sides of the opening 21 described above. The positions and number of the second light sources 31b are, however, not limited to this.
In front of each vertical row of four first light sources 31a near the left and right ends of the head 20 is placed one first polarizing plate 32, a polarizing plate that converts the illumination light, which is natural light, passing through it into linearly polarized light.
The first polarizing plate 32 in this embodiment is a vertically long rectangle, as shown in FIGS. 1 and 3. In this embodiment, of the light emitted from the first light sources 31a, all of the illumination light that contributes to the imaging performed by the image sensor 12 as described later passes through the first polarizing plates 32; the vertical length and horizontal width of the first polarizing plates 32 are designed from that standpoint.
To ensure that all of the illumination light contributing to imaging by the image sensor 12 passes through the first polarizing plates 32, this embodiment, though it is not strictly necessary, provides inside the head 20 a partition wall 19: a donut-shaped, light-blocking wall whose inner edge contacts the periphery of the lens 11 without a gap and whose outer edge contacts the inner peripheral surface of the head 20 without a gap. The partition wall 19 divides the space inside the head 20 into a space in front of the lens 11 and a space behind it. Owing to the partition wall 19, light emitted from the first light sources 31a and second light sources 31b cannot reach the space behind the lens 11, where the image sensor 12 is located, without first being reflected by the object to be imaged.
The second light sources 31b, on the other hand, are located above and below the first polarizing plates 32, and the illumination light they emit is directed into the subject's oral cavity as natural light, without passing through the first polarizing plates 32.
As long as the illumination light from the first light sources 31a reaches the subject's oral cavity as linearly polarized light, and the illumination light from the second light sources 31b reaches the subject's oral cavity either as natural light or as linearly polarized light whose plane of vibration is orthogonal to that produced by the first polarizing plate 32 (by passing through a polarizing plate, not shown, that might be called a third polarizing plate, whose polarization direction is orthogonal to that of the first polarizing plate 32), the design, such as the positions of the first light sources 31a and second light sources 31b and the shape and position of the first polarizing plates 32, may be varied as appropriate from the example described above.
As described above, the illumination light that has passed through the first polarizing plate 32 becomes linearly polarized light with a plane of polarization in a predetermined direction. The plane of polarization of this linearly polarized illumination light is, though not limited to this, horizontal in FIG. 1.
The lens 11 serves to focus onto the image sensor 12 the reflected light produced when the illumination light is reflected by the imaging target in the oral cavity, in this embodiment a range including the posterior pharyngeal wall, the left and right anterior palatine arches, and the uvula. As long as this is achieved, the lens 11 need not be a single lens, and necessary optical components other than the lens 11, such as mirrors or prisms, may be included. The lens 11 may also have a magnifying function, or other functions. In this embodiment the lens 11 is, though not limited to this, a single magnifying lens.
The image sensor 12 captures the reflected light to perform imaging. The image sensor 12 of this embodiment may be any known, well-known, or even commercially available device capable of color imaging; it can be constituted by, for example, a CCD (Charge Coupled Device) or a CMOS (Complementary Metal Oxide Semiconductor) sensor. The image sensor 12 generates the data of the images obtained by imaging. The images captured by the image sensor 12 may or may not be a moving image; in this embodiment the sensor continuously generates still-image data at predetermined intervals. The predetermined interval is, for example, 20 to 50 milliseconds, so that the image sensor 12 generates image data for 20 to 50 still images per second. In effect, one could also say that the image sensor 12 generates moving-image data.
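The figures above follow from a simple reciprocal relationship between the capture interval and the number of still images generated per second; a minimal sketch (the interval values are those of the embodiment, the function name is ours):

```python
def frames_per_second(interval_ms: float) -> float:
    """Number of still images generated per second when the image sensor 12
    captures one image every interval_ms milliseconds."""
    return 1000.0 / interval_ms
```

With the embodiment's 20 ms to 50 ms interval this gives 50 down to 20 still images per second, which is why the resulting stream can effectively be treated as moving-image data.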
The image pickup device 12 is connected to the circuit 13 by a connection line 12a. The circuit 13 is also connected to the first light source 31a and the second light source 31b by a connection line (not shown).
The circuit 13 receives the image data generated by the image sensor 12 via the connection line 12a. The circuit 13 performs any processing required before the video signal is output to the outside, such as brightness adjustment and, if necessary, analog-to-digital conversion. The circuit 13 is also configured to control the timing with which the first light sources 31a and second light sources 31b, connected to it by connection lines (not shown), are turned on and off; this timing will be described later.
The circuit 13 is connected to the output terminal 14 via a connection line 13a. The output terminal 14 connects to the computer device 100 via a cable 16 (not shown). The connection between the cable 16 and the output terminal 14 may be made in any way; it will often be convenient to use, for example, USB or another standardized connection method. The output of the moving image data generated by the camera 1 to the computer device 100 need not be wired as in this embodiment. If the moving image data is output wirelessly, the camera 1 will be provided, in place of the output terminal 14, with a known or well-known transmission/reception mechanism for communicating with the computer device 100 by, for example, Bluetooth (trademark). Of course, the connection to the computer device 100 may also be made available both wired and wirelessly.
The circuit 13 is also connected to the switch 15 described above by a connection line 15a. On receiving the input signal from the switch 15, the circuit 13 causes the image sensor 12 to start imaging and turns the first light sources 31a and second light sources 31b on and off with the timing described later.
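The behavior just attributed to the circuit 13 (start imaging on the switch input, then keep exactly one of the two light sources lit, swapping them frame by frame) can be sketched as a small state machine. This is purely illustrative; the class and method names are ours, and a real implementation would be circuitry or firmware driving the LED lines and the sensor trigger.

```python
class CircuitSketch:
    """Illustrative model of circuit 13: a switch press starts capture, and
    the two light sources are toggled so exactly one is lit per frame."""

    def __init__(self):
        self.capturing = False
        self.first_on = False   # first light sources 31a
        self.second_on = False  # second light sources 31b

    def press_switch(self):
        # The embodiment lights the first light sources 31a first.
        self.capturing = True
        self.first_on, self.second_on = True, False

    def on_frame_captured(self):
        # After each captured frame, swap which light source group is lit.
        if self.capturing:
            self.first_on, self.second_on = self.second_on, self.first_on
```

Calling `on_frame_captured()` after each generated still image reproduces the alternation described in the operation section later.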
Although the second polarizing plate 33 is, as described above, a polarizing plate made of the same material as the first polarizing plate 32, its function differs from that of the first polarizing plate 32, which converts the natural illumination light emitted from the first light sources 31a into linearly polarized light.
The second polarizing plate 33 has the function of blocking, out of the reflected light produced when the illumination light from the first light sources 31a, linearly polarized by the first polarizing plate 32, is reflected by the object, the linearly polarized component contained in the surface-reflected light, that is, the light reflected at the surface of the object, which will be described in detail later.
Taking the optical axes of the illumination light and the reflected light as the reference, the first polarizing plate 32 and the second polarizing plate 33 are oriented so that the planes of polarization of the linearly polarized light passing through them are orthogonal; in other words, the polarization directions of the first polarizing plate 32 and the second polarizing plate 33 are orthogonal to each other. In this embodiment, if natural light were to pass through the second polarizing plate 33 in the same direction as the reflected light, the plane of polarization of the resulting linearly polarized light would be vertical in FIG. 1.
The image light, which is the light reflected from the object, can reach the image sensor 12 only after passing through the lens 11 and then through the second polarizing plate 33. In other words, image light that cannot pass through the second polarizing plate 33 is not captured by the image sensor 12.
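The reason this arrangement suppresses specular glare is that surface (specular) reflection largely preserves the illumination's polarization, which the crossed second polarizing plate 33 then blocks, while light scattered back from beneath the surface is depolarized and partly passes. A minimal numeric sketch under that idealized assumption (the clean split into two components and the 50% figure for unpolarized light are textbook simplifications, not values from the patent):

```python
def analyzer_output(specular: float, diffuse: float) -> float:
    """Intensity reaching the image sensor 12 through the crossed analyzer,
    assuming the specular component keeps the illumination's linear
    polarization (fully blocked) and the diffuse component is fully
    depolarized (an ideal polarizer passes half of unpolarized light)."""
    blocked_specular = 0.0 * specular  # crossed analyzer removes this
    passed_diffuse = 0.5 * diffuse     # half of depolarized light passes
    return blocked_specular + passed_diffuse
```

Under this idealization, an image taken under the first light sources 31a contains essentially no surface glare, which is what the patent calls the non-reflective image.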
Next, the computer device 100 will be described.
The computer device 100 is a general-purpose computer and may be a commercially available one. In this embodiment the computer device 100 is, though not limited to this, a commercially available tablet. The computer device 100 need not be a tablet as long as it has the configuration and functions described below; it may instead be a smartphone, a notebook computer, a desktop computer, or the like, and in those cases too it may be commercially available. An example of such a tablet is the iPad (trademark) series manufactured and sold by Apple Japan LLC; an example of such a smartphone is the iPhone (trademark) series manufactured and sold by the same company.
The appearance of the computer device 100 is shown in FIG.
The computer device 100 includes a display 101. The display 101 is for displaying still images, moving images, or generally both, and a known or well-known display can be used; it is, for example, a liquid crystal display. The computer device 100 also includes an input device 102, with which the user makes desired inputs to the computer device 100. A known or well-known input device can be used. The input device 102 of the computer device 100 in this embodiment is of a button type, but it is not limited to this; a numeric keypad, a keyboard, a trackball, a mouse, and the like may also be used. In particular, if the computer device 100 is a notebook or desktop computer, the input device 102 will typically be a keyboard, a mouse, or the like. Furthermore, if the display 101 is a touch panel, the display 101 doubles as the input device 102, as it does in this embodiment.
The hardware configuration of the computer device 100 is shown in FIG.
The hardware includes a CPU (central processing unit) 111, a ROM (read only memory) 112, a RAM (random access memory) 113, and an interface 114, which are connected to each other by a bus 116.
The CPU 111 is an arithmetic unit that performs arithmetic operations. The CPU 111 executes a process described later, for example, by executing a computer program recorded in the ROM 112 or the RAM 113. Although not shown, the hardware may be equipped with an HDD (hard disk drive) or other large-capacity recording device, and the computer program may be recorded on the large-capacity recording device.
The computer programs referred to here include at least a computer program for causing the computer device 100, operating in cooperation with the camera 1, to execute the process of transmitting image data generated as described later to the place where image diagnosis is performed. This computer program may have been pre-installed on the computer device 100, or may have been installed after the computer device 100 was shipped. It may be installed on the computer device 100 via a predetermined recording medium, not shown, such as a memory card, or via a network such as a LAN or the Internet.
The ROM 112 records the computer programs and data needed for the CPU 111 to execute the processing described later. The computer programs recorded in the ROM 112 are not limited to these; if the computer device 100 is a tablet, the computer programs and data needed for the computer device 100 to function as a tablet, for example for handling e-mail, are also recorded. The computer device 100 is also capable of browsing web pages on the Internet, and may be equipped with a known or well-known web browser for that purpose.
The RAM 113 provides a work area required for the CPU 111 to perform processing. In some cases, for example, at least a part of the above-mentioned computer program or data may be recorded.
The interface 114 exchanges data between the outside and the CPU 111, RAM 113, and so on connected to the bus 116. The display 101 and input device 102 described above are connected to the interface 114. Data describing operations made on the input device 102 is passed from the interface 114 to the bus 116, and, as is well known, image data for displaying images on the display 101 is output from the interface 114 to the display 101. The interface 114 also accepts image data from the cable 16 described above (more precisely, from an input terminal, not shown, of the computer device 100 to which the cable 16 is connected); image data arriving from the cable 16 is sent from the interface 114 to the bus 116.
A transmission/reception mechanism is connected to the interface 114. The transmission/reception mechanism can perform, for example, short-range wireless communication when the computer device 100 communicates with the camera 1 wirelessly. It can also perform Internet communication, allowing image data received from the camera 1 to be transmitted over an Internet connection to the place where image diagnosis is performed.
When the CPU 111 executes the computer program, functional blocks such as those shown in FIG. 5 are generated inside the computer device 100. The functional blocks described below may be generated by the above-mentioned computer program alone, which makes the computer device 100 function as described above, or they may be generated by that computer program in cooperation with the OS and other computer programs installed on the computer device 100.
In the computer device 100, an input unit 121, a control unit 122, an image data recording unit 123, and an output unit 124 are generated in relation to the functions of the present invention.
The input unit 121 receives data from the interface 114. The data it receives are the processing selection data input from the input device 102 and the image data input from the cable 16; on receiving these via the interface 114, the input unit 121 passes them to the control unit 122. The processing selection data is data for selecting whether the image data is to be recorded within the computer device 100 or transmitted from the computer device 100 to the place where image diagnosis is performed.
The control unit 122 may receive the processing selection data and the image data described above. If the processing selection data specifies that the image data be recorded within the computer device 100, the control unit 122 records the image data in the image data recording unit 123; if it specifies that the image data be transmitted from the computer device 100 to the place where image diagnosis is performed, the control unit 122 sends the image data to the output unit 124.
The image data recording unit 123 is a recording area, normally part of the RAM 113, for recording image data as described above. The image data is recorded in the image data recording unit 123 together with identification information specifying which subject the image data belongs to.
The output unit 124 has the function of outputting data, including image data, to the outside via the interface 114. For example, if the processing selection data selects transmitting the image data from the computer device 100 to the place where image diagnosis is performed, the output unit 124 sends the image data received from the control unit 122, via the interface 114, to the transmission/reception mechanism, which transmits the image data over the Internet to the place where image diagnosis is performed. The output unit 124 also sends image data, as needed, via the interface 114 to the display 101, in which case an image based on that image data is displayed on the display 101.
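The division of labor among the functional blocks above amounts to a simple routing decision driven by the processing selection data. The following sketch is illustrative only; the function and parameter names are ours, and the recording unit and transmission mechanism are stubbed as plain lists.

```python
def route_image_data(selection: str, image_data: bytes,
                     record_store: list, send_queue: list) -> None:
    """Illustrative model of control unit 122's routing: 'record' stores the
    image data locally (image data recording unit 123); 'send' passes it to
    the output unit 124 side for transmission to the diagnosis site."""
    if selection == "record":
        record_store.append(image_data)
    elif selection == "send":
        send_queue.append(image_data)
    else:
        raise ValueError("unknown processing selection: " + selection)
```

In the actual device the "send" branch would hand the data to the transmission/reception mechanism, which forwards it over the Internet.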
Next, the usage and operation of the camera 1 and the computer device 100 described above will be described.
When the camera 1 and the computer device 100 are used, first, as described above, the camera 1 and the computer device 100 are connected by the cable 16.
Further, the user (which may be the subject himself / herself) launches the above-mentioned computer program recorded in the computer device 100, operates the input device 102, and inputs the processing selection data. As described above, the process selection data is data for selecting whether to record the image data in the computer device 100 or to send the image data from the computer device 100 to the place where the image diagnosis is performed. .. For example, the display 101 of the computer device 100 displays an image prompting the user to input the processing selection data, and the user inputs the processing selection data according to the instruction by the image. The display of such an image on the display 101 is performed by, for example, data generated by the control unit 122 and sent from the control unit 122 to the display 101 via the output unit 124 and the interface 114.
The processing selection data input from the input device 102 is input to the control unit 122 via the interface 114 and the input unit 121. Although not limited to this, in this embodiment, it is assumed that the processing selection data input by the user selects to transmit the image data from the computer device 100 to the place where the image diagnosis is performed.
In this state, the subject himself or herself, or a user such as a doctor or a person in charge at a facility, grips the grip portion 10 of the camera 1 and directs the opening 21 in its head 20 toward the subject's mouth.
The user then presses the switch 15.
The circuit 13 then causes the image pickup element 12 to start imaging and begins lighting the first light source 31a and the second light source 31b alternately. Either the first light source 31a or the second light source 31b may be lit first, but in this embodiment the first light source 31a is lit first.
The imaging interval of the image pickup element 12 in this embodiment is, although not limited to this, 50 milliseconds. In this embodiment, the imaging timing of the image pickup element 12 and the on/off timing of the first light source 31a and the second light source 31b are synchronized with each other. More specifically, if the first light source 31a was lit and the second light source 31b was off at the timing when the image pickup element 12 captured an image and generated the image data of a certain still image, then at the timing when the next image data is generated the second light source 31b is lit and the first light source 31a is off. At the timing when the image data after that is generated, the first light source 31a is lit and the second light source 31b is off, and at the timing after that, the second light source 31b is lit and the first light source 31a is off. In this way, among the image data of the still images generated by the camera 1, those based on the illumination light from the first light source 31a and those based on the illumination light from the second light source 31b alternate.
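The synchronization described above amounts to selecting a light source by frame parity. The following is a minimal sketch of that logic; the frame numbering and function names are assumptions made for illustration, not the actual circuit design.

```python
# Sketch of the alternating light-source control synchronized with the
# 50 ms imaging interval. Even frames use the first light source 31a
# (polarized illumination, non-reflective image); odd frames use the
# second light source 31b (unpolarized illumination, reflective image).
FRAME_INTERVAL_MS = 50

def light_source_for_frame(frame_index: int) -> str:
    """Return which light source is lit for a given frame (31a lights first)."""
    return "31a" if frame_index % 2 == 0 else "31b"

def capture_sequence(num_frames: int):
    """Simulate the capture: (timestamp_ms, light source, image type) per frame."""
    frames = []
    for i in range(num_frames):
        source = light_source_for_frame(i)
        image_type = "non-reflective" if source == "31a" else "reflective"
        frames.append((i * FRAME_INTERVAL_MS, source, image_type))
    return frames

frames = capture_sequence(4)
```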
What we want to image in this embodiment are the posterior pharyngeal wall, the left and right anterior palatine arches, and the uvula in the subject's oral cavity; in the normal state, however, these parts, for example the posterior pharyngeal wall, are hidden behind the tongue and cannot be seen.
Therefore, the subject, toward whose oral cavity the opening 21 of the camera 1 is directed, inhales while voicing. It is preferable to inhale as strongly as possible while making as loud a voice as possible. Then, for a short time, around 0.5 seconds, the base of the tongue drops and the posterior pharyngeal wall, the left and right anterior palatine arches, and the uvula become visible.
The image pickup element 12 of the camera 1 thereby continuously generates image data of still images including the posterior pharyngeal wall, the left and right anterior palatine arches, and the uvula. In the image data generated continuously every 50 milliseconds, as described above, items based on the illumination light from the first light source 31a and items based on the illumination light from the second light source 31b alternate.
What the image based on the illumination light emitted from the first light source 31a and the image based on the illumination light emitted from the second light source 31b look like will be explained with reference to FIGS. 6 and 7.
FIG. 6 conceptually shows the nature of the reflected light captured by the image pickup element 12 under the illumination light from the first light source 31a.
FIG. 6(A) shows the behavior of surface reflected light, which is light reflected at the outermost surface of an object wet with body fluid such as saliva (the mucous membranes of the four parts mentioned above); FIG. 6(B) shows the behavior of internally reflected light, which is light that enters slightly below the surface of the wet object and is, in effect, reflected at the surface of the object itself, excluding the body fluid. The straight line drawn inside each bold circle conceptually indicates the orientation of the polarization plane of the illumination light or reflected light at that point, and a circle with radial lines indicates that the linear polarization of the light at that point is disturbed (for example, it has become like natural light).
The illumination light emitted from the first light source 31a passes through the first polarizing plate 32 and becomes linearly polarized. The polarization plane of this linearly polarized illumination light is horizontal in FIG. 1. Up to this point, FIGS. 6(A) and 6(B) are identical.
The linearly polarized illumination light that has passed through the first polarizing plate 32 strikes the object X and becomes reflected light from the object X. Of the light reflected by the object X, the surface reflected light (light reflected at the surface of the body fluid) ideally retains its polarization state. This linearly polarized surface reflected light is blocked by the second polarizing plate 33, whose polarization direction is orthogonal to that of the first polarizing plate 32, and does not reach the image pickup element 12 (FIG. 6(A)).
On the other hand, the internally reflected light (light that has passed through the body fluid and been reflected at, or slightly beneath, the surface of the mucous membrane) has a disturbed polarization state. The components of the internally reflected light that vibrate in the direction orthogonal to the polarization plane of the surface reflected light pass through the second polarizing plate 33, so about half of its light amount reaches the image pickup element 12 (FIG. 6(B)).
As a result, the only light available to the image pickup element 12 for imaging under the illumination light from the first light source 31a is the internally reflected light. This means that an image generated by the image pickup element 12 under illumination light originating from the first light source 31a is a matte, glare-free, non-reflective image.
FIG. 7 conceptually shows the nature of the reflected light captured by the image pickup element 12 under the illumination light from the second light source 31b.
FIGS. 7(A) and 7(B) show the behavior of the surface reflected light and of the internally reflected light, respectively, as in FIGS. 6(A) and 6(B). The symbols indicating the orientation of the polarization plane and the disturbance of linear polarization follow FIG. 6.
The illumination light emitted from the second light source 31b does not pass through the first polarizing plate 32. It therefore travels toward the object X as natural light.
This natural illumination light strikes the object X and becomes reflected light from the object X. Of the light reflected by the object X, both the surface reflected light and the internally reflected light remain natural light (FIGS. 7(A), (B)).
Consequently, for the same reason as explained with reference to FIG. 6(B), about half of the light amount of both the surface reflected light and the internally reflected light, whose polarization states are disturbed, passes through the second polarizing plate 33 and reaches the image pickup element 12 (FIGS. 7(A), (B)).
As a result, the light used by the image pickup element 12 to capture an image under the illumination light from the second light source 31b includes both surface reflected light and internally reflected light. This means that an image generated by the image pickup element 12 under illumination light originating from the second light source 31b is a glossy, reflective image with glare.
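The intensity figures above ("blocked" and "about half") follow from Malus's law, I = I0·cos²θ, with unpolarized light transmitting on average half its intensity through a polarizer. The following numerical sketch illustrates that general optics; it is an aid to the explanation, not code from the original disclosure.

```python
# Malus's law sketch for the two illumination paths described in the text.
import math

def malus_transmission(theta_deg: float) -> float:
    """Fraction of linearly polarized light transmitted by a polarizer whose
    axis is rotated theta degrees from the light's polarization plane."""
    return math.cos(math.radians(theta_deg)) ** 2

# Unpolarized (natural or depolarized) light: cos^2 averages to 1/2.
UNPOLARIZED_TRANSMISSION = 0.5

# First light source 31a: surface reflection stays linearly polarized, and the
# second polarizing plate 33 is crossed (90 degrees), so it is blocked.
surface_via_crossed = malus_transmission(90)   # effectively zero

# Depolarized internal reflection behaves like natural light: about half passes.
internal_via_plate = UNPOLARIZED_TRANSMISSION
```

With the second light source 31b, both reflection components are unpolarized, so both transmit about half, which is why that image keeps its surface glare.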
The still-image data continuously generated by the image pickup element 12 is sent to the circuit 13 via the connection line 12a, given appropriate processing (brightness adjustment and the like) in the circuit 13 as necessary, and then reaches the output terminal 14 via the connection line 13a. From the output terminal 14 it reaches the computer device 100 via the cable 16.
The image data is a series of still image data. The still image data is sent from the interface 114 through the input unit 121 to the control unit 122, and further reaches the output unit 124.
The output unit 124 sends the image data via the interface 114 to the transmission/reception mechanism, and the image data is sent from the transmission/reception mechanism via the Internet to the place where image diagnosis is performed. Every other item of image data is for a non-reflective image, with the items in between for reflective images.
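Because non-reflective and reflective frames alternate in the stream, the receiving side can separate them by index parity. A minimal sketch follows; the function name and the list representation of the stream are assumptions for illustration.

```python
def deinterleave(frames):
    """Split an alternating frame stream into (non_reflective, reflective).

    Assumes the first frame was captured under the first (polarized) light
    source 31a and is therefore non-reflective, as in the embodiment where
    the first light source is lit first.
    """
    non_reflective = frames[0::2]  # even indices: first light source 31a
    reflective = frames[1::2]      # odd indices: second light source 31b
    return non_reflective, reflective

stream = ["n0", "r0", "n1", "r1", "n2"]
non_reflective, reflective = deinterleave(stream)
```

The same split lets a doctor or an automatic diagnosis device rebuild a non-reflective-only or reflective-only moving image from the combined stream.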
The place where image diagnosis is performed is, for example, a computer device that a doctor can access if a doctor performs the image diagnosis, or, if an automatic diagnosis device performs the diagnosis, the automatic diagnosis device itself or a computer device that the automatic diagnosis device can access.
The image data sent to the place where image diagnosis is performed may consist of one, or a few, items of still-image data each for the reflective image and the non-reflective image. Alternatively, it may be a large number of items of still-image data, amounting to what could be called moving image data, in which reflective and non-reflective still images alternate. Even in the latter case, the doctor or the automatic diagnosis device can, as appropriate, extract only the image data of the non-reflective images to generate a non-reflective moving image, extract only the image data of the reflective images to generate a reflective moving image, and select, one by one or several at a time, still-image data suited to image diagnosis from among the non-reflective and reflective image data, and then perform image diagnosis. Of course, such selection of still images suited to image diagnosis may also be performed by the camera 1 or the computer device 100, and a technique such as that described in the second embodiment can be applied for making such a selection.
Although it of course depends on the configuration of the system, the result of the diagnosis performed by the doctor or the automatic diagnosis device can also be returned from the doctor or the automatic diagnosis device to the computer device 100 via the Internet.
When the processing selection data specifies that the image data is to be recorded in the computer device 100, the image data is recorded in the image data recording unit 123 in the computer device 100. This image data is sent at an appropriate time to the place where image diagnosis is performed and used for image diagnosis.
<Modification 1>
The camera 1 of the first embodiment was not used alone but in combination with a computer device 100 that transmits the image data generated by the camera 1 via the Internet to the place where image diagnosis is performed.
However, smartphones and tablets, which are examples of present-day computer devices 100, generally include a camera. It is therefore also possible to have the computer device 100 of the first embodiment double as the camera 1 of the first embodiment. Modification 1 is such an example.
The computer device 100 of Modification 1 may be the same as that described in the first embodiment. Although not limited to this, a lens 104 and a light source 105 are provided on the back side of the computer device 100 shown in FIG. 1, as shown in FIG. 8(A).
The lens 104 is exposed from the housing of the computer device 100 and forms part of the camera included in the computer device 100. As is well known, behind the lens 104 there is an image pickup element, not shown. As in the first embodiment, the image pickup element is a CCD or CMOS sensor and can capture moving images, that is, continuous still images. The imaging interval for still images can be adjusted as appropriate by a function of the computer program installed in the computer device 100, and it is of course possible to use the same imaging interval as that of the image pickup element of the first embodiment.
The image captured by the image pickup element can be displayed on the display 101 in substantially real time by a known or well-known mechanism.
The light source 105 is exposed on the surface of the housing of the computer device 100. The illumination light emitted by the light source 105 may be the same as in the first embodiment. However, Modification 1 has only one light source, which remains lit while the image pickup element is capturing images.
The computer device 100 of Modification 1 is used in combination with a polarizing plate 140 as shown in FIG. 8(B).
The polarizing plate 140 comprises a first polarizing plate 141 and a second polarizing plate 142 which, although not limited to this, are both rectangular and together form a horizontally long rectangle.
The first polarizing plate 141 and the second polarizing plate 142 may or may not be integrated; in this embodiment they are integrated. The first polarizing plate 141 corresponds to the first polarizing plate 32 in the first embodiment, and the second polarizing plate 142 corresponds to the second polarizing plate 33 in the first embodiment. The polarization directions of the first polarizing plate 141 and the second polarizing plate 142 are orthogonal to each other; for example, the polarization direction of the first polarizing plate 141 is along the upper side of the housing, and that of the second polarizing plate 142 is along the lateral side of the housing. The horizontal and vertical lines drawn on the first polarizing plate 141 and the second polarizing plate 142 in FIG. 8(A) indicate their polarization directions.
The polarizing plate 140 can be detachably fixed to the back surface of the housing of the computer device 100. The polarizing plate 140 in Modification 1 can be detachably fixed to the housing of the computer device 100 by a known or well-known adhesive, applied to its back surface in FIG. 8(B), that allows attachment and detachment at least once, and if possible several tens of times. Such fixing need not rely on an adhesive, and can be performed by any appropriate method such as double-sided tape, a magnet, a catch, or mechanical fitting.
When the polarizing plate 140 is fixed to the housing of the computer device 100, the first polarizing plate 141 is located in front of the light source 105, like the first polarizing plate 32 of the first embodiment. Likewise, when the polarizing plate 140 is fixed to the housing of the computer device 100, the second polarizing plate 142 is located in front of the lens 104, that is, in front of the image pickup element (not shown), like the second polarizing plate 33 of the first embodiment. The size and shape of the first polarizing plate 141 and the second polarizing plate 142 in the polarizing plate 140 are designed so that the polarizing plate 140 can be positioned in front of the light source 105 and the lens 104 of the computer device 100 while maintaining this positional relationship.
When imaging is performed by the computer device 100 with such a polarizing plate 140 attached, the illumination light emitted from the light source 105 always passes through the first polarizing plate 141 on its way to the object to be imaged. On the other hand, the image light produced when the illumination light strikes the object always passes through the second polarizing plate 142 before reaching the image pickup element via the lens 104. This state is the same as the state shown in FIG. 6 in the first embodiment, so the image captured by the image pickup element of the computer device 100 of Modification 1 is a non-reflective image.
The hardware configuration of the computer device 100 of Modification 1 is the same as that of the computer device 100 of the first embodiment shown in FIG. 4. However, unlike in the first embodiment, where the image data input to the interface 114 was sent from the camera 1 outside the computer device 100, it is sent from the image pickup element (not shown) inside the computer device 100.
The functional blocks generated in the computer device 100 of Modification 1 are also the same as in the first embodiment, as shown in FIG. 5. The function of each functional block in Modification 1 may be substantially the same as in the first embodiment, and in this embodiment it is. However, in the computer device 100 of Modification 1, the continuous still images being captured by the image pickup element are displayed on the display 101 of the computer device 100 in substantially real time, as a de facto moving image. This can be realized, for example, by having the control unit 122, which receives the image data of the still images continuously generated by the image pickup element via the interface 114 and the input unit 121, send it to the display 101 via the output unit 124 and the interface 114. Note that in the first embodiment as well, it is possible to adopt a configuration in which the image being captured by the camera 1 is displayed on the display 101 of the computer device 100 in substantially real time.
The method of using the computer device 100 of Modification 1 is as follows.
First, as described in the first embodiment, the processing selection data is input using the input device 102 of the computer device 100. The content of the processing selection data can be of the same two kinds as in the first embodiment, and in this embodiment it is.
Next, the subject himself or herself, or a doctor or the like who images the subject, directs the lens 104, which forms part of the camera of the computer device 100, toward the subject's oral cavity. Imaging is then started when the subject or the doctor or the like operates a button, not shown, displayed for example on the screen of the computer device 100, expressing the intention to start imaging with the image pickup element. In Modification 1, the light source 105 is lit at the same time as imaging starts.
As in the first embodiment, when the subject inhales while voicing, the base of the subject's tongue drops, and the posterior pharyngeal wall, the left and right anterior palatine arches, and the uvula become visible in the oral cavity.
The image pickup element can thereby capture still images, continuously or as a moving image, including the subject's posterior pharyngeal wall, left and right anterior palatine arches, and uvula. Although it depends on the specifications of the camera of the computer device 100, in Modification 1 it is assumed for now that the image pickup element and the lens 104 of the computer device 100 can capture images of the range described above.
The image pickup element continuously generates image data of still images including the posterior pharyngeal wall, the left and right anterior palatine arches, and the uvula. The image data generated continuously, for example every 50 milliseconds, is, as described above, image data of non-reflective images.
The still-image data continuously generated by the image pickup element is sent to the control unit 122 via the interface 114 and the input unit 121 and, as described above, sent from the control unit 122 to the display 101 via the output unit 124 and the interface 114 in substantially real time.
The display 101 shows, in substantially real time, a de facto moving image of the non-reflective images while the image pickup element is capturing them. As shown in FIG. 8, if in the housing of the computer device 100 the display 101 is on the side opposite the lens 104, that is, if the camera of the computer device 100 is a so-called rear ("out") camera, a third party other than the subject can, by imaging while watching the moving image shown on the display 101, constantly confirm whether the subject's posterior pharyngeal wall, left and right anterior palatine arches, and uvula appear in the image.
Conversely, if the camera of the computer device 100 adopts a so-called front ("in") camera configuration, in which the lens 104 and the display 101 are on the same face of the housing, the subject holding the computer device 100 and performing the imaging can himself or herself check the moving image being captured on the display 101.
The image data of the non-reflective images is also sent from the output unit 124 to the transmission/reception mechanism and, depending on the content of the processing selection data, is sent from there via the Internet to the place where image diagnosis is performed, as in the first embodiment.
Of course, when the processing selection data specifies that the image data is to be recorded in the computer device 100, the image data is recorded in the image data recording unit 123 in the computer device 100.
The method of image diagnosis based on the image data is as described in the first embodiment, but in the case of Modification 1 the image diagnosis uses non-reflective images only.
<< Second Embodiment >>
The second embodiment will be described.
The camera 1 of the second embodiment, like the camera 1 of the first embodiment, is for capturing images for image diagnosis. However, the imaging target of the camera 1 of the second embodiment is not necessarily the inside of the oral cavity; it may be the inside of the nasal cavity, and indeed it need not be a part of the body at all. The imaging target of the camera 1 of the second embodiment may be, for example, scenery, sports scenes, automobiles, trains, animals, and the like, as with a general single-lens reflex camera or mirrorless camera. The camera 1 of the second embodiment is well suited to imaging fast-moving subjects.
That said, the camera 1 of the second embodiment is assumed here, as in the first embodiment, to be a camera for performing image diagnosis of respiratory infections.
The camera 1 of the second embodiment is configured as shown in FIG. 1, as in the first embodiment, and is used together with the computer device 100, as in the first embodiment.
The configuration of the camera 1 of the second embodiment can be the same as in the first embodiment except for the circuit 13, and in this embodiment it is.
The differences are that the circuit 13 of the camera 1 of the second embodiment incorporates an overwrite recording unit that did not exist in the first embodiment, and that the operation of the camera 1 when the switch 15 is operated differs from that of the first embodiment.
The image sensor 12 and the lens 11 of the camera 1 of the second embodiment need only be capable of imaging at least the posterior pharyngeal wall or a part of it; in this embodiment, as in the first embodiment, they can image the posterior pharyngeal wall, the left and right anterior palatine arches, and the uvula.
The overwrite recording unit is a recording medium that records the image data continuously generated by the image sensor 12. Specifically, it records the image data of the still images continuously generated by the image sensor 12 for a predetermined length of time, overwriting the oldest data first. In this embodiment the overwrite recording unit is a ring buffer, but the function of recording a predetermined time's worth of still-image data while overwriting the oldest data first can also be achieved without a ring buffer, for example with ordinary RAM.
The time interval at which the image sensor 12 captures images and generates still-image data can follow the first embodiment. The image data held in the overwrite recording unit covers, counting back from the current moment, 0.3 to 1 second, preferably 0.5 seconds ± 0.1 seconds. If image data going back that far can be selected, as described later, then when the switch 15 described later is operated, the still image that the person operating the switch 15 wanted to capture will, with high probability, be present in the overwrite recording unit. In this embodiment, the overwrite recording unit is always kept in a state in which the still-image data of the past 0.5 seconds is recorded. If the image sensor 12 captures an image and generates still-image data every 50 milliseconds, the overwrite recording unit always holds 10 items of image data. Although not limited to this, in this embodiment the overwrite recording unit is assumed to hold the still-image data of the past 0.5 seconds, and the image-data generation interval of the image sensor is assumed to be 50 milliseconds.
The switch 15 of the first embodiment had the function of causing the image sensor 12 to start imaging and of starting the turning on and off of the first light source 31a and the second light source 31b.
By contrast, the switch 15 of the second embodiment has the function of stopping, at the moment it is operated, the overwriting of the image data recorded in the overwrite recording unit. However, the image data generated at the image sensor 12's first imaging timing immediately after the switch 15 is operated may still be recorded in the overwrite recording unit, overwriting the oldest image data.
The computer device 100 of the second embodiment is basically the same as the computer device 100 of the first embodiment. Its hardware configuration is the same as that of the computer device 100 of the first embodiment shown in FIG. 4.
The functional blocks generated in the computer device 100 of the second embodiment are also the same as in the first embodiment, as shown in FIG. 5. The functions of the individual functional blocks in the second embodiment may be substantially the same as in the first embodiment, and in this embodiment they are. However, in the computer device 100 of the second embodiment, as in the computer device 100 of Modification 1, the continuous still images being captured by the image sensor 12 of the camera are displayed on the display 101 of the computer device 100 in substantially real time, as a de facto moving image. That is, the image data of the still images continuously generated by the image sensor 12 of the camera 1 is sent to the computer device 100 via the cable 16. The image data sent to the computer device 100 is received by the control unit 122 via the interface 114 and the input unit 121, and sent to the display 101 via the output unit 124 and the interface 114. As a result, the continuous still images being captured by the image sensor 12 of the camera are displayed on the display 101 of the computer device 100 in substantially real time, as a de facto moving image.
The control unit 122 of the computer device 100 also functions as the selection means in the second invention of the present application. That function will be described later. To realize it, as will be described, the control unit 122 of the computer device 100 can read out all the image data recorded in the overwrite recording unit of the camera 1 via the cable 16, the interface 114, and the input unit 121.
The usage and operation of the camera 1 and the computer device 100 of the second embodiment will now be described.
When the camera 1 and the computer device 100 of the second embodiment are used, they are connected by the cable 16, as in the first embodiment.
In the second embodiment, in that state, the image sensor 12 of the camera 1 starts imaging, and the first light source 31a and the second light source 31b start to light alternately. Of course, it is also possible to set, as the condition for the camera 1 to start imaging with the image sensor 12 and alternately lighting the first light source 31a and the second light source 31b, the operation of a switch other than the switch 15 provided on the camera 1, or an operation different from the normal operation of the switch 15, such as a long press of the switch 15.
As in the first embodiment, around the time imaging of the subject's oral cavity with the camera 1 is started, the same processing selection data as in the first embodiment is input.
As in the first embodiment, the image sensor 12 of the second embodiment alternately generates image data of still images that are non-reflective images and image data of still images that are reflected images, every 50 milliseconds.
Image data is generated one item after another and recorded in the overwrite recording unit. Once 0.5 seconds have passed since the first image data was generated, the overwrite recording unit is, as described above, always kept in a state in which the 10 items of data of the past 0.5 seconds are recorded. The image data generated one after another is also sent from the camera 1 to the computer device 100 item by item, and the still images being captured by the image sensor 12 of the camera 1 are displayed on the display 101 one after another in substantially real time. This means that a de facto moving image of what the image sensor 12 is capturing is displayed on the display 101 in substantially real time.
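With a 50 ms generation interval and alternating illumination, the 0.5-second window therefore holds 10 frames, 5 of each type in alternation. A minimal sketch of this arithmetic (the labels are illustrative):

```python
INTERVAL_MS = 50                 # image-data generation interval of the sensor
WINDOW_MS = 500                  # 0.5 s retained in the overwrite recording unit

n_frames = WINDOW_MS // INTERVAL_MS
# The two light sources alternate, so the frame types alternate as well.
frame_types = ["non-reflective" if i % 2 == 0 else "reflective"
               for i in range(n_frames)]

print(n_frames)                              # 10
print(frame_types.count("non-reflective"))   # 5
print(frame_types.count("reflective"))       # 5
```

This matches the later description in which the 10 buffered still images split into 5 non-reflective and 5 reflective images.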
In the second embodiment, the subject himself or herself, or a third party such as a doctor imaging the subject, holds the camera 1 in one hand and the computer device 100 in the other. Using both hands, he or she points the opening 21 of the camera 1 into the subject's oral cavity and watches the display 101 of the computer device 100.
By doing so, the subject, or the third party such as a doctor, who is performing the imaging can operate the camera 1 while checking the current moving image shown on the display 101.
The subject inhales while vocalizing. The posterior pharyngeal wall and its surroundings then come into view inside the oral cavity, observable from outside it. The subject, or the third party such as a doctor, performing the imaging confirms that moment and operates the switch 15. The timing for operating, that is, pressing, the switch 15 may simply be the moment the person performing the imaging judges suitable for imaging the posterior pharyngeal wall, the left and right anterior palatine arches, and the uvula. In other words, the person performing the imaging may operate the switch 15 with the same feeling as operating the shutter of an ordinary camera.
When the switch 15 is operated, as described above, the input from the switch 15 is transmitted to the circuit 13, and the overwrite recording unit stops overwriting the image data. As a result, the overwrite recording unit retains the image data of the still images captured by the image sensor 12 during roughly the 0.5 seconds before the moment the switch 15 was pressed.
The still-image data remaining in the overwrite recording unit is highly likely to include image data of a still image that shows the posterior pharyngeal wall, the left and right anterior palatine arches, and the uvula, and that is in focus.
This is because the data recorded there is, except for at least the last item, image data of still images captured before the switch 15 was pressed, while the camera 1 was in a state unlikely to suffer camera shake. Moreover, human reaction time generally involves a delay of about 0.2 seconds (about 0.1 seconds even for the exceptionally quick), so if any of the multiple items of image data covering the 0.5 seconds before the switch 15 was operated can be selected, image data of a still image captured at the moment that should have been captured will almost always be among them. For these reasons, the past image data to be retained in the overwrite recording unit need only cover at least 0.3 seconds and at most 1 second, with about 0.5 seconds ± 0.1 seconds being preferable.
After operating the switch 15, the subject or a third party such as a doctor operates the computer device 100.
In this embodiment, the subject, doctor, or the like follows the instructions of the computer device 100 and operates the input device 102 to have the computer device 100 read in all the image data recorded in the overwrite recording unit of the camera 1. The data representing the instruction "read the image data recorded in the overwrite recording unit", entered from the input device 102, is sent to the control unit 122 via the interface 114 and the input unit 121, and the control unit 122, having received it, reads in via the cable 16 all the image data recorded in the overwrite recording unit of the camera 1, that is, 10 items of image data in this embodiment.
By operating the input device 102 of the computer device 100, the subject, doctor, or the like can display the still images based on the 10 items of image data on the display 101 of the computer device 100, for example one at a time. The still images can be displayed on the display 101 by the control unit 122 sending the image data to the display 101 via the output unit 124 and the interface 114.
Of the 10 still images displayed on the display 101, five are still images that are non-reflective images and the remaining five are still images that are reflected images. The subject, doctor, or the like selects, from the non-reflective still images and from the reflective still images, at least one of each whose imaging range is correct and which is in focus. This selection can be made by input from the input device 102.
If the processing selection data specifies "transmit the image data from the computer device 100 to the place where image diagnosis is performed", then at least one item (at most about one to three) of image data of the non-reflective still images so selected and at least one item (at most about one to three) of image data of the reflective still images so selected are sent, as in the first embodiment, from the control unit 122 via the output unit 124, the interface 114, the transmission / reception mechanism, and the Internet to the place where the image diagnosis is performed.
If the processing selection data specifies "record the image data in the image data recording unit 123", then at least one item of image data of the non-reflective still images and at least one item of image data of the reflective still images are sent, as in the first embodiment, from the control unit 122 to the image data recording unit 123 and recorded there.
The subsequent use of the image data is the same as in the first embodiment.
In this embodiment, the selection, from among the multiple items of image data recorded in the overwrite recording unit after the switch 15 was pressed, of the image data to be transmitted to the place where the image diagnosis is performed (or to be recorded in the image data recording unit 123) was made by manual input at the discretion of the person operating the camera 1 and the computer device 100.
Alternatively, the control unit 122 may automatically perform the reading of the image data from the overwrite recording unit into the control unit 122 after the switch 15 is operated, or the selection of image data from among the multiple items of image data read from the overwrite recording unit into the control unit 122.
As the condition for the control unit 122 to select image data from among the multiple items read in from the overwrite recording unit, the condition that the imaging range is correct and the image is in focus can be used. Whether the imaging range is correct can easily be determined using known or well-known image recognition techniques. Whether the image is in focus can easily be determined using known or well-known edge detection techniques. Artificial intelligence, or a mechanism using artificial intelligence, can also be used for such determination.
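As one concrete form of the edge-detection-based focus check mentioned above (a sketch only; the specification does not fix a particular measure), the variance of a discrete Laplacian is a common sharpness score, and the best-focused frame can then be picked automatically:

```python
import numpy as np

def sharpness(gray):
    """Variance of a discrete Laplacian: higher means more edges, i.e. better focus."""
    lap = (-4.0 * gray[1:-1, 1:-1]
           + gray[:-2, 1:-1] + gray[2:, 1:-1]
           + gray[1:-1, :-2] + gray[1:-1, 2:])
    return float(lap.var())

def pick_sharpest(frames):
    """Return the index of the best-focused frame among 2-D grayscale arrays."""
    return max(range(len(frames)), key=lambda i: sharpness(frames[i]))

# Toy check: a frame with a sharp step edge scores higher than a flat frame.
flat = np.full((32, 32), 0.5)
edged = np.zeros((32, 32))
edged[:, 16:] = 1.0
print(pick_sharpest([flat, edged]))   # 1
```

A check of the imaging range (e.g. whether the posterior pharyngeal wall is visible) would need a separate image recognition step, as the text notes; the sketch above covers the focus condition only.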
In this embodiment, the selection, from among the multiple items of image data recorded in the overwrite recording unit, of the image data to be transmitted to the place where the image diagnosis is performed (or to be recorded in the image data recording unit 123) was performed by the computer device 100. However, it is also possible to perform this selection on the camera 1 side.
If the person operating the camera 1 is to make that selection by manual input, however, the camera 1 itself would need both a display for showing, for example one at a time, all the images from the multiple items of image data recorded in the overwrite recording unit immediately after the switch 15 is operated, and an input device for controlling such display and for entering the selection of image data. On the other hand, if the selection is to be made automatically, then providing, for example in the circuit 13 of the camera 1, a mechanism that automatically performs the processing described above as executed by the control unit 122 of the computer device 100 allows both the display and the input device just mentioned to be omitted.
As explained for Modification 1, it is also possible in the second embodiment to omit the camera 1 and let the computer device 100 double as the camera 1. In this case, since the computer device 100 has both a display and an input device to begin with, there is no problem in having the computer device 100 perform the selection, whether by manual input or automatically, of the image data to be transmitted to the place where the image diagnosis is performed (or to be recorded in the image data recording unit 123) from among the multiple items recorded in the overwrite recording unit.
However, when the camera 1 is omitted because the computer device 100 takes over its functions, all the still images captured by the camera of the computer device 100 are non-reflective images, so the above selection is made only among the image data of non-reflective still images.
<< Third Embodiment >>
The third embodiment describes an automatic diagnostic device for respiratory infections that uses a trained model (hereinafter sometimes simply called the "automatic diagnostic device").
The automatic diagnostic device includes a trained model, as described later. Therefore, to obtain an automatic diagnostic device, it is first necessary to obtain a trained model. In this embodiment, the device needed to obtain the trained model is, for convenience, called the learning device.
As mentioned above, establishing the automatic diagnostic device requires a learning device in addition to the automatic diagnostic device itself. However, since the automatic diagnostic device and the learning device can share the same hardware configuration, they can be combined into a single device by installing appropriate computer programs on it.
Both the automatic diagnostic device and the learning device include a computer device. This computer device is different from the computer device 100 of the first and second embodiments.
The computer devices included in the automatic diagnostic device and in the learning device can have the same configuration, and in this embodiment they do.
(The learning device)
In hardware terms the configuration of the automatic diagnostic device is the same, but the hardware configuration of the learning device will be described first.
As for the hardware configuration, the hardware of the computer device of the third embodiment is entirely conventional and no different from the hardware configuration of the computer device 100 shown in FIG. 4 and described in the first embodiment. However, unlike the computer device 100 of the first embodiment, the computer device of the third embodiment includes an HDD, an SSD (Solid State Drive), or another large-capacity recording device.
The hardware of the computer device constituting the learning device will be described, denoting the large-capacity recording device by reference numeral 115 and leaving the other reference numerals as shown in FIG. 4.
The hardware constituting the learning device includes a CPU 111, a ROM 112, a RAM 113, an interface 114, and a large-capacity recording device 115, which are interconnected by a bus 116. A transmission / reception mechanism is connected to the interface 114 so that communication via the Internet is possible, but this function is idle while the computer device functions as the learning device and operates only while the computer device functions as the automatic diagnostic device. An input device, a display, and the like may be connected to the interface 114, and in practice they usually are, but since they have little significance for a computer device functioning as the learning device or the automatic diagnostic device, they will not be discussed further.
The functions of the CPU 111, ROM 112, RAM 113, interface 114, and bus 116 are the same as in the first embodiment. The large-capacity recording device 115 records a computer program and the data needed to make the computer device function as the learning device. At least part of this computer program and data may instead be recorded in the ROM 112 or the RAM 113.
The computer program for causing the computer device to execute the processing, described later, needed for it to function as the learning device may be preinstalled on the computer device or installed afterwards. This computer program may be installed on the computer device via a predetermined recording medium, not shown, such as a memory card, or via a network such as a LAN or the Internet. Of course, in addition to the above computer program, the computer programs on the device may include an OS and other necessary computer programs.
The computer device described above executes the processing needed to function as the learning device by means of the following functional blocks.
When the CPU 111 executes the computer program, functional blocks such as those shown in FIG. 9 are generated inside the computer device. The following functional blocks may be generated by the above computer program alone, which causes the computer device to execute the processing described below needed for it to function as the learning device, or may be generated by the cooperation of that computer program with the OS or other computer programs installed on the computer device.
In relation to the functions of the present invention, a learning data recording unit 311, a feature amount extraction unit 312, an in-training model unit 313, and a correctness determination unit 314 are generated in the computer device.
 学習データ記録部311には、学習中モデル部313が機械学習に用いるためのデータが記録されている。
 学習データ記録部311に記録されているのデータを概念的に図10に示す。
 学習データ記録部311に記録されているのは、多数の画像データである。
 The learning data recording unit 311 records data to be used by the in-training model unit 313 for machine learning.
 The data recorded in the learning data recording unit 311 is shown conceptually in FIG. 10.
 A large number of items of image data are recorded in the learning data recording unit 311.
 The image data are image data 400, each of which is data on a still image showing the imaged region of a subject, including the posterior pharyngeal wall, the left and right anterior palatine arches, and the uvula. In this example, image data on image 400A, a still image that is a non-reflective image, and image data on image 400B, a still image that is a reflective image, captured at substantially the same time and at substantially the same position (for example, generated and selected using the camera 1 and the computer device 100 described in the second embodiment), are recorded as a pair.
 Each item of image data 400 also has associated with it the data of a tag 410 indicating whether the subject whose imaged region appears in that image data 400 is a person infected with the novel coronavirus, a person infected with another virus or bacterium that causes respiratory infections, or a person not infected with any virus or bacterium that causes respiratory infections. The data of the tag 410 may also include other information such as the subject's sex, age, race, and severity of symptoms. In the example shown in FIG. 10, sex information is included in the tag 410.
 In the non-reflective image 400A, reference sign A denotes the tongue, B the posterior pharyngeal wall, C the left and right anterior palatine arches, and D the uvula.
 The image data 400 may consist only of the non-reflective image 400A and the data of the tag 410. In that case, the image data of the reflective image 400B does not exist in the learning data recording unit 311.
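As a minimal sketch, the paired image data 400 and its associated tag 410 could be represented as the following record. The field and class names are illustrative assumptions; the specification does not prescribe any particular data schema.

```python
from dataclasses import dataclass
from typing import Optional

# Hypothetical layout for one item of image data 400 with its tag 410.
# Names are illustrative only; the specification prescribes no schema.
@dataclass
class Tag410:
    infection_status: str  # "novel_coronavirus" | "other_virus_bacterium" | "not_infected"
    sex: Optional[str] = None        # optional extra information, as in FIG. 10
    age: Optional[int] = None
    race: Optional[str] = None
    symptom_severity: Optional[str] = None

@dataclass
class ImageData400:
    image_400a: bytes              # still image: non-reflective image
    image_400b: Optional[bytes]    # still image: reflective image (may be absent)
    tag: Tag410

record = ImageData400(
    image_400a=b"<bytes of non-reflective image>",
    image_400b=b"<bytes of reflective image>",
    tag=Tag410(infection_status="novel_coronavirus", sex="F"),
)
print(record.tag.infection_status)
```

A record with `image_400b=None` corresponds to the case just described, in which only the non-reflective image 400A and the tag 410 are recorded.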
 The feature quantity extraction unit 312 has the function of reading image data, together with the tag associated with it, from the learning data recording unit 311, and extracting feature quantities from the non-reflective image 400A and the reflective image 400B.
 The feature quantities to be extracted from the non-reflective image 400A and those to be extracted from the reflective image 400B may be the same or different.
 The feature quantities to be extracted from the non-reflective image 400A are the color, the blood-vessel pattern, or both, in each of the posterior pharyngeal wall, the left and right anterior palatine arches, and the uvula appearing in the image 400A. In addition, the unevenness of the posterior pharyngeal wall, or of each of the posterior pharyngeal wall, the left and right anterior palatine arches, and the uvula, in the non-reflective image 400A may also be used as a feature quantity. The same applies to the reflective image 400B.
 For example, the feature quantities to be extracted from the non-reflective image 400A can be both the color and the blood-vessel pattern in each of the posterior pharyngeal wall, the left and right anterior palatine arches, and the uvula appearing in the image 400A, together with the unevenness of the posterior pharyngeal wall. Similarly, the feature quantities to be extracted from the reflective image 400B can be the same as for the non-reflective image 400A: both the color and the blood-vessel pattern in each of the posterior pharyngeal wall, the left and right anterior palatine arches, and the uvula appearing in the image 400B, together with the unevenness of the posterior pharyngeal wall.
 When the feature quantities extracted from the non-reflective image 400A and those extracted from the reflective image 400B are made different, it can be taken into account that the non-reflective image 400A makes it easier to grasp the color and blood-vessel pattern of the posterior pharyngeal wall, the left and right anterior palatine arches, and the uvula, whereas the reflective image 400B makes it easier to grasp the unevenness of each of those regions. For example, at least one of the color and the blood-vessel pattern in each of the posterior pharyngeal wall, the left and right anterior palatine arches, and the uvula can be extracted as a feature quantity from the non-reflective image 400A, while the unevenness of the posterior pharyngeal wall, or of each of the posterior pharyngeal wall, the left and right anterior palatine arches, and the uvula, can be extracted as a feature quantity from the reflective image 400B.
 Although not limited to this, in this embodiment both the color and the blood-vessel pattern in each of the posterior pharyngeal wall, the left and right anterior palatine arches, and the uvula are extracted as feature quantities from the non-reflective image 400A, and the unevenness of the posterior pharyngeal wall is extracted as a feature quantity from the reflective image 400B.
 When the image data 400 recorded in the learning data recording unit 311 contains only the non-reflective image 400A, the feature quantity extraction unit 312 extracts, from the non-reflective image 400A alone, any of the feature quantities described above for the non-reflective image 400A.
 A known or well-known technique may be used to extract, from the non-reflective image 400A, both the color and the blood-vessel pattern in each of the posterior pharyngeal wall, the left and right anterior palatine arches, and the uvula appearing in the image 400A.
 When extracting from the non-reflective image 400A both the color and the blood-vessel pattern in each of the posterior pharyngeal wall, the left and right anterior palatine arches, and the uvula, together with the unevenness of the posterior pharyngeal wall, the regions of the posterior pharyngeal wall, the left and right anterior palatine arches, and the uvula are first extracted from the image 400A using, for example, a known or well-known edge detection technique.
 To extract the color of each of those regions as a feature quantity, the color of each region can be obtained, for example, by averaging the data representing the color of each pixel contained in the region (for example, the RGB values). The Lab color space technique may also be used for color recognition.
 To extract the blood-vessel pattern of each of the regions of the posterior pharyngeal wall, the left and right anterior palatine arches, and the uvula as a feature quantity, higher-precision edge detection is performed within each region to extract the contours of the blood vessels, thereby extracting the blood-vessel pattern as a feature. When the blood-vessel pattern is extracted as a feature, at least one of, for example, the thickness of the blood vessels in each region of the image 400A (for example, the average thickness or the maximum thickness of the blood vessels), the number of blood vessels, the number of blood vessels thinner than a certain thickness, and the total length of the blood vessels can be extracted.
 The unevenness of the posterior pharyngeal wall can likewise be extracted as a feature quantity from the reflective image 400B using a known or well-known technique. In that case, as with the non-reflective image 400A, the region in which the posterior pharyngeal wall appears is first identified in the image 400B using, for example, an edge detection technique.
 Then, within the region of the posterior pharyngeal wall in the image 400B, the unevenness of the posterior pharyngeal wall can be detected by, for example, counting the portions that shine white due to reflection of light off body fluid, treating each such portion as part of a raised area.
 The feature quantity extraction unit 312 sends the feature quantities extracted from the non-reflective image 400A and the reflective image 400B, together with the data of the tag 410 attached to the image data, to the in-training model unit 313.
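The two simplest computations described above, per-region color averaging and counting of white-shining portions, can be sketched as follows on synthetic data. The region mask and the brightness threshold (standing in for "portions shining white due to reflection off body fluid") are illustrative assumptions, not values from the specification.

```python
import numpy as np

def mean_color(image: np.ndarray, mask: np.ndarray) -> np.ndarray:
    """Average RGB value over the pixels selected by a boolean region mask,
    as one way of extracting the color of a region as a feature quantity."""
    return image[mask].mean(axis=0)

def count_bright_spots(image: np.ndarray, mask: np.ndarray, threshold: int = 240) -> int:
    """Count pixels within the region whose channels all exceed a brightness
    threshold, treating each as part of a raised (white-shining) area."""
    region = image[mask]
    return int(np.all(region >= threshold, axis=1).sum())

# Synthetic 4x4 RGB "image": mostly pinkish mucosa, two specular-white pixels.
img = np.full((4, 4, 3), (230, 180, 180), dtype=np.uint8)
img[0, 0] = img[1, 1] = (255, 255, 255)
mask = np.ones((4, 4), dtype=bool)  # pretend the whole image is the region

print(mean_color(img, mask))          # per-channel average color of the region
print(count_bright_spots(img, mask))  # -> 2
```

In practice the mask would come from the edge-detection step that isolates the posterior pharyngeal wall, and the color could equally be averaged in the Lab color space rather than RGB.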
 The in-training model unit 313 is a general artificial intelligence that performs supervised learning.
 The in-training model unit 313 receives as input the feature quantities received from the feature quantity extraction unit 312, together with the tag 410 attached to the image data, and performs learning.
 By inputting into the in-training model unit 313, for example, several tens to several thousands each of feature quantities tagged "infected with the novel coronavirus", feature quantities tagged "infected with another virus or bacterium that causes respiratory infections", and feature quantities tagged "not infected with any virus or bacterium that causes respiratory infections", the unit learns the feature quantities of the image data for subjects in each of those three categories.
 As a result, when the in-training model unit 313 receives the feature quantities of image data in an untagged state, it outputs, on the basis of the parameters built up through learning, an estimation result indicating whether the subject whose imaged region appears in that image data is a person infected with the novel coronavirus, a person infected with another virus or bacterium that causes respiratory infections, or a person not infected with any virus or bacterium that causes respiratory infections.
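The specification leaves the learning algorithm open ("a general artificial intelligence that performs supervised learning"). Purely as an illustration of three-class supervised learning over tagged feature quantities, the following uses a nearest-centroid classifier; the algorithm, the feature axes, and the toy values are all assumptions, not the patented method.

```python
import numpy as np

# Three-class supervised learning over tagged feature vectors, illustrated
# with a nearest-centroid classifier (an assumption; the specification does
# not fix any particular learning algorithm).
LABELS = ["novel_coronavirus", "other_virus_bacterium", "not_infected"]

def train(features: np.ndarray, labels: list) -> dict:
    """Learn one centroid per tag from tagged feature vectors."""
    return {lab: features[[l == lab for l in labels]].mean(axis=0) for lab in LABELS}

def estimate(model: dict, x: np.ndarray) -> str:
    """Return the tag whose learned centroid is closest to an untagged input."""
    return min(model, key=lambda lab: np.linalg.norm(x - model[lab]))

# Toy feature vectors, e.g. (pharyngeal-wall redness, uvula redness):
X = np.array([[0.90, 0.20], [0.85, 0.25],   # coronavirus: red wall, pale uvula
              [0.90, 0.90], [0.95, 0.85],   # other infection: both reddened
              [0.20, 0.20], [0.25, 0.15]])  # not infected: both pale pink
y = ["novel_coronavirus"] * 2 + ["other_virus_bacterium"] * 2 + ["not_infected"] * 2

model = train(X, y)
print(estimate(model, np.array([0.88, 0.20])))  # -> novel_coronavirus
```

Any supervised classifier (for example a neural network) could take the place of `train`/`estimate` here; the input/output contract, tagged feature quantities in, one of the three estimation classes out, is what the passage above specifies.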
 The correctness determination unit 314 is used to further raise the accuracy of the estimation results of the in-training model unit 313.
 In that case, feature quantities without the tag 410 attached are input into the in-training model unit 313 after it has learned to some extent, an estimation result is output, and that result is sent to the correctness determination unit 314. The correctness determination unit 314, in turn, reads from the learning data recording unit 311 the tag 410 that was attached to the feature quantities sent to the in-training model unit 313, and determines whether the estimation result is correct. In accordance with the correctness of the estimation results, or with the proportion of estimation results that are correct, the correctness determination unit 314 adjusts the parameters of the in-training model unit 313 so that its estimation results become more accurate.
 The correctness determination unit 314 and the above processing it performs may, however, be omitted.
 When the accuracy of the estimation results of the in-training model unit 313 hardly improves any further, it can be judged that the in-training model unit 313 has completed machine learning.
 Through this process, the in-training model unit 313 learns the following: in a person not infected with any virus or bacterium that causes respiratory infections, the posterior pharyngeal wall, the left and right anterior palatine arches, and the uvula all show their normal color, generally a pale pink; in a person infected with a virus or bacterium other than the novel coronavirus that causes respiratory infections, the mucosa of the posterior pharyngeal wall, the left and right anterior palatine arches, and the uvula is reddened; and in a person infected with the novel coronavirus, the posterior pharyngeal wall and the left and right anterior palatine arches are reddened, while the uvula remains the same pale pink as in a healthy, uninfected person, or in some cases an even paler pink closer to white. The unit also learns that proliferation and dilation of the capillaries occur in the reddened areas described above.
 The trained model unit 313 that has completed machine learning is the trained model for automatic diagnosis of respiratory infections.
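The stopping condition, training is considered finished once the estimation accuracy hardly improves any further, can be sketched as a simple plateau check. The patience window and minimum improvement are illustrative values; the specification states the criterion only qualitatively.

```python
# Plateau-based stopping rule: learning is judged complete when none of the
# last `patience` accuracy readings improved on the best earlier reading by
# at least `min_delta`. Both parameters are illustrative assumptions.
def has_converged(accuracy_history: list,
                  patience: int = 3, min_delta: float = 0.001) -> bool:
    if len(accuracy_history) <= patience:
        return False
    best_before = max(accuracy_history[:-patience])
    recent_best = max(accuracy_history[-patience:])
    return recent_best - best_before < min_delta

print(has_converged([0.60, 0.70, 0.78, 0.80]))              # still improving -> False
print(has_converged([0.60, 0.78, 0.81, 0.81, 0.81, 0.81]))  # plateau -> True
```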
(About the automatic diagnostic device)
 Next, the automatic diagnostic device will be described.
 As described above, the automatic diagnostic device is constituted by a computer device. As shown in FIG. 11, the automatic diagnostic device 300 is used while connected, via the Internet 400 as a network, to the computer device 100 to which the camera 1 is connected. The automatic diagnostic device 300, in cooperation with the camera 1 and the computer device 100, constitutes an automatic diagnostic system. Of course, a plurality of computer devices 100, each connected to a camera 1, can also be connected to the automatic diagnostic device 300. In some cases, a computer device 100 having a configuration as described in Modification 1, which also has the functions of the camera 1, may be connected to the automatic diagnostic device via the Internet 400.
 The camera 1 is assumed to be the one described in the second embodiment. In the third embodiment, image data of one non-reflective image and one reflective image, each showing the imaged region of the subject, are sent from the camera 1 to the automatic diagnostic device 300 via the computer device 100 and the Internet 400.
 The hardware configuration of the computer device constituting the automatic diagnostic device 300 is the same as that of the learning device. However, the computer programs and data recorded mainly in the large-capacity recording device 115 differ between the automatic diagnostic device 300 and the learning device.
 The computer program for causing the computer device to execute the processing, described below, that is necessary for it to function as the automatic diagnostic device may be preinstalled on the computer device or may be installed afterwards. The installation of this computer program on the computer device may be performed via a predetermined recording medium, not shown, such as a memory card, or via a network such as a LAN or the Internet.
 The computer device constituting the automatic diagnostic device described above executes the processing necessary for functioning as the automatic diagnostic device by means of the following functional blocks.
 When the CPU 111 executes the computer program, functional blocks as shown in FIG. 12 are generated inside the computer device. The following functional blocks may be generated by the function of the above computer program alone, which causes the computer device to execute the processing described below that is necessary for it to function as the automatic diagnostic device, or they may be generated by the cooperation of that computer program with the OS or other computer programs installed on the computer device.
 In the computer device, an input unit 321, a feature quantity extraction unit 322, a trained model unit 323, and an output unit 324 are generated in relation to the functions of the present invention.
 The input unit 321 is connected to the interface. The interface is further connected to a transmission/reception mechanism, not shown. The transmission/reception mechanism receives the image data, sent via the Internet 400, of a pair of still images, a non-reflective image and a reflective image showing the imaged region of the subject, generated by the camera 1 and transmitted by the computer device 100.
 The input unit 321 receives from the interface the image data received by the transmission/reception mechanism.
 Like the feature quantity extraction unit 312 of the learning device, the feature quantity extraction unit 322 extracts feature quantities from the received image data of the pair of still images, the non-reflective image and the reflective image.
 The feature quantities extracted by the feature quantity extraction unit 322 are the same as those extracted by the feature quantity extraction unit 312 of the learning device, and the extraction method is also the same.
 The feature quantity extraction unit 322 sends the generated feature quantity data to the trained model unit 323.
 The trained model unit 323 is the in-training model unit 313 of the learning device after machine learning has been completed, and contains the parameters built up through learning.
 Like the in-training model unit 313, the trained model unit 323 outputs an estimation result upon receiving feature quantity data as input. As described above, the estimation result indicates whether the subject whose imaged region appears in the image data of the pair of non-reflective and reflective still images from which the feature quantities were extracted is a person infected with the novel coronavirus, a person infected with another virus or bacterium that causes respiratory infections, or a person not infected with any virus or bacterium that causes respiratory infections.
 Although of course not limited to this, in this embodiment the estimation result is the text data "infected with the novel coronavirus" when the subject is estimated to be infected with the novel coronavirus, the text data "infected with a virus or bacterium other than the novel coronavirus" when the subject is estimated to be infected with a virus or bacterium other than the novel coronavirus that causes respiratory infections, and the text data "not infected" when the subject is estimated not to be infected with any virus or bacterium that causes respiratory infections.
 The estimation result data output from the trained model unit 323 is sent to the output unit 324.
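The mapping from the model's estimation class to the text data returned to the requesting device can be sketched as a simple lookup. The dictionary keys are illustrative class identifiers; the embodiment's display strings (given in Japanese in the original) are rendered here in English.

```python
# Mapping from the trained model unit 323's estimation class to the text data
# sent back, via the output unit 324, to the requesting computer device 100.
# The class identifiers are illustrative; the text values follow the embodiment.
RESULT_TEXT = {
    "novel_coronavirus": "infected with the novel coronavirus",
    "other_virus_bacterium": "infected with a virus or bacterium other than the novel coronavirus",
    "not_infected": "not infected",
}

def estimation_to_text(estimated_class: str) -> str:
    """Turn an estimation class into the text data shown on the display 101."""
    return RESULT_TEXT[estimated_class]

print(estimation_to_text("not_infected"))  # -> not infected
```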
 The output unit 324 is connected to the transmission/reception mechanism via the interface.
 The output unit 324 sends the received estimation result data to the transmission/reception mechanism via the interface. The transmission/reception mechanism transmits the received estimation result data, via the Internet 400, to the computer device 100 that was the source of the image data that triggered the generation of that estimation result.
 The computer device 100 receives the estimation result data through its transmission/reception mechanism. The estimation result data is sent to the input unit 121 via the interface 114, and then to the display 101 via the control unit 122, the output unit 124, and the interface 114.
 The display 101 shows an indication based on the estimation result data, that is, one of "infected with the novel coronavirus", "infected with a virus or bacterium other than the novel coronavirus", or "not infected", based on the three items of text data described above.
 The subject can thereby learn whether he or she is infected with the novel coronavirus, infected with a virus or bacterium other than the novel coronavirus that causes respiratory infections, or not infected with any virus or bacterium that causes respiratory infections.
 If such automatic diagnosis is performed by placing the camera 1 and the computer device 100 in facilities such as schools, restaurants, department stores, supermarkets, or movie theaters, and only uninfected persons are admitted, the interior of the facility becomes a space in which people can feel safe. Moreover, if such a camera 1 and computer device 100 (or the stand-alone computer device 100 described in Modification 1) are placed in each home, family members can, for example, check for novel coronavirus infection before going out.
 Furthermore, because the reddening and the changes in the blood-vessel pattern caused by the immune response to novel coronavirus infection appear earlier than the symptoms of the disease do, the situation in which asymptomatic carriers of the novel coronavirus spread the infection can be effectively prevented.

Claims (23)

  1.  A camera that images the inside of the oral cavity from outside the oral cavity, comprising: a light source that irradiates the inside of the oral cavity with illumination light; a lens that passes image light produced by the illumination light being reflected inside the oral cavity; and an imaging element that generates image data on an image obtained by capturing the image light that has passed through the lens,
     wherein the lens and the imaging element are arranged such that the image captured by the imaging element on the basis of the image light includes the posterior pharyngeal wall, the left and right anterior palatine arches, and the uvula.
  2.  The camera according to claim 1, comprising: a first polarizing plate that passes the illumination light and turns it into linearly polarized light whose polarization direction is a predetermined direction; and a second polarizing plate that passes the image light and whose polarization direction is orthogonal to that of the first polarizing plate.
  3.  The camera according to claim 2, wherein the light source includes a first light source whose illumination light passes through the first polarizing plate before heading into the oral cavity, and a second light source whose illumination light heads into the oral cavity without passing through the first polarizing plate, the second light source emitting light alternatively to the first light source, and
     wherein the imaging element captures both a non-reflective image captured by the image light produced by the illumination light irradiated into the oral cavity from the first light source, and a reflective image captured by the image light produced by the illumination light irradiated into the oral cavity from the second light source.
  4.  The camera according to claim 3, comprising shutter means which, when operated by a user, causes the imaging element to generate image data on an image that is a still image,
     wherein a single operation of the shutter means generates at least one item each of image data of the non-reflective image, which is a still image, and image data of the reflective image, which is a still image.
  5.  A camera comprising a lens that passes image light from an object and an imaging element that generates image data on images obtained by capturing the image light that has passed through the lens,
     wherein the imaging element continuously generates image data on still images at predetermined time intervals, the camera further comprising:
     an overwrite recording unit that records the image data for a predetermined period while overwriting it in order from the oldest;
     shutter means which, when operated by a user, stops the overwriting of the image data on the overwrite recording unit; and
     selection means for selecting, after the shutter means has been operated, any of the image data present on the overwrite recording unit at that point in time.
  6.  The camera according to claim 5, wherein the camera images the inside of the oral cavity from outside the oral cavity,
     the camera comprises a light source that irradiates the inside of the oral cavity with illumination light, and
     the lens and the imaging element are arranged such that the image captured by the imaging element on the basis of the image light includes the posterior pharyngeal wall.
  7.  The camera according to claim 5 or 6, comprising transmission means for transmitting the image data selected by the selection means to the outside.
  8.  The camera according to claim 5 or 6, wherein the overwrite recording unit is a ring buffer.
  9.  The camera according to claim 5, comprising: a first polarizing plate that passes the illumination light and turns it into linearly polarized light whose polarization direction is a predetermined direction; and a second polarizing plate that passes the image light and whose polarization direction is orthogonal to that of the first polarizing plate.
  10.  The camera according to claim 9, wherein the light source includes a first light source whose illumination light passes through the first polarizing plate before heading into the oral cavity, and a second light source whose illumination light heads into the oral cavity without passing through the first polarizing plate, the second light source emitting light alternatively to the first light source, and
     wherein the imaging element captures both a non-reflective image captured by the image light produced by the illumination light irradiated into the oral cavity from the first light source, and a reflective image captured by the image light produced by the illumination light irradiated into the oral cavity from the second light source.
  11.  The first light source and the second light source are lit alternately, at a timing such that the images based on the image data generated by the image sensor are, alternately, a still image under illumination light from the first light source and a still image under illumination light from the second light source.
     The camera according to claim 10.
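Claims 10 and 11 describe the two light sources firing alternately in step with the frame clock, so that consecutive frames form reflection-free/reflection pairs. A sketch of that scheduling (the even/odd assignment is an assumption for illustration; the patent only requires alternation):

```python
def light_source_for_frame(frame_index: int) -> str:
    """Alternate frame by frame: even frames lit by the first
    (cross-polarized, reflection-free) source, odd frames by the
    second (unpolarized) source."""
    return "first" if frame_index % 2 == 0 else "second"


def pair_frames(frames):
    """Group consecutive frames into (reflection-free, reflection) pairs."""
    return list(zip(frames[0::2], frames[1::2]))


schedule = [light_source_for_frame(i) for i in range(4)]
print(schedule)  # → ['first', 'second', 'first', 'second']
```

With this timing, every two consecutive still images give one matched pair of the same throat, one without specular glare and one with it.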
  12.  Further comprising a display that shows a still image based on the image data held in the overwrite recording unit, wherein
     the selection means includes a user-operated operation unit for accepting an input that selects and specifies at least one item of the image data held in the overwrite recording unit.
     The camera according to claim 5.
  13.  The selection means includes automatic selection means that automatically selects, from the image data held in the overwrite recording unit, at least one item whose imaging range is correct and which is in focus.
     The camera according to claim 5.
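The patent does not specify how the automatic selection of claim 13 judges focus. One standard sharpness measure is the variance of a Laplacian response: sharp edges produce high-variance responses, defocused ones do not. A pure-Python sketch on plain 2D grayscale lists (a real device would also verify the imaging range, which is omitted here):

```python
def laplacian_variance(img):
    """Variance of a 4-neighbour Laplacian over the interior pixels.
    img is a 2D list of grayscale values; higher variance means
    sharper edges, i.e. better focus."""
    h, w = len(img), len(img[0])
    vals = []
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            lap = (img[y - 1][x] + img[y + 1][x]
                   + img[y][x - 1] + img[y][x + 1]
                   - 4 * img[y][x])
            vals.append(lap)
    mean = sum(vals) / len(vals)
    return sum((v - mean) ** 2 for v in vals) / len(vals)


def auto_select(frames):
    """Pick the frame with the highest focus score."""
    return max(frames, key=laplacian_variance)


sharp = [[0, 0, 255, 255]] * 4      # hard vertical edge
blurred = [[0, 85, 170, 255]] * 4   # the same edge, smoothed
best = auto_select([blurred, sharp])
print(best is sharp)  # → True
```

The smoothed gradient yields a zero Laplacian everywhere, while the hard edge produces large alternating responses, so the in-focus frame wins.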
  14.  The lens and the image sensor are configured such that the image captured by the image sensor based on the image light includes the posterior pharyngeal wall, the left and right anterior palatine arches, and the uvula.
     The camera according to claim 6.
  15.  Performing machine learning using, as training data, image data that is data on a still image showing an imaging region including a subject's posterior pharyngeal wall, left and right anterior palatine arches, and uvula, together with data including information on whether the subject is a person infected with the novel coronavirus, a person infected with another virus or bacterium associated with a respiratory infection, or a person not infected with any virus or bacterium associated with a respiratory infection, and
     generating a trained model that, using as feature quantities the colors of the posterior pharyngeal wall, the left and right anterior palatine arches, and the uvula in the still image based on the image data, estimates whether the subject whose imaging region appears in the image data is infected with the novel coronavirus, infected with another virus or bacterium associated with a respiratory infection, or not infected with any virus or bacterium associated with a respiratory infection.
     A method for generating a trained model for automatic diagnosis of respiratory infections.
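Claim 15 leaves the feature encoding and model family open beyond "the colors of the regions as feature quantities". Purely as an illustration, assume each sample is the mean color of each region flattened into one vector, and use a nearest-centroid classifier as a stand-in for whatever model the patent's machine learning step actually produces:

```python
# The three classes named in the claim.
LABELS = ("covid", "other_pathogen", "uninfected")


def train_nearest_centroid(samples):
    """samples: list of (feature_vector, label) pairs.
    Returns one mean feature vector (centroid) per label."""
    sums, counts = {}, {}
    for feats, label in samples:
        acc = sums.setdefault(label, [0.0] * len(feats))
        for i, v in enumerate(feats):
            acc[i] += v
        counts[label] = counts.get(label, 0) + 1
    return {lab: [v / counts[lab] for v in acc] for lab, acc in sums.items()}


def predict(centroids, feats):
    """Assign the label whose centroid is nearest in squared distance."""
    dist2 = lambda a, b: sum((x - y) ** 2 for x, y in zip(a, b))
    return min(centroids, key=lambda lab: dist2(centroids[lab], feats))


# Toy 3-number feature vectors (mean RGB of one region) for brevity;
# the values are invented, not clinical data.
data = [([200, 60, 60], "covid"), ([210, 70, 70], "covid"),
        ([180, 120, 110], "other_pathogen"), ([170, 130, 120], "other_pathogen"),
        ([190, 150, 150], "uninfected"), ([200, 160, 160], "uninfected")]
model = train_nearest_centroid(data)
print(predict(model, [205, 65, 65]))  # → covid
```

Any supervised learner that maps color features to the three labels would fit the claim; the nearest-centroid rule is just the smallest self-contained example of the training-then-estimation loop.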
  16.  The image data is data on a reflection-free image showing the imaging region.
     The method for generating a trained model according to claim 15.
  17.  The image data is a pair consisting of image data of a reflection-free image showing the imaging region and image data of a reflection image showing the imaging region.
     The method for generating a trained model according to claim 15.
  18.  Generating a trained model that, using as feature quantities the colors and blood-vessel patterns of the posterior pharyngeal wall, the left and right anterior palatine arches, and the uvula in the still image based on the image data, estimates whether the subject whose imaging region appears in the image data is infected with the novel coronavirus, infected with another virus or bacterium associated with a respiratory infection, or not infected with any virus or bacterium associated with a respiratory infection.
     The method for generating a trained model according to any one of claims 15 to 17.
  19.  The feature quantities include the surface irregularities of the posterior pharyngeal wall, the left and right anterior palatine arches, and the uvula.
     The method for generating a trained model according to any one of claims 15 to 18.
  20.  A trained model generated by performing machine learning using, as training data, image data that is data on a still image showing an imaging region including a subject's posterior pharyngeal wall, left and right anterior palatine arches, and uvula, together with data including information on whether the subject is a person infected with the novel coronavirus, a person infected with another virus or bacterium associated with a respiratory infection, or a person not infected with any virus or bacterium associated with a respiratory infection,
     for estimating, using as feature quantities the color or blood-vessel pattern of each of the posterior pharyngeal wall, the left and right anterior palatine arches, and the uvula in the still image based on the image data, whether the subject whose imaging region appears in the image data is infected with the novel coronavirus, infected with another virus or bacterium associated with a respiratory infection, or not infected with any virus or bacterium associated with a respiratory infection.
     A trained model for automatic diagnosis of respiratory infections.
  21.  An automatic diagnosis device for respiratory infections using the trained model of claim 20, comprising:
     receiving means for receiving image data that is data on a still image showing the imaging region of a subject to be diagnosed for a respiratory infection;
     extraction means for extracting, as feature quantities, the color or blood-vessel pattern of each of the posterior pharyngeal wall, the left and right anterior palatine arches, and the uvula in the still image based on the image data; and
     output means for outputting, by inputting the feature quantities extracted by the extraction means into the trained model, an estimation result as to whether the subject whose imaging region appears in the image data is infected with the novel coronavirus, infected with another virus or bacterium associated with a respiratory infection, or not infected with any virus or bacterium associated with a respiratory infection.
  22.  An automatic diagnosis method for respiratory infections executed by a computer having a recording medium on which the trained model of claim 20 is recorded, the computer executing:
     a reception step of receiving image data that is data on a still image showing the imaging region of a subject to be diagnosed for a respiratory infection;
     an extraction step of extracting, as feature quantities, the color or blood-vessel pattern of each of the posterior pharyngeal wall, the left and right anterior palatine arches, and the uvula in the still image based on the image data; and
     an output step of outputting, by inputting the feature quantities extracted in the extraction step into the trained model, an estimation result as to whether the subject whose imaging region appears in the image data is infected with the novel coronavirus, infected with another virus or bacterium associated with a respiratory infection, or not infected with any virus or bacterium associated with a respiratory infection.
  23.  A computer program for causing a given computer to function as an automatic diagnosis device for respiratory infections using the trained model of claim 20, the computer program causing the computer to execute:
     a reception step of receiving image data that is data on a still image showing the imaging region of a subject to be diagnosed for a respiratory infection;
     an extraction step of extracting, as feature quantities, the color or blood-vessel pattern of each of the posterior pharyngeal wall, the left and right anterior palatine arches, and the uvula in the still image based on the image data; and
     an output step of outputting, by inputting the feature quantities extracted in the extraction step into the trained model, an estimation result as to whether the subject whose imaging region appears in the image data is infected with the novel coronavirus, infected with another virus or bacterium associated with a respiratory infection, or not infected with any virus or bacterium associated with a respiratory infection.
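The reception → extraction → output steps shared by claims 21 to 23 can be strung together as one small pipeline. The region locations and the mean-color feature are assumptions for illustration only, and `trained_model` stands in for the claim-20 model as any callable that maps a feature vector to one of the three class labels:

```python
def receive_image(image_data):
    """Reception step: here image_data is already a 2D grid of (R, G, B) tuples."""
    return image_data


# Assumed bounding boxes (y0, y1, x0, x1) for the four anatomical regions;
# a real device would locate them in the frame rather than hard-code them.
REGIONS = {"posterior_wall": (0, 2, 0, 2), "left_arch": (0, 2, 2, 4),
           "right_arch": (2, 4, 0, 2), "uvula": (2, 4, 2, 4)}


def extract_features(image):
    """Extraction step: mean (R, G, B) of each region, flattened to one vector."""
    feats = []
    for y0, y1, x0, x1 in REGIONS.values():
        pixels = [image[y][x] for y in range(y0, y1) for x in range(x0, x1)]
        for channel in range(3):
            feats.append(sum(p[channel] for p in pixels) / len(pixels))
    return feats


def output_estimate(trained_model, features):
    """Output step: the model returns one of the three class labels."""
    return trained_model(features)


# Usage with a stand-in model (a deliberately crude threshold rule).
image = [[(220, 60, 60)] * 4 for _ in range(4)]
model = lambda f: "covid" if f[0] > 200 else "uninfected"
print(output_estimate(model, extract_features(receive_image(image))))  # → covid
```

Each function corresponds to one claimed step, which is what lets the same structure be cast as a device (claim 21), a method (claim 22), or a program (claim 23).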
PCT/JP2021/044366 2020-12-02 2021-12-02 Camera, method for generating trained model pertaining to respiratory infection, trained model pertaining to respiratory infection, automatic diagnosis method pertaining to respiratory infection, and computer program WO2022118939A1 (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
JP2020-200137 2020-12-02
JP2020200137A JP2022087967A (en) 2020-12-02 2020-12-02 Camera, generation method of learned model about respiratory infection diseases, learned model about respiratory infection diseases, automatic diagnostic method about respiratory infection diseases and computer program

Publications (1)

Publication Number Publication Date
WO2022118939A1 true WO2022118939A1 (en) 2022-06-09

Family

ID=81853386

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/JP2021/044366 WO2022118939A1 (en) 2020-12-02 2021-12-02 Camera, method for generating trained model pertaining to respiratory infection, trained model pertaining to respiratory infection, automatic diagnosis method pertaining to respiratory infection, and computer program

Country Status (2)

Country Link
JP (1) JP2022087967A (en)
WO (1) WO2022118939A1 (en)

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2003275179A (en) * 2002-03-22 2003-09-30 Kao Corp Device and method for measuring flesh color
JP2008302146A (en) * 2007-06-11 2008-12-18 Olympus Medical Systems Corp Endoscope apparatus and endoscope image controlling device
WO2019131327A1 (en) * 2017-12-28 2019-07-04 アイリス株式会社 Oral photographing apparatus, medical apparatus, and program
JP2019205614A (en) * 2018-05-29 2019-12-05 デルマ医療合資会社 Examination apparatus, enlargement module, control method, and examination system

Also Published As

Publication number Publication date
JP2022087967A (en) 2022-06-14


Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application
    Ref document number: 21900689; Country of ref document: EP; Kind code of ref document: A1
NENP Non-entry into the national phase
    Ref country code: DE
122 Ep: pct application non-entry in european phase
    Ref document number: 21900689; Country of ref document: EP; Kind code of ref document: A1