WO2022118939A1 - Camera, method for generating a trained model relating to respiratory tract infection, trained model relating to respiratory tract infection, automatic diagnosis method relating to respiratory tract infection, and computer program - Google Patents


Info

Publication number: WO2022118939A1
Authority: WO (WIPO, PCT)
Prior art keywords: image, image data, light source, light, infected
Application number: PCT/JP2021/044366
Other languages: English (en), Japanese (ja)
Inventor: 正男 山本
Original Assignee: 正男 山本, 株式会社Eggs
Application filed by 正男 山本 and 株式会社Eggs
Publication of WO2022118939A1


Classifications

    • A HUMAN NECESSITIES
    • A61 MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61B DIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B 1/00 Instruments for performing medical examinations of the interior of cavities or tubes of the body by visual or photographical inspection, e.g. endoscopes; Illuminating arrangements therefor
    • A61B 1/04 Instruments as above combined with photographic or television appliances
    • A61B 1/045 Control thereof
    • A61B 1/06 Instruments as above with illuminating arrangements
    • A61B 1/07 Instruments as above with illuminating arrangements using light-conductive means, e.g. optical fibres
    • A61B 1/24 Instruments as above for the mouth, i.e. stomatoscopes, e.g. with tongue depressors; Instruments for opening or keeping open the mouth
    • G PHYSICS
    • G01 MEASURING; TESTING
    • G01N INVESTIGATING OR ANALYSING MATERIALS BY DETERMINING THEIR CHEMICAL OR PHYSICAL PROPERTIES
    • G01N 21/00 Investigating or analysing materials by the use of optical means, i.e. using sub-millimetre waves, infrared, visible or ultraviolet light
    • G01N 21/17 Systems in which incident light is modified in accordance with the properties of the material investigated
    • G01N 21/21 Polarisation-affecting properties
    • G01N 21/25 Colour; Spectral properties, i.e. comparison of effect of material on the light at two or more different wavelengths or wavelength bands
    • G01N 21/27 Colour; Spectral properties as above, using photo-electric detection; circuits for computing concentration
    • G01N 33/00 Investigating or analysing materials by specific methods not covered by groups G01N 1/00 - G01N 31/00
    • G01N 33/48 Biological material, e.g. blood, urine; Haemocytometers
    • G01N 33/483 Physical analysis of biological material

Definitions

  • The present invention relates to a diagnostic imaging technique for respiratory tract infections.
  • In the present application, the term "respiratory tract infection" means an infectious disease whose symptoms appear in the respiratory tract generally, whether the cause is a virus or a bacterium.
  • Various respiratory tract infections are known, but the most problematic one in the world today is clearly the infection caused by the novel coronavirus (COVID-19).
  • Early detection of infected persons is an important issue for preventing the spread of the novel coronavirus.
  • The PCR test detects the gene of the novel coronavirus in a sample collected from a subject; if the gene is detected, the subject who provided the sample is determined to be infected with the novel coronavirus. There is also an antibody test.
  • However, the PCR test cannot sufficiently serve the purpose of early detection of infected persons. The antibody test fares no better: since a certain amount of time passes between infection with the novel coronavirus and the generation of antibodies in the infected person's body, it is also difficult to detect an infected person at an early stage by the antibody test.
  • The above is not limited to the novel coronavirus infection. It is self-evident that the spread of other respiratory tract infections could also be suppressed if infected persons could be detected early. For example, even when a new respiratory tract infection succeeding the current novel coronavirus infection appears, early detection of infected persons is strongly expected to be important.
  • An object of the present invention is to provide a simple and inexpensive technique for early detection of persons infected with a respiratory tract infection.
  • The inventor of the present application continued research to solve the above-mentioned problems and, as a result, obtained the following findings.
  • The novel coronavirus infects epithelial cells of the upper respiratory tract (mainly the oral cavity, pharynx, and nasal cavity) and vascular endothelial cells.
  • Because the functional receptors of the novel coronavirus are expressed abundantly in the oral cavity, pharynx, and nasal cavity, these sites become important routes of entry of the virus into the body. Changes therefore occur at these sites in a person infected with the novel coronavirus.
  • These changes appear earlier than symptoms do, and the probability is extremely high that they appear earlier than the point at which the subject could be identified as infected by any of the diagnostic techniques described in the background section. Considering these points, it can be concluded that observing the epithelial cells and blood vessels of the upper respiratory tract is useful for early detection of persons infected with respiratory tract infections such as the coronavirus infection. Such observation (or observation and diagnosis) could of course be performed by a physician directly examining the subject's upper respiratory tract; if simplicity and low cost are pursued, however, it is better to use diagnostic imaging of that region.
  • For example, a person in charge of a school, restaurant, department store, supermarket, or movie theater could have subjects who use the facility undergo diagnostic imaging.
  • Diagnostic imaging can then be used in a screening manner, and infected persons can be isolated from non-infected persons at an early stage. Moreover, non-infected persons can continue to engage in economic activities, reducing the damage that respiratory tract infections cause to the economy.
  • If image diagnosis is performed on the visitors of a certain facility, it is highly probable that everyone inside the facility is non-infected.
  • A restaurant user, for example, is then freed from the constraint of the conventional thinking that a mask must be worn during a meal.
  • Such image diagnosis can also be used in combination with other tests; for example, a definitive diagnosis may be made later by a PCR test.
  • The present invention has been made based on the above findings and provides, as a specific technique, a diagnostic imaging technique useful for early detection of respiratory tract infections.
  • The first issue (problem 1) is which range of the subject's upper respiratory tract the image used in the diagnostic imaging should preferably cover. Unless an appropriate area of the upper respiratory tract is imaged, it is impossible to distinguish subjects infected with the novel coronavirus from those infected with other viruses or bacteria and from those who are not infected. Further, the research of the inventor of the present application has made clear that the accuracy of image diagnosis is affected not only by the imaged range of the upper respiratory tract but also by the nature of the image capturing its state.
  • The second problem is at what timing the image of the upper respiratory tract used in the image diagnosis should be captured.
  • The posterior wall of the pharynx of the upper respiratory tract is hidden and invisible under normal conditions. At certain moments, for example while the subject vocalizes, the posterior wall of the pharynx is exposed so that it can be seen from outside the oral cavity, but it is difficult to take an image from outside the oral cavity with a camera at exactly that timing. This issue can also be viewed more universally.
  • A camera usually has a shutter means (in the present application, the term "shutter means" includes not only a physical shutter button but also a non-physical one, such as a button displayed on a smartphone screen).
  • The invention for solving problem 1 is referred to as the first invention for convenience.
  • In order to distinguish, by diagnostic imaging, a subject infected with the novel coronavirus from a subject infected with another virus or bacterium and from a non-infected subject, the subject is imaged by an imaging device.
  • The image should include the posterior wall of the pharynx, the left and right anterior palatine arches, and the uvula, for the following reason.
  • In persons who are not infected with viruses or bacteria related to respiratory tract infections, the posterior pharyngeal wall, the left and right anterior palatine arches, and the uvula all show their normal, general color, which is a light pink.
  • In an infected person, the mucous membrane of the posterior pharyngeal wall, the left and right anterior palatine arches, and the uvula becomes reddish and shows red. This is because, when a virus or bacterium adheres to the mucous membrane of these sites, the capillaries grow or dilate due to an immune reaction.
  • According to the inventor's findings, however, in a person infected with the novel coronavirus the posterior wall of the pharynx and the left and right anterior palatine arches are both reddish and red, while the uvula is not:
  • its color remains as light a pink as that of a healthy person not infected with the virus, or in some cases an even lighter pink closer to white.
  • Since the growth and dilation of capillaries occur only in the reddish parts, a person infected with the novel coronavirus, a person infected with another virus or bacterium, and a non-infected person can be distinguished from one another.
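As an illustration only, the color findings above lend themselves to a simple screening rule. Everything below, the redness score, the threshold, and the example region colors, is a hypothetical sketch and not a method disclosed in the application:

```python
# Hypothetical sketch (not the application's method): given per-region mean
# colors sampled from an intraoral image, apply the color findings above:
# - all three regions light pink            -> non-infected pattern
# - all three regions reddish               -> other infection pattern
# - pharynx/arches reddish, uvula pale pink -> possible novel-coronavirus pattern

def redness(rgb):
    """Redness score: how strongly red dominates the green/blue channels."""
    r, g, b = rgb
    return r / max(1, (g + b) / 2)

def classify(pharynx_rgb, arches_rgb, uvula_rgb, threshold=1.35):
    """Toy three-way rule; the threshold value is an illustrative assumption."""
    pharynx_red = redness(pharynx_rgb) >= threshold
    arches_red = redness(arches_rgb) >= threshold
    uvula_red = redness(uvula_rgb) >= threshold
    if pharynx_red and arches_red and not uvula_red:
        return "possible novel-coronavirus pattern"
    if pharynx_red and arches_red and uvula_red:
        return "other viral/bacterial infection pattern"
    if not (pharynx_red or arches_red or uvula_red):
        return "non-infected pattern"
    return "indeterminate"

# Light pink roughly (230, 180, 185); red roughly (200, 90, 95)
print(classify((200, 90, 95), (205, 95, 100), (230, 180, 185)))
```

In practice the application contemplates a trained model rather than a hand-written rule; this sketch only makes the three-way color distinction concrete.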
  • The camera according to the first invention is for capturing an image that enables such a distinction.
  • The camera images the inside of the oral cavity from outside the oral cavity, and has a light source that irradiates the oral cavity with illumination light, a lens that passes the image light generated when the illumination light is reflected in the oral cavity, and an image pickup element that captures the image light passing through the lens and generates image data of the resulting image.
  • The lens and the image pickup element in this camera are arranged so that the image captured by the image pickup element based on the image light includes the posterior wall of the pharynx, the left and right anterior palatine arches, and the uvula.
  • The camera of the first invention includes a light source.
  • The light source irradiates the oral cavity with illumination light from outside the oral cavity. Since the inside of the oral cavity is a closed space that outside light hardly enters, having the camera take images under the illumination light emitted from the light source stabilizes the properties of the captured image (for example, the colors of what is reflected in it). This, of course, increases the accuracy of diagnostic imaging.
  • The camera of the first invention also has a lens that passes the image light generated when the illumination light emitted from the light source is reflected in the oral cavity, and an image pickup element that captures the image light passing through the lens to generate image data.
  • The lens and the image pickup element are adapted so that the image captured by the image pickup element based on the image light includes the posterior wall of the pharynx, the left and right anterior palatine arches, and the uvula. That is, the lens forms an image on the image pickup element such that the captured image includes these sites.
  • The number of lenses may be one, or the lens may be composed of a plurality of lenses.
  • Another optical element, for example a mirror or a prism, may be combined with the lens.
  • As long as the image light is imaged on the image pickup element in a state that includes the posterior wall of the pharynx, the left and right anterior palatine arches, and the uvula, the condition that "the lens and the image pickup element are adapted so that the image captured by the image pickup element based on the image light includes the posterior wall of the pharynx, the left and right anterior palatine arches, and the uvula" is satisfied.
  • Such a camera makes it possible to capture, from outside the oral cavity and without inserting any part of the camera into it, the images necessary for diagnostic imaging that distinguishes persons infected with the novel coronavirus from persons infected with other viruses or bacteria related to respiratory tract infections and from persons not infected with such viruses or bacteria. Because images can be taken from outside the oral cavity, even a person in charge of a facility such as a school, restaurant, department store, supermarket, or movie theater who has no specialized medical knowledge or skill can, compared with the case of using an endoscope or the like, more easily capture the necessary images of the oral cavity of a subject who uses the facility.
  • The camera of the first invention includes a light source that emits illumination light.
  • The light source may be a single light source or a plurality of light sources.
  • The camera may further include a first polarizing plate that passes the illumination light and linearly polarizes it in a predetermined polarization direction, and a second polarizing plate that passes the image light and whose polarization direction is orthogonal to that of the first polarizing plate.
  • In this case, the illumination light emitted into the oral cavity is linearly polarized.
  • The illumination light, which is linearly polarized, is reflected in the oral cavity as described above and becomes image light.
  • The image light is divided into two types of light.
  • One is surface reflected light, which is reflected at the surface of body fluid such as saliva covering the mucous membrane in the oral cavity.
  • The other is internally reflected light, which passes through the saliva and is reflected at the surface of the mucous membrane itself.
  • Surface reflected light causes glare in the captured image and may cause overexposure, but on the other hand it expresses well the unevenness and shape of the object reflected in the image.
  • Such an image is referred to as a reflection image in the present application.
  • The internally reflected light, by contrast, is suited to obtaining an image without glare: it is not suited to grasping the unevenness and shape of the object reflected in the image, but it renders the color of the object well.
  • The surface reflected light ideally maintains completely the linear polarization originally given to the illumination light, whereas the internally reflected light loses that linear polarization and becomes natural light. Therefore, if the second polarizing plate, whose polarization direction is orthogonal to that of the first polarizing plate (to be exact, orthogonal to the vibration direction of the linearly polarized light that has passed through the first polarizing plate), is arranged in the optical path of the image light, the surface reflected light is blocked.
  • The image pickup element of a camera using the first and second polarizing plates in this way can therefore perform imaging with only the internally reflected light that passes through the second polarizing plate.
  • The resulting image, a non-reflective image obtained from the internally reflected light, is free of glare based on body fluid such as saliva and is therefore suitable for observing color, and for observing the blood vessel image, that is, the image of the blood vessels existing inside the mucous membrane.
  • Such images make it possible to distinguish, from the color or the blood vessel image of the posterior pharyngeal wall, the left and right anterior palatine arches, and the uvula, subjects infected with the novel coronavirus from subjects with other viral or bacterial infections related to respiratory tract infections.
  • In particular, the non-reflective image shows well the blood vessel image inside the mucous membrane of the posterior pharyngeal wall, the left and right anterior palatine arches, and the uvula, which in the reflected image tends to be obscured by the glare caused by the reflection of the illumination light at the body fluid. It is therefore extremely suitable for observing the blood vessel image, and such a camera can improve the accuracy of image diagnosis.
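The separation principle above, where surface reflection keeps the illumination's linear polarization and is extinguished by the crossed second polarizing plate while the depolarized internal reflection passes at half intensity, can be checked numerically. The intensity values below are illustrative assumptions, not figures from the application:

```python
import math

def through_polarizer(intensity, polarized, angle_deg=None):
    """Ideal polarizer: Malus's law (I = I0 * cos^2(theta)) for linearly
    polarized light; half transmission for natural (unpolarized) light."""
    if polarized:
        return intensity * math.cos(math.radians(angle_deg)) ** 2
    return intensity / 2  # natural light

# Image light = surface reflection (still linearly polarized, parallel to the
# first polarizing plate) + internal reflection (depolarized by the mucosa).
surface, internal = 40.0, 60.0  # illustrative intensities

# Second polarizing plate oriented orthogonally (90 deg) to the first:
passed_surface = through_polarizer(surface, polarized=True, angle_deg=90)
passed_internal = through_polarizer(internal, polarized=False)
print(passed_surface, passed_internal)  # the glare component is extinguished
```

The surface (glare) component is reduced essentially to zero while half of the internally reflected light still reaches the image pickup element, which is why the resulting image is non-reflective yet well exposed for color and blood vessel observation.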
  • The light source in the camera of the first invention may include a first light source, whose illumination light passes through the first polarizing plate before entering the oral cavity, and a second light source, whose illumination light enters the oral cavity without passing through the first polarizing plate, the two light sources emitting light selectively.
  • The image captured by the image pickup element under illumination from the first light source, whose light has passed through the first polarizing plate, is a non-reflective image according to the principle described in the previous paragraph.
  • The illumination light from the second light source does not pass through the first polarizing plate; it may, however, pass through a third polarizing plate, that is, a polarizing plate whose polarization direction is the same as that of the second polarizing plate.
  • If there is no third polarizing plate, the image light generated by reflection of the second light source's illumination light is natural light:
  • both the surface reflected light and the internally reflected light are natural light.
  • Of the surface reflected light and the internally reflected light contained in the image light, both being natural light, ideally half of the light amount of each passes through the second polarizing plate.
  • The image captured by the image pickup element then becomes an image in which the surface reflected light and the internally reflected light are combined.
  • Such an image is basically the same as an ordinary image captured by an ordinary camera under natural illumination light, so it contains glare and is a reflected image in which the unevenness and shape of the object are easy to grasp. That is, this camera can capture both the non-reflective image and the reflected image by alternately turning on the first light source and the second light source.
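A minimal simulation of the selective (alternating) lighting described above. The `LightSource` class and the `sensor_read` callback are assumptions standing in for the actual hardware, which the application does not specify at this level:

```python
# Assumed-hardware sketch of the alternating capture described above: lighting
# the first (polarized) and second (unpolarized) light sources in turn yields
# a non-reflective and a reflected image as a pair.

class LightSource:
    def __init__(self, name):
        self.name = name
        self.on = False

def capture_pair(sensor_read):
    """Turn on each light source selectively and capture one frame per source.

    `sensor_read` stands in for the image pickup element: it is called while
    exactly one light source is lit and returns that frame's image data.
    """
    first = LightSource("first (through first polarizing plate)")
    second = LightSource("second (no first polarizing plate)")
    frames = {}
    for source, kind in ((first, "non-reflective"), (second, "reflected")):
        source.on = True            # only this source is lit
        frames[kind] = sensor_read(source)
        source.on = False
    return frames

# Stub sensor: records which source illuminated the frame.
pair = capture_pair(lambda src: f"frame lit by {src.name}")
print(sorted(pair))  # one non-reflective and one reflected frame
```

One shutter operation driving such a loop is consistent with the bullet below stating that a single operation may generate at least one still image of each kind.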
  • To distinguish subjects, it is advisable to observe at least one of the color and the blood vessel image of the posterior wall of the pharynx, the left and right anterior palatine arches, and the uvula.
  • It is known that lymphoid follicles occur on the posterior wall of the pharynx in a person infected with influenza virus. These lymphoid follicles create irregularities on the posterior wall of the pharynx, which is normally smooth.
  • By also observing the unevenness or shape of these sites, it may be possible to distinguish with higher accuracy subjects infected with the novel coronavirus, subjects infected with other viruses or bacteria related to respiratory tract infections, and subjects not infected with such viruses or bacteria.
  • A camera having the first and second light sources described above enables both observation by the non-reflective image, centered on at least one of the color and the blood vessel image of the posterior wall of the pharynx, the left and right anterior palatine arches, and the uvula, and observation by the reflected image, centered on the unevenness or shape of those sites. It is therefore advantageous from the above viewpoint.
  • The image data of the image captured by the camera of the first invention is used for image diagnosis.
  • The camera of the first invention may have, for example, a transmission/reception mechanism, and may transmit the image data of the captured image via the transmission/reception mechanism to the place where the image diagnosis is performed.
  • The destination of the image data is, for example, a computer device that a doctor can access if the doctor executes the image diagnosis, and the automatic diagnosis device, or a device the automatic diagnosis device can access, if an automatic diagnosis device executes automatic diagnosis.
  • The transmission/reception mechanism may be a known or well-known one, and may be a standardized existing one.
  • The communication executed by the transmission/reception mechanism may be, for example, wireless communication via an Internet line, and may be communication by a 5th generation mobile communication system.
  • In these cases, the camera itself transmits the image data of the captured image directly to the place where the image diagnosis is performed.
  • Alternatively, the communication executed by the transmission/reception mechanism may be short-range wireless communication.
  • An example is Wi-Fi™, one of the wireless LAN standards.
  • In that case, it is common for the camera to send the image data to a predetermined wireless router by Wi-Fi communication, and for the wireless router to transmit the image data of the captured image to the place where the image diagnosis is performed.
  • Another example of short-range wireless communication is Bluetooth™.
  • In that case, it is common for the camera to send the image data to a predetermined smartphone or tablet by Bluetooth communication, and for the smartphone or tablet to transmit the image data of the captured image to the place where the image diagnosis is performed.
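The application does not define a transfer protocol for the final hop to the diagnosis location. As one hedged sketch of that hand-off, the following uses a throwaway local HTTP server to stand in for the diagnosis location; the `/upload` path and the placeholder JPEG bytes are hypothetical:

```python
import threading
from http.server import BaseHTTPRequestHandler, HTTPServer
from urllib.request import Request, urlopen

received = []  # what the stand-in "diagnosis location" has accepted

class DiagnosisHandler(BaseHTTPRequestHandler):
    def do_POST(self):
        length = int(self.headers["Content-Length"])
        received.append(self.rfile.read(length))  # store the uploaded image
        self.send_response(200)
        self.end_headers()

    def log_message(self, *args):  # silence per-request logging
        pass

# Throwaway local server on an ephemeral port, run in a background thread.
server = HTTPServer(("127.0.0.1", 0), DiagnosisHandler)
threading.Thread(target=server.serve_forever, daemon=True).start()

# Placeholder JPEG payload (0xFFD8 is the JPEG start-of-image marker).
image_data = b"\xff\xd8...still image bytes..."
req = Request(f"http://127.0.0.1:{server.server_port}/upload",
              data=image_data, headers={"Content-Type": "image/jpeg"})
with urlopen(req) as resp:
    print("upload status:", resp.status)
server.shutdown()
```

In a real deployment the relay (wireless router, smartphone, or tablet) would forward the data over the Internet line to the doctor's computer device or the automatic diagnosis device; the HTTP POST above is only an illustration of that pattern.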
  • The image captured by the camera of the first invention may be a still image, but need not be; still images may also be captured continuously. When still images are captured continuously, shortening the interval at which they are captured eventually makes the result equivalent to a moving image. In other words, the image captured by the camera of the first invention may be a still image or a moving image. Accordingly, the image data sent from the camera, directly or indirectly, to the place where the image diagnosis is performed may be image data of a single still image, of continuous still images, or of a moving image. When the image data sent to the place where the image diagnosis is performed is of a single still image, the transmission load is lighter and the transmission time can be shortened.
  • The camera may have a shutter means which, when operated by the user, causes the image pickup element to generate image data of a still image.
  • The camera may be configured so that one operation of the shutter means by the user generates at least one each of the image data of a non-reflective still image and the image data of a reflected still image.
  • In that case, image data of at least one still image of each of the non-reflective image and the reflected image is generated by the camera and sent to the place where the image diagnosis is performed.
  • The invention for solving problem 2 will be referred to as the second invention for convenience. Like the first invention, the second invention can be applied to a camera for capturing images used for diagnostic imaging, but its range of application is wider.
  • The camera of the second invention has a lens that passes image light from an object and an image pickup element that generates image data of an image obtained by capturing the image light passing through the lens.
  • It is further equipped with an overwrite recording unit, a shutter means, and a selection means, as described below.
  • The image pickup element in the camera of the second invention continuously generates image data of still images at predetermined time intervals.
  • The time interval at which the image data is generated may or may not be constant, but is generally constant. If the interval becomes short enough, the image pickup element effectively generates image data of a moving image.
  • The camera of the second invention includes an overwrite recording unit that records the image data for a predetermined length of time while overwriting it in order from the oldest, a shutter means that stops the overwriting of the image data on the overwrite recording unit in response to the user's operation, and a selection means for selecting, after the shutter means has been operated, any one of the image data then present on the overwrite recording unit.
  • The overwrite recording unit is typically a ring buffer, but a configuration in which image data is recorded in a general memory while being overwritten can also be adopted.
  • The camera of the second invention thus records the image data generated one after another by the image pickup element in the overwrite recording unit for a predetermined length of time, overwriting the oldest data in order. That is, the overwrite recording unit always maintains a state in which the image data of the most recent predetermined length of time has been recorded.
  • The camera of the second invention has a shutter means.
  • To the user, the shutter means may appear to function as it does in an ordinary camera, namely as a means for determining when a still image is captured; in the camera of the second invention, however, it in fact functions to stop, in response to the user's operation, the overwriting of the image data on the overwrite recording unit.
  • The camera of the second invention also has a selection means for selecting, after the user operates the shutter means, any of the image data then on the overwrite recording unit. Therefore, when the user operates the shutter means in order to take an image, an appropriate piece of image data can be selected from among the image data captured by the image pickup element during the predetermined length of time up to the moment the shutter means is operated and recorded on the overwrite recording unit. If the camera of the second invention includes a transmission means for transmitting image data to the outside, the image selected by the selection means should be transmitted from that transmission means.
  • The transmission means is equivalent to the transmission/reception mechanism described in connection with the first invention. Owing to the shutter means, image pickup element, overwrite recording unit, and selection means described above, the camera of the second invention can capture a still image (or rather select the captured data) at the timing the user desires, when the object is in the desired state, without missing the shutter chance. In addition, since an image from before the shutter means is operated can be selected, and operating the shutter means is one of the causes of camera shake, the image obtained by the camera of the second invention is less affected by camera shake.
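The overwrite recording unit, shutter means, and selection means described above can be sketched as follows. This is an illustrative model rather than the application's implementation, with the ring buffer realized as a fixed-length deque and labeled frame strings standing in for image data:

```python
from collections import deque

class OverwriteRecorder:
    """Ring buffer holding the most recent frames; the shutter stops
    overwriting so that a frame from just *before* the press can be selected."""

    def __init__(self, capacity):
        self.buffer = deque(maxlen=capacity)  # oldest frames drop off first
        self.frozen = False

    def record(self, frame):
        if not self.frozen:          # overwriting continues until the shutter
            self.buffer.append(frame)

    def press_shutter(self):
        self.frozen = True           # stop overwriting; keep the recent past

    def select(self, index_from_latest=0):
        """Selection means: 0 = newest frame, 1 = one frame earlier, and so on."""
        return self.buffer[-1 - index_from_latest]

recorder = OverwriteRecorder(capacity=5)   # a "predetermined time" of frames
for n in range(12):                        # frames arrive at fixed intervals
    recorder.record(f"frame-{n}")
recorder.press_shutter()
recorder.record("frame-after-press")       # ignored: overwriting has stopped
print(recorder.select(2))                  # a frame from before the press
```

Because the buffer is frozen at the press, the user can reach back to the frame in which the posterior pharyngeal wall happened to be exposed, which is exactly the shutter-chance problem the second invention addresses.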
  • The camera of the second invention may be one that images the inside of the oral cavity from outside the oral cavity, and may include a light source for irradiating the inside of the oral cavity with illumination light.
  • In that case, the lens and the image pickup element may be adapted so that the image captured by the image pickup element based on the image light includes the posterior wall of the pharynx.
  • The effect of imaging the inside of the oral cavity from outside the oral cavity is equivalent to the effect, described in connection with the first invention, of being able to image from outside the oral cavity.
  • The effect obtained by providing the camera of the second invention with a light source for irradiating the inside of the oral cavity with illumination light is equivalent to the effect obtained by providing the camera of the first invention with such a light source.
  • The lens and the image pickup element in the camera of the second invention need only be adapted so that the image captured by the image pickup element based on the image light includes the posterior wall of the pharynx (or a part of the posterior wall of the pharynx).
  • the lens and the image sensor in the camera of the first invention are such that the posterior wall of the pharynx of the subject, the left and right anterior palatal arches, and the palatal drop are reflected in the image captured by the image sensor.
  • the image pickup range is narrower than that of the camera of the first invention.
  • the applicant's finding regarding respiratory infections is that, by observing the colors of the posterior pharyngeal wall, the left and right anterior palatine arches, and the uvula, subjects can be distinguished among persons infected with the novel coronavirus, persons infected with other viruses or bacteria causing respiratory infections, and persons not infected with any virus or bacterium causing a respiratory infection. An image captured by the camera of the second invention that includes only the posterior pharyngeal wall may therefore not show the left and right anterior palatine arches and the uvula.
  • in that case, it may not be possible to classify subjects into persons infected with the novel coronavirus, persons infected with other viruses or bacteria causing respiratory infections, and persons not infected with any such virus or bacterium. However, some respiratory infections may still be diagnosable from images of the posterior pharyngeal wall alone, and it may at least be possible to distinguish persons infected with some virus or bacterium from non-infected persons (for example, a person infected with the influenza virus may be identifiable from the unevenness of the surface of the posterior pharyngeal wall), so the camera of the second invention described above is sufficiently meaningful.
  • in the future, features related to the color, vascular pattern, or shape (such as surface unevenness) of the posterior pharyngeal wall alone may be discovered that make it possible, using images that show neither the left and right anterior palatine arches nor the uvula, to distinguish subjects among persons infected with the novel coronavirus, persons infected with other viruses or bacteria causing respiratory infections, and persons not infected with any such virus or bacterium.
  • if such features are discovered, even the camera of the second invention, whose imaging range covers only the posterior pharyngeal wall, will allow that three-way distinction.
  • if it is desired from the outset to distinguish persons infected with the novel coronavirus, persons infected with other viruses or bacteria causing respiratory infections, and non-infected persons, the lens and the image sensor in the camera of the second invention should be arranged so that the still image captured by the image sensor based on the image light includes the posterior pharyngeal wall, the left and right anterior palatine arches, and the uvula.
  • under normal conditions, the posterior wall of the pharynx is hidden by the tongue and cannot be observed or imaged from outside the oral cavity.
  • only when the base of the tongue is lowered does the posterior wall of the pharynx become visible from outside the oral cavity.
  • when the posterior wall of the pharynx becomes visible from outside the oral cavity, the left and right anterior palatine arches and the uvula also become visible from outside the oral cavity at the same time.
  • when the posterior wall of the pharynx is imaged using the camera of the second invention, the image sensor must capture it at the moment it becomes visible from outside the oral cavity.
  • however, human reaction time usually involves a delay of about 0.2 seconds, and at least about 0.1 seconds, so that moment is easily missed.
  • moreover, since the shutter means is usually provided in the main body of the camera, operating it often causes camera shake, and the captured image is blurred or out of focus.
  • as a result, the captured still image is often useless for diagnostic imaging purposes.
  • the camera of the second invention, which allows a still image captured at the moment the shutter means is operated, or at an earlier timing, to be selected and used as the captured still image, is therefore also well suited to capturing still images for such diagnostic imaging purposes.
  • the camera of the second invention, when used for photographing the inside of the oral cavity, may include a first polarizing plate through which the illumination light passes and which converts it into linearly polarized light having a predetermined polarization direction, and a second polarizing plate through which the image light passes and whose polarization direction is orthogonal to that of the first polarizing plate.
  • the camera of the second invention in this case obtains the same effect as that of the first invention having the same configuration.
  • the light source in the camera of the second invention, when used for photographing the inside of the oral cavity, may include a first light source whose illumination light irradiates the inside of the oral cavity after passing through the first polarizing plate, and a second light source whose illumination light irradiates the inside of the oral cavity without passing through the first polarizing plate.
  • the image sensor may then capture both a non-reflective image, formed by the image light produced by the illumination light emitted into the oral cavity from the first light source, and a reflected image, formed by the image light produced by the illumination light emitted into the oral cavity from the second light source.
  • the camera of the second invention in this case obtains the same effect as that of the first invention having the same configuration.
  • the first light source and the second light source may be turned on alternately, at timings such that still images due to the illumination light from the first light source and still images due to the illumination light from the second light source are obtained alternately from the image data generated by the image sensor. That is, the timing at which the image sensor performs imaging to generate image data and the timings at which the first light source and the second light source are turned on may be controlled in synchronization.
  • in that case, the image sensor alternately generates image data of still images as non-reflective images and image data of still images as reflected images.
  • as a result, non-reflective images and reflected images captured at close timings are generated one after another in pairs. Since each pair consists of a non-reflective image and a reflected image, both still images, obtained by imaging almost the same position in the oral cavity at almost the same time, such pairs are well suited for observation or diagnostic imaging. The benefits of performing observation or diagnostic imaging using both a non-reflective image and a reflected image are as already described for the first invention.
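The synchronized alternation of the two light sources with the sensor's exposures can be sketched as follows. The sensor and light-source objects here are stubs standing in for real hardware control (strobe or GPIO signals), and all names are hypothetical:

```python
# Sketch of alternating the first and second light sources in sync
# with sensor exposures, so that non-reflective / reflected still
# images are produced in pairs, as described in the text.

def capture_paired_frames(sensor, first_light, second_light, n_pairs):
    pairs = []
    for _ in range(n_pairs):
        first_light.on(); second_light.off()
        non_reflective = sensor.expose()   # cross-polarized frame
        first_light.off(); second_light.on()
        reflective = sensor.expose()       # natural-light frame
        second_light.off()
        # The two frames are captured back-to-back, i.e. at almost the
        # same position and time, so the pair suits diagnostic imaging.
        pairs.append((non_reflective, reflective))
    return pairs

class _StubLight:
    def __init__(self): self.lit = False
    def on(self): self.lit = True
    def off(self): self.lit = False

class _StubSensor:
    def __init__(self): self.count = 0
    def expose(self):
        self.count += 1
        return f"frame-{self.count}"

pairs = capture_paired_frames(_StubSensor(), _StubLight(), _StubLight(), 3)
```

Each element of `pairs` is one (non-reflective, reflected) still-image pair; real firmware would trigger the light sources from the sensor's frame-sync signal rather than from a software loop.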
  • the camera of the second invention may further include a display for displaying still images based on the image data in the overwrite recording unit, and the selection means may include an operation unit operated by the user for accepting input that selects and specifies at least one of the image data in the overwrite recording unit. With such a configuration, the user can manually select image data, for example at least one image whose imaging range is correct and which is in focus (for example, free of the influence of camera shake).
  • the display and the operation unit need not be integrated into the camera of the second invention; they may be provided in another device used in combination with it (for example, a smartphone or a tablet).
  • the selection means may include automatic selection means that automatically selects, from the image data in the overwrite recording unit, at least one whose imaging range is correct and which is in focus (for example, free of the influence of camera shake). This makes it possible to select such image data automatically, which reduces the burden on the user and keeps the quality of the selected still images within a certain range.
  • the automatic selection means need not be integrated into the camera of the second invention; it may be provided in another device (for example, a smartphone or a tablet) used in combination with it.
  • the automatic selection means may use artificial intelligence, or a mechanism employing artificial intelligence, to automatically extract image data of still images whose imaging range is correct and which are in focus from the plurality of image data recorded in the overwrite recording unit.
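As one concrete, purely illustrative possibility for the automatic selection means, a classical focus measure such as the variance of a Laplacian can rank the buffered frames by sharpness. The specification leaves the actual mechanism open (it may even use artificial intelligence), so the heuristic below is only an assumed sketch:

```python
# Variance-of-Laplacian focus measure over a grayscale image given as
# a list of rows of pixel values; higher variance => sharper edges.

def laplacian_variance(img):
    h, w = len(img), len(img[0])
    vals = []
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            # 4-neighbour discrete Laplacian at (y, x)
            lap = (img[y-1][x] + img[y+1][x] + img[y][x-1]
                   + img[y][x+1] - 4 * img[y][x])
            vals.append(lap)
    mean = sum(vals) / len(vals)
    return sum((v - mean) ** 2 for v in vals) / len(vals)

def auto_select(frames):
    # Return the buffered frame most likely to be in focus.
    return max(frames, key=laplacian_variance)

sharp = [[0, 0, 0, 0],
         [0, 9, 0, 0],
         [0, 0, 0, 0],
         [0, 0, 0, 0]]            # strong local contrast
blurred = [[3, 3, 3, 3]] * 4      # uniform: no edges at all
assert auto_select([blurred, sharp]) is sharp
```

A production selector would also verify the imaging range (for example, that the pharyngeal region is actually in frame) before ranking by sharpness.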
  • the third invention is a technique for performing image diagnosis by automatic diagnosis.
  • the third invention relates to an automatic diagnostic device; the images or image data used by the automatic diagnostic device in the automatic diagnosis may, but need not, be generated by the camera according to the first invention or the second invention.
  • the inventor of the present application proposes a method of generating a trained model as one aspect of the third invention.
  • the method for generating this trained model is as follows: image data, which is data about still images in which an imaging site including a subject's posterior pharyngeal wall, left and right anterior palatine arches, and uvula is captured, is machine-learned as teacher data together with data including information on whether the subject is a person infected with the novel coronavirus, a person infected with another virus or bacterium causing a respiratory infection, or a person not infected with any such virus or bacterium.
  • this produces a trained model for automatic diagnosis of respiratory infections which, using the colors of the posterior pharyngeal wall, the left and right anterior palatine arches, and the uvula in a still image based on input image data, estimates whether the subject whose imaging site appears in the image data is infected with the novel coronavirus, infected with another virus or bacterium causing a respiratory infection, or not infected with any such virus or bacterium. Further, the inventor of the present application proposes a trained model as another aspect of the third invention. This trained model is generated, for example, by the above-mentioned generation method, and is the core of the automatic diagnostic apparatus described later.
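The "teacher data to trained model" step above can be illustrated schematically. The specification does not fix a learning algorithm, so this sketch uses a deliberately simple nearest-centroid classifier over hypothetical per-site color features (redness of the pharyngeal wall, the arches, and the uvula); a practical implementation would learn from the images themselves, for example with a convolutional neural network. All feature values, names, and labels below are illustrative only:

```python
# Toy "trained model" generation: teacher data pairs a feature vector
# (wall redness, arch redness, uvula redness) with one of the three
# categories named in the text, and training computes per-class
# centroids. Estimation returns the nearest centroid's label.

def train(teacher_data):
    """teacher_data: list of (feature_vector, label) pairs."""
    sums, counts = {}, {}
    for features, label in teacher_data:
        acc = sums.setdefault(label, [0.0] * len(features))
        for i, f in enumerate(features):
            acc[i] += f
        counts[label] = counts.get(label, 0) + 1
    return {label: [s / counts[label] for s in acc]
            for label, acc in sums.items()}

def estimate(model, features):
    def dist(label):  # squared Euclidean distance to a class centroid
        return sum((a - b) ** 2 for a, b in zip(model[label], features))
    return min(model, key=dist)

# Hypothetical teacher data reflecting the color findings in the text:
teacher = [
    ([0.9, 0.9, 0.2], "novel_coronavirus"),          # red wall/arches, pale uvula
    ([0.9, 0.8, 0.9], "other_respiratory_pathogen"), # all sites reddened
    ([0.2, 0.2, 0.3], "not_infected"),               # all sites light pink
]
model = train(teacher)
assert estimate(model, [0.85, 0.9, 0.15]) == "novel_coronavirus"
```

The key point is the data flow: labeled feature vectors in, a reusable estimator out; the choice of learner is left open by the specification.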
  • the trained model according to the third invention is generated by machine learning that uses, as teacher data, image data about still images in which an imaging site including a subject's posterior pharyngeal wall, left and right anterior palatine arches, and uvula is captured, together with data including information on whether the subject is a person infected with the novel coronavirus, a person infected with another virus or bacterium causing a respiratory infection, or a person not infected with any such virus or bacterium.
  • it is a trained model for automatic diagnosis of respiratory infections (sometimes referred to simply as a "trained model") which, from features such as the color or vascular pattern of each of the posterior pharyngeal wall, the left and right anterior palatine arches, and the uvula in a still image based on input image data, estimates whether the subject whose imaging site appears in the image data is infected with the novel coronavirus, infected with another virus or bacterium causing a respiratory infection, or not infected with any such virus or bacterium.
  • in persons not infected with any virus or bacterium causing a respiratory infection, the posterior pharyngeal wall, the left and right anterior palatine arches, and the uvula are usually pink, the color of the uvula generally being a light pink.
  • in persons infected with a virus or bacterium causing a respiratory infection, the mucous membranes of the posterior pharyngeal wall, the left and right anterior palatine arches, and the uvula become inflamed and turn red.
  • in persons infected with the novel coronavirus, by contrast, the posterior pharyngeal wall and the left and right anterior palatine arches are both reddish, but the uvula has the same light pink color as in a healthy person not infected with any such virus or bacterium, or an even lighter pink close to white. These differences among near-white light pink, pink, and red are caused by the proliferation and dilation of capillaries.
  • by using these features, the subject can be distinguished among persons infected with the novel coronavirus, persons infected with other viruses or bacteria causing respiratory infections, and persons not infected with any such virus or bacterium. That is, when image data of a still image in which the subject's imaging site is captured is input to the trained model described above, the model uses the color or vascular-pattern features of the imaging site in that still image to classify the subject into one of the three categories: a person infected with the novel coronavirus, a person infected with another virus or bacterium causing a respiratory infection, or a person not infected with any such virus or bacterium.
  • the image data may be data about a non-reflective image in which the imaging site is captured.
  • the method of capturing a non-reflective image is as described above.
  • the non-reflective image is suitable for accurately grasping color and the vascular pattern, which is an image of the blood vessels located slightly below the surface of the mucous membrane. Therefore, a trained model generated by such a generation method can make the above three-way distinction more accurately.
  • in that case, what should be input to this trained model is the image data of a non-reflective image in which the imaging site is captured.
  • the image data may also be a pair consisting of image data of a non-reflective image in which the imaging site is captured and image data of a reflected image in which the imaging site is captured.
  • the method of capturing a non-reflective image is as described above.
  • non-reflective images are suitable for accurately grasping colors and blood vessel images.
  • the reflected image is suitable for accurately grasping the shape of the surface of the object to be imaged, such as unevenness.
  • the trained model generated by such a generation method can make the above-mentioned three distinctions more accurately.
  • the feature amounts may include both the color and the vascular pattern of each of the posterior pharyngeal wall, the left and right anterior palatine arches, and the uvula, and may further include the surface unevenness of those sites.
  • the inventor of the present application also proposes an automatic diagnostic device for respiratory infections using the above-mentioned trained model.
  • the automatic diagnostic device is an automatic diagnostic device for respiratory infections using any of the trained models described so far, and comprises: receiving means for receiving image data, which is data about a still image in which the imaging site of a subject to be diagnosed for a respiratory infection is captured;
  • extraction means for extracting, as feature amounts, the color or vascular pattern of each of the posterior pharyngeal wall, the left and right anterior palatine arches, and the uvula in the still image based on the image data; and
  • output means for inputting the extracted feature amounts into the trained model and outputting an estimation result of whether the subject is infected with the novel coronavirus, infected with another virus or bacterium causing a respiratory infection, or not infected with any such virus or bacterium.
  • with this automatic diagnostic device, by inputting image data equivalent to that used when generating the trained model into the receiving means, it becomes possible to determine, without human judgment, to which of the above three categories the subject whose imaging site is captured in the still image based on that image data belongs.
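The receive / extract / estimate / output flow of the automatic diagnostic device might be organized as below. The feature extractor and trained model are stubs whose thresholds are invented purely for illustration, not values from the specification; in the actual device the extractor would measure the color or vascular pattern of each site in the still image, and the model would be one generated as described above:

```python
# Skeleton of the automatic diagnostic device's pipeline:
# receiving means -> extraction means -> trained model -> output.

class AutomaticDiagnosticDevice:
    def __init__(self, trained_model, extractor):
        self.model = trained_model   # trained model from the recording medium
        self.extractor = extractor   # feature-extraction means

    def receive(self, image_data):   # receiving means
        self.image_data = image_data

    def diagnose(self):              # extraction + estimation + output
        features = self.extractor(self.image_data)
        return self.model(features)  # estimation result for the output means

# Stubs standing in for the real components (hypothetical logic):
def stub_extractor(image_data):
    return image_data["site_redness"]   # e.g. [wall, arches, uvula]

def stub_model(features):
    wall, arches, uvula = features
    if wall > 0.5 and arches > 0.5:
        # red wall/arches with a pale uvula matches the coronavirus pattern
        return "novel_coronavirus" if uvula < 0.4 else "other_respiratory_pathogen"
    return "not_infected"

device = AutomaticDiagnosticDevice(stub_model, stub_extractor)
device.receive({"site_redness": [0.8, 0.7, 0.2]})
result = device.diagnose()
assert result == "novel_coronavirus"
```

In a deployed system the `receive` step would accept image data over the Internet (for example from a smartphone) and the estimation result would be sent back the same way.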
  • this automatic diagnostic device can be used, for example, as follows: the subject himself or herself, or a person in charge of one of the facilities mentioned above, captures an image using a smartphone or a tablet (if the camera built into the smartphone is used in the manner of the cameras of the first and second inventions, the smartphone or the like may also serve as the camera of the first and second inventions of the present application) and sends the image data over the Internet; the device then performs automatic diagnosis based on the received image data and immediately returns the estimation result data, for example over the Internet, to the smartphone that sent the image data.
  • the inventor of the present application also proposes an automatic diagnostic method for respiratory infections performed by a computer having a recording medium on which any of the trained models described above is recorded.
  • the effect of this method is equal to the effect of an automated diagnostic device for respiratory infections.
  • An exemplary method is an automated diagnostic method for respiratory infections performed by a computer with a recording medium recording any of the trained models described above.
  • this method includes: a reception process, executed by the computer, of receiving image data, which is data about a still image in which the imaging site of a subject to be diagnosed with a respiratory infection is captured;
  • an extraction process of extracting, as feature amounts, the color or vascular pattern of each of the posterior pharyngeal wall, the left and right anterior palatine arches, and the uvula in the still image based on the image data; and
  • an output process of inputting the feature amounts extracted in the extraction process into the trained model and outputting an estimation result of whether the subject whose imaging site is captured in the image data is a person infected with the novel coronavirus, a person infected with another virus or bacterium causing a respiratory infection, or a person not infected with any such virus or bacterium.
  • the inventor of the present application also proposes, as still another aspect of the third invention, a computer program for operating a predetermined computer as an automatic diagnostic device for respiratory infections using the above-mentioned trained model.
  • the effect of this computer program is equal to that of the automatic diagnostic device for respiratory infections: by allowing a predetermined, for example general-purpose, computer to function as an automatic diagnostic device for respiratory infections using the trained model, such a device can be obtained easily.
  • An example computer program is a computer program for operating a predetermined computer as an automatic diagnostic device for respiratory infections using any of the trained models described above.
  • the computer program causes the computer to execute: a reception process of receiving image data, which is data about a still image in which the imaging site of a subject to be diagnosed with a respiratory infection is captured;
  • an extraction process of extracting, as feature amounts, the color or vascular pattern of each of the posterior pharyngeal wall, the left and right anterior palatine arches, and the uvula in the still image based on the image data; and
  • an output process of inputting the feature amounts extracted in the extraction process into the trained model and outputting an estimation result of whether the subject whose imaging site is captured in the image data is a person infected with the novel coronavirus, a person infected with another virus or bacterium causing a respiratory infection, or a person not infected with any such virus or bacterium.
  • FIG. 1 is a perspective view of a camera according to the first embodiment and a computer device used in combination with the camera.
  • FIG. 2 is a horizontal cross-sectional view of the head of the camera shown in FIG. 1.
  • FIG. 3 is a vertical cross-sectional view of the camera shown in FIG. 1.
  • a block diagram shows the functional blocks generated inside the computer device shown in FIG. 1. Further figures explain the behavior of the illumination light and the reflected light when an image is taken with the camera shown in FIG. 1 using the first light source, and when an image is taken with that camera using the second light source.
  • a functional block diagram shows the functional blocks generated in the computer device constituting the learning device of the third embodiment.
  • a figure conceptually shows the contents of the data recorded in the learning data recording unit included in that functional block diagram, and another figure shows the overall structure of an automatic diagnosis system including the automatic diagnostic device of the third embodiment.
  • a functional block diagram shows the functional blocks generated in the computer device constituting the automatic diagnostic device of that system.
  • FIG. 1 shows an overview of the camera 1 and its accessory computer device 100 in this embodiment.
  • the camera 1 is used in combination with the computer device 100.
  • the computer device 100 has a function of sending image data captured by the camera 1 to a place where image diagnosis is performed, for example, via the Internet. This function may be implemented in the camera 1 itself as in the case of the modification 1 described later.
  • the place where the image diagnosis is performed is, if the diagnosis is performed by a doctor or other person, a device (for example, a computer device) that the doctor or other person can access; if the diagnosis is performed by a machine, it is the automatic diagnostic device that performs the diagnosis, or a device (for example, a computer device) that the automatic diagnostic device can access.
  • the camera 1 in this embodiment is for capturing an image used for diagnosing a human respiratory tract infection. That is, the subject in this embodiment is a human being.
  • the camera 1 in this embodiment is capable of capturing a part of the upper respiratory tract of a human subject, more specifically a range including the posterior wall of the pharynx, the left and right anterior palatine arches, and the uvula.
  • the camera 1 in this embodiment includes a grip portion 10 that can be held by hand, and a head portion 20 provided on the upper front side of the grip portion 10.
  • the grip portion 10 and the head portion 20 are made of, for example, an opaque resin, but not limited to this. At least the head 20 is usually made of an opaque material. The inside of them is hollow, and various parts are built in or attached to the inside thereof as described later.
  • since the grip portion 10 and the head 20 contain the parts, they function as a de facto case housing those parts.
  • the grip portion 10 has a shape that can be held by one hand, and is not limited to this, but in this embodiment, it has a rod shape or a columnar shape.
  • the head 20 is, though not limited to this, a cylinder, which could be called a hood, having a substantially rectangular cross section that slightly expands toward the tip (it may instead be tubular, or may slightly narrow toward the tip), and is made of an opaque material, such as an opaque resin.
  • An opening 21 is provided at the tip of the head 20 (the side facing the face of the subject at the time of use, the front side in FIG. 1).
  • the opening 21 in this embodiment is a rectangle whose four corners are rounded.
  • the opening 21 may be square or circular.
  • the camera is used in such a way that the subject himself or herself, or a person in charge of a facility other than the subject (for example, a school, a restaurant, a department store, a supermarket, or a movie theater), grips the grip portion 10 and points the edge of the opening 21 at the tip of the head 20 toward the subject's mouth.
  • a switch 15 is provided at an appropriate position of the grip portion 10 or the head 20, for example, on the front side of the grip portion 10.
  • the switch 15 is an operator for performing an input that triggers the start of image pickup by an image pickup device, which will be described later, and in this embodiment, it is a push button that is pushed into the grip portion 10 to perform input. However, if the image pickup device can generate an input signal that triggers the start of image pickup, the switch 15 does not need to be a push button type. More specifically, the function of the switch 15 may be implemented in the computer device 100.
  • FIG. 2 shows a horizontal cross-sectional view of the head 20 of the camera 1
  • FIG. 3 shows a vertical cross-sectional view of the entire camera 1.
  • a lens 11, a light source 31 including a first light source 31a and a second light source 31b, and a first polarizing plate 32 are provided on the front side of the inside of the head 20.
  • the first light source 31a and the second light source 31b may have the same configuration.
  • the first light source 31a and the second light source 31b play different roles in this embodiment: they are lit alternately, and the images captured by the image sensor 12 while each is lit differ from each other, as will be described later.
  • unless otherwise noted, what is described for the first light source 31a applies to the second light source 31b as well. In the figure, the light from the first light source 31a passes through the first polarizing plate 32. Further, a second polarizing plate 33 and an image sensor 12 are provided on the rear side of the inside of the head 20.
  • the first light source 31a and the second light source 31b are each provided in plurality, though not limited to this. Both emit natural light as illumination light. The first light source 31a may be any known or well-known light source, such as a light bulb or an LED; the same applies to the second light source 31b.
  • the first light source 31a in this embodiment is an LED, and the same applies to the second light source 31b.
  • the first light source 31a and the second light source 31b can be the same when viewed as hardware, which is the case in this embodiment.
  • Both the first light source 31a and the second light source 31b emit light toward the oral cavity of the subject to be photographed with a certain degree of directivity.
  • the directions of the first light source 31a and the second light source 31b are adjusted so that the light emitted from them is emitted in an appropriate direction.
  • Both the first light source 31a and the second light source 31b are fixed to a substrate fixed inside the head 20 by an appropriate method, but the illustration of the substrate is omitted.
  • the wavelength of the illumination light emitted by the first light source 31a is not particularly limited, but is preferably in the visible light region; in this embodiment, the first light source 31a emits general white light.
  • the wavelength of the illumination light is preferably limited to a range in which, when natural light passes through two polarizing plates arranged along the optical axis with their polarization directions orthogonal to each other, namely the first polarizing plate 32 and the second polarizing plate 34 described later, almost all of the natural light is extinguished (for example, 90% or more is extinguished). It is also possible to arrange a filter that limits the wavelength of the illumination light between the first light source 31a and the second light source 31b on one side and the oral cavity to be imaged on the other.
  • the first light source 31a and the second light source 31b in this embodiment are not limited to this, but are alternately turned on after the switch 15 is pressed, as will be described later.
  • the number of the first light sources 31a in this embodiment is not limited to this, but is plural.
  • each first light source 31a is, though not limited to this, located near the opening 21 of the head 20; in this embodiment, they are located slightly inside the walls of the head 20 on both lateral sides of the opening 21 in the horizontal direction in FIG. 1.
  • in this embodiment, the plurality of first light sources 31a located on the right side and the left side of the opening 21 are arranged linearly, more precisely in the vertical direction in FIG. 1.
  • the number of the first light sources 31a arranged in the vertical direction on the right side of the opening 21 is four, and the same applies to the left side of the opening 21.
  • the number of the first light sources 31a arranged in the vertical direction on the right side and the left side of the opening 21 does not have to be four, and more specifically, it does not have to be a plurality.
  • as described above, the second light sources 31b are provided above and below the first light sources 31a, which are arranged vertically, four each, on the right side and the left side of the opening 21.
  • the position and number of the second light source 31b are not limited to this.
  • first polarizing plates 32, which are polarizing plates for linearly polarizing the illumination light, i.e. the natural light, emitted from the first light sources 31a, are arranged one in front of each column of first light sources 31a.
  • the first polarizing plate 32 in this embodiment is a vertically long rectangle as shown in FIGS. 1 and 3.
  • the first polarizing plate 32 is designed so that, as will be described later, all of the illumination light that contributes to the imaging performed by the image sensor 12 passes through the first polarizing plate 32; the vertical length and the horizontal width of the first polarizing plate 32 are designed from that viewpoint.
  • a partition wall 19 which is a donut-shaped wall that does not allow light to pass through is provided in the head 20 so that the outer edge thereof is in contact with the inner peripheral surface of the head 20 without a gap.
  • the partition wall 19 divides the space inside the head 20 into a space on the front side and a space on the rear side of the lens 11.
  • Therefore, the light emitted from the first light sources 31a and the second light sources 31b cannot reach the space behind the lens 11, in which the image sensor 12 exists, without first being reflected by the object to be imaged.
  • The second light sources 31b are located above and below the first polarizing plate 32, so the illumination light emitted from them irradiates the subject's oral cavity as natural light, without passing through the first polarizing plate 32.
  • In short, the illumination light emitted from the first light sources 31a heads toward the subject's oral cavity as linearly polarized light, while the illumination light emitted from the second light sources 31b heads toward the oral cavity either as natural light, or after passing through a polarizing plate (not shown), which may be called a third polarizing plate, whose vibration direction is orthogonal to that of the first polarizing plate 32, so that it becomes linearly polarized light whose plane of polarization is orthogonal to that of the linearly polarized light generated by passing through the first polarizing plate 32.
  • The installation positions of the first light sources 31a and the second light sources 31b, the shape and position of the first polarizing plate 32, and so on can be changed as appropriate from the above example.
  • the illumination light that has passed through the first polarizing plate 32 becomes linearly polarized light having a plane of polarization in a predetermined direction.
  • The plane of polarization of the illumination light, which is linearly polarized light that has passed through the first polarizing plate 32, is, though not limited to this, the horizontal direction in the figure in this embodiment.
  • The lens 11 captures the reflected light generated when the illumination light is reflected by an object to be imaged in the oral cavity, which in this embodiment includes the posterior wall of the pharynx, the left and right anterior palatine arches, and the uvula, and forms an image of it on the image pickup element 12.
  • the lens 11 does not have to be a single lens, and may include necessary optical components other than the lens 11, such as a mirror and a prism. Further, the lens 11 may have a function of magnifying an image, or may have a function other than that.
  • The lens 11 in this embodiment is, though not limited to this, a magnifying lens.
  • the image pickup device 12 captures the reflected light and performs an image pickup.
  • the image pickup device 12 of this embodiment may be a known or well-known image sensor 12 as long as it can perform color imaging, and may be a commercially available one.
  • the image pickup device 12 can be configured by, for example, a CCD (Charge Coupled Device) or a CMOS (Complementary Metal Oxide Semiconductor).
  • the image sensor 12 generates image data obtained by image pickup.
  • The image captured by the image pickup device 12 may or may not be a moving image; in this embodiment, image data of still images are generated continuously at predetermined time intervals. The predetermined time is, for example, 20 milliseconds to 50 milliseconds, so that image data of 20 to 50 still images per second are generated by the image sensor 12.
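  The relationship above between capture interval and still images per second can be sketched as follows; the function name is an illustrative assumption, not an identifier from the embodiment.

```python
def frames_per_second(interval_ms):
    """Number of still images generated per second for a given
    capture interval in milliseconds (1 second = 1000 ms)."""
    return 1000 / interval_ms

# The embodiment's example range of 20 ms to 50 ms per frame:
print(frames_per_second(50))  # 20.0 still images per second
print(frames_per_second(20))  # 50.0 still images per second
```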
  • In effect, therefore, the image sensor 12 generates image data of a moving image.
  • the image pickup device 12 is connected to the circuit 13 by a connection line 12a.
  • the circuit 13 is also connected to the first light source 31a and the second light source 31b by a connection line (not shown).
  • the circuit 13 receives the image data generated by the image pickup element 12 from the image pickup element 12 via the connection line 12a.
  • the circuit 13 performs necessary processing such as brightness adjustment and analog / digital conversion if necessary prior to the output of the video signal to the outside.
  • the circuit 13 is also configured to control the timing of turning on and off the first light source 31a and the second light source 31b connected by a connection line (not shown). The timing of turning on and off the first light source 31a and the second light source 31b will be described later.
  • the circuit 13 is connected to the output terminal 14 via the connection line 13a.
  • the output terminal 14 is connected to the computer device 100 via a cable 16 (not shown).
  • The connection between the cable 16 and the output terminal 14 may be made in any way, but it may be convenient to use a standardized connection method such as USB.
  • the output of the moving image data generated by the camera 1 to the computer device 100 does not need to be performed by wire as in this embodiment.
  • In that case, the camera 1 is provided, instead of the output terminal 14, with a known or well-known transmission / reception mechanism for communicating with the computer device 100 by, for example, Bluetooth™.
  • the circuit 13 is also connected to the switch 15 described above by a connection line 15a.
  • The circuit 13, on receiving the input signal from the switch 15, causes the image pickup element 12 to start imaging, and turns the first light source 31a and the second light source 31b on and off at the timings described later.
  • The second polarizing plate 33 is a polarizing plate made of the same material as the first polarizing plate 32, but its function differs from that of the first polarizing plate 32, which converts the illumination light, the natural light emitted from the first light sources 31a, into linearly polarized light.
  • The second polarizing plate 33 has the function, described in detail later, of blocking the linearly polarized component contained in the surface-reflected light, that is, the light produced when the illumination light emitted from the first light sources 31a and linearly polarized by the first polarizing plate 32 is reflected at the surface of the object.
  • the first polarizing plate 32 and the second polarizing plate 33 are oriented so that the planes of polarization of the linearly polarized light passing through them are orthogonal to each other. That is, the polarization directions of the first polarizing plate 32 and the second polarizing plate 33 are orthogonal to each other.
  • In this embodiment, the plane of polarization of the linearly polarized light generated by passing through the second polarizing plate 33 is the vertical direction in the figure.
  • the image light which is the reflected light from the object, can reach the image pickup device 12 only after passing through the lens 11 and further through the second polarizing plate 33. In other words, the image light that cannot pass through the second polarizing plate 33 is not captured by the image pickup device 12.
  • the computer device 100 is a general computer and may be a commercially available one.
  • the computer device 100 in this embodiment is a commercially available tablet, but not limited to this.
  • the computer device 100 does not necessarily have to be a tablet as long as it has the configurations and functions described below, and may be a smartphone, a notebook personal computer, a desktop personal computer, or the like. Even in the case of a smartphone or a personal computer, the computer device 100 may be commercially available. Examples of tablets include the iPad (trademark) series manufactured and sold by Apple Japan LLC. Examples of smartphones include the iPhone (trademark) series manufactured and sold by the company.
  • the appearance of the computer device 100 is shown in FIG.
  • the computer device 100 includes a display 101.
  • the display 101 is for displaying a still image or a moving image, generally both of them, and a known or well-known display 101 can be used.
  • the display 101 is, for example, a liquid crystal display.
  • the computer device 100 also includes an input device 102.
  • the input device 102 is for the user to make a desired input to the computer device 100.
  • A known or well-known device can be used as the input device 102.
  • the input device 102 of the computer device 100 in this embodiment is a button type, but the input device 102 is not limited to this, and a numeric keypad, a keyboard, a trackball, a mouse, or the like can also be used.
  • If the display 101 is a touch panel, the display 101 also functions as the input device 102, which is the case in this embodiment.
  • the hardware configuration of the computer device 100 is shown in FIG.
  • the hardware includes a CPU (central processing unit) 111, a ROM (read only memory) 112, a RAM (random access memory) 113, and an interface 114, which are connected to each other by a bus 116.
  • the CPU 111 is an arithmetic unit that performs arithmetic operations.
  • the CPU 111 executes a process described later, for example, by executing a computer program recorded in the ROM 112 or the RAM 113.
  • the hardware may be equipped with an HDD (hard disk drive) or other large-capacity recording device, and the computer program may be recorded on the large-capacity recording device.
  • the computer program referred to here is for causing a computer device 100 that operates in cooperation with the camera 1 to execute a process of transmitting image data generated as described later to a place where image diagnosis is executed.
  • This computer program may be pre-installed in the computer device 100, or may be installed after the computer device 100 is shipped.
  • the computer program may be installed in the computer device 100 via a predetermined recording medium such as a memory card (not shown), or may be installed via a network such as a LAN or the Internet.
  • the ROM 112 records computer programs and data necessary for the CPU 111 to execute a process described later.
  • The computer programs recorded in the ROM 112 are not limited to this; if the computer device 100 is a tablet, the computer programs and data necessary for the computer device 100 to function as a tablet, for example for executing e-mail, are also recorded.
  • the computer device 100 is also capable of browsing a home page on the Internet, and may be equipped with a publicly known or well-known web browser for making it possible.
  • the RAM 113 provides a work area required for the CPU 111 to perform processing. In some cases, for example, at least a part of the above-mentioned computer program or data may be recorded.
  • the interface 114 exchanges data between the CPU 111, the RAM 113, and the like connected by the bus 116 and the outside.
  • the display 101 described above and the input device 102 are connected to the interface 114.
  • the data about the operation content input from the input device 102 is input to the bus 116 from the interface 114.
  • image data for displaying an image on the display 101 is output from the interface 114 to the display 101.
  • the interface 114 also receives image data from the cable 16 described above (more precisely, from an input terminal (not shown) included in the computer device 100 connected to the cable 16).
  • the image data input from the cable 16 is sent from the interface 114 to the bus 116.
  • a transmission / reception mechanism is connected to the interface 114.
  • the transmission / reception mechanism is capable of performing short-range wireless communication, for example, when the computer device 100 wirelessly communicates with the camera 1. Further, the transmission / reception mechanism is capable of performing Internet communication, and can transmit image data received from the camera 1 to a place where image diagnosis is performed via an Internet line.
  • a functional block as shown in FIG. 5 is generated inside the computer device 100.
  • The functional blocks described below may be generated by the above-mentioned computer program alone, which makes the computer device 100 function as described above, or may be generated by that computer program in collaboration with the OS and other computer programs installed in the computer device 100.
  • an input unit 121, a control unit 122, an image data recording unit 123, and an output unit 124 are generated in relation to the functions of the present invention.
  • the input unit 121 receives data from the interface 114.
  • the data received by the input unit 121 is the processing selection data input from the input device 102 and the image data input from the cable 16.
  • the processing selection data is data for selecting whether to record the image data in the computer device 100 or to transmit the image data from the computer device 100 to a place where the image diagnosis is performed.
  • The control unit 122 receives the above-mentioned processing selection data and image data. When the processing selection data specifies recording the image data in the computer device 100, the control unit 122 records the image data in the image data recording unit 123; when the processing selection data specifies transmitting the image data to the place where image diagnosis is performed, the control unit 122 passes the image data to the output unit 124.
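  The routing described above can be sketched as follows; the class and method names are illustrative assumptions, not identifiers from the embodiment, and plain lists stand in for the image data recording unit 123 and the output unit 124.

```python
class ControlUnit:
    """Minimal sketch of how the control unit 122 routes image data
    according to the processing selection data."""

    RECORD = "record"      # record image data in the computer device
    TRANSMIT = "transmit"  # send image data to the image-diagnosis site

    def __init__(self, recorder, transmitter):
        self.recorder = recorder        # stands in for recording unit 123
        self.transmitter = transmitter  # stands in for output unit 124

    def handle(self, processing_selection, image_data):
        if processing_selection == self.RECORD:
            self.recorder.append(image_data)
        elif processing_selection == self.TRANSMIT:
            self.transmitter.append(image_data)
        else:
            raise ValueError("unknown processing selection")

recorded, sent = [], []
unit = ControlUnit(recorded, sent)
unit.handle(ControlUnit.RECORD, "frame-001")
unit.handle(ControlUnit.TRANSMIT, "frame-002")
print(recorded)  # ['frame-001']
print(sent)      # ['frame-002']
```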
  • the image data recording unit 123 is a recording area that is normally a part of the RAM 113 for recording image data as described above.
  • the image data is recorded in the image data recording unit 123 together with the identification information for identifying which subject the image data belongs to.
  • the output unit 124 has a function of outputting data including image data to the outside via the interface 114. For example, when the processing selection data selects to transmit the image data from the computer device 100 to the place where the image diagnosis is performed, the output unit 124 transmits the image data received from the control unit 122 via the interface 114.
  • the transmission mechanism sends the image data to the place where the image diagnosis is performed via the Internet. Further, the output unit 124 sends the image data to the display 101 via the interface 114 as needed. In this case, the image based on the image data is displayed on the display 101.
  • When the camera 1 and the computer device 100 are used, first, as described above, the camera 1 and the computer device 100 are connected by the cable 16. Further, the user (who may be the subject himself or herself) launches the above-mentioned computer program recorded in the computer device 100, operates the input device 102, and inputs the processing selection data.
  • As described above, the processing selection data is data for selecting whether to record the image data in the computer device 100 or to send the image data from the computer device 100 to the place where image diagnosis is performed.
  • the display 101 of the computer device 100 displays an image prompting the user to input the processing selection data, and the user inputs the processing selection data according to the instruction by the image.
  • the display of such an image on the display 101 is performed by, for example, data generated by the control unit 122 and sent from the control unit 122 to the display 101 via the output unit 124 and the interface 114.
  • the processing selection data input from the input device 102 is input to the control unit 122 via the interface 114 and the input unit 121.
  • the processing selection data input by the user selects to transmit the image data from the computer device 100 to the place where the image diagnosis is performed.
  • the subject himself, a doctor, or a user who is in charge of the facility grips the grip portion 10 of the camera 1, and the opening 21 in the head 20 thereof is directed toward the mouth of the subject. Then, the user presses the switch 15. Then, the circuit 13 causes the image pickup device 12 to start imaging, and starts lighting the first light source 31a and the second light source 31b alternately. Either the first light source 31a or the second light source 31b may be turned on first, but in this embodiment, it is assumed that the first light source 31a is turned on first.
  • The imaging timing of the image pickup device 12 in this embodiment is, though not limited to this, at intervals of 50 milliseconds.
  • The timing of imaging by the image pickup device 12 and the timing of turning the first light source 31a and the second light source 31b on and off are synchronized with each other. More specifically, if the first light source 31a is on and the second light source 31b is off at the timing when the image sensor 12 takes an image and the image data of a certain still image is generated, then at the timing when the next image data is generated the second light source 31b is on and the first light source 31a is off. At the timing when the next image data after that is generated, the first light source 31a is again on and the second light source 31b is off, and so on, the two light sources being lit alternately frame by frame.
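  The alternation described above can be sketched as follows; the function name and the string labels for the two light sources are illustrative assumptions, not identifiers from the embodiment.

```python
def light_source_schedule(num_frames, first_lit="31a"):
    """Return which light source is lit for each captured frame.

    The circuit alternates the two light sources frame by frame, in
    sync with the image sensor; in the embodiment the first light
    source 31a is lit for the first frame.
    """
    other = "31b" if first_lit == "31a" else "31a"
    return [first_lit if i % 2 == 0 else other for i in range(num_frames)]

print(light_source_schedule(5))  # ['31a', '31b', '31a', '31b', '31a']
```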
  • Under normal conditions, the posterior pharyngeal wall, left and right anterior palatine arches, and uvula in the subject's oral cavity are hidden behind the tongue and cannot be seen. Therefore, the subject toward whose oral cavity the opening 21 of the camera 1 is directed inhales while vocalizing. It is preferable to inhale as strongly as possible while vocalizing as loudly as possible. Then, for a short time of, for example, around 0.5 seconds, the base of the tongue goes down, and the posterior wall of the pharynx, the left and right anterior palatine arches, and the uvula become visible.
  • the image sensor 12 of the camera 1 continuously generates image data of a still image including the posterior wall of the pharynx, the left and right anterior palatine arches, and the uvula.
  • The image data generated continuously every 50 milliseconds thus alternate between data based on the illumination light emitted from the first light source 31a and data based on the illumination light emitted from the second light source 31b.
  • FIG. 6 conceptually shows what kind of reflected light is captured by the image pickup device 12 by the illumination light from the first light source 31a.
  • FIG. 6(A) shows the behavior of the surface-reflected light, which is reflected at the outermost surface of an object wet with body fluid such as saliva (the mucous membranes of the above-mentioned four parts), and FIG. 6(B) shows the behavior of the internally reflected light, which enters slightly inside from the surface of the object and is, in effect, reflected at the surface of the object itself excluding the body fluid.
  • In FIG. 6, a straight line drawn inside a thick-lined circle conceptually indicates the direction of the plane of polarization of the illumination light or reflected light at that point, and lines drawn radially inside a circle indicate that the linear polarization has been disturbed.
  • the illumination light emitted from the first light source 31a passes through the first polarizing plate 32.
  • the illumination light that has passed through the first polarizing plate 32 becomes linearly polarized light.
  • The plane of polarization of the linearly polarized light that is the illumination light in that case is the horizontal direction in the figure. Up to this point, FIGS. 6(A) and 6(B) are the same.
  • the illumination light that is linearly polarized light that has passed through the first polarizing plate 32 hits the object X and becomes the reflected light from the object X.
  • the surface reflected light (reflected light generated by being reflected on the surface of the body fluid) ideally maintains its polarized state.
  • Therefore, the surface-reflected light, which is linearly polarized, is blocked by the second polarizing plate 33, the plane of polarization of the linearly polarized light it generates from natural light being orthogonal to that of the first polarizing plate 32, and does not reach the image pickup element 12 (FIG. 6(A)).
  • On the other hand, the polarization of the internally reflected light (light that has passed through the body fluid and been reflected at or slightly below the surface of the mucous membrane) is disturbed.
  • Of the internally reflected light, the component vibrating in the direction orthogonal to the plane of polarization of the linearly polarized light contained in the surface-reflected light passes through the second polarizing plate 33; therefore, about half of the light reaches the image pickup element 12 (FIG. 6(B)).
  • the light used for the image pickup device 12 to take an image by using the illumination light from the first light source 31a is only the internally reflected light.
  • This means that the image generated when the image sensor 12 takes an image using the illumination light derived from the first light source 31a is a matte, non-reflective image without glare.
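  The cross-polarization effect described above follows Malus's law. The sketch below is an illustrative idealization, not part of the embodiment: the surface-reflected light keeps the illumination's polarization and is blocked by the crossed second polarizing plate, while the depolarized internally reflected light passes at about half intensity.

```python
import math

def transmitted_fraction(polarization_angle_deg=None):
    """Fraction of incident light passing the second polarizing plate,
    which is crossed (at 90 degrees) with the first polarizing plate.

    polarization_angle_deg: angle of the light's plane of polarization
    relative to the first plate, or None for depolarized light.
    """
    analyzer_deg = 90.0  # second plate is orthogonal to the first
    if polarization_angle_deg is None:
        return 0.5  # depolarized light: cos^2 averages to 1/2
    theta = math.radians(analyzer_deg - polarization_angle_deg)
    return math.cos(theta) ** 2  # Malus's law

# Surface reflection keeps the illumination's polarization (0 deg):
print(transmitted_fraction(0.0))   # effectively zero: glare is blocked
# Internal reflection is depolarized by scattering in the tissue:
print(transmitted_fraction(None))  # 0.5: about half reaches the sensor
```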
  • FIG. 7 conceptually shows what the reflected light captured by the image pickup device 12 is by the illumination light from the second light source 31b.
  • FIGS. 7(A) and 7(B) show the behavior of the surface-reflected light and the behavior of the internally reflected light, respectively, as in FIGS. 6(A) and 6(B).
  • The symbols indicating the orientation of the plane of polarization of the illumination light or reflected light, and the symbol indicating that the linear polarization has been disturbed, follow those in FIG. 6.
  • the illumination light emitted from the second light source 31b does not pass through the first polarizing plate 32. Therefore, the illumination light emitted from the second light source 31b heads toward the object X as natural light. Illumination light, which is natural light, hits the object X and becomes reflected light from the object X.
  • both the surface reflected light and the internally reflected light remain natural light (FIGS. 7A and 7B).
  • Therefore, about half of the light amount of each of the surface-reflected light and the internally reflected light passes through the second polarizing plate 33 and reaches the image pickup device 12 (FIGS. 7(A) and 7(B)).
  • The light used when the image pickup device 12 captures an image using the illumination light from the second light source 31b thus includes both surface-reflected light and internally reflected light. This means that the image generated in that case is a glaring, glossy, reflective image.
  • the data about the still image continuously generated by the image sensor 12 is sent to the circuit 13 via the connection line 12a, and after the circuit 13 performs appropriate processing (brightness adjustment, etc.) as necessary. It reaches the output terminal 14 via the connection line 13a. Then, it reaches the computer device 100 from the output terminal 14 via the cable 16.
  • the image data is a series of still image data.
  • the still image data is sent from the interface 114 to the control unit 122 via the input unit 121, and further reaches the output unit 124.
  • The output unit 124 sends image data to the transmission / reception mechanism via the interface 114, and the image data is sent from the transmission / reception mechanism to the place where image diagnosis is performed via the Internet. The image data alternate, frame by frame, between non-reflective images and reflective images.
  • The place where image diagnosis is performed is, for example, a computer device that the doctor can access if a doctor performs the image diagnosis, or an automatic diagnosis device if an automatic diagnosis device performs the diagnosis.
  • The image data sent to the place where image diagnosis is performed may be one or several still images each of the reflective image and the non-reflective image, or may be a large number of such still images, with reflective and non-reflective images alternating, which could be called moving image data.
  • The doctor or the automatic diagnostic device can, as appropriate, extract only the non-reflective image data from the received image data to generate a non-reflective moving image, extract only the reflective image data to generate a reflective moving image, and select from the non-reflective and reflective image data one or several still images suitable for image diagnosis.
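  The extraction described above amounts to de-interleaving the alternating frame stream. The sketch below is illustrative; the function name and the assumption that the first frame is non-reflective (first light source 31a lit first, as in the embodiment) are ours, not from the source.

```python
def split_alternating_frames(frames):
    """Split an interleaved frame sequence into its two streams.

    The camera alternates light sources per frame, so even-indexed
    frames are assumed non-reflective (first light source, crossed
    polarizers) and odd-indexed frames reflective (second light source).
    """
    non_reflective = frames[0::2]  # frames 0, 2, 4, ...
    reflective = frames[1::2]      # frames 1, 3, 5, ...
    return non_reflective, reflective

frames = ["NR0", "R0", "NR1", "R1", "NR2"]
nr, r = split_alternating_frames(frames)
print(nr)  # ['NR0', 'NR1', 'NR2']
print(r)   # ['R0', 'R1']
```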
  • the selection of a still image suitable for such image diagnosis may be performed by the camera 1 or the computer device 100, and as a technique for making such a selection, a technique as described in the second embodiment is used. It can also be applied.
  • The result of the diagnosis performed by the doctor or the automatic diagnostic device can be returned from the doctor or the automatic diagnostic device to the computer device 100 via the Internet.
  • When the processing selection data specifies recording the image data in the computer device 100, the image data is recorded in the image data recording unit 123 in the computer device 100. This image data is sent to the place where image diagnosis is performed at an appropriate time and used for image diagnosis.
  • The camera 1 of the first embodiment is not used alone, but in combination with a computer device 100 that transmits the image data generated by the camera 1 to the place where image diagnosis is performed via the Internet.
  • a smartphone or tablet which is an example of the current computer device 100, is generally equipped with a camera. Therefore, it is also possible to make the computer device 100 in the first embodiment also have the function of the camera 1 in the first embodiment. Modification 1 is such an example.
  • the computer device 100 of the first modification may be the same as that described in the first embodiment.
  • a lens 104 and a light source 105 are provided on the back side of the computer device 100 shown in FIG. 1 as shown in FIG. 8 (A).
  • the lens 104 is exposed from the housing of the computer device 100.
  • the lens 104 forms a part of the camera included in the computer device 100.
  • the image pickup element is a CCD or CMOS, and can capture a moving image, that is, a continuous still image.
  • the image pickup interval of the still image can be appropriately adjusted by the function of the computer program installed in the computer device 100, and of course, the image pickup interval can be the same as that of the image pickup device of the first embodiment.
  • the image captured by the image pickup device can be displayed on the display 102 in substantially real time by a known or well-known mechanism.
  • the light source 105 is exposed on the surface of the housing of the computer device 100.
  • The illumination light emitted by the light source 105 may be the same as in the first embodiment. However, Modification 1 has only one light source, which remains lit continuously while the image pickup is performed by the image pickup element.
  • the computer device 100 of the first modification is used in combination with the polarizing plate 140 as shown in FIG. 8 (B).
  • the polarizing plate 140 is configured to include, but is not limited to, a first polarizing plate 141 and a second polarizing plate 142, both of which are rectangular, and when both are combined, a horizontally long rectangular shape is formed.
  • the first polarizing plate 141 and the second polarizing plate 142 may or may not be integrated, but in this embodiment, they are integrated.
  • the first polarizing plate 141 corresponds to the first polarizing plate 32 in the first embodiment
  • the second polarizing plate 142 corresponds to the second polarizing plate 33 in the first embodiment.
  • the polarization directions of the first polarizing plate 141 and the second polarizing plate 142 are orthogonal to each other.
  • The polarization direction of the first polarizing plate 141 is along the upper side of the housing, and the polarization direction of the second polarizing plate 142 is along the lateral side of the housing.
  • the horizontal and vertical lines attached to the first polarizing plate 141 and the second polarizing plate 142 in FIG. 8A indicate their polarization directions.
  • the polarizing plate 140 can be detachably fixed to the back surface of the housing of the computer device 100.
  • The polarizing plate 140 in Modification 1 is fixed to the housing of the computer device 100 using a known or well-known adhesive applied to its back surface.
  • When the polarizing plate 140 is fixed to the housing of the computer device 100, the first polarizing plate 141 is located on the front side of the light source 105, like the first polarizing plate 32 of the first embodiment.
  • Likewise, the second polarizing plate 142 is located on the front side of the lens 104, and hence of the image pickup element (not shown), like the second polarizing plate 33 of the first embodiment.
  • The sizes and shapes of the first polarizing plate 141 and the second polarizing plate 142 in the polarizing plate 140 are designed so that they can be positioned in front of the light source 105 and the lens 104 of the computer device 100 while maintaining this positional relationship.
  • the hardware configuration of the computer device 100 of the first modification is the same as that of the computer device 100 of the first embodiment shown in FIG.
  • the image data input to the interface 114 is sent from an image pickup element (not shown) in the computer device 100, unlike the case of the first embodiment sent from the camera 1 outside the computer device 100.
  • the functional block generated in the computer device 100 of the first modification is the same as in the case of the first embodiment, and is as shown in FIG.
  • The function of each functional block in Modification 1 may be substantially the same as the corresponding function in the first embodiment, and is so in this modification.
  • the continuous still image being captured by the image sensor is displayed as a de facto moving image on the display 101 of the computer device 100 in substantially real time.
  • This can be realized by configuring the control unit 122, which receives the image data about the still images continuously generated by the image pickup element via the interface 114 and the input unit 121, to send the image data to the display 101 via the output unit 124 and the interface 114. Also in the first embodiment, a configuration in which the image being captured by the camera 1 is displayed on the display 101 of the computer device 100 in substantially real time can be adopted.
  • the method of using the computer device 100 of the first modification is as follows. First, the processing selection data is input using the input device 102 of the computer device 100 in the same manner as described in the first embodiment.
  • the content of the processing selection data can be the same two ways as in the first embodiment, which is the case in this embodiment.
  • the subject himself or a doctor or the like who takes an image of the subject points the lens 104, which forms a part of the camera of the computer device 100, into the oral cavity of the subject.
  • The subject himself, a doctor, or the like operates, for example, a button (not shown) displayed on the screen of the computer device 100 to indicate the intention to start image pickup by the image pickup element, whereby image pickup is started.
  • the light source 105 is turned on at the same time as the imaging is started.
  • the image sensor can continuously capture still images, or capture a moving image, including the posterior wall of the pharynx, the left and right anterior palatine arches, and the uvula of the subject.
  • the image sensor and the lens 104 included in the computer device 100 are capable of capturing an image of the above range.
  • the image sensor continuously generates image data of still images including the posterior wall of the pharynx, the left and right anterior palatine arches, and the uvula, for example every 50 milliseconds. The image data thus continuously generated become the image data of non-reflective images, as described above.
  • the data of the still images continuously generated by the image sensor are sent to the control unit 122 via the interface 114 and the input unit 121 and, as described above, are sent from the control unit 122 to the display 101 via the output unit 124 and the interface 114 in substantially real time.
  • the display 101 displays a de facto moving image of the non-reflective images in substantially real time while the image sensor is capturing them. As shown in FIG. 8, if, in the housing of the computer device 100, the display 101 is behind the lens 104, that is, if the camera in the computer device 100 is a so-called out-camera, a third party other than the subject can always confirm whether the subject's posterior pharyngeal wall, left and right anterior palatine arches, and uvula are reflected in the image.
  • if the camera in the computer device 100 adopts a so-called in-camera configuration, in which the lens 104 and the display 101 are on the same surface of the housing, the subject himself or herself, holding the computer device 100 and performing the imaging, can confirm the moving image being captured on the display 101.
  • the image data of the non-reflective images are also sent from the output unit 124 to the transmission / reception mechanism and, depending on the content of the processing selection data, are sent from there via the Internet, as in the first embodiment, to the place where the image diagnosis is performed.
  • if the processing selection data specifies recording the image data in the computer device 100, the image data are recorded in the image data recording unit 123 in the computer device 100.
  • the method of image diagnosis based on the image data is as described in the first embodiment, but the image diagnosis in the case of the first modification uses only non-reflective images.
  • the camera 1 of the second embodiment is for capturing images for image diagnosis, like the camera 1 of the first embodiment.
  • the image pickup target of the camera 1 of the second embodiment is not necessarily in the oral cavity; it may be in the nasal cavity, and indeed it need not be a part of the body at all.
  • the image pickup target of the camera 1 of the second embodiment may be, for example, a landscape, a sports scene, a car, a train, an animal, or the like, like a general single-lens reflex camera or a mirrorless camera.
  • the camera 1 of the second embodiment is particularly suitable for imaging a subject that moves quickly.
  • the camera 1 of the second embodiment is assumed to be a camera for performing image diagnosis for respiratory infections as in the case of the first embodiment.
  • the camera 1 of the second embodiment is configured as shown in FIG. 1 as in the case of the first embodiment, and is used together with the computer device 100 as in the case of the first embodiment.
  • the configuration of the camera 1 of the second embodiment can be the same as that of the first embodiment except for the circuit 13, and is so in this embodiment. The differences are that the circuit 13 of the camera 1 of the second embodiment has a built-in overwrite recording unit, which did not exist in the first embodiment, and that the operation of the camera 1 when the switch 15 is operated differs from that of the first embodiment.
  • the image sensor 12 and the lens 11 of the camera 1 of the second embodiment need only be capable of capturing at least the posterior wall of the pharynx or a part thereof, but in this embodiment, as in the first embodiment, they can image the posterior wall of the pharynx, the left and right anterior palatine arches, and the uvula.
  • the overwrite recording unit is a recording medium for recording the image data continuously generated by the image sensor 12. Specifically, the image data of the still images continuously generated by the image pickup element 12 are recorded for a predetermined time while the oldest data are overwritten in order.
  • the overwrite recording unit is a ring buffer in this embodiment, but the function of recording the image data of the still images continuously generated by the image sensor 12 for a predetermined time, while overwriting the oldest data in order, can also be achieved by something other than a ring buffer, for example by a RAM, which is a general memory.
  • the time interval at which the image pickup device 12 takes an image and generates image data of a still image can follow that of the first embodiment.
  • the image data recorded in the overwrite recording unit cover a span of 0.3 seconds to 1 second, preferably 0.5 seconds ± 0.1 seconds, going back into the past from the current time. If image data going back by that amount of time can be selected as described later, there is a high probability that the still image the person who operated the switch 15, described later, wanted to take will exist in the overwrite recording unit.
  • the image data of the still images for the past 0.5 seconds are thus always kept recorded in the overwrite recording unit. If the image sensor 12 takes an image and generates image data of a still image every 50 milliseconds, the overwrite recording unit always holds 10 items of image data. Although not limited to this, in this embodiment it is assumed that image data for the past 0.5 seconds are recorded in the overwrite recording unit and that the image data generation interval of the image sensor is 50 milliseconds.
  • the switch 15 of the first embodiment has the function of causing the image pickup element 12 to start imaging and causing the first light source 31a and the second light source 31b to start turning on and off.
  • the switch 15 in the second embodiment has a function of stopping the overwriting of the image data recorded in the overwrite recording unit at the moment when the switch 15 is operated.
  • the image data generated at the imaging timing of the image pickup element 12 immediately after the moment the switch 15 is operated may, however, still be recorded in the overwrite recording unit, overwriting the oldest image data.
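The behaviour of the overwrite recording unit described above can be sketched as a ring buffer that keeps only the most recent frames and stops overwriting when the switch 15 is operated. This is an illustrative sketch only, not the patented implementation; the class and method names are invented for illustration.

```python
from collections import deque

class OverwriteRecorder:
    """Sketch of the overwrite recording unit: a ring buffer that keeps
    only the most recent frames, overwriting the oldest in order."""
    def __init__(self, span_s=0.5, interval_s=0.05):
        # 0.5 s of history at one frame every 50 ms -> 10 slots
        self.capacity = round(span_s / interval_s)
        self.frames = deque(maxlen=self.capacity)
        self.frozen = False  # becomes True when the switch is operated

    def record(self, frame):
        if not self.frozen:            # overwriting stops once frozen
            self.frames.append(frame)  # deque drops the oldest itself

    def on_switch(self):
        """Switch operation: stop overwriting, preserving the past frames."""
        self.frozen = True

rec = OverwriteRecorder()
for i in range(25):   # 25 frames arrive; only the last 10 survive
    rec.record(i)
rec.on_switch()
rec.record(99)        # ignored: overwriting has stopped
```

Using `deque(maxlen=...)` gives the "overwrite the oldest in order" behaviour for free; a plain RAM array with a wrapping write index, as the text notes, would work just as well.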
  • the computer device 100 of the second embodiment is basically the same as the computer device 100 of the first embodiment.
  • the hardware configuration is the same as that of the computer device 100 of the first embodiment shown in FIG.
  • the functional block generated in the computer device 100 of the second embodiment is the same as that of the first embodiment, and is as shown in FIG.
  • the function of each functional block in the second embodiment may be substantially the same as the corresponding function in the first embodiment, and is so in this embodiment.
  • the continuous still images being imaged by the image sensor 12 of the camera 1 are displayed as a de facto moving image on the display 101 of the computer device 100 in substantially real time.
  • the image data of the still images continuously generated by the image sensor 12 of the camera 1 is sent to the computer device 100 via the cable 16.
  • the image data sent to the computer device 100 is received by the control unit 122 via the interface 114 and the input unit 121, and sent to the display 101 via the output unit 124 and the interface 114.
  • as a result, the continuous still images being captured by the image sensor 12 of the camera are displayed as a de facto moving image on the display 101 of the computer device 100 in substantially real time.
  • the control unit 122 of the computer device 100 also has a function as the selection means in the second invention of the present application; that function will be described later. To realize it, as will also be described later, the control unit 122 of the computer device 100 can read out all the image data recorded in the overwrite recording unit of the camera 1 via the cable 16, the interface 114, and the input unit 121.
  • the camera 1 and the computer device 100 of the second embodiment are connected by the cable 16 as in the case of the first embodiment.
  • when the switch 15 provided on the camera 1, or a switch different from the switch 15, is operated, the image sensor 12 of the camera 1 starts image pickup, and the first light source 31a and the second light source 31b start to light alternately.
  • the same processing selection data as in the case of the first embodiment is input.
  • the image sensor 12 of the camera 1 connected to the computer device 100 of the second embodiment alternately generates, every 50 milliseconds, image data of a still image which is a non-reflective image and image data of a still image which is a reflected image. The image data are generated one after another and recorded in the overwrite recording unit. Once 0.5 seconds have passed since the first image data was generated, as described above, the overwrite recording unit always holds the 10 items of data for the past 0.5 seconds. Further, the image data generated one after another are sent from the camera 1 to the computer device 100, and the still images captured by the image sensor 12 of the camera 1 are displayed one after another on the display 101 in substantially real time. This means that a de facto moving image captured by the image pickup element 12 is displayed on the display 101 in substantially real time.
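The alternation between the two light sources every 50 milliseconds can be sketched as the following frame schedule. Which source yields the non-reflective image and which the reflected image is an assumption made purely for illustration; the tuple layout is likewise invented.

```python
def frame_schedule(n_frames, interval_ms=50):
    """Sketch: alternate the two light sources frame by frame, so the
    stream interleaves non-reflective and reflected still images.
    Returns (timestamp_ms, active_source, image_kind) tuples."""
    schedule = []
    for i in range(n_frames):
        if i % 2 == 0:
            source, kind = "first light source 31a", "non-reflective image"
        else:
            source, kind = "second light source 31b", "reflected image"
        schedule.append((i * interval_ms, source, kind))
    return schedule

# the 0.5 s worth of frames held in the overwrite recording unit
frames = frame_schedule(10)
```

With 10 frames retained, the buffer then contains five of each image kind, matching the five-plus-five split described later for the display step.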
  • the subject himself or a third party such as a doctor who images the subject grips the camera 1 with one hand and the computer device 100 with the other hand. Using both hands, the opening 21 of the camera 1 is directed into the oral cavity of the subject, and the display 101 of the computer device 100 is viewed.
  • the subject to be imaged or a third party such as a doctor can take an image with the camera 1 while confirming the moving image at that time displayed on the display 101.
  • when the subject inhales aloud, the posterior wall of the pharynx and its surroundings in the oral cavity become observable from outside the oral cavity.
  • a subject performing the imaging, or a third party such as a doctor, operates the switch 15 after confirming that moment.
  • the timing of operating, that is, pressing, the switch 15 may be the moment the person performing the imaging judges suitable for imaging the posterior wall of the pharynx, the left and right anterior palatine arches, and the uvula. In other words, the person performing the imaging may operate the switch 15 with the same feeling as operating the shutter means of a general camera.
  • the input from the switch 15 is transmitted to the circuit 13, and the overwrite recording unit stops overwriting the image data.
  • the image data of the still images captured by the image pickup element 12 during the 0.5 seconds before the moment the switch 15 was pressed remain in the overwrite recording unit.
  • the image data of the still images remaining in the overwrite recording unit are highly likely to include image data of a still image that reflects the posterior wall of the pharynx, the left and right anterior palatine arches, and the uvula and is in focus. This is because the data recorded here, except at least for the last item, are image data of still images captured by the camera 1 before the switch 15 was pressed, in a state where camera shake is unlikely to occur. In general, human reaction speed has a delay of about 0.2 seconds (about 0.1 seconds even for the exceptionally quick), but a plurality of images covering the 0.5 seconds before the switch 15 was operated remain, so the moment the operator intended to capture is likely to be among them.
  • the span of past image data to be left in the overwrite recording unit should therefore be at least 0.3 seconds and at most 1 second, preferably about 0.5 seconds ± 0.1 seconds.
  • a third party such as a subject or a doctor operates the computer device 100.
  • the subject, the doctor, or the like operates the input device 102 according to the instructions of the computer device 100 to load into the computer device 100 all the image data recorded in the overwrite recording unit of the camera 1.
  • the data of the instruction "read the image data recorded in the overwrite recording unit", input from the input device 102, are sent to the control unit 122 via the interface 114 and the input unit 121, and the control unit 122, receiving them, reads all the image data recorded in the overwrite recording unit of the camera 1, that is, 10 items of image data in this embodiment, via the cable 16.
  • the subject, the doctor, or the like can then display still images based on the ten items of image data, for example one at a time, on the display 101 of the computer device 100.
  • the display of the still image on the display 101 can be performed by the control unit 122 sending image data to the display 101 via the output unit 124 and the interface 114.
  • Five of the ten still images displayed on the display 101 are still images that are non-reflective images, and the remaining five are still images that are reflective images.
  • the subject, the doctor, or the like selects, from the still images which are non-reflective images and the still images which are reflected images, at least one of each in which the imaging range is correct and which is in focus. Such a selection can be made by input from the input device 102.
  • when the processing selection data is "transmitting image data from the computer device 100 to the place where image diagnosis is performed", at least one (and at most about three) item of image data for a still image which is a non-reflective image and at least one (at most about three) for a still image which is a reflected image, selected in this way, are sent, as in the first embodiment, from the control unit 122 to the place where the image diagnosis is performed via the output unit 124, the interface 114, the transmission / reception mechanism, and the Internet.
  • when the processing selection data is "recording the image data in the image data recording unit 123", at least one item of image data of a still image which is a non-reflective image and at least one of a still image which is a reflected image are sent from the control unit 122 to the image data recording unit 123 and recorded there, as in the first embodiment. The subsequent use of the image data is also the same as in the first embodiment.
  • in the above, the selection of the image data to be transmitted to the place where the image diagnosis is performed (or recorded in the image data recording unit 123) is performed by manual input at the discretion of the operator of the camera 1 and the computer device 100.
  • however, the reading of the image data from the overwrite recording unit into the control unit 122 after the switch 15 is operated, or the selection of image data from the plurality of image data read from the overwrite recording unit into the control unit 122, may be performed automatically by the control unit 122.
  • as conditions for the control unit 122 to select image data from the plurality of image data read from the overwrite recording unit into the control unit 122, the conditions that the imaging range is correct and that the image is in focus can be used. Whether the imaging range is correct can easily be determined using known or well-known image recognition techniques, and whether the image is in focus can easily be determined using known or well-known edge detection techniques. Artificial intelligence, or a mechanism using artificial intelligence, can also be used for such determinations.
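As one concrete form of the edge-detection-based focus check mentioned above, the variance of a Laplacian response is a common focus measure: sharp images have strong, varied edges, blurred ones do not. The sketch below assumes a grayscale image given as a list of rows and is illustrative only, not the patented method.

```python
def focus_score(img):
    """Focus measure usable as a selection condition: variance of a
    4-neighbour Laplacian. Sharper (in-focus) images give larger scores.
    `img` is a list of rows of grayscale values."""
    h, w = len(img), len(img[0])
    responses = []
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            lap = (img[y-1][x] + img[y+1][x] + img[y][x-1] + img[y][x+1]
                   - 4 * img[y][x])
            responses.append(lap)
    mean = sum(responses) / len(responses)
    return sum((r - mean) ** 2 for r in responses) / len(responses)

# a sharp edge vs. a smooth ramp over the same intensity range
sharp = [[0] * 4 + [100] * 4 for _ in range(8)]
blurred = [[min(100, 25 * x) for x in range(8)] for _ in range(8)]
```

Among the ten candidate frames, the control unit could simply keep the frame(s) with the highest score, optionally after an image-recognition check that the pharyngeal region is actually in frame.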
  • in the above, the selection, from the plurality of image data recorded in the overwrite recording unit, of the image data to be transmitted to the place where the image diagnosis is performed (or recorded in the image data recording unit 123) was performed on the computer device 100. However, it is also possible to make this selection on the camera 1 side. In that case, if the person operating the camera 1 makes the selection manually, the camera 1 itself must be provided with a display for displaying, for example one by one, all of the images of the plurality of image data recorded in the overwrite recording unit immediately after the switch 15 is operated, and with an input device for making the input that selects the image data; both will be needed.
  • on the other hand, if such a selection is to be made automatically, and a mechanism for automatically performing the above-mentioned processing executed by the control unit 122 of the computer device 100 is provided, for example, in the circuit 13 of the camera 1, the above-mentioned display and input device can be omitted.
  • in contrast, since the computer device 100 includes both a display and an input device in the first place, selecting, from the plurality of image data recorded in the overwrite recording unit, the image data to be transmitted to the place where the image diagnosis is performed (or recorded in the image data recording unit 123) is easy on the computer device 100.
  • an automatic diagnostic device for respiratory infections using a trained model (hereinafter, may be simply referred to as an “automatic diagnostic device”) will be described.
  • the automatic diagnostic device includes a trained model, as described below. Therefore, in order to obtain an automatic diagnostic device, it is first necessary to obtain a trained model.
  • the device necessary for obtaining the trained model will be referred to as a learning device for convenience.
  • a learning device is required in addition to the automatic diagnostic device.
  • since the required hardware configuration can be the same for the automatic diagnostic device and the learning device, it is also possible to combine them into one device by appropriately choosing the computer programs installed in them.
  • Both the automatic diagnostic device and the learning device include a computer device.
  • This computer device is different from the computer device 100 in the first embodiment and the second embodiment.
  • the computer devices included in the automatic diagnostic device and the learning device can have the same configuration, and are the same in this embodiment.
  • the configuration of the automatic diagnostic device is the same, but for now the hardware configuration of the learning device will be explained.
  • the hardware of the computer device of the third embodiment is, for the most part, no different from the hardware configuration of the computer device 100 shown in FIG. 4 and described in the first embodiment.
  • the computer device according to the third embodiment, however, includes an HDD, an SSD (Solid State Drive), or another large-capacity recording device.
  • the hardware of the computer device constituting the learning device will be described with the reference numeral of the large-capacity recording device set to 115 and the other reference numerals kept as shown in FIG.
  • the hardware constituting the learning device includes a CPU 111, a ROM 112, a RAM 113, an interface 114, and a large-capacity recording device 115, which are connected to each other by a bus 116.
  • a transmission / reception mechanism is connected to the interface 114 to enable communication via the Internet, but this function is idle while the computer device functions as a learning device and works only while the computer device functions as an automatic diagnostic device.
  • an input device, a display, or the like may also be connected to the interface 114, and in most cases they are, but in a computer device functioning as a learning device or an automatic diagnostic device they have little significance, so they will not be described further.
  • the functions of the CPU 111, ROM 112, RAM 113, interface 114, and bus 116 are the same as those in the first embodiment.
  • the large-capacity recording device 115 records a computer program and necessary data for making the computer device function as a learning device. At least a part of this computer program and data may be recorded in the ROM 112 and the RAM 113.
  • the computer program for causing the computer device to execute the processing, described later, that is necessary for the computer device to function as the learning device may be pre-installed in the computer device or may be installed afterwards.
  • the computer program may be installed in a computer device via a predetermined recording medium such as a memory card (not shown), or may be installed via a network such as a LAN or the Internet.
  • the computer program may include an OS and other necessary computer programs in addition to the above computer program.
  • the computer device described above executes the processing necessary for functioning as a learning device by the following functional blocks.
  • a functional block as shown in FIG. 9 is generated inside the computer device.
  • the following functional blocks may be generated by the function of the above-mentioned computer program alone, for causing the computer device to execute the processing described below that is necessary for it to function as the learning device, or may be generated by that computer program in cooperation with an OS or other computer programs installed in the computer device.
  • a learning data recording unit 311, a feature amount extraction unit 312, a learning model unit 313, and a correctness determination unit 314 are generated in relation to the functions of the present invention.
  • the learning data recording unit 311 records data for use by the learning model unit 313 for machine learning.
  • the data recorded in the learning data recording unit 311 is conceptually shown in FIG.
  • a large number of image data are recorded in the learning data recording unit 311.
  • the image data are image data 400, which are data of still images of an imaged region of the subject including the posterior wall of the pharynx, the left and right anterior palatine arches, and the uvula.
  • image data for an image 400A, which is a still image of a non-reflective image, and image data for an image 400B, which is a still image of a reflected image, taken at about the same position at about the same time (e.g., generated and selected using the camera 1 and the computer device 100 described in the second embodiment), are recorded in a paired state.
  • to the image data 400, data of a tag 410 is attached in an associated state, indicating whether the subject whose imaged region is reflected in the image data 400 is a person infected with the novel coronavirus, a person infected with other viruses or bacteria related to respiratory infections, or a person not infected with viruses or bacteria related to respiratory infections.
  • the data of the tag 410 may include other information, such as the subject's sex, age, race, and degree of symptoms. In the example shown in FIG. 10, sex information is included in the tag 410.
  • in the images in FIG. 10, the symbol A is attached to the tongue, the symbol B to the posterior wall of the pharynx, the symbol C to the left and right anterior palatine arches, and the symbol D to the uvula.
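One way to hold a paired entry of the learning data recording unit 311 together with its tag 410 might look like the following sketch. All field names are hypothetical, and the three class strings merely paraphrase the tags described above; they are not the patent's own identifiers.

```python
from dataclasses import dataclass, field
from typing import Optional

CLASSES = (
    "infected with the novel coronavirus",
    "infected with other respiratory viruses/bacteria",
    "not infected",
)

@dataclass
class TrainingRecord:
    """Sketch of one entry in the learning data recording unit 311:
    a paired non-reflective / reflected still image plus its tag 410."""
    image_400a: bytes               # non-reflective still image
    image_400b: Optional[bytes]     # reflected still image (may be absent)
    infection_tag: str              # one of CLASSES
    extra: dict = field(default_factory=dict)  # e.g. sex, age, symptoms

rec = TrainingRecord(b"...", b"...", CLASSES[0], {"sex": "female"})
```

Making `image_400b` optional mirrors the case, noted below, where only non-reflective images and tags are recorded.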
  • the image data 400 may consist only of the data of the image 400A of the non-reflective image and the tag 410; in that case, image data of the image 400B of the reflected image do not exist in the learning data recording unit 311.
  • the feature amount extraction unit 312 has the function of reading the image data from the learning data recording unit 311, together with the tags associated with them, and extracting the feature amounts of the image 400A of the non-reflective image and the image 400B of the reflected image.
  • the feature amount to be extracted from the image 400A of the non-reflective image and the feature amount to be extracted from the image 400B of the reflected image may or may not be the same.
  • the feature amounts to be extracted from the image 400A of the non-reflective image are the color, the blood vessel image, or both, in each of the posterior pharyngeal wall, the left and right anterior palatine arches, and the uvula reflected in the image 400A.
  • the unevenness of the posterior pharyngeal wall, or of the posterior pharyngeal wall, the left and right anterior palatine arches, and the uvula, in the image 400A of the non-reflective image may also be used as a feature amount.
  • that is, the feature amounts to be extracted from the image 400A of the non-reflective image can be both the color and the blood vessel image of each of the posterior pharyngeal wall, the left and right anterior palatine arches, and the uvula reflected in the image 400A, and the unevenness of the posterior pharyngeal wall.
  • the feature amounts to be extracted from the image 400B of the reflected image can likewise be both the color and the blood vessel image in each of the posterior pharyngeal wall, the left and right anterior palatine arches, and the uvula reflected in the image 400B, and the unevenness of the posterior pharyngeal wall, as in the case of the image 400A of the non-reflective image.
  • from the image 400A of the non-reflective image, which shows the posterior wall of the pharynx, the left and right anterior palatine arches, and the uvula, the unevenness of the posterior pharyngeal wall, or of the posterior pharyngeal wall, the left and right anterior palatine arches, and the uvula, can also be extracted as feature amounts.
  • in this embodiment, both the color and the blood vessel image in each of the posterior pharyngeal wall, the left and right anterior palatine arches, and the uvula are extracted as feature amounts from the image 400A of the non-reflective image, and the unevenness of the posterior pharyngeal wall is also extracted as a feature amount.
  • when the image data recorded in the learning data recording unit 311 consist only of images 400A of non-reflective images, the feature amount extraction unit 312 extracts, using only the image 400A, any of the above-mentioned feature amounts described for the image 400A.
  • when extracting the colors as feature amounts, first, the ranges of the posterior wall of the pharynx, the left and right anterior palatine arches, and the uvula are extracted from the image 400A. Then, the color of each of those ranges, for example of the pixels included in each range, can be extracted as a feature amount.
  • the technique of Lab color space may be used for color recognition.
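For reference, converting an sRGB pixel into the Lab color space mentioned above can be done as in this sketch (D65 white point, standard sRGB matrix). It is an illustrative conversion, not part of the patent; in practice the pixels whose mean color is taken would come from the image-recognition step that delimits each anatomical range.

```python
import math  # not strictly needed; cube roots are done with ** (1/3)

def srgb_to_lab(r, g, b):
    """Convert one 8-bit sRGB pixel to CIELAB (D65 white point)."""
    def lin(c):  # undo the sRGB gamma
        c /= 255.0
        return c / 12.92 if c <= 0.04045 else ((c + 0.055) / 1.055) ** 2.4
    rl, gl, bl = lin(r), lin(g), lin(b)
    # linear RGB -> XYZ (sRGB matrix), normalised by the D65 white point
    x = (0.4124 * rl + 0.3576 * gl + 0.1805 * bl) / 0.95047
    y = (0.2126 * rl + 0.7152 * gl + 0.0722 * bl) / 1.00000
    z = (0.0193 * rl + 0.1192 * gl + 0.9505 * bl) / 1.08883
    def f(t):
        return t ** (1 / 3) if t > 0.008856 else 7.787 * t + 16 / 116
    fx, fy, fz = f(x), f(y), f(z)
    return 116 * fy - 16, 500 * (fx - fy), 200 * (fz - fy)

def mean_lab(pixels):
    """Mean Lab colour of a region, e.g. the posterior pharyngeal wall."""
    labs = [srgb_to_lab(*p) for p in pixels]
    n = len(labs)
    return tuple(sum(c[i] for c in labs) / n for i in range(3))
```

In Lab, the a* channel grows with redness, which is convenient for the "flushed and red" versus "pale pink" distinctions the learning described later relies on.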
  • the blood vessel image of the posterior pharyngeal wall, the left and right anterior palatine arches, and the uvula can be extracted.
  • when extracting the blood vessel image as a feature amount, the contours of the blood vessels are extracted, for example by known or well-known edge detection techniques, whereby the blood vessel image is extracted as a feature.
  • as blood vessel features, it is possible to extract at least one of, for example, the thickness of the blood vessels (such as the average thickness or the maximum thickness), the number of blood vessels, the number of blood vessels thinner than a certain thickness, and the total length of the blood vessels in each range of the posterior wall of the pharynx, the left and right anterior palatine arches, and the uvula of the image 400A. The unevenness of the posterior wall of the pharynx can also be extracted as a feature amount from the image 400B of the reflected image using a known or well-known technique.
  • in that case, the range of the posterior wall of the pharynx is identified from within the image 400B, as in the case of the image 400A of the non-reflective image. Then, within that range, for example, portions shining white due to the reflection of light on body fluid are counted as convex portions, whereby the unevenness of the posterior wall of the pharynx can be detected.
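The counting of white-shining portions described above can be sketched as thresholding bright pixels and counting connected patches of them. The threshold value and the toy image are illustrative assumptions; a real implementation would first restrict the search to the pharyngeal-wall range.

```python
def count_highlights(img, threshold=240):
    """Sketch of the unevenness estimate for the reflected image 400B:
    count connected patches of pixels bright enough to be specular
    reflections from body fluid on convex portions."""
    h, w = len(img), len(img[0])
    seen = [[False] * w for _ in range(h)]
    patches = 0
    for sy in range(h):
        for sx in range(w):
            if img[sy][sx] >= threshold and not seen[sy][sx]:
                patches += 1            # new bright patch found
                stack = [(sy, sx)]      # flood-fill its 4-neighbours
                seen[sy][sx] = True
                while stack:
                    y, x = stack.pop()
                    for ny, nx in ((y-1, x), (y+1, x), (y, x-1), (y, x+1)):
                        if (0 <= ny < h and 0 <= nx < w and
                                img[ny][nx] >= threshold and not seen[ny][nx]):
                            seen[ny][nx] = True
                            stack.append((ny, nx))
    return patches

wall = [
    [10, 250, 10,  10, 10],
    [10, 250, 10, 255, 10],
    [10,  10, 10, 255, 10],
]
```

The resulting patch count (here 2 for the toy image) can then serve as a coarse unevenness feature for the learning model.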
  • the feature amount extraction unit 312 sends the feature amounts extracted for the image 400A, which is a non-reflective image, and the image 400B, which is a reflected image, to the learning model unit 313, together with the data of the tag 410 attached to the image data.
  • the learning model unit 313 is a general artificial intelligence that performs supervised learning.
  • the learning model unit 313 inputs the feature amount received from the feature amount extraction unit 312 together with the tag 410 attached to the image data, and performs learning.
  • by receiving as input, for example, tens to thousands each of feature amounts tagged "person infected with the novel coronavirus", feature amounts tagged "person infected with other viruses / bacteria related to respiratory infections", and feature amounts tagged "person not infected with viruses / bacteria related to respiratory infections", the learning model unit 313 learns the feature amounts of the image data of subjects of each of those three classes.
  • when the feature amount of image data is input in an untagged state, the learning model unit 313 outputs, based on the parameters configured by that training, an estimation result indicating whether the subject whose imaged region is reflected in the image data is infected with the novel coronavirus, infected with other viruses / bacteria related to respiratory infections, or not infected with viruses / bacteria related to respiratory infections.
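The learning model unit 313 is described only as general artificial intelligence performing supervised learning, so the sketch below uses a deliberately minimal stand-in, a nearest-centroid classifier, to show the input/output contract: tagged feature vectors in during training, a three-way class estimate out for untagged features. The toy features and their meaning are invented for illustration, loosely echoing the redness observations described later.

```python
LABELS = ("novel coronavirus", "other respiratory virus/bacterium", "non-infected")

def train(tagged_features):
    """'Training': compute one centroid (mean feature vector) per tag."""
    sums, counts = {}, {}
    for features, tag in tagged_features:
        acc = sums.setdefault(tag, [0.0] * len(features))
        for i, v in enumerate(features):
            acc[i] += v
        counts[tag] = counts.get(tag, 0) + 1
    return {tag: [s / counts[tag] for s in acc] for tag, acc in sums.items()}

def estimate(model, features):
    """Estimation: the tag whose centroid is nearest to the untagged features."""
    def dist2(centroid):
        return sum((a - b) ** 2 for a, b in zip(features, centroid))
    return min(model, key=lambda tag: dist2(model[tag]))

# toy features: (redness of pharyngeal wall, redness of uvula) -- illustrative only
data = [((0.9, 0.2), LABELS[0]), ((0.8, 0.1), LABELS[0]),
        ((0.9, 0.9), LABELS[1]), ((0.8, 0.8), LABELS[1]),
        ((0.1, 0.1), LABELS[2]), ((0.2, 0.2), LABELS[2])]
model = train(data)
```

A real implementation would of course use a richer model (e.g. a neural network) and far more training samples, as the text's "tens to thousands" suggests.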
  • the correctness determination unit 314 is used to further improve the accuracy of the estimation result of the learning model unit 313.
  • a feature amount is input, without its tag 410 attached, to the learning model unit 313 after it has learned to some extent; the estimation result is output, and that result is sent to the correctness determination unit 314.
  • the correctness determination unit 314 reads the tag 410 attached to the feature amount sent to the learning model unit 313 from the learning data recording unit 311 and determines whether or not the estimation result is correct.
  • the correctness determination unit 314 modifies the parameters in the learning model unit 313, according to the correctness of the estimation result or the proportion of correct estimation results, so that the estimation results become more correct.
  • the correctness determination unit 314, and the above-mentioned processing it performs, can also be omitted. When the accuracy of the estimation results of the learning model unit 313 hardly improves any further, it can be determined that the learning model unit 313 has completed machine learning.
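The correctness check and the stopping criterion ("accuracy hardly improves") can be sketched as follows: compare each estimation result with the withheld tag 410, track accuracy over rounds, and stop when the improvement falls below some margin. The plateau threshold `eps` and the function names are illustrative assumptions.

```python
def accuracy(model, tagged_features, estimate_fn):
    """Sketch of the correctness determination unit 314: compare each
    estimation result with the tag 410 held back for checking."""
    correct = sum(1 for feats, tag in tagged_features
                  if estimate_fn(model, feats) == tag)
    return correct / len(tagged_features)

def learning_plateaued(history, eps=0.005):
    """Treat machine learning as complete once accuracy stops improving."""
    return len(history) >= 2 and history[-1] - history[-2] < eps
```

For example, an accuracy history of 0.60, 0.80, 0.92, 0.921 would be judged as still improving after the third round but plateaued after the fourth.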
  • The learning model unit 313 learns that in persons not infected with any virus or bacterium related to respiratory infection, the posterior wall of the pharynx, the left and right anterior palatine arches, and the uvula all have normal color.
  • It learns that in persons infected with a virus or bacterium related to respiratory infection other than the novel coronavirus, the mucosa of the posterior wall of the pharynx, the left and right anterior palatine arches, and the uvula is reddish in all three regions, whereas in persons infected with the novel coronavirus the posterior wall of the pharynx and the left and right anterior palatine arches are reddish but the uvula retains a pale pink similar to that of a healthy person not infected with any virus or bacterium, or in some cases an even lighter pink closer to white. It also learns that the capillaries are proliferated and dilated in the reddened regions described above.
  • The learning model unit 313 that has completed machine learning is a trained model for automatic diagnosis of respiratory infections.
  • The automatic diagnosis device is composed of a computer device. As shown in FIG. 11, the automatic diagnosis device 300 is used while connected, via the Internet 400 (a network), to the computer device 100 to which the camera 1 is connected.
  • The automatic diagnosis device 300, in cooperation with the camera 1 and the computer device 100, constitutes an automatic diagnosis system.
  • A plurality of computer devices 100, each connected to a camera 1, can be connected to the automatic diagnosis device 300.
  • A computer device 100 configured as described in the first modification, which incorporates the function of the camera 1, may also be connected to the automatic diagnosis device via the Internet 400. Here the camera 1 is assumed to be the one described in the second embodiment.
  • The image data of the non-reflective image and the reflected image of the subject's imaged portion, captured by the camera 1, is assumed to be sent via the computer device 100 and the Internet 400.
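The description does not specify a wire format for sending the image pair over the Internet 400. As one plausible sketch only, the pair could be packed into a JSON body with base64-encoded images; all field names here are assumptions.

```python
# Hypothetical payload format for sending the paired still images from
# the computer device 100 to the automatic diagnosis device 300.
# The JSON field names are assumptions, not part of the patent.
import base64
import json

def build_payload(non_reflective_jpeg: bytes, reflected_jpeg: bytes) -> str:
    """Pack the paired still images captured by the camera into a JSON
    body suitable for transmission over the network."""
    return json.dumps({
        "non_reflective_image": base64.b64encode(non_reflective_jpeg).decode("ascii"),
        "reflected_image": base64.b64encode(reflected_jpeg).decode("ascii"),
    })

def parse_payload(body: str) -> tuple:
    """Recover the image pair on the receiving side."""
    data = json.loads(body)
    return (base64.b64decode(data["non_reflective_image"]),
            base64.b64decode(data["reflected_image"]))
```

Base64 keeps the binary image data safe inside a text protocol; a production system might instead use multipart uploads or a binary RPC format.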
  • the hardware configuration of the computer device constituting the automatic diagnosis device 300 is the same as that of the learning device.
  • The automatic diagnosis device 300 and the learning device differ mainly in the computer programs and data recorded in the large-capacity recording device 115.
  • The computer program that causes the computer device to execute the processing described later, which is necessary for the computer device to function as the automatic diagnosis device, may be either pre-installed or post-installed in the computer device.
  • the computer program may be installed in a computer device via a predetermined recording medium such as a memory card (not shown), or may be installed via a network such as a LAN or the Internet.
  • The computer device constituting the automatic diagnosis device executes the processing necessary for functioning as such by means of the following functional blocks.
  • The functional blocks shown in FIG. 12 are generated inside the computer device.
  • The following functional blocks may be generated by the above-mentioned computer program alone, or by cooperation between that computer program and the OS or other computer programs installed in the computer device.
  • In relation to the functions of the present invention, an input unit 321, a feature amount extraction unit 322, a trained model unit 323, and an output unit 324 are generated.
  • the input unit 321 is connected to the interface.
  • the interface is further connected to a transmission / reception mechanism (not shown).
  • The transmission / reception mechanism receives, via the Internet 400, the image data of a pair of still images, a non-reflective image and a reflected image, in which the subject's imaged portion appears; this data is generated by the camera 1 and transmitted by the computer device 100.
  • The input unit 321 receives from the interface the image data received by the transmission / reception mechanism.
  • The feature amount extraction unit 322 extracts feature quantities from the received image data of the pair of still images, the non-reflective image and the reflected image.
  • The feature quantities extracted by the feature amount extraction unit 322 are the same as those extracted by the feature amount extraction unit 312 of the learning device.
  • The method of extracting them is also the same.
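Because the diagnosis side must produce feature vectors comparable to those used in training, one extraction routine can serve both devices. The sketch below uses a toy feature (mean RGB per image); the actual features are not specified here, so everything in the code is an assumption.

```python
# Hypothetical shared feature extractor, used identically by the
# learning device (unit 312) and the automatic diagnosis device
# (unit 322) so that feature vectors are comparable.
# "Mean RGB per image" is a toy feature chosen for illustration only.

def extract_features(non_reflective_pixels, reflected_pixels):
    """Concatenate the mean RGB of the non-reflective image and the
    mean RGB of the reflected image into one feature vector.
    Each argument is a list of (r, g, b) tuples."""
    def mean_rgb(pixels):
        n = len(pixels)
        return [sum(p[c] for p in pixels) / n for c in range(3)]
    return mean_rgb(non_reflective_pixels) + mean_rgb(reflected_pixels)
```

Sharing one function (or one versioned library) between training and inference is the usual way to guarantee that "the method for extracting features is the same" on both sides.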
  • the feature amount extraction unit 322 sends the generated feature amount data to the trained model unit 323.
  • The trained model unit 323 is the learning model unit 313 of the learning device after it has completed machine learning, and includes the parameters built up through that learning.
  • Like the learning model unit 313, the trained model unit 323 outputs an estimation result when it receives feature quantity data as input.
  • The estimation result indicates whether the subject whose imaged portion appears in the image data of the pair of still images, the non-reflective image and the reflected image, from which the feature quantities were extracted is a person infected with the novel coronavirus, a person infected with another virus or bacterium related to respiratory infection, or a person not infected with any virus or bacterium related to respiratory infection.
  • The estimation result is text data: "novel coronavirus infected person" when the subject is estimated to be infected with the novel coronavirus; "person infected with a virus or bacterium other than the novel coronavirus" when the subject is estimated to be infected with a virus or bacterium related to respiratory infection other than the novel coronavirus; and "non-infected person" when the subject is estimated not to be infected with any virus or bacterium related to respiratory infection.
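The three-way mapping from a class estimate to the text data above might be coded as follows. The numeric class indices are hypothetical; only the three text strings come from the description.

```python
# Hypothetical mapping from the trained model's class index to the
# text data returned as the estimation result. The indices 0-2 are
# assumptions; the three strings follow the description.
RESULT_TEXT = {
    0: "novel coronavirus infected person",
    1: "person infected with a virus or bacterium other than the novel coronavirus",
    2: "non-infected person",
}

def estimation_text(class_index: int) -> str:
    """Convert the trained model unit's class estimate into the text
    data that is sent back to the computer device and displayed."""
    return RESULT_TEXT[class_index]
```

Keeping the mapping in one table means the display side (display 101) and the server side cannot drift apart on the wording of the three results.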
  • the estimation result data output from the trained model unit 323 is sent to the output unit 324.
  • the output unit 324 is connected to the transmission / reception mechanism via an interface.
  • the output unit 324 sends the received estimation result data to the transmission / reception mechanism via the interface.
  • The transmission / reception mechanism transmits the received estimation result data, via the Internet 400, to the computer device 100 that sent the image data from which the estimation result was generated.
  • the computer device 100 receives the estimation result data by its transmission / reception mechanism.
  • the estimation result data is sent to the input unit 121 via the interface 114, and is sent to the display 101 via the control unit 122, the output unit 124, and the interface 114.
  • The display 101 shows a display based on the estimation result data, namely one of the three text strings above: "novel coronavirus infected person", "person infected with a virus or bacterium other than the novel coronavirus", or "non-infected person".
  • This reveals whether the subject is a person infected with the novel coronavirus, a person infected with a virus or bacterium related to respiratory infection other than the novel coronavirus, or a person not infected with any virus or bacterium related to respiratory infection.
  • If, for example, only subjects found to be non-infected are admitted to a facility, the space inside becomes safe.
  • In each home, for example, infection with the novel coronavirus can be checked before going out. Moreover, because the redness and the changes in the vascular image caused by the immune response to novel coronavirus infection appear earlier than the symptoms of the disease, situations in which asymptomatic infected persons spread the infection can be effectively prevented.

Landscapes

  • Health & Medical Sciences (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Physics & Mathematics (AREA)
  • Engineering & Computer Science (AREA)
  • Surgery (AREA)
  • General Health & Medical Sciences (AREA)
  • Pathology (AREA)
  • Biomedical Technology (AREA)
  • Biophysics (AREA)
  • Molecular Biology (AREA)
  • Heart & Thoracic Surgery (AREA)
  • Nuclear Medicine, Radiotherapy & Molecular Imaging (AREA)
  • Veterinary Medicine (AREA)
  • Public Health (AREA)
  • Animal Behavior & Ethology (AREA)
  • Medical Informatics (AREA)
  • Radiology & Medical Imaging (AREA)
  • Chemical & Material Sciences (AREA)
  • Optics & Photonics (AREA)
  • Immunology (AREA)
  • Analytical Chemistry (AREA)
  • Biochemistry (AREA)
  • General Physics & Mathematics (AREA)
  • Medicinal Chemistry (AREA)
  • Mathematical Physics (AREA)
  • Theoretical Computer Science (AREA)
  • Urology & Nephrology (AREA)
  • Hematology (AREA)
  • Oral & Maxillofacial Surgery (AREA)
  • Food Science & Technology (AREA)
  • Dentistry (AREA)
  • Spectroscopy & Molecular Physics (AREA)
  • Investigating Or Analysing Biological Materials (AREA)
  • Investigating Or Analysing Materials By Optical Means (AREA)
  • Endoscopes (AREA)
  • Measurement Of The Respiration, Hearing Ability, Form, And Blood Characteristics Of Living Organisms (AREA)

Abstract

The invention provides a camera capable of capturing images usable for image-based diagnosis of respiratory infections. The camera comprises a lens 11, an imaging element 12, a first light source 31a, a second light source 31b, a first polarizing plate 32, and a second polarizing plate 33. Illuminating light emitted from the first light source 31a passes through the first polarizing plate 32 and becomes polarized light, and the image light produced when it strikes the object being imaged passes through the lens 11 and the second polarizing plate 33 and is captured by the imaging element 12. Illuminating light emitted from the second light source 31b does not pass through the first polarizing plate 32 and remains natural light, and its image light passes through the lens 11 and the second polarizing plate 33 and is captured by the imaging element 12. The imaging range of the imaging element 12 includes the posterior wall of the pharynx, the left and right anterior tonsillar pillars, and the uvula.
PCT/JP2021/044366 2020-12-02 2021-12-02 Camera, method for generating a trained model for respiratory infections, trained model for respiratory infections, automatic diagnosis method for respiratory infections, and computer program WO2022118939A1 (fr)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
JP2020-200137 2020-12-02
JP2020200137A JP2022087967A (ja) Camera, method for generating a trained model for respiratory infections, trained model for respiratory infections, automatic diagnosis method for respiratory infections, and computer program

Publications (1)

Publication Number Publication Date
WO2022118939A1 true WO2022118939A1 (fr) 2022-06-09

Family

ID=81853386

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/JP2021/044366 WO2022118939A1 (fr) Camera, method for generating a trained model for respiratory infections, trained model for respiratory infections, automatic diagnosis method for respiratory infections, and computer program

Country Status (2)

Country Link
JP (1) JP2022087967A (fr)
WO (1) WO2022118939A1 (fr)

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2003275179A * 2002-03-22 2003-09-30 Kao Corp Skin color measuring apparatus and method
JP2008302146A * 2007-06-11 2008-12-18 Olympus Medical Systems Corp Endoscope apparatus and endoscope image control apparatus
WO2019131327A1 * 2017-12-28 2019-07-04 アイリス株式会社 Oral photography apparatus, medical apparatus, and program
JP2019205614A * 2018-05-29 2019-12-05 デルマ医療合資会社 Examination device, magnification module, control method, and examination system


Also Published As

Publication number Publication date
JP2022087967A (ja) 2022-06-14

Similar Documents

Publication Publication Date Title
JP7387185B2 (ja) System, method, and computer program product for physiological monitoring
KR102199020B1 (ko) Ceiling-type artificial intelligence health monitoring apparatus and remote medical diagnosis method using the same
WO2021179624A1 (fr) Monitoring method and system, electronic device, and storage medium
Bonilha et al. Vocal fold phase asymmetries in patients with voice disorders: a study across visualization techniques
US9848811B2 (en) Cognitive function testing system, cognitive function estimation system, cognitive function testing method, and cognitive function estimation method
WO2017010683A1 (fr) Smartphone equipped with a telemedicine device
US20150050628A1 (en) Autism diagnosis support method and system, and autism diagnosis support device
US20170188930A1 (en) Animation-based autism spectrum disorder assessment
Bonilha et al. Phase asymmetries in normophonic speakers: visual judgments and objective findings
Koulaouzidis et al. How should we do colon capsule endoscopy reading: a practical guide
EP4231314A1 (fr) Surgical assistance system, surgical assistance method, information processing apparatus, and information processing program
JP2007125151A (ja) Diagnostic system and diagnostic apparatus
JP2007289656A (ja) Image recording apparatus, image recording method, and image recording program
Khanam et al. Noncontact sensing of contagion
JP6541795B2 (ja) Video laryngoscope system with planar-scan videokymography and laryngeal stroboscopy functions
JP2007289657A (ja) Image recording apparatus, image recording method, and image recording program
JP6118917B2 (ja) Planar-scan videokymography system for analyzing the movement of the vocal fold mucosa and analysis method using the same
WO2022118939A1 (fr) Camera, method for generating a trained model for respiratory infections, trained model for respiratory infections, automatic diagnosis method for respiratory infections, and computer program
WO2023061402A1 (fr) Method for collecting and presenting physiological signal data and position information, and server and system for implementing the same
KR101908632B1 (ko) 실시간 후두스트로보스코피, 고속 후두내시경 검사, 및 평면 스캔 디지털 카이모그래피를 위한 영상 생성 시스템 및 방법
JP5276454B2 (ja) Facial expression measurement method, facial expression measurement program, and facial expression measurement apparatus
WO2020158720A1 (fr) Mental and physical condition evaluation system, evaluation device, method, and computer program
JP2022055656A (ja) Assist device and assist method
TWI261110B (en) Fever screening method and system
WO2023181417A1 (fr) Imaging device, program, and method

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 21900689

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 21900689

Country of ref document: EP

Kind code of ref document: A1