WO2018083853A1 - Visual field sensitivity estimation device, method for controlling visual field sensitivity estimation device, and program


Info

Publication number
WO2018083853A1
Authority
WO
WIPO (PCT)
Prior art keywords
information
visual field
field sensitivity
thickness
retinal layer
Application number
PCT/JP2017/028491
Other languages
French (fr)
Japanese (ja)
Inventor
山西 健司
佳生 森野
俊允 上坂
亮 朝岡
博史 村田
太一 木脇
宏樹 杉浦
Original Assignee
The University of Tokyo (国立大学法人 東京大学)
Application filed by The University of Tokyo
Priority to JP2018548560A (published as JPWO2018083853A1)
Publication of WO2018083853A1

Classifications

    • A — HUMAN NECESSITIES
    • A61 — MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61B — DIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B3/00Apparatus for testing the eyes; Instruments for examining the eyes
    • A61B3/02Subjective types, i.e. testing apparatus requiring the active assistance of the patient
    • A61B3/024Subjective types, i.e. testing apparatus requiring the active assistance of the patient for determining the visual field, e.g. perimeter types

Definitions

  • the present invention relates to a visual field sensitivity estimation apparatus, a visual field sensitivity estimation apparatus control method, and a program.
  • Retinal layer thickness information can be measured relatively easily with optical coherence tomography (OCT). Therefore, if visual field sensitivity could be estimated from retinal layer thickness information obtained by OCT, the examination burden could be reduced, which can be expected to contribute to the efficiency of treatment.
  • However, at present no method for estimating visual field sensitivity has been examined, even at the research stage, including in the conventional research described above.
  • The present invention has been made in view of the above circumstances, and one of its objects is to provide a visual field sensitivity estimation device, a method of controlling a visual field sensitivity estimation device, and a program capable of estimating visual field sensitivity information from retinal layer thickness information.
  • One aspect of the present invention is a visual field sensitivity estimation device comprising: means for accepting provided information including at least one of information relating to the thickness of the retinal layer of an eye and information relating to visual field sensitivity for each information provider, together with information relating to the thickness of the retinal layer of a prediction target person who is the target of visual field prediction; means for extracting retinal layer thickness feature amount information for each information provider and for the prediction target person, based on the information relating to the thickness of the retinal layer included in the accepted provided information and the information relating to the thickness of the retinal layer of the prediction target person; means for extracting visual field sensitivity feature amount information for each information provider, based on the information relating to visual field sensitivity included in the accepted provided information; and means for generating a pair feature amount information matrix including retinal layer thickness feature amount information and visual field sensitivity feature amount information based on the information included in target pair information, where at least part of the provided information that includes both information relating to the thickness of the retinal layer and information relating to visual field sensitivity is taken as the target pair information.
  • According to this configuration, visual field sensitivity information can be estimated from retinal layer thickness information.
  • As illustrated in FIG. 1, the visual field sensitivity estimation apparatus 1 includes a control unit 11, a storage unit 12, an operation unit 13, an output unit 14, and an interface unit 15.
  • the control unit 11 is a program control device such as a CPU, and operates according to a program stored in the storage unit 12.
  • A plurality of patients (information providers) provide information relating to the thickness of the retinal layer (examination results) and/or information relating to visual field sensitivity (examination results), together with information relating to their attributes (age, race, etc.), which is input to the visual field sensitivity estimation apparatus 1.
  • the control unit 11 receives provided information including at least one of information related to the thickness of the retinal layer and information related to visual field sensitivity for each information provider.
  • The control unit 11 also accepts information (examination results) relating to the thickness of the retinal layer of the patient (prediction target person) whose visual field sensitivity is to be predicted. Then, the control unit 11 extracts, by calculation, retinal layer thickness feature amount information for each information provider and for the prediction target person that approximately reproduces the information relating to the thickness of the retinal layer among the accepted provided information and the information relating to the thickness of the retinal layer of the prediction target person. In addition, the control unit 11 extracts, by calculation, visual field sensitivity feature amount information for each information provider that approximately reproduces the information relating to visual field sensitivity among the provided information.
  • Various methods such as non-negative matrix factorization (NMF), principal component analysis, and autoencoders can be used for the calculation of the retinal layer thickness feature amount information and the visual field sensitivity feature amount information.
  • The control unit 11 takes, as target pair information, at least part of the provided information that includes both information relating to the thickness of the retinal layer and information relating to visual field sensitivity among the provided information for each information provider, and generates a pair feature amount information matrix including retinal layer thickness feature amount information and visual field sensitivity feature amount information based on the information relating to the thickness of the retinal layer and the information relating to visual field sensitivity included in the target pair information.
  • The control unit 11 sets the visual field sensitivity feature amount information of the prediction target person as missing information, generates a feature amount information sequence including the retinal layer thickness feature amount information of the prediction target person and the visual field sensitivity feature amount information set as missing information, and connects this feature amount information sequence to the pair feature amount information matrix to generate a combined feature amount information matrix.
  • The control unit 11 then estimates the parameters of a latent variable model that approximately reproduces the combined feature amount information matrix obtained by the connection, thereby learning the latent variable model.
  • As the latent variable model, a general latent variable model such as structured non-negative matrix factorization (Structured NMF) or a Gaussian-Bernoulli restricted Boltzmann machine (Gaussian-Bernoulli RBM) can be used.
  • The control unit 11 uses the estimated parameters to obtain a reconstructed combined feature amount information matrix that is approximately reproduced by the latent variable model. At this point, the portion of the reconstructed matrix corresponding to the visual field sensitivity feature amount information of the prediction target person takes values that differ from the missing information, depending on the approximate reproduction result. The control unit 11 uses the information of this portion to approximately reproduce and output prediction information of the visual field sensitivity as the prediction result.
  • the specific processing contents of the control unit 11 will be described in detail later.
  • the storage unit 12 is a memory device or the like and holds a program executed by the control unit 11. This program may be provided by being stored in a computer-readable non-transitory recording medium and copied to the storage unit 12.
  • the storage unit 12 also operates as a work memory for the control unit 11.
  • the operation unit 13 is a device such as a mouse or a keyboard, and accepts the content of the user's instruction operation and outputs it to the control unit 11.
  • the output unit 14 is a display or the like, and outputs information according to an instruction input from the control unit 11.
  • The interface unit 15 is a USB (Universal Serial Bus) interface, a network interface, or the like; it receives information (examination results) relating to the thickness of a patient's retinal layer or to visual field sensitivity from an external memory, an examination apparatus, a server, or the like, and outputs the information to the control unit 11.
  • the interface unit 15 may send information to be output to an external device or the like in accordance with an instruction input from the control unit 11.
  • The control unit 11 is functionally configured to include a receiving unit 21, a retinal layer thickness feature quantity information extraction unit 22, a visual field sensitivity feature quantity information extraction unit 23, a pair feature quantity information matrix generation unit 24, a combined feature amount information matrix generation unit 25, a parameter estimation unit 26, an estimation processing unit 27, and an output unit 28.
  • At least one of information related to the thickness of the retinal layer (test result) and information related to visual field sensitivity (test result) is received from a plurality of patients who are information providers.
  • the receiving unit 21 accepts input of provided information including at least one of information related to the thickness of the retinal layer and information related to visual field sensitivity for each information provider.
  • As information relating to the thickness of the retinal layer, the thickness of the nerve fiber layer (NFL) in the macular region of the information provider and the thickness of the ganglion cell layer (GCL+IPL) in the macular region are used. As information relating to visual field sensitivity, visual field sensitivity information (TH) at a plurality of observation points within a predetermined angle (for example, 10 degrees or 30 degrees) of the central visual field is used. The receiving unit 21 in this example accepts, for each information provider, vector data holding the nerve fiber layer thickness (NFL) in the macula, the ganglion cell layer thickness (GCL+IPL) in the macula, and the visual field sensitivity information at a plurality of observation points within the central visual field predetermined angle (here, 10 degrees) (hereinafter simply referred to as visual field sensitivity information, TH).
  • The NFL, GCL+IPL, and TH data of each information provider are exemplified in FIG. 3A (NFL), FIG. 3B (GCL+IPL), and FIG. 3C (TH), and each can be expressed as two-dimensional image data.
  • The receiving unit 21 accepts each of NFL, GCL+IPL, and TH as one-dimensional vector data in which the pixel values (such as luminance values) of the pixels included in the two-dimensional image data, or the values represented by the pixels (the measured retinal layer thickness or visual field sensitivity itself), are arranged in a predetermined order (for example, so-called scan-line order) with a predetermined number of data elements (here, the number of pixels).
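  • A minimal sketch of this flattening step, assuming the maps arrive as 2-D numpy arrays (the function name and sizes are illustrative, not from the patent):

```python
import numpy as np

def flatten_map(measurement_map: np.ndarray) -> np.ndarray:
    """Flatten a 2-D measurement map (e.g. an NFL thickness map) into
    one-dimensional vector data in scan-line order."""
    # C-order ravel visits pixels row by row, i.e. scan-line order
    return measurement_map.ravel(order="C")

# a hypothetical 8x8 thickness map becomes a 64-element column of X(NFL)
nfl_vector = flatten_map(np.random.rand(8, 8))
assert nfl_vector.shape == (64,)
```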
  • The receiving unit 21 issues identification information (which may be a unique identification number or the like) for each information provider, and stores in the storage unit 12, in association with the identification information, the nerve fiber layer thickness (NFL) in the macula, the ganglion cell layer thickness (GCL+IPL) in the macula, and the visual field sensitivity information (TH) at a plurality of observation points within the central visual field predetermined angle, as provided by the information provider identified by that identification information. For information that has not been input, vector data in which arbitrary values (for example, "0") are arranged for the predetermined number of elements is recorded as missing information.
  • the receiving unit 21 receives information (examination result) related to the thickness of the retinal layer for a patient (prediction target person) whose visual field sensitivity is to be predicted.
  • The receiving unit 21 also issues identification information for identifying the prediction target person (again, a unique identification number or the like, as with the information providers), and stores the nerve fiber layer thickness (NFL) in the macula and the ganglion cell layer thickness (GCL+IPL) in the macula provided for the prediction target person in the storage unit 12 in association with the identification information. As the prediction target person's visual field sensitivity information, vector data in which an arbitrary value (hereinafter "0" as an example) is arranged for the predetermined number of elements is recorded as missing information.
  • The retinal layer thickness feature quantity information extraction unit 22 generates a matrix in which the information (vector data) relating to the thickness of the retinal layer that is not missing information, among the provided information stored in the storage unit 12, is connected with the information (vector data) relating to the thickness of the retinal layer of the prediction target person. Then, the retinal layer thickness feature amount information extraction unit 22 extracts retinal layer thickness feature amount information that approximately reproduces the generated matrix.
  • Specifically, the retinal layer thickness feature amount information extraction unit 22 connects the NFL vector data (column vectors) of each information provider that are not missing information and the NFL vector data (column vector) of the prediction target person, arranging them in the row direction to obtain a matrix X(NFL) of size (number of NFL data elements) × (number of information providers + 1). Similarly, the GCL+IPL vector data (column vectors) of each information provider that are not missing information and the GCL+IPL vector data (column vector) of the prediction target person are connected and arranged in the row direction to obtain a matrix X(GCL+IPL) of size (number of GCL+IPL data elements) × (number of information providers + 1).
  • For each column, the identification information of the information provider or prediction target person whose data it holds is associated and recorded in the storage unit 12.
  • The extraction is performed using, for example, non-negative matrix factorization (NMF): each matrix X(D) (where the domain D is NFL, GCL+IPL, or TH) is approximately decomposed into the product of a basis W(D) and a basis mixing coefficient H(D), that is, X(D) ≈ W(D)·H(D). The retinal layer thickness feature amount information extraction unit 22 outputs the basis mixing coefficient H(D) as the retinal layer thickness feature amount information. Here, W(D) has size (number of data elements of domain D) × (number of feature quantities, i.e. the number of elements of a feature quantity), and H(D) has size (number of feature quantity elements of domain D) × (number of information providers + 1).
  • That is, the retinal layer thickness feature amount information corresponding to the information provider or prediction target person in the i-th column of X(D) is the column vector in the i-th column of H(D).
  • Likewise, the visual field sensitivity feature amount information extraction unit 23 decomposes the matrix X(TH) and outputs its basis mixing coefficient H(TH) as the visual field sensitivity feature amount information; the visual field sensitivity feature amount information corresponding to the information provider in the i-th column of X(TH) is the column vector in the i-th column of H(TH).
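  • A minimal sketch of this per-domain decomposition, using scikit-learn's NMF as one possible implementation (the function name and all sizes are illustrative assumptions):

```python
import numpy as np
from sklearn.decomposition import NMF

def extract_features(X: np.ndarray, n_features: int):
    """Decompose X(D) ~= W(D) @ H(D) for one domain D (NFL, GCL+IPL, TH).

    X has shape (number of data elements of D) x (providers + 1); each
    column is a provider's vector or the prediction target's vector.
    Returns the basis W(D) and mixing coefficients H(D); the i-th column
    of H(D) is the feature amount information for the i-th column of X.
    """
    model = NMF(n_components=n_features, init="nndsvda", max_iter=500)
    W = model.fit_transform(X)   # (elements of D, n_features)
    H = model.components_        # (n_features, providers + 1)
    return W, H

# hypothetical sizes: 64-element NFL vectors, 20 providers plus 1 target
X_nfl = np.abs(np.random.rand(64, 21))
W_nfl, H_nfl = extract_features(X_nfl, n_features=8)
```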
  • As schematically illustrated in FIG. 4, the pair feature amount information matrix generation unit 24 extracts, from each of the retinal layer thickness feature amount information H(NFL), H(GCL+IPL) and the visual field sensitivity feature amount information H(TH), the columns associated with the identification information of information providers who provided both retinal layer thickness information and visual field sensitivity information (referred to as both providers) (S1). The pair feature amount information matrix generation unit 24 sequentially selects the identification information of the both providers as attention identification information, and connects and arranges the vector data associated with the selected attention identification information extracted from the columns of H(NFL), H(GCL+IPL), and H(TH) to generate feature matrices (S2).
  • In this way, for the set of both providers {Pk} (k being an integer no greater than the number of information providers plus the number of prediction target persons, here 1), feature matrices Hp(NFL), Hp(GCL+IPL), and Hp(TH) are obtained, in which the retinal layer thickness feature amount information and the visual field sensitivity feature amount information relating to each both provider Pa, Pb, Pc, ... are arranged in corresponding columns. The pair feature amount information matrix generation unit 24 then connects these feature matrices Hp(NFL), Hp(GCL+IPL), and Hp(TH) in the column direction to generate the pair feature amount information matrix.
  • The combined feature quantity information matrix generation unit 25 extracts the retinal layer thickness feature quantity information h′(NFL) and h′(GCL+IPL) of the prediction target person from the retinal layer thickness feature quantity information H(NFL) and H(GCL+IPL).
  • The combined feature amount information matrix generation unit 25 also sets the visual field sensitivity feature amount information h′(TH) of the prediction target person as missing information (vector data in which all components are "0") (S4). Then, the combined feature amount information matrix generation unit 25 connects h′(NFL), h′(GCL+IPL), and h′(TH) set as missing information in the column direction to generate an additional information sequence; as schematically illustrated in FIG. 4, this additional information sequence is used as additional information h′ and connected to the pair feature amount information matrix in the row direction to generate a combined feature amount information matrix Θ (S5).
  • The parameter estimation unit 26 estimates, by calculation, the parameters of the latent variable model that approximately reproduces the combined feature amount information matrix Θ. Specifically, the parameter estimation unit 26 approximately decomposes Θ into the product of a relational parameter matrix F representing the latent variable model and a latent variable matrix B; that is, it finds F and B that minimize the reconstruction error of M ⊙ Θ against M ⊙ (F·B), where ⊙ denotes the element-wise (Hadamard) product and M is a mask matrix of the same size as Θ whose elements are "0" where the corresponding element of Θ is a missing value and "1" elsewhere. This decomposition may also be performed using non-negative matrix factorization.
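  • The text does not spell out the update rule; the sketch below uses the standard multiplicative updates for masked (weighted) NMF as one way to find F and B while ignoring the missing elements:

```python
import numpy as np

def masked_nmf(theta: np.ndarray, mask: np.ndarray, rank: int,
               n_iter: int = 500, eps: float = 1e-9):
    """Approximately decompose theta ~= F @ B, fitting only the elements
    where mask == 1 (mask is 0 at missing values, 1 elsewhere)."""
    rng = np.random.default_rng(0)
    n, m = theta.shape
    F = rng.random((n, rank)) + eps
    B = rng.random((rank, m)) + eps
    masked = mask * theta
    for _ in range(n_iter):
        # multiplicative updates for || mask . (theta - F @ B) ||^2
        F *= (masked @ B.T) / ((mask * (F @ B)) @ B.T + eps)
        B *= (F.T @ masked) / (F.T @ (mask * (F @ B)) + eps)
    return F, B
```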
  • The estimation processing unit 27 calculates the product of the relational parameter matrix F obtained by the parameter estimation unit 26 and the latent variable matrix B to obtain a reconstructed feature quantity matrix Θ′′, and extracts the column vector h′′ in the range corresponding to the additional information h′. Unlike in the original combined feature amount information matrix Θ, the column of Θ′′ corresponding to the additional information h′ is influenced by the components of Θ other than h′, so the components of the partial column vector h′′(TH), in the range originally corresponding to h′(TH) set as missing information, generally take values different from "0" (the arbitrary value above).
  • The estimation processing unit 27 generates a basis Ω by concatenating, in the column direction, the basis W(NFL) relating to X(NFL), the basis W(GCL+IPL) relating to X(GCL+IPL), and the basis W(TH) relating to the matrix X(TH), obtained by the feature amount information extraction units 22 and 23. The estimation processing unit 27 then multiplies this basis Ω by the column vector h′′ extracted from the reconstructed feature quantity matrix Θ′′ to obtain predicted visual field sensitivity information x′′ (a column vector) (Equation (3)).
  • The estimation processing unit 27 outputs this column vector x′′ to the output unit 28 as the prediction result of the visual field sensitivity of the target patient.
  • The output unit 28 displays the prediction result of the visual field sensitivity of the target patient on the output unit 14 (a display).
  • Specifically, the output unit 28 obtains the size (vertical Pv pixels × horizontal Ph pixels) of the image data input as visual field sensitivity information, initializes image data of the same size, sets each pixel of the initialized image data based on the value of the corresponding component of the column vector output by the estimation processing unit 27, and thereby generates and displays image data representing the visual field sensitivity.
  • When the value of a component of the column vector x′′, which is the prediction result of the visual field sensitivity of the target patient, falls outside a predetermined value range, the output unit 28 resets it to the value within the range nearest to it. For example, if the value range is 0 to 40 and the column vector x′′ contains a value of 45, that value is reset to 40, the value in the range closest to 45.
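  • Putting the estimation and output steps together, a sketch under the assumptions that the target's column is the last column of the reconstructed matrix and that the TH portion of h′′ is mapped back through the basis W(TH) before clamping:

```python
import numpy as np

def predict_visual_field(F, B, W_th, th_slice, lo=0.0, hi=40.0):
    """Reconstruct the combined matrix, take the target's column h''
    (assumed to be the last column), map its TH portion back through
    the TH basis, and clamp out-of-range components."""
    theta2 = F @ B                 # reconstructed feature matrix
    h2 = theta2[:, -1]             # column vector h''
    x2 = W_th @ h2[th_slice]       # predicted sensitivities x''
    return np.clip(x2, lo, hi)     # e.g. a value of 45 is reset to 40
```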
  • the visual field sensitivity estimation apparatus 1 basically has the above-described configuration and operates as follows.
  • In advance, the user of the visual field sensitivity estimation apparatus 1 receives at least one of information relating to the thickness of the retinal layer (examination results) and information relating to visual field sensitivity (examination results) from each of a plurality of patients who are information providers, and inputs the information to the visual field sensitivity estimation apparatus 1.
  • the visual field sensitivity estimation apparatus 1 accepts input of provided information including at least one of information related to the thickness of the retinal layer and information related to visual field sensitivity for each information provider.
  • Here, as above, the information relating to the thickness of the retinal layer includes the thickness of the nerve fiber layer (NFL) in the macular region of the information provider and likewise the thickness of the ganglion cell layer (GCL+IPL) in the macular region, and the information relating to visual field sensitivity is the visual field sensitivity information (TH).
  • The visual field sensitivity estimation device 1 issues unique identification information for identifying each information provider, and stores in the storage unit 12, in association with the identification information, the nerve fiber layer thickness (NFL) in the macula, the ganglion cell layer thickness (GCL+IPL) in the macula, and the visual field sensitivity information (TH) at a plurality of observation points within the predetermined central visual field angle, as provided by the information provider identified by that identification information. For information that has not been input, vector data in which "0" is arranged for the predetermined number of elements is recorded as missing information.
  • The user also obtains information relating to the thickness of the retinal layer (examination results) for the patient (prediction target person) whose visual field sensitivity is to be predicted, and inputs it to the visual field sensitivity estimation apparatus 1. The visual field sensitivity estimation device 1 issues unique identification information for identifying the prediction target person, and stores the nerve fiber layer thickness (NFL) in the macula and the ganglion cell layer thickness (GCL+IPL) in the macula of the prediction target person in the storage unit 12 in association with the identification information. NFL and GCL+IPL are likewise held as one-dimensional vector data for the prediction target person, and for the visual field sensitivity information that is not input, vector data in which "0" is arranged for the predetermined number of elements is recorded as missing information.
  • Receiving an instruction to predict the visual field sensitivity of the prediction target person, the visual field sensitivity estimation apparatus 1 starts the process illustrated in FIG. 5 and first extracts feature amount information that reproduces each piece of vector data (S11). Specifically, the visual field sensitivity estimation apparatus 1 connects the NFL vector data (column vectors) of each information provider that are not missing information and the NFL vector data (column vector) of the prediction target person in the row direction to obtain a matrix X(NFL) of size (number of NFL data elements) × (number of information providers + 1). Similarly, the GCL+IPL vector data (column vectors) of each information provider that are not missing information and the GCL+IPL vector data (column vector) of the prediction target person are connected in the row direction to obtain a matrix X(GCL+IPL) of size (number of GCL+IPL data elements) × (number of information providers + 1).
  • Each of these matrices, together with the matrix X(TH) of visual field sensitivity information, is then decomposed by non-negative matrix factorization (NMF) into a basis W(D) and a basis mixing coefficient H(D), and the basis mixing coefficient H(D) is output as the retinal layer thickness feature amount information or the visual field sensitivity feature amount information, respectively.
  • The visual field sensitivity estimation apparatus 1 also extracts, from the columns included in the retinal layer thickness feature amount information H(NFL), H(GCL+IPL) and the visual field sensitivity feature amount information H(TH), the vector data of the columns associated with the identification information of information providers who provided both retinal layer thickness information and visual field sensitivity information (the both providers), and uses the extracted vector data to execute a learning process for a predetermined latent variable model relating to both the retinal layer thickness feature amount information and the visual field sensitivity feature amount information (S12).
  • Specifically, the visual field sensitivity estimation apparatus 1 sequentially selects the identification information of the both providers as attention identification information, and connects and arranges the vector data associated with the selected attention identification information extracted from the columns of the retinal layer thickness feature amount information H(NFL), H(GCL+IPL) and the visual field sensitivity feature amount information H(TH). In this way, feature amount matrices Hp(NFL), Hp(GCL+IPL), and Hp(TH) are obtained, in which the retinal layer thickness feature amount information and the visual field sensitivity feature amount information relating to a given both provider are arranged in the corresponding i-th columns. The visual field sensitivity estimation apparatus 1 connects these feature quantity matrices Hp(NFL), Hp(GCL+IPL), and Hp(TH) in the column direction to generate a pair feature quantity information matrix.
  • The visual field sensitivity estimation apparatus 1 also extracts the retinal layer thickness feature amount information h′(NFL) and h′(GCL+IPL) of the prediction target person from the retinal layer thickness feature amount information H(NFL) and H(GCL+IPL). Further, the visual field sensitivity estimation apparatus 1 sets the visual field sensitivity feature amount information h′(TH) of the prediction target person as missing information (vector data in which all components are "0"), and generates the additional information h′ by connecting h′(NFL), h′(GCL+IPL), and h′(TH) set as missing information in the column direction. The visual field sensitivity estimation apparatus 1 generates a combined feature amount information matrix Θ by connecting the additional information h′ to the pair feature amount information matrix in the row direction.
  • The visual field sensitivity estimation apparatus 1 calculates the model parameters of a latent variable model that approximately reproduces the combined feature amount information matrix Θ. Specifically, here the combined feature amount information matrix Θ is approximately decomposed into the product of the relational parameter matrix F representing the latent variable model and the latent variable matrix B using non-negative matrix factorization.
  • The visual field sensitivity estimation apparatus 1 obtains the visual field sensitivity feature amount information of the prediction target person using the learned latent variable model (S13). Specifically, the visual field sensitivity estimation apparatus 1 calculates the product of the relational parameter matrix F obtained by the above processing and the latent variable matrix B to obtain a reconstructed feature matrix Θ′′, and extracts the column vector h′′ in the range corresponding to the additional information h′.
  • The visual field sensitivity estimation apparatus 1 obtains predicted visual field sensitivity information based on the obtained visual field sensitivity feature amount information (S14). Specifically, a basis Ω is generated by connecting, in the column direction, the basis W(NFL) relating to X(NFL), the basis W(GCL+IPL) relating to X(GCL+IPL), and the basis W(TH) relating to the matrix X(TH) obtained in step S11. The visual field sensitivity estimation apparatus 1 obtains the predicted visual field sensitivity information x′′ (a column vector) by multiplying this basis Ω by the column vector h′′ extracted from the reconstructed feature quantity matrix Θ′′, and outputs this column vector x′′ as the prediction result of the visual field sensitivity of the target patient (S15).
  • The visual field sensitivity estimation device 1 arranges pixels whose pixel values are set based on the values of the components of the column vector obtained here, generates image data representing the visual field sensitivity information, and outputs it for display.
  • The pair feature amount information used for generating the model parameters may be selected based on a predetermined condition. This selection is performed, for example, as follows.
  • The control unit 11 of the visual field sensitivity estimation apparatus 1 performs a clustering process on at least one of the NFL and GCL+IPL data provided by the information providers and the prediction target person. For this clustering process, a widely known method such as principal component analysis or the k-means method can be employed. The control unit 11 then acquires the identification information of the information providers whose NFL or GCL+IPL is determined to belong to the same cluster as the corresponding NFL or GCL+IPL of the prediction target person.
  • In this case, the control unit 11 of the visual field sensitivity estimation apparatus 1, operating as the pair feature amount information matrix generation unit 24, extracts as target pair information those columns of the retinal layer thickness feature amount information H(NFL), H(GCL+IPL) and the visual field sensitivity feature amount information H(TH) that are associated both with the identification information of both providers and with the identification information acquired above. The control unit 11 connects and arranges the vector data included in the extracted target pair information to obtain feature quantity matrices Hp(NFL), Hp(GCL+IPL), and Hp(TH), concatenates these feature amount matrices in the column direction to generate pair feature amount information sequences, and further concatenates the pair feature amount information sequences in the row direction to obtain the pair feature amount information matrix. The subsequent processing is executed in the same manner as in the above example.
  • In this example, among the provided information that includes both information relating to the thickness of the retinal layer and information relating to visual field sensitivity, only the provided information whose retinal layer thickness information is determined by a predetermined clustering process to belong to the same cluster as the retinal layer thickness information of the prediction target person is used as target pair information. The model parameters are thus calculated using feature amounts based on information from information providers whose retinal layer thickness information shows the same tendency as that of the prediction target person, which can improve the prediction accuracy.
  • Alternatively, the control unit 11 of the visual field sensitivity estimation apparatus 1 may treat at least one of the NFL and GCL+IPL vectors provided by the information providers and the prediction target person as points in a vector space, and extract the points whose distance (the Mahalanobis distance or the like) from the point represented by the vector of the prediction target person is less than a predetermined threshold (or the predetermined number of information providers with the smallest distances). The control unit 11 then extracts the identification information of the information providers who provided the NFL or GCL+IPL vectors corresponding to the extracted points, and performs the operation as the pair feature amount information matrix generation unit 24 as described above.
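  • A sketch of these two selection strategies, using k-means for the cluster test and the Mahalanobis distance for the threshold test (all function names and parameters are illustrative):

```python
import numpy as np
from scipy.spatial.distance import mahalanobis
from sklearn.cluster import KMeans

def select_by_cluster(provider_vecs, target_vec, n_clusters=5):
    """Indices of providers whose NFL (or GCL+IPL) vector falls in the
    same k-means cluster as the prediction target's vector."""
    km = KMeans(n_clusters=n_clusters, n_init=10).fit(provider_vecs)
    target_cluster = km.predict(target_vec[None, :])[0]
    return np.where(km.labels_ == target_cluster)[0]

def select_by_distance(provider_vecs, target_vec, threshold):
    """Indices of providers within a Mahalanobis distance threshold of
    the prediction target."""
    vi = np.linalg.pinv(np.cov(provider_vecs, rowvar=False))
    d = np.array([mahalanobis(v, target_vec, vi) for v in provider_vecs])
    return np.where(d < threshold)[0]
```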
  • The condition for selecting the pair feature amount information used in generating the model parameters may use, in addition to the result of clustering the retinal layer thickness information, information relating to the attributes of the information providers and the prediction target person. Examples of such attributes are age (possibly by age group), gender, race (for example, European, African, or Asian), retinal blood vessel position, and so on.
  • the parameters of the latent variable model are estimated using the feature amount based on the information provided by the information provider who provides both the retinal layer thickness information and the visual field sensitivity information.
  • the visual field sensitivity information may be data converted to decibels, or data before decibel conversion (linear data) may be used.
  • Furthermore, the visual field sensitivity estimation apparatus 1 may correct the column vector x′′, which is the prediction result of the visual field sensitivity of the target patient, using information based on an error distribution (for example, an average error distribution), and output the corrected column vector as the prediction result of the visual field sensitivity of the target patient. For example, a plurality of sets of prediction results (column vectors x′′) and corresponding measured values are used to obtain the distribution of the true value r with respect to the predicted value p (FIG. 6).
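  • The correction rule is left open here; one simple realization, assuming the correction replaces each predicted value with the mean true value observed near that prediction, is:

```python
import numpy as np

def build_correction(pred_vals, true_vals, n_bins=41, lo=0.0, hi=40.0):
    """From past (predicted p, measured r) pairs, build a lookup mapping
    a predicted value to the mean true value observed near it (FIG. 6)."""
    edges = np.linspace(lo, hi, n_bins + 1)
    idx = np.clip(np.digitize(pred_vals, edges) - 1, 0, n_bins - 1)
    table = np.array([true_vals[idx == b].mean() if np.any(idx == b)
                      else 0.5 * (edges[b] + edges[b + 1])
                      for b in range(n_bins)])

    def correct(x):
        b = np.clip(np.digitize(x, edges) - 1, 0, n_bins - 1)
        return table[b]

    return correct
```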
  • In another example of the embodiment, the control unit 11 operates according to a program stored in the storage unit 12; information relating to the thickness of the retinal layer (examination results), information relating to visual field sensitivity (examination results), and information relating to the attributes of each information provider (age, race, etc.) are received from a plurality of patients (information providers) and input to the visual field sensitivity estimation device 1.
  • the control unit 11 accepts provided information including information related to the thickness of the retinal layer for each information provider and information related to visual field sensitivity.
  • The control unit 11 accepts provided information including information relating to the thickness of the retinal layer of the eye and information relating to visual field sensitivity for each information provider, and holds in the storage unit 12 a learning model object in a machine-learned state, trained using the information relating to the thickness of the retinal layer as learning data and the information relating to the visual field sensitivity of the corresponding information provider as teacher data.
  • The control unit 11 accepts information relating to the thickness of the retinal layer of the prediction target person who is the target of the visual field prediction, and generates visual field sensitivity estimation data using the retinal layer thickness information of the prediction target person as the input to the learning model object.
  • In this example, the control unit 11 is functionally configured to include a receiving unit 21, a training processing unit 31, an estimation processing unit 27′, and an output unit 28.
  • the accepting unit 21 accepts input of provision information including information on the thickness of the retinal layer for each information provider and information on the visual field sensitivity.
  • Here, as the information relating to the thickness of the retinal layer, the thickness of the nerve fiber layer (NFL) in the macular region of the information provider, likewise the thickness of the ganglion cell layer (GCL+IPL) in the macular region, and the thickness of the rod and cone layer (RCL) are used. As the information relating to visual field sensitivity, the visual field sensitivity information (TH) at a plurality of observation points (in the following example, observation points arranged in an M × N matrix) within a predetermined angle (for example, 10 degrees or 30 degrees) of the central visual field is used.
  • The receiving unit 21 in this example accepts, for each information provider, the nerve fiber layer thickness (NFL) in the macula, the ganglion cell layer thickness (GCL+IPL) in the macula, the rod and cone layer thickness (RCL), and matrix data (TH) whose components are the visual field sensitivity information at a plurality of observation points within the central visual field predetermined angle (here, 10 degrees) (hereinafter simply referred to as visual field sensitivity information).
  • The NFL, GCL+IPL, and TH data of each information provider are as illustrated in FIG. 3A (NFL), FIG. 3B (GCL+IPL), and FIG. 3C (TH), and each can be expressed as two-dimensional image data (here, the number of pixels in the width direction is determined in advance).
  • The receiving unit 21 accepts each of NFL, GCL+IPL, RCL, and TH as one-dimensional vector data in which the pixel values (such as luminance values) of the pixels included in the two-dimensional image data, or the values represented by the pixels (the measured retinal layer thickness or visual field sensitivity itself), are arranged in a predetermined order (for example, so-called scan-line order) with a predetermined number of data elements (here, the number of pixels).
  • The receiving unit 21 issues identification information (which may be a unique identification number or the like) for each information provider, stores in the storage unit 12, in association with the identification information, the thickness of the nerve fiber layer (NFL) in the macula, the thickness of the ganglion cell layer (GCL+IPL) in the macula, the thickness of the rod and cone layer (RCL), and the visual field sensitivity information (TH) at a plurality of observation points within the predetermined central visual field angle, as provided by the information provider identified by that identification information, and instructs the training processing unit 31 to perform the learning process based on the stored information. For information that has not been input, vector data in which arbitrary values (for example, "0") are arranged for the predetermined number of elements may be recorded as missing information.
  • the receiving unit 21 receives information (examination result) related to the thickness of the retinal layer for a patient (prediction target person) whose visual field sensitivity is to be predicted.
  • The receiving unit 21 issues identification information for identifying the prediction target person (again, a unique identification number or the like, as with the information providers), stores the nerve fiber layer thickness (NFL) in the macula, the ganglion cell layer thickness (GCL+IPL) in the macula, and the rod and cone layer thickness (RCL) provided for the prediction target person in the storage unit 12 in association with the identification information, and instructs the estimation processing unit 27′ to perform the estimation process. As the prediction target person's visual field sensitivity information, vector data in which an arbitrary value (hereinafter "0" as an example) is arranged for the predetermined number of elements is recorded as missing information.
  • The training processing unit 31 receives the instruction from the receiving unit 21 that the learning process should be performed, and performs machine learning processing of the learning model object stored in advance in the storage unit 12, based on the information received by the receiving unit 21 and stored in the storage unit 12 as learning data.
  • the learning model object is multi-layer neural network data.
  • Among these, the first layer, to which the learning data is directly input, is a convolutional layer (as in a convolutional neural network, CNN).
  • The learning model object trained by the training processing unit 31 of the present embodiment includes, similarly to the learning model object known as VGG16, convolutional layers that extract characteristic distributions of the data and pooling layers that reduce the size of the distribution image (here, for example, max pooling is performed). However, the layer on the output side of the learning model object according to the present embodiment is preferably a loosely coupled layer instead of a fully connected layer. An example of this output layer will be described later.
  • Here, the learning data of the learning model object is data for three channels arranged in an X × Y matrix (that is, a data string arranged as 3 × X × Y).
  • The learning model object may be trained in advance on general image data different from the nerve fiber layer thickness (NFL) in the macula, the ganglion cell layer thickness (GCL+IPL) in the macula, and the rod and cone layer thickness (RCL), for example on ImageNet (J. Deng et al., ImageNet: A large-scale hierarchical image database. In IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2009, pp. 248-255).
  • In that case, the channels of the respective color components of the three primary colors (red R, green G, and blue B) representing the pixels of the image data are assigned to the channels of the learning data: the red component of the image data is input to the first channel, the green component to the second channel, and so on. Since such a learning processing method is widely known as a machine learning method for image data, a detailed description is omitted here.
  • The layer immediately before the output layer outputs a plurality (for example, L) of data strings, each having the same size as the output data (in this example, the M × N two-dimensional array of visual field sensitivity information (TH)). The value of the data at position (m, n) of the l-th data string is denoted d_lmn.
  • The output layer of the learning model object here has parameters A and B, both changeable by learning: A is an L × M × N real data string whose value corresponding to position (m, n) of the l-th string is written a_lmn, and B is an M × N real data string whose value corresponding to position (m, n) is written b_mn. The output at each position (m, n) is computed by combining the inputs d_lmn position-wise with these parameters.
  • With this configuration, the spatial correspondence between the optical coherence tomography measurement and the visual field (OCT-VF) is obtained without fully connecting the final layer to the previous layer. The complexity of the model is therefore suppressed, overlearning can be prevented, and the prediction rate is improved.
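  • The combination formula itself does not survive in this text; from the definitions of a_lmn, b_mn, and d_lmn, the natural reading of the loosely coupled layer is a position-wise weighted sum, x_mn = Σ_l a_lmn · d_lmn + b_mn. A PyTorch sketch under that assumption:

```python
import torch
import torch.nn as nn

class LooselyCoupledOutput(nn.Module):
    """Output layer combining L input maps position-wise:
    x[m, n] = sum_l a[l, m, n] * d[l, m, n] + b[m, n].
    Each output point sees only its own (m, n) position rather than the
    whole previous layer, preserving the OCT-VF spatial correspondence."""
    def __init__(self, L: int, M: int, N: int):
        super().__init__()
        a = torch.empty(L, M, N)
        nn.init.xavier_uniform_(a)            # Xavier initialization
        self.a = nn.Parameter(a)
        self.b = nn.Parameter(torch.zeros(M, N))

    def forward(self, d: torch.Tensor) -> torch.Tensor:
        # d: (batch, L, M, N) -> x: (batch, M, N)
        return (self.a * d).sum(dim=1) + self.b
```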
  • The parameters A and B are initialized by, for example, Xavier's method (X. Glorot and Y. Bengio, Understanding the difficulty of training deep feedforward neural networks. In Proceedings of the International Conference on Artificial Intelligence and Statistics (AISTATS), 2010).
  • The training processing unit 31 uses the information received by the receiving unit 21 as learning data, and inputs the information on the nerve fiber layer thickness (NFL) in the macula, the ganglion cell layer thickness (GCL+IPL) in the macula, and the rod and cone layer thickness (RCL) from the learning data into the learning model object. Specifically, the training processing unit 31 inputs the nerve fiber layer thickness (NFL) to the first channel, the ganglion cell layer thickness (GCL+IPL) to the second channel, and the rod and cone layer thickness (RCL) to the third channel, one type of information per input data channel.
  • The training processing unit 31 then compares the real data string x_mn arranged in an M × N matrix output from the output layer of the learning model object with the visual field sensitivity information that is the correct (teacher) data included in the learning data, and updates, by backpropagation, the parameters A and B of the output layer and the parameters of the convolutional layers included in the learning model object (generally the pooling layers have no parameters to update; if the model includes layers other than those described here whose parameters should be updated by backpropagation, the parameters of those layers are also updated).
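  • A hypothetical training step for such a model, assuming the network preserves the M × N spatial size and ends in the loosely coupled output layer sketched above:

```python
import torch
import torch.nn as nn

def train_step(model, optimizer, nfl, gcl_ipl, rcl, th_true):
    """One backpropagation update. nfl, gcl_ipl, rcl, th_true are M x N
    tensors; the three thickness maps become input channels 1 to 3."""
    x = torch.stack([nfl, gcl_ipl, rcl], dim=0).unsqueeze(0)  # (1, 3, M, N)
    pred = model(x)                                           # (1, M, N)
    loss = nn.functional.mse_loss(pred, th_true.unsqueeze(0))
    optimizer.zero_grad()
    loss.backward()   # gradients flow to A, B and the convolution weights
    optimizer.step()
    return loss.item()
```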
  • The estimation processing unit 27′ receives the instruction from the receiving unit 21 that the estimation process should be performed, and inputs the information received by the receiving unit 21 and stored in the storage unit 12 (the prediction target person's nerve fiber layer thickness (NFL) in the macula, ganglion cell layer thickness (GCL+IPL) in the macula, and rod and cone layer thickness (RCL)) as input data to the learning model object after the training process in the training processing unit 31 has been performed.
  • Here too, the estimation processing unit 27′ inputs the nerve fiber layer thickness (NFL) in the macula to the first channel, the ganglion cell layer thickness (GCL+IPL) in the macula to the second channel, and the rod and cone layer thickness (RCL) to the third channel, one type of information per input data channel.
  • The estimation processing unit 27′ uses the M × N real data string x_mn output from the output layer of the learning model object as it is as the prediction result of the visual field sensitivity information (TH) of the target patient (prediction target person). That is, the output unit 28 according to this example sets the value x_mn as the data TH_mn at position (m, n) of the visual field sensitivity information (TH) to be output, and outputs the information of the visual field sensitivity prediction result as M × N image data.
  • In the above example, the learning model object having convolutional layers is trained based on the nerve fiber layer thickness (NFL) in the macula, the ganglion cell layer thickness (GCL+IPL) in the macula, and the rod and cone layer thickness (RCL) provided by the information providers, together with the visual field sensitivity information (TH) at a plurality of observation points within the predetermined central visual field angle. However, the learning model object is not limited to this example.
  • For example, learning may be performed as follows using a learning model object consisting of two to three fully connected layers (referred to here as a small object for distinction). That is, the outputs obtained by inputting the information on the nerve fiber layer thickness (NFL) in the macula, the ganglion cell layer thickness (GCL+IPL), and the rod and cone layer thickness (RCL) provided by the information providers into the learning model object with convolutional layers trained by the above-described method are obtained as teacher data.
  • The small object may then be subjected to learning processing using the same inputs (NFL, GCL+IPL, and RCL) with those outputs as teacher data. Since such a method is widely known as the so-called "distillation" method, a detailed description is omitted here.
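  • A compact sketch of this distillation idea, with the trained convolutional model serving as the teacher for a small fully connected student (all shapes and names hypothetical):

```python
import torch
import torch.nn as nn

class SmallObject(nn.Module):
    """A 2-layer fully connected student distilled from the teacher."""
    def __init__(self, in_dim: int, out_dim: int, hidden: int = 256):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(in_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, out_dim))

    def forward(self, x):
        return self.net(x)

def distill_step(student, teacher, optimizer, x_flat, x_maps):
    """x_maps feeds the convolutional teacher; x_flat is the same NFL,
    GCL+IPL, RCL data flattened for the fully connected student."""
    with torch.no_grad():                      # teacher output = soft labels
        target = teacher(x_maps).flatten(1)
    loss = nn.functional.mse_loss(student(x_flat), target)
    optimizer.zero_grad(); loss.backward(); optimizer.step()
    return loss.item()
```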
  • The estimation processing by the estimation processing unit 27′ may then be executed using such a small object.

Abstract

In the present invention, provided information including information pertaining to the thickness of the retinal layer of an eye and/or information pertaining to visual field sensitivity for each information provider, and information pertaining to the thickness of the retinal layer of a prediction target person, are received; retinal layer thickness feature value information for each information provider and the prediction target person, and visual field sensitivity feature value information for each information provider, are extracted. Parameters are estimated for reconstructing the retinal layer thickness feature value information of the prediction target person together with feature value information based on the information included in target pair information, i.e. the provided information that includes both the information pertaining to retinal layer thickness and the information pertaining to visual field sensitivity, and the information pertaining to the visual field sensitivity of the prediction target person is estimated on the basis of the information reconstructed using the parameters.

Description

Visual field sensitivity estimation device, method for controlling visual field sensitivity estimation device, and program
The present invention relates to a visual field sensitivity estimation apparatus, a method of controlling a visual field sensitivity estimation apparatus, and a program.
There are treatments, such as the treatment of glaucoma, that require grasping the state of the patient's visual field. Currently, the state of the visual field is assessed by visual field examination methods using light stimulation, but to obtain sufficient information for treatment it is necessary to examine how the patient sees at many points within the potential visual field, so the burden of the examination is large.
On the other hand, in recent years, research has been conducted on whether there is a relationship between the partial-area average of the retinal layer thickness at the optic disc and visual field sensitivity (Non-patent Document 1).
Retinal layer thickness information can be measured relatively easily with optical coherence tomography (OCT). Therefore, if visual field sensitivity could be estimated from retinal layer thickness information obtained by OCT, the examination burden could be reduced, which can be expected to contribute to the efficiency of treatment. However, at present no method for estimating visual field sensitivity has been examined, even at the research stage, including in the conventional research described above.
The present invention has been made in view of the above circumstances, and one of its objects is to provide a visual field sensitivity estimation device, a method of controlling a visual field sensitivity estimation device, and a program capable of estimating visual field sensitivity information from retinal layer thickness information.
 上記従来例の問題点を解決する本発明の一態様は、視野感度推定装置であって、情報提供者ごとの目の網膜層の厚さに係る情報と視野感度に係る情報との少なくとも一方を含む提供情報、及び、視野予測の対象となる予測対象者の網膜層の厚さに係る情報を受け入れる手段と、前記受け入れた提供情報に含まれる前記網膜層の厚さに係る情報と、予測対象者の網膜層の厚さに係る情報とに基づいて、情報提供者及び予測対象者ごとの網膜層厚特徴量情報を抽出する手段と、前記受け入れた提供情報に含まれる前記視野感度に係る情報に基づいて、情報提供者ごとの視野感度特徴量情報を抽出する手段と、前記受け入れた情報提供者ごとの提供情報のうち、網膜層の厚さに係る情報と視野感度に係る情報との双方を含む提供情報の少なくとも一部を対象ペア情報として、対象ペア情報が含む網膜層の厚さに係る情報と視野感度に係る情報とに基づく網膜層厚特徴量情報と、視野感度特徴量情報とを含むペア特徴量情報行列を生成する手段と、視野予測の対象となる予測対象者の視野感度特徴量情報を欠損情報として設定し、予測対象者の網膜層厚特徴量情報と、前記欠損情報として設定した視野感度特徴量情報とを含む特徴量情報列を得て、前記ペア特徴量情報行列に連結して結合特徴量情報行列を生成する手段と、当該結合ペア特徴量情報行列を再構成する潜在変数モデルのパラメータを推定する手段と、を含み、前記欠損情報である視野感度に係る情報を、前記推定された潜在変数モデルのパラメータに基づいて再構成された結合ペア特徴量情報行列を用いて推定し、視野感度の情報として出力することとしたものである。 One aspect of the present invention that solves the problems of the above-described conventional example is a visual field sensitivity estimation device, which includes at least one of information related to the thickness of the eye retinal layer and information related to visual field sensitivity for each information provider. The provided information, means for accepting information related to the thickness of the retinal layer of the prediction target person who is the target of visual field prediction, information related to the thickness of the retinal layer included in the accepted provided information, and the prediction target Retinal layer thickness feature amount information for each information provider and prediction target person based on information on the thickness of the person's retinal layer, and information on the visual field sensitivity included in the received provided information And a means for extracting visual field sensitivity feature amount information for each information provider, and information relating to the thickness of the retinal layer and information relating to visual field sensitivity among the information provided for each of the accepted information providers Including at least information provided Pair feature amount information matrix including retinal layer thickness feature amount information based on information related to the thickness of the retinal layer included in the target pair information and information related to visual field sensitivity, and visual field sensitivity feature amount information And the visual field sensitivity feature quantity information of the prediction target person who is the target of the visual field prediction are set as the defect information, and the retinal layer thickness characteristic quantity information of the prediction target person and the visual field sensitivity feature quantity set as the defect information Obtaining a feature quantity information sequence including information and connecting to the pair feature quantity information matrix to generate a combined feature quantity information matrix; and a parameter of a latent variable model for reconstructing the combined pair feature quantity information matrix Estimating information using the combined pair feature quantity information matrix reconstructed based on the parameters of the estimated latent variable model. In which it was decided to output as information.
 本発明によると、網膜層厚の情報から視野感度の情報を推定できる。 According to the present invention, visual field sensitivity information can be estimated from retinal layer thickness information.
FIG. 1 is a configuration block diagram showing an example of a visual field sensitivity estimation apparatus according to an embodiment of the present invention.
FIG. 2 is a functional block diagram showing an example of the visual field sensitivity estimation apparatus according to the embodiment of the present invention.
FIG. 3 is an explanatory diagram showing an example of data input to the visual field sensitivity estimation apparatus according to the embodiment of the present invention.
FIG. 4 is an explanatory diagram showing an example of the content of processing in the visual field sensitivity estimation apparatus according to the embodiment of the present invention.
FIG. 5 is a flowchart showing an example of the flow of processing in the visual field sensitivity estimation apparatus according to the embodiment of the present invention.
FIG. 6 is an explanatory diagram showing an example of the distribution of true values with respect to predicted values in the visual field sensitivity estimation apparatus according to the embodiment of the present invention.
FIG. 7 is a functional block diagram showing an example of a visual field sensitivity estimation apparatus according to another example of the embodiment of the present invention.
 Embodiments of the present invention will be described with reference to the drawings. As illustrated in FIG. 1, a visual field sensitivity estimation apparatus 1 according to an embodiment of the present invention includes a control unit 11, a storage unit 12, an operation unit 13, an output unit 14, and an interface unit 15.

 The control unit 11 is a program-controlled device such as a CPU, and operates according to a program stored in the storage unit 12. In the example of the present embodiment, at least one of information on the thickness of the retinal layer (examination results) and information on visual field sensitivity (examination results), together with information on the attributes of each information provider (age, race, and the like), is provided by a plurality of patients (referred to as information providers) and input to the visual field sensitivity estimation apparatus 1. The control unit 11 accepts this provided information, which includes at least one of the information on retinal layer thickness and the information on visual field sensitivity for each information provider.
 The control unit 11 also accepts information on the thickness of the retinal layer (examination results) of the patient whose visual field sensitivity is to be predicted (the prediction target person). The control unit 11 then computes and extracts retinal layer thickness feature amount information for each information provider and for the prediction target person, which approximately reproduces the retinal layer thickness information in the accepted provided information and the retinal layer thickness information of the prediction target person. The control unit 11 likewise computes and extracts visual field sensitivity feature amount information for each information provider, which approximately reproduces the visual field sensitivity information in the provided information. Various methods, such as non-negative matrix factorization (NMF), principal component analysis, and autoencoders, can be used to compute the retinal layer thickness feature amount information and the visual field sensitivity feature amount information.

 The control unit 11 takes, as target pair information, at least part of the provided information for each information provider that includes both information on retinal layer thickness and information on visual field sensitivity, and generates a pair feature amount information matrix that includes the retinal layer thickness feature amount information based on the retinal layer thickness information and the visual field sensitivity information contained in the target pair information, together with the visual field sensitivity feature amount information.

 The control unit 11 also sets the visual field sensitivity feature amount information of the prediction target person as missing information, generates a feature amount information column that includes the retinal layer thickness feature amount information of the prediction target person and the visual field sensitivity feature amount information set as missing information, and concatenates this column to the pair feature amount information matrix to generate a combined feature amount information matrix. The control unit 11 estimates the parameters of a latent variable model that approximately reproduces the combined feature amount information matrix obtained by this concatenation, thereby learning the latent variable model. As the latent variable model, a general latent variable model, structured non-negative matrix factorization (Structured NMF), or a Gaussian-Bernoulli restricted Boltzmann machine (Gaussian-Bernoulli RBM) can be used.

 Using the estimated parameters, the control unit 11 obtains a reproduced combined feature amount information sequence approximately reproduced by the latent variable model. At this point, the portion of the reproduced combined feature amount information sequence corresponding to the visual field sensitivity feature amount information of the prediction target person differs from the missing information as a result of the approximate reproduction. The control unit 11 uses the portion of the reproduced combined feature amount information sequence corresponding to the visual field sensitivity feature amount information of the prediction target person to approximately reproduce and output predicted visual field sensitivity information as the prediction result. The specific processing performed by the control unit 11 is described in detail later.
 The storage unit 12 is a memory device or the like, and holds the program executed by the control unit 11. This program may be provided stored on a computer-readable, non-transitory recording medium and copied into the storage unit 12. The storage unit 12 also operates as a work memory for the control unit 11.

 The operation unit 13 is a device such as a mouse or keyboard; it accepts the content of the user's instruction operations and outputs it to the control unit 11. The output unit 14 is a display or the like, and outputs information according to instructions input from the control unit 11.

 The interface unit 15 is a USB (Universal Serial Bus) interface, a network interface, or the like; it accepts information on the thickness of a patient's retinal layer (examination results) or information on visual field sensitivity from an external memory, examination apparatus, server, or the like, and outputs it to the control unit 11. The interface unit 15 may also send information to be output to an external device or the like, according to instructions input from the control unit 11.
 The operation of the control unit 11 will now be described. In the example of the present embodiment, as illustrated in FIG. 2, the control unit 11 comprises a receiving unit 21, a retinal layer thickness feature amount information extraction unit 22, a visual field sensitivity feature amount information extraction unit 23, a pair feature amount information matrix generation unit 24, a combined feature amount information matrix generation unit 25, a parameter estimation unit 26, an estimation processing unit 27, and an output unit 28.

 In one example of the present embodiment, at least one of information on the thickness of the retinal layer (examination results) and information on visual field sensitivity (examination results) is provided by a plurality of patients who are information providers. The receiving unit 21 accepts input of this provided information, which includes at least one of the information on retinal layer thickness and the information on visual field sensitivity for each information provider.

 Specifically, in the following description, the thickness of the nerve fiber layer in the macula (NFL) and the thickness of the ganglion cell layer in the macula (GCL+IPL) of each information provider are used as the information on retinal layer thickness, and visual field sensitivity information (TH) at a plurality of observation points within a predetermined angle of the central visual field (for example, 10 degrees or 30 degrees) is used as the information on visual field sensitivity.
 For each information provider, the receiving unit 21 in this example receives the nerve fiber layer thickness in the macula (NFL), the ganglion cell layer thickness in the macula (GCL+IPL), and vector data (TH) whose components are the visual field sensitivities at a plurality of observation points within the predetermined angle of the central visual field (here, 10 degrees) (hereinafter simply referred to as the visual field sensitivity information). The NFL, GCL+IPL, and TH data of each information provider can each be expressed as two-dimensional image data, as illustrated in FIGS. 3(a) (NFL), 3(b) (GCL+IPL), and 3(c) (TH). The receiving unit 21 of the present embodiment accepts each of the NFL, GCL+IPL, and TH data as one-dimensional vector data with a predetermined number of data elements (here, the number of pixels), obtained by arranging, in a predetermined order (for example, so-called scan-line order), the pixel value of each pixel in the corresponding two-dimensional image data (such as a luminance value) or the value each pixel represents (the measured retinal layer thickness or visual field sensitivity itself).
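 As a concrete illustration of this vectorization, the following is a minimal sketch in Python with NumPy; the array shape and variable names are assumptions for illustration, not part of the embodiment.

```python
import numpy as np

def to_column_vector(thickness_map: np.ndarray) -> np.ndarray:
    """Flatten a 2-D measurement map (e.g., NFL thickness per pixel)
    into a 1-D vector in scan-line (row-major) order."""
    return thickness_map.reshape(-1)  # row-major order = scan-line order

# Example: a hypothetical 8x8 NFL thickness map becomes a 64-element vector.
nfl_map = np.random.rand(8, 8) * 100.0  # stand-in for measured data
nfl_vec = to_column_vector(nfl_map)
assert nfl_vec.shape == (64,)
```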
 For each information provider, the receiving unit 21 obtains identification information that identifies the information provider (a unique identification number or the like may be issued), and stores, in the storage unit 12, the nerve fiber layer thickness in the macula (NFL), the ganglion cell layer thickness in the macula (GCL+IPL), and the visual field sensitivity information (TH) at the plurality of observation points within the predetermined angle of the central visual field provided by the information provider identified by that identification information, in association with the identification information. For information that was not input, vector data in which an arbitrary value (for example, "0") is arranged for the predetermined number of elements is recorded as missing information.

 The receiving unit 21 also accepts information on the thickness of the retinal layer (examination results) of the patient whose visual field sensitivity is to be predicted (the prediction target person). The receiving unit 21 obtains identification information that identifies the prediction target person (as with the information providers, a unique identification number or the like may be issued), and stores, in the storage unit 12, the nerve fiber layer thickness in the macula (NFL) and the ganglion cell layer thickness in the macula (GCL+IPL) provided by the prediction target person, in association with that identification information. For the visual field sensitivity information, which is not input, vector data in which an arbitrary value (hereinafter, this arbitrary value is taken to be "0" as an example) is arranged for the number of elements predetermined for visual field sensitivity information is recorded as missing information.
 The retinal layer thickness feature amount information extraction unit 22 generates a matrix that concatenates the retinal layer thickness information (vector data) that is not missing information, among the information provided by the information providers (provided information) stored in the storage unit 12, with the retinal layer thickness information (vector data) of the prediction target person. The retinal layer thickness feature amount information extraction unit 22 then extracts retinal layer thickness feature amount information that approximately reproduces this generated matrix. In the example of the present embodiment, the retinal layer thickness feature amount information extraction unit 22 arranges, side by side in the row direction, the NFL vector data (as column vectors) of the information providers that are not missing information together with the NFL vector data (as a column vector) of the prediction target person, to obtain a matrix X(NFL) of size (number of NFL data elements) × (number of information providers + 1). Similarly, it arranges the GCL+IPL vector data (as column vectors) of the information providers that are not missing information together with the GCL+IPL vector data (as a column vector) of the prediction target person in the row direction, to obtain a matrix X(GCL+IPL) of size (number of GCL+IPL data elements) × (number of information providers + 1).

 At this time, for each of X(NFL) and X(GCL+IPL), the retinal layer thickness feature amount information extraction unit 22 records in the storage unit 12 the information i (i = 1, 2, ...) that identifies each column, in association with the identification information of the information provider or prediction target person corresponding to the vector data in the i-th column.
 The retinal layer thickness feature amount information extraction unit 22 then applies non-negative matrix factorization (NMF) to each of the obtained matrices X(D) (D = NFL, GCL+IPL), decomposing each into the product of a basis W(D) and basis mixing coefficients H(D):

  X(D) ≈ W(D)H(D), with W(D) ≥ 0 and H(D) ≥ 0   (1)

 The retinal layer thickness feature amount information extraction unit 22 outputs the basis mixing coefficients H(D) as the retinal layer thickness feature amount information. In equation (1), W(D) has size (number of data elements of domain D) × (number of feature amounts, i.e., number of feature amount elements), and H(D) has size (number of feature amount elements of domain D) × (number of information providers + 1). The retinal layer thickness feature amount information corresponding to the information provider or prediction target person in the i-th column of X(D) is the column vector in the i-th column of H(D).
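 As an illustration of this step, the following is a minimal sketch assuming scikit-learn's NMF implementation and synthetic stand-in data; the variable names and dimensions are assumptions, not part of the embodiment.

```python
import numpy as np
from sklearn.decomposition import NMF

# Stand-in for X(NFL): (number of NFL data elements) x (number of providers + 1).
rng = np.random.default_rng(0)
X_nfl = rng.random((64, 101))

# Factorize X ~= W @ H with non-negative W (basis) and H (mixing coefficients).
model = NMF(n_components=10, init="nndsvda", max_iter=500, random_state=0)
W = model.fit_transform(X_nfl)   # shape (64, 10): the basis W(NFL)
H = model.components_            # shape (10, 101): feature amounts per person

# Column i of H is the retinal layer thickness feature amount information of
# the provider (or prediction target person) in column i of X(NFL).
h_target = H[:, -1]
```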
 The visual field sensitivity feature amount information extraction unit 23 arranges, in the row direction, the TH vector data (as column vectors) of the information providers that are not missing information, to obtain a matrix X(TH) of size (number of TH data elements) × (number of information providers). At this time, the visual field sensitivity feature amount information extraction unit 23 records in the storage unit 12, for X(TH), the information i (i = 1, 2, ...) that identifies each column, in association with the information identifying the information provider corresponding to the vector data in the i-th column.

 The visual field sensitivity feature amount information extraction unit 23 applies non-negative matrix factorization (NMF) to this matrix X(TH) as in equation (1), decomposing it into the product of a basis W(D) and basis mixing coefficients H(D) (here, D = TH). The visual field sensitivity feature amount information extraction unit 23 outputs these basis mixing coefficients H(D) as the visual field sensitivity feature amount information. Here, the visual field sensitivity feature amount information corresponding to the information provider in the i-th column of X(D) is the column vector in the i-th column of H(D).
 As schematically illustrated in FIG. 4, the pair feature amount information matrix generation unit 24 extracts, from the columns of the retinal layer thickness feature amount information H(NFL) and H(GCL+IPL) and the visual field sensitivity feature amount information H(TH), the columns associated with the identification information of information providers who have provided both retinal layer thickness information and visual field sensitivity information (referred to as both-information providers) (S1).

 The pair feature amount information matrix generation unit 24 sequentially selects the identification information of the both-information providers as the identification information of interest, and concatenates and arranges the vector data associated with the selected identification information of interest, extracted from the columns of H(NFL), H(GCL+IPL), and H(TH), to generate feature amount matrices (S2). The pair feature amount information matrix generation unit 24 thereby obtains feature amount matrices Hp(NFL), Hp(GCL+IPL), and Hp(TH) in which the retinal layer thickness feature amount information and the visual field sensitivity feature amount information of a given both-information provider (the set of identification information being {Pk}, where each k is an integer not exceeding the number of information providers plus the number of prediction target persons (1); shown as Pa, Pb, Pc, ... in FIG. 4) are arranged in the corresponding i-th column. The pair feature amount information matrix generation unit 24 then concatenates and arranges these pair feature amount matrices in the row direction to obtain the pair feature amount information matrix (S3).
 The combined feature amount information matrix generation unit 25 extracts the retinal layer thickness feature amount information h(NFL) and h(GCL+IPL) of the prediction target person from the retinal layer thickness feature amount information H(NFL) and H(GCL+IPL). The combined feature amount information matrix generation unit 25 also sets the visual field sensitivity feature amount information h′(TH) of the prediction target person as missing information (vector data in which all components are "0") (S4 in FIG. 4). The combined feature amount information matrix generation unit 25 then concatenates h′(NFL), h′(GCL+IPL), and the h′(TH) set as missing information in the column direction to generate an additional information column and, as schematically illustrated in FIG. 4, concatenates this additional information column, as additional information h′, to the pair feature amount information matrix in the row direction to generate the combined feature amount information matrix χ (S5).
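 To make the construction of χ concrete, the following is a minimal sketch assuming the H matrices from the earlier NMF step and hypothetical dimensions; it simply stacks the paired feature columns and appends the prediction target person's column with a zeroed TH block. The assumption that the first 80 columns of H(NFL)/H(GCL+IPL) correspond, in order, to the columns of H(TH) is for illustration only.

```python
import numpy as np

# Assumed inputs (hypothetical shapes): rows = feature elements, columns = persons.
# H_nfl, H_gcl include the prediction target in the last column; H_th covers
# only the providers who also supplied visual field data.
k = 10                                  # feature amount elements per domain
H_nfl = np.random.rand(k, 101)
H_gcl = np.random.rand(k, 101)
H_th  = np.random.rand(k, 80)

both_idx = list(range(80))              # columns of both-information providers
pair_block = np.vstack([H_nfl[:, both_idx], H_gcl[:, both_idx], H_th])  # (3k, 80)

# Additional information h': the target's NFL/GCL features with the TH block
# set to 0 (missing information).
h_prime = np.concatenate([H_nfl[:, -1], H_gcl[:, -1], np.zeros(k)])

chi = np.hstack([pair_block, h_prime[:, None]])  # combined matrix, shape (3k, 81)
```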
 The parameter estimation unit 26 estimates, by computation, the parameters of a latent variable model that approximately reproduces the combined feature amount information matrix χ. Specifically, the parameter estimation unit 26 approximately decomposes the combined feature amount information matrix χ into the product of a relational parameter matrix F, which represents the latent variable model, and a latent variable matrix B. That is, it finds F and B such that

  M ⊙ χ ≈ M ⊙ (FB)   (2)

 This decomposition may also be performed using non-negative matrix factorization. Here, M is a matrix of the same size as χ, whose elements are "0" at the positions corresponding to missing values of χ and "1" elsewhere, and ⊙ (a dot within a circle in the original) denotes the element-wise product of matrices (computing the product for each corresponding pair of elements).
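 The following is a minimal sketch of such a masked factorization, using the standard multiplicative-update rules for NMF restricted to observed entries; this is one plausible realization under those assumptions, not necessarily the exact update scheme of the embodiment.

```python
import numpy as np

def masked_nmf(chi, M, k, n_iter=500, eps=1e-9, seed=0):
    """Approximate M * chi ~= M * (F @ B) with non-negative F, B, applying
    multiplicative updates only to the observed (M == 1) entries."""
    rng = np.random.default_rng(seed)
    n, m = chi.shape
    F = rng.random((n, k))
    B = rng.random((k, m))
    X = M * chi                      # zero out missing entries
    for _ in range(n_iter):
        R = M * (F @ B)              # current reconstruction on observed entries
        F *= (X @ B.T) / (R @ B.T + eps)
        R = M * (F @ B)
        B *= (F.T @ X) / (F.T @ R + eps)
    return F, B

# Example (assuming chi from the previous sketch and a 0/1 mask of its shape,
# with 0 in the TH block of the last column):
# F, B = masked_nmf(chi, mask, k=15)
# chi_recon = F @ B   # reconstruction; the missing block is now filled in
```

 Because the mask confines the fitting error to observed entries, the missing TH block of the last column is filled in purely from the co-variation between thickness and sensitivity features learned from the paired columns.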
 The estimation processing unit 27 computes the product of the relational parameter matrix F obtained by the parameter estimation unit 26 and the latent variable matrix B to obtain a reconstructed feature amount matrix χ″, and extracts the column vector h″ in the range corresponding to the additional information h′. Unlike the original combined feature amount information matrix χ, the χ″ obtained here is influenced by the components of χ other than the additional information h′; consequently, within the column vector h″ corresponding to the additional information h′, the components of the partial column vector h″(TH), in the range corresponding to the h′(TH) originally set as missing information, generally take values different from "0" (the arbitrary value above).
 The estimation processing unit 27 also concatenates, in the column direction, the basis W(NFL) for X(NFL), the basis W(GCL+IPL) for X(GCL+IPL), and the basis W(TH) for the matrix X(TH), obtained by the feature amount information extraction units, to generate a basis ω. The estimation processing unit 27 then multiplies this basis ω by the column vector h″ extracted from the reconstructed feature amount matrix χ″ to obtain predicted visual field sensitivity information x″ (a column vector):

  x″ = ωh″   (3)
 The estimation processing unit 27 outputs this column vector x″ to the output unit 28 as the prediction result for the visual field sensitivity of the target patient. The output unit 28 displays this prediction result via the output unit 14, or outputs it to an external device. As one example, the output unit 28 obtains in advance the size of the image data used as the input visual field sensitivity information (Pv pixels vertically × Ph pixels horizontally) and initializes image data of the same size. It then sets each pixel of the initialized image data on the basis of the value of the corresponding component of the column vector output by the estimation processing unit 27, thereby generating and displaying image data representing the visual field sensitivity as illustrated in FIG. 3(c).

 For any component of the column vector x″, the prediction result for the visual field sensitivity of the target patient, whose value lies outside a predetermined range, the output unit 28 may reset that value to the value within the range closest to it. For example, if the range is defined as 0 to 40 inclusive and the column vector x″ contains the value 45, that value is reset to 40, the value within the range closest to 45. Similarly, if the column vector x″ contains a negative value (which cannot occur here because non-negative matrix factorization is used, but which may occur when the feature amount information is generated by other methods), that value is reset to 0, the closest value within the range.
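 To make equation (3) and the clamping step concrete, the following is a minimal sketch with hypothetical shapes; the basis ω is arranged block-diagonally here so that the dimensions are consistent, which is one plausible reading of the column-direction concatenation described above, not necessarily the embodiment's exact arrangement.

```python
import numpy as np
from scipy.linalg import block_diag

# Hypothetical stand-ins: each basis W(D) maps k = 10 feature elements back to
# 64 data elements of its own domain; h_recon is the reconstructed column h''.
W_nfl, W_gcl, W_th = (np.random.rand(64, 10) for _ in range(3))
h_recon = np.random.rand(30)            # [h''(NFL); h''(GCL+IPL); h''(TH)]

# Basis omega, arranged so each feature block reconstructs only its own domain.
omega = block_diag(W_nfl, W_gcl, W_th)  # shape (192, 30)
x_pred = omega @ h_recon                # equation (3): x'' = omega h''

# Clamp each component to the predetermined range (assumed 0..40 here):
x_pred = np.clip(x_pred, 0.0, 40.0)

# The TH block of x'' can be reshaped into a Pv x Ph image for display.
Pv, Ph = 8, 8                           # hypothetical image size
th_image = x_pred[-Pv * Ph:].reshape(Pv, Ph)
```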
[Basic operation]
 The visual field sensitivity estimation apparatus 1 of the present embodiment basically has the configuration described above, and operates as follows. In advance, the user of the visual field sensitivity estimation apparatus 1 obtains at least one of information on retinal layer thickness (examination results) and information on visual field sensitivity (examination results) from a plurality of patients who are information providers, and inputs it to the visual field sensitivity estimation apparatus 1. The visual field sensitivity estimation apparatus 1 accepts input of this provided information, which includes at least one of the information on retinal layer thickness and the information on visual field sensitivity for each information provider.
 In the following description, as in the example above, the thickness of the nerve fiber layer in the macula (NFL) and the thickness of the ganglion cell layer in the macula (GCL+IPL) of each information provider are used as the information on retinal layer thickness, and visual field sensitivity information (TH) at a plurality of observation points within a predetermined angle (taken to be 10 degrees) of the central visual field is used as the information on visual field sensitivity, each held as one-dimensional vector data.

 The visual field sensitivity estimation apparatus 1 issues unique identification information identifying each information provider, and stores, in the storage unit 12, the nerve fiber layer thickness in the macula (NFL), the ganglion cell layer thickness in the macula (GCL+IPL), and the visual field sensitivity information (TH) at the plurality of observation points within the predetermined angle of the central visual field provided by the information provider identified by that identification information, in association with the identification information. For information that was not input, vector data in which "0" is arranged for the predetermined number of elements is recorded as missing information.

 The user also obtains information on the thickness of the retinal layer (examination results) of the patient whose visual field sensitivity is to be predicted (the prediction target person), and inputs it to the visual field sensitivity estimation apparatus 1. The visual field sensitivity estimation apparatus 1 issues unique identification information identifying the prediction target person, and stores, in the storage unit 12, the nerve fiber layer thickness in the macula (NFL) and the ganglion cell layer thickness in the macula (GCL+IPL) of the prediction target person, in association with that identification information. For the prediction target person as well, the NFL and GCL+IPL data are each held as one-dimensional vector data; for the visual field sensitivity information, which is not input, vector data in which "0" is arranged for the number of elements predetermined for visual field sensitivity information is recorded as missing information.
 Upon receiving an instruction to predict the visual field sensitivity of the prediction target person, the visual field sensitivity estimation apparatus 1 starts the processing illustrated in FIG. 5 and first extracts feature amount information that reproduces each item of vector data (S11). Specifically, the visual field sensitivity estimation apparatus 1 arranges, in the row direction, the NFL vector data (as column vectors) of the information providers that are not missing information together with the NFL vector data (as a column vector) of the prediction target person, to obtain a matrix X(NFL) of size (NFL data dimension) × (number of information providers + 1). Similarly, it arranges the GCL+IPL vector data (as column vectors) of the information providers that are not missing information together with the GCL+IPL vector data (as a column vector) of the prediction target person in the row direction, to obtain a matrix X(GCL+IPL) of size (GCL+IPL data dimension) × (number of information providers + 1).

 For each of X(NFL) and X(GCL+IPL), the visual field sensitivity estimation apparatus 1 also records in the storage unit 12 the information i (i = 1, 2, ...) that identifies each column, in association with the identification information of the information provider or prediction target person corresponding to the vector data in the i-th column.

 The visual field sensitivity estimation apparatus 1 then applies non-negative matrix factorization (NMF) to each of the obtained matrices X(D) (D = NFL, GCL+IPL), decomposing each into the product of a basis W(D) and basis mixing coefficients H(D), and outputs the basis mixing coefficients H(D) as the retinal layer thickness feature amount information.

 The visual field sensitivity estimation apparatus 1 also arranges, in the row direction, the TH vector data (as column vectors) of the information providers that are not missing information, to obtain a matrix X(TH) of size (TH data dimension) × (number of information providers). At this time, the visual field sensitivity estimation apparatus 1 records in the storage unit 12, for X(TH), the information i (i = 1, 2, ...) that identifies each column, in association with the information identifying the information provider corresponding to the vector data in the i-th column.

 The visual field sensitivity estimation apparatus 1 applies non-negative matrix factorization (NMF) to this matrix X(TH), decomposing it into the product of a basis W(D) and basis mixing coefficients H(D) (here, D = TH), and outputs these basis mixing coefficients H(D) as the visual field sensitivity feature amount information.
 The visual field sensitivity estimation apparatus 1 then extracts, from the columns of the retinal layer thickness feature amount information H(NFL) and H(GCL+IPL) and the visual field sensitivity feature amount information H(TH), the column vector data associated with the identification information of information providers who have provided both retinal layer thickness information and visual field sensitivity information (both-information providers), and uses the extracted vector data to execute learning processing for a predetermined latent variable model (a latent variable model relating both the retinal layer thickness feature amount information and the visual field sensitivity feature amount information) (S12).

 Specifically, the visual field sensitivity estimation apparatus 1 sequentially selects the identification information of the both-information providers as the identification information of interest, and concatenates and arranges the vector data associated with the selected identification information of interest, extracted from the columns of H(NFL), H(GCL+IPL), and H(TH). This yields feature amount matrices Hp(NFL), Hp(GCL+IPL), and Hp(TH) in which the retinal layer thickness feature amount information and the visual field sensitivity feature amount information of a given both-information provider are arranged in the corresponding i-th column. The visual field sensitivity estimation apparatus 1 concatenates these feature amount matrices Hp(NFL), Hp(GCL+IPL), and Hp(TH) in the column direction to generate the pair feature amount information matrix.

 The visual field sensitivity estimation apparatus 1 also extracts the retinal layer thickness feature amount information h(NFL) and h(GCL+IPL) of the prediction target person from the retinal layer thickness feature amount information H(NFL) and H(GCL+IPL). Furthermore, the visual field sensitivity estimation apparatus 1 sets the visual field sensitivity feature amount information h′(TH) of the prediction target person as missing information (vector data in which all components are "0"), and concatenates h′(NFL), h′(GCL+IPL), and the h′(TH) set as missing information in the column direction to generate the additional information h′. The visual field sensitivity estimation apparatus 1 concatenates this additional information h′ to the pair feature amount information matrix in the row direction to generate the combined feature amount information matrix χ.
 The visual field sensitivity estimation apparatus 1 then computes the model parameters of a latent variable model that approximately reproduces this combined feature amount information matrix χ. Specifically, here, the combined feature amount information matrix χ is approximately decomposed, using non-negative matrix factorization, into the product of a relational parameter matrix F representing the latent variable model and a latent variable matrix B.

 Using the learned latent variable model, the visual field sensitivity estimation apparatus 1 obtains the visual field sensitivity feature amount information of the prediction target person (S13). Specifically, the visual field sensitivity estimation apparatus 1 computes the product of the relational parameter matrix F obtained by the above processing and the latent variable matrix B to obtain the reconstructed feature amount matrix χ″, and extracts the column vector h″ in the range corresponding to the additional information h′.

 The visual field sensitivity estimation apparatus 1 then obtains the predicted visual field sensitivity information on the basis of the obtained visual field sensitivity feature amount information (S14). Specifically, it concatenates, in the column direction, the basis W(NFL) for X(NFL), the basis W(GCL+IPL) for X(GCL+IPL), and the basis W(TH) for the matrix X(TH), obtained in step S11, to generate the basis ω. The visual field sensitivity estimation apparatus 1 then multiplies this basis ω by the column vector h″ extracted from the reconstructed feature amount matrix χ″ to obtain the predicted visual field sensitivity information x″ (a column vector), and outputs this column vector x″ as the prediction result for the visual field sensitivity of the target patient (S15).

 Specifically, the visual field sensitivity estimation apparatus 1 arranges pixels whose pixel values are set on the basis of the values of the components of the column vector obtained here, thereby generating and displaying image data representing the visual field sensitivity information.
[Processing example of the parameter estimation unit]
 In the example above, the combined feature amount information matrix χ is approximately decomposed into the product of the relational parameter matrix F representing the latent variable model and the latent variable matrix B by plain non-negative matrix factorization, without considering factors specific to each domain D (D = NFL, GCL+IPL, TH); however, the present embodiment is not limited to this.

 For example, taking into account the existence of factors specific to each domain D (D = NFL, GCL+IPL, TH), structured non-negative matrix factorization may be used (Structured NMF: for example, Hans Laurberg, et al., "Structured Non-Negative Matrix Factorization with Sparsity Patterns", Proc. of the Asilomar Conference on Signals, Systems, and Computers (2008)).
 In this case, the basis matrices U(NFL), U(GCL+IPL), U(TH), V(NFL), and V(GCL+IPL), and the feature amount column vectors Z(NFL), Z(GCL+IPL), and Z(TH), satisfying equation (4) are obtained:

  (Equation (4): the objective function of the structured non-negative matrix factorization; it is given as an image in the original publication.)

 All elements of U, V, and Z are non-negative (that is, zero or positive). Here, χ(NFL) is the partial matrix of the combined feature amount information matrix corresponding to Hp(NFL) or h(NFL); χ(GCL+IPL) is the partial matrix corresponding to Hp(GCL+IPL) or h(GCL+IPL); and χ(TH) is the partial matrix corresponding to Hp(TH) or h(TH). ||α||_F² denotes the Frobenius norm of α.
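 Since the exact objective of equation (4) is given only as an image in the original, the following sketch illustrates the general idea of structured NMF with sparsity patterns in the sense of the cited reference: a factorization in which prescribed blocks of the basis are held at zero, so that each domain block of χ is explained by a shared factor group plus domain-specific factor groups. The block layout, sizes, and names here are assumptions for illustration, not the embodiment's actual objective.

```python
import numpy as np

def structured_nmf(chi, zero_pattern, k, n_iter=500, eps=1e-9, seed=0):
    """NMF chi ~= G @ Z where entries of G with zero_pattern == 0 are held
    at 0, i.e. a sparsity-pattern-constrained factorization."""
    rng = np.random.default_rng(seed)
    n, m = chi.shape
    G = rng.random((n, k)) * zero_pattern   # enforce the structural zeros
    Z = rng.random((k, m))
    for _ in range(n_iter):
        G *= (chi @ Z.T) / ((G @ Z) @ Z.T + eps)
        G *= zero_pattern                   # re-impose the pattern each step
        Z *= (G.T @ chi) / (G.T @ (G @ Z) + eps)
    return G, Z

# Hypothetical pattern: rows of chi = [chi(NFL); chi(GCL+IPL); chi(TH)] blocks
# (10 rows each); columns = [shared | NFL-only | GCL-only | TH-only] factors.
rows = 30
shared = np.ones((rows, 5))
own = np.zeros((rows, 15))
own[0:10, 0:5] = 1     # NFL rows load on NFL-specific factors
own[10:20, 5:10] = 1   # GCL+IPL rows load on GCL-specific factors
own[20:30, 10:15] = 1  # TH rows load on TH-specific factors
pattern = np.hstack([shared, own])          # shape (30, 20)
# G, Z = structured_nmf(chi, pattern, k=20)
```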
 Also in this example, the matrix obtained as a result of the computation is multiplied by the corresponding column vector (both are given as images in the original publication) to obtain the reconstructed feature amount matrix χ″, and the column vector h″ in the range corresponding to the additional information h′ is extracted. Here too, unlike the original combined feature amount information matrix χ, the χ″ obtained is influenced by the components of χ other than the additional information h′; consequently, within the column vector h″ corresponding to the additional information h′, the components of the partial column vector h″(TH), in the range corresponding to the h′(TH) originally set as missing information, generally take values different from "0".
 In equation (4) as well, M is a matrix of the same size as χ(TH), whose elements are "0" at the positions corresponding to missing values of χ(TH) and "1" elsewhere, and the dot-within-a-circle symbol denotes the element-wise product of matrices (computing the product for each corresponding pair of elements).

 In this example too, the estimation processing unit 27 concatenates, in the column direction, the basis W(NFL) for X(NFL), the basis W(GCL+IPL) for X(GCL+IPL), and the basis W(TH) for the matrix X(TH), obtained by the feature amount information extraction units, to generate the basis ω, and multiplies this basis ω by the column vector h″ extracted from the reconstructed feature amount matrix χ″ to obtain the predicted visual field sensitivity information x″ (a column vector).
[Selection of data for model parameter generation]
 Furthermore, in one example of the present embodiment, the pair feature amount information used when generating the model parameters may be selected on the basis of a predetermined condition. This selection is performed, for example, as follows.
 That is, in one example of the present embodiment, the control unit 11 of the visual field sensitivity estimation apparatus 1 performs clustering on at least one of the NFL and GCL+IPL data provided by the information providers and the prediction target person. Widely known methods, such as principal component analysis or the k-means method, can be employed for this clustering.

 The control unit 11 acquires the identification information of the information providers who provided NFL or GCL+IPL data judged to belong to the same cluster as at least one of the NFL or GCL+IPL data of the prediction target person.

 Then, in its operation as the pair feature amount information matrix generation unit 24, the control unit 11 of the visual field sensitivity estimation apparatus 1 extracts, as the target pair information, those columns of the retinal layer thickness feature amount information H(NFL) and H(GCL+IPL) and the visual field sensitivity feature amount information H(TH) that are associated with the identification information of both-information providers (information providers who have provided both retinal layer thickness information and visual field sensitivity information) and that match the identification information acquired above.

 The control unit 11 concatenates and arranges the vector data contained in the extracted target pair information to obtain the feature amount matrices Hp(NFL), Hp(GCL+IPL), and Hp(TH). The control unit 11 also concatenates these feature amount matrices in the column direction to generate pair feature amount information columns, and further concatenates and arranges the pair feature amount information columns in the row direction to obtain the pair feature amount information matrix. The subsequent processing is executed in the same way as in the example described above.

 In this way, among the provided information for each information provider accepted by the control unit 11, the provided information that includes both retinal layer thickness information and visual field sensitivity information, and whose retinal layer thickness information belongs, under the predetermined clustering, to the same cluster as the retinal layer thickness information of the prediction target person, is taken as the target pair information, and the model parameters are computed using the feature amounts based on that target pair information. For the heterogeneous retinal layer thickness information, the model parameters are thus computed from feature amounts selected from information provided by information providers exhibiting tendencies similar to those of the prediction target person, so an improvement in prediction accuracy can be expected.

 The clustering method is not limited to the example above. For example, the control unit 11 of the visual field sensitivity estimation apparatus 1 may use the vectors of at least one of the NFL and GCL+IPL data provided by the information providers and the prediction target person, and extract those points in the vector space, corresponding to information providers, whose distance (such as the Mahalanobis distance) from the point corresponding to the prediction target person's vector falls below a predetermined threshold (or is no greater than the distance encompassing a predetermined number of information providers). The control unit 11 may then extract the identification information of the information providers who provided the NFL or GCL+IPL vectors corresponding to the extracted points, and operate as the pair feature amount information matrix generation unit 24 accordingly.
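 The following is a minimal sketch of such a cluster-based selection, assuming scikit-learn's KMeans and hypothetical data arrays; the cluster count, shapes, and flags are illustrative assumptions.

```python
import numpy as np
from sklearn.cluster import KMeans

# Hypothetical data: rows = persons, columns = NFL vector elements.
# The last row is the prediction target person; provider_has_th marks
# providers who supplied both thickness and visual field data.
nfl_vectors = np.random.rand(101, 64)
provider_has_th = np.random.rand(100) < 0.8

km = KMeans(n_clusters=5, n_init=10, random_state=0).fit(nfl_vectors)
target_cluster = km.labels_[-1]

# Select both-information providers in the same cluster as the target person.
same_cluster = km.labels_[:-1] == target_cluster
selected = np.where(same_cluster & provider_has_th)[0]
# 'selected' indexes the providers whose feature columns form the pair matrix.
```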
[Use of attribute information]
 The condition for selecting the pair feature amount information used when generating the model parameters may be based not only on the result of clustering the retinal layer thickness information, as described above, but also on information about the attributes of the information providers and the prediction target person. Attributes here include age (age brackets are acceptable), sex, race (for example, European, African, or Asian ancestry), the positions of the retinal blood vessels, and the like; among the information providers who share attributes with the prediction target person, the parameters of the latent variable model are estimated using the feature amounts based on the information provided by those who have provided both retinal layer thickness information and visual field sensitivity information.
[Modifications]
 The description so far has used the thickness of the nerve fiber layer in the macula (NFL) and the thickness of the ganglion cell layer in the macula (GCL+IPL) as the retinal layer thickness information. In the present embodiment, information on the thickness of the photoreceptor layer may be used instead of these, or in addition to at least one of them.
 The visual field sensitivity information (TH) may be data converted to decibels, or the data before decibel conversion (which is linear) may be used.
[Correction of results]
 Furthermore, the visual field sensitivity estimation device 1 of the present embodiment may correct the column vector x″, which is the prediction result of the target patient's visual field sensitivity, using information based on the error distribution (for example, a statistic of the error distribution such as its mean), and output the corrected column vector as the prediction result of the target patient's visual field sensitivity.
 Specifically, in one example of the embodiment of the present invention, multiple pairs of the target patient's visual field sensitivity actually measured by examination (the true values) and the visual field sensitivity predicted on the assumption that this information is unknown (the column vector x″) are used to obtain the distribution of the true values r against the predicted values p (FIG. 6).
 The visual field sensitivity estimation device 1 approximates this distribution with a predetermined function (for example by multiple regression such as polynomial regression) to obtain an average relation r = f(p).
 The visual field sensitivity estimation device 1 of this example then computes a corrected column vector whose components are ξi′ = f(ξi) for each component ξi (i = 1, 2, …) of the column vector x″, and outputs this corrected column vector as the prediction result of the target patient's visual field sensitivity.
 According to this example, when the errors show an average tendency, the prediction can be corrected by a simple method.
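 A minimal sketch of this correction, assuming held-out (true, predicted) pairs are available as NumPy arrays and using an ordinary polynomial fit in place of whatever regression the embodiment actually employs:

```python
import numpy as np

def fit_correction(p: np.ndarray, r: np.ndarray, degree: int = 2):
    """Fit the average relation r = f(p) with a polynomial of the given degree.

    p: predicted sensitivities, r: measured true sensitivities, both 1-D arrays
    pooled over patients and observation points (assumed layout).
    """
    coeffs = np.polyfit(p, r, deg=degree)
    return np.poly1d(coeffs)

def correct_prediction(x: np.ndarray, f) -> np.ndarray:
    """Apply the fitted correction component-wise: xi' = f(xi)."""
    return f(x)

# e.g.  f = fit_correction(p_holdout, r_holdout)
#       x_corrected = correct_prediction(x_predicted, f)
```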
[Example using a machine learning model]
 In another example of the embodiment of the present invention, the control unit 11 operates according to a program stored in the storage unit 12 and receives, from a plurality of patients (referred to as information providers), information on retinal layer thickness (examination results), information on visual field sensitivity (examination results), and information on each information provider's attributes (age, ethnicity, and so on), which are input to the visual field sensitivity estimation device 1. The control unit 11 accepts this provided information, which contains retinal layer thickness information and visual field sensitivity information for each information provider.
 The control unit 11 accepts the provided information containing, for each information provider, information on the thickness of the retinal layers of the eye and information on visual field sensitivity; it then holds in the storage unit 12 a learning model object that has been machine-trained with the retinal layer thickness information as training data and the corresponding information provider's visual field sensitivity information as teacher data. The control unit 11 also accepts information on the retinal layer thickness of the prediction target person who is the subject of visual field prediction, feeds the prediction target person's retinal layer thickness feature quantity information to the learning model object as input, and generates estimated visual field sensitivity data for that input.
 Specifically, as illustrated in FIG. 7, the control unit 11 of this example is functionally configured to include an accepting unit 21, a training processing unit 31, an estimation processing unit 27′, and an output unit 28.
 Here, the accepting unit 21 accepts the input of provided information containing, for each information provider, retinal layer thickness information and visual field sensitivity information.
 When machine learning processing is used, as in this example of the present embodiment, the retinal layer thickness information comprises the thickness of the nerve fiber layer in the information provider's macula (NFL) and the thickness of the ganglion cell layer in the macula (GCL+IPL), together with the thickness of the rod and cone layer (RCL). As in the examples already described, the visual field sensitivity information is the sensitivity (TH) at a plurality of observation points within a predetermined angle of the central visual field (for example 10 or 30 degrees); in the following, the observation points are assumed to be arranged in an M×N matrix.
 For each information provider, the accepting unit 21 of this example receives the thickness of the nerve fiber layer in the macula (NFL), the thickness of the ganglion cell layer in the macula (GCL+IPL), the thickness of the rod and cone layer (RCL), and matrix-shaped data (TH) whose components are the visual field sensitivities at a plurality of observation points within a predetermined central visual field angle (here 10 degrees; hereafter simply the visual field sensitivity information). As illustrated in FIG. 3(a) (NFL), FIG. 3(b) (GCL+IPL), and FIG. 3(c) (TH), each information provider's NFL, GCL+IPL, and TH data can be expressed as two-dimensional image data (here the number of pixels in the width direction is assumed to be fixed in advance). The accepting unit 21 of the present embodiment accepts the NFL, GCL+IPL, RCL, and TH data as one-dimensional vector data with a predetermined number of elements (here equal to the number of pixels), in which the pixel values of each two-dimensional image (such as luminance values), or the values the pixels represent (the measured retinal layer thicknesses or visual field sensitivities themselves), are arranged in a predetermined order (for example, scan-line order).
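 As a minimal sketch, this scan-line flattening of a 2-D thickness or sensitivity map into such a vector can be written as follows (NumPy; the function names are illustrative):

```python
import numpy as np

def to_vector(map_2d: np.ndarray) -> np.ndarray:
    """Flatten an H x W thickness/sensitivity map in scan-line (row-major) order."""
    return map_2d.flatten(order='C')  # 'C' = row-major, i.e. scan-line order

def to_map(vec: np.ndarray, shape: tuple) -> np.ndarray:
    """Inverse operation: restore the 2-D layout from the stored vector."""
    return vec.reshape(shape)
```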
 For each information provider, the accepting unit 21 obtains identification information identifying the provider (a unique identification number or the like suffices), for example by issuing it, and stores in the storage unit 12, in association with that identification information, the thickness of the nerve fiber layer in the macula (NFL), the thickness of the ganglion cell layer in the macula (GCL+IPL), the thickness of the rod and cone layer (RCL), and the visual field sensitivity information (TH) at the observation points within the predetermined central visual field angle, all supplied by the provider identified by that identification information; it then instructs the training processing unit 31 to perform learning processing based on the stored information. For information that was not input, vector data in which an arbitrary value (for example "0") is repeated for the predetermined number of elements may be recorded as missing information.
 The accepting unit 21 also accepts retinal layer thickness information (examination results) for the patient whose visual field sensitivity is to be predicted (the prediction target person). It obtains identification information identifying the prediction target person (as with the information providers, a unique identification number or the like suffices), for example by issuing it, and stores in the storage unit 12, in association with that identification information, the thickness of the nerve fiber layer in the macula (NFL), the thickness of the ganglion cell layer in the macula (GCL+IPL), and the thickness of the rod and cone layer (RCL) supplied by the prediction target person. The accepting unit 21 then instructs the estimation processing unit 27′ to perform estimation processing. For the visual field sensitivity information, which is not input, vector data in which an arbitrary value (in the following, this value is "0" as an example) is repeated for the number of elements predetermined for visual field sensitivity information is recorded as missing information.
 On receiving the instruction from the accepting unit 21 to perform learning processing, the training processing unit 31 takes the information accepted by the accepting unit 21 and stored in the storage unit 12 as training data and, based on it, performs machine learning on a learning model object stored in advance in the storage unit 12. Here, the learning model object is the data of a multilayer neural network.
 In one example of the present embodiment, at least the first layer of this learning model object, into which the training data are input directly, is a convolutional layer (CNN: convolutional neural network). Specifically, like the learning model object known as VGG16, the learning model object trained by the training processing unit 31 of the present embodiment includes convolutional layers that extract characteristic distributions of the data and pooling layers that reduce the size of the distribution images (here, for example, max pooling is performed). However, the output-side layer of the learning model object of the present embodiment is preferably a sparsely connected layer instead of a fully connected layer. An example of this output layer is described later.
 The training data for this learning model object are assumed to be three channels of data arranged in an X×Y matrix (that is, a data array arranged as 3×X×Y).
 In one example of the present embodiment, taking into account that the number of patients who can serve as information providers is generally small, the learning model object may be pretrained on general image data (for example ImageNet: J. Deng et al., "ImageNet: A large-scale hierarchical image database", in IEEE Conference on Computer Vision and Pattern Recognition (CVPR 2009), pp. 248-255) that differ from the thickness of the nerve fiber layer in the macula (NFL), the thickness of the ganglion cell layer in the macula (GCL+IPL), and the thickness of the rod and cone layer (RCL). In this pretraining, the channels of the three primary color components (red R, green G, blue B) representing the pixels of the image data are assigned to the channels of the training data: for example, the red component channel of the image data is fed to the first channel, the green component channel to the second channel, and so on, so that the image data are input as training data for learning processing. Since this kind of learning procedure is widely known as a machine learning method for image data, a detailed description is omitted.
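 A minimal sketch of such pretraining followed by fine-tuning, assuming PyTorch/torchvision and treating the three thickness maps as the three input channels; all names here are illustrative, and the VGG16 backbone is only one possible choice:

```python
import torch
from torchvision import models

# Start from ImageNet-pretrained convolutional features.
# (Older torchvision versions use models.vgg16(pretrained=True) instead.)
backbone = models.vgg16(weights=models.VGG16_Weights.IMAGENET1K_V1).features

# Thickness maps (NFL, GCL+IPL, RCL) stacked as a 3-channel batch of shape
# (batch, 3, X, Y), matching the RGB channel assignment described above.
x = torch.randn(4, 3, 224, 224)  # dummy batch for illustration only
features = backbone(x)           # these weights are then fine-tuned on OCT data
```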
 In the learning model object of the present embodiment, the layer immediately before the output layer outputs a plurality (for example L) of data arrays of the same size as the output data (in this example, real-valued data arrays arranged in an M×N matrix, so as to match the two-dimensional arrangement of the visual field sensitivity information (TH) at output time). In the following, the value at position (m, n) of the l-th of these data arrays is written dlmn.
 The output layer of the learning model object here uses parameters A and B, which can be changed by learning (A is an L×M×N array of real values, whose value at position (m, n) of the l-th slice is written almn; B is an M×N array of real values, whose value at position (m, n) is written bmn), together with the data values dlmn, to generate the output data array xmn as follows:

    xmn = Σ_{l=1}^{L} almn · dlmn + bmn

 That is, the component xmn at position (m, n) of the output data array is the sum over l = 1, 2, …, L of the products almn · dlmn, plus the value bmn. The output layer of the learning model object used in the present embodiment thus represents a sparse connection to the preceding layer, in which the computation involves only corresponding components (in other words, positional information is preserved).
 In this way, in the present embodiment, unlike a typical CNN, the final layer and the layer before it are not fully connected; instead, they are sparsely connected position by position, reflecting the spatial correspondence between the optical coherence tomograph and the visual field (the OCT-VF correspondence). This suppresses model complexity, prevents overfitting, and improves the prediction rate.
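 A minimal sketch of such a position-wise sparse output layer, written as a PyTorch module under the notation above (L feature maps of size M×N in, one M×N map out); the class name and shapes are illustrative:

```python
import torch
import torch.nn as nn

class SparseOutputLayer(nn.Module):
    """Position-wise output layer: x[m,n] = sum_l A[l,m,n] * d[l,m,n] + B[m,n]."""

    def __init__(self, L: int, M: int, N: int):
        super().__init__()
        # Learnable parameters A (L x M x N) and B (M x N); Xavier-style
        # initialization approximated here by a scaled normal draw.
        self.A = nn.Parameter(torch.randn(L, M, N) * (2.0 / (L + 1)) ** 0.5)
        self.B = nn.Parameter(torch.zeros(M, N))

    def forward(self, d: torch.Tensor) -> torch.Tensor:
        # d: (batch, L, M, N) -> output (batch, M, N); only corresponding
        # positions interact, so spatial information is preserved.
        return (self.A * d).sum(dim=1) + self.B
```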
 The values of A and B are initialized in advance, for example by Xavier's method (X. Glorot and Y. Bengio, "Understanding the difficulty of training deep feedforward neural networks", in Proceedings of the International Conference on Artificial Intelligence and Statistics (AISTATS 2010), Society for Artificial Intelligence and Statistics) or the like.
 The training processing unit 31 takes the information accepted by the accepting unit 21 as training data and feeds the thickness of the nerve fiber layer in the macula (NFL), the thickness of the ganglion cell layer in the macula (GCL+IPL), and the thickness of the rod and cone layer (RCL) from these data into the learning model object. Here, the training processing unit 31 splits the input over the channels of the input data: the nerve fiber layer thickness (NFL) goes to the first channel, the ganglion cell layer thickness (GCL+IPL) to the second channel, the rod and cone layer thickness (RCL) to the third channel, and so on.
 As a result of this input, the training processing unit 31 takes as the loss the sum of the absolute values, or the sum of the squares, of the differences between the real-valued M×N data array xmn output by the output layer of the learning model object and the data at the corresponding positions of the visual field sensitivity information (TH), which is the correct-answer data contained in the training data (writing the value at position (m, n) of TH as THmn, the differences THmn − xmn). Using ordinary backpropagation, it then updates the output layer parameters A and B and the parameters of the convolutional layers contained in the learning model object. (The pooling layers generally have no parameters to update, so they are not mentioned here; if the model contains layers other than those described here whose parameters should be updated by backpropagation, those parameters are also updated at this point.)
 Various methods can be used for the update (backpropagation), for example the so-called momentum method, so a detailed description is omitted here.
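 A minimal training-step sketch under these choices (L1 loss and SGD with momentum, PyTorch); the model and loader names are illustrative and assume a network ending in the sparse output layer sketched above:

```python
import torch
import torch.nn as nn

def train_epoch(model: nn.Module, loader, opt: torch.optim.Optimizer) -> float:
    """One pass over the data with the update described above."""
    loss_fn = nn.L1Loss()  # |THmn - xmn|; use nn.MSELoss() for the squared variant
    total = 0.0
    for thickness, th_true in loader:   # (batch, 3, X, Y) and (batch, M, N)
        opt.zero_grad()
        th_pred = model(thickness)      # predicted M x N sensitivity grid
        loss = loss_fn(th_pred, th_true)
        loss.backward()                 # ordinary backpropagation
        opt.step()                      # updates A, B and the convolution weights
        total += loss.item()
    return total

# e.g.  opt = torch.optim.SGD(model.parameters(), lr=1e-3, momentum=0.9)
```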
 On receiving the instruction from the accepting unit 21 to perform estimation processing, the estimation processing unit 27′ feeds the information accepted by the accepting unit 21 and stored in the storage unit 12 (the prediction target person's nerve fiber layer thickness in the macula (NFL), ganglion cell layer thickness in the macula (GCL+IPL), and rod and cone layer thickness (RCL)) as input data into the learning model object that has undergone training processing by the training processing unit 31.
 Like the training processing unit 31, the estimation processing unit 27′ splits the input over the channels of the input data: the nerve fiber layer thickness (NFL) goes to the first channel, the ganglion cell layer thickness (GCL+IPL) to the second channel, the rod and cone layer thickness (RCL) to the third channel, and so on.
 As a result of this input, the estimation processing unit 27′ outputs the real-valued M×N data array xmn produced by the output layer of the learning model object, as it is, as the prediction result for the visual field sensitivity information (TH) of the target patient (the prediction target person). That is, the output unit 28 of this example of the present embodiment sets the data THmn at position (m, n) of the visual field sensitivity information (TH) output as the prediction result to the corresponding value xmn output by the learning model object, and outputs the visual field sensitivity prediction result as M×N image data.
 In the description so far, the learning model object with convolutional layers was trained on the nerve fiber layer thickness in the macula (NFL), the ganglion cell layer thickness in the macula (GCL+IPL), the rod and cone layer thickness (RCL), and the visual field sensitivity information (TH) at the observation points within the predetermined central visual field angle, all provided by the information providers; however, the learning model object of the present embodiment is not limited to this example.
 For example, training may proceed as follows using a learning model object containing two or three fully connected layers (called a small object here for distinction). First, the outputs obtained by feeding the nerve fiber layer thickness in the macula (NFL), the ganglion cell layer thickness in the macula (GCL+IPL), and the rod and cone layer thickness (RCL) provided by the information providers into the convolutional learning model object trained by the method above are obtained as teacher data. The small object may then be trained based on the difference between these teacher data and the output obtained by feeding the same NFL, GCL+IPL, and RCL information into the small object. Since this procedure is widely known as "distillation", a detailed description is omitted here.
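 A minimal distillation sketch consistent with this description (PyTorch; the trained convolutional model serves as teacher and a small fully connected network as student; all names and layer sizes are illustrative):

```python
import torch
import torch.nn as nn

def make_student(X: int, Y: int, M: int, N: int) -> nn.Module:
    """Small object: two fully connected layers mapping the flattened
    thickness maps (3 maps of X*Y values) to the M*N sensitivity grid."""
    return nn.Sequential(
        nn.Flatten(),
        nn.Linear(3 * X * Y, 512), nn.ReLU(),
        nn.Linear(512, M * N),
    )

def distill_step(teacher: nn.Module, student: nn.Module,
                 thickness: torch.Tensor, opt: torch.optim.Optimizer) -> float:
    """Train the student against the teacher's outputs used as teacher data."""
    with torch.no_grad():
        target = teacher(thickness).flatten(1)  # (batch, M*N) soft targets
    opt.zero_grad()
    loss = nn.functional.mse_loss(student(thickness), target)
    loss.backward()
    opt.step()
    return loss.item()
```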
 In one example of the present embodiment, the estimation processing by the estimation processing unit 27′ may be executed using such a small object.
DESCRIPTION OF SYMBOLS: 1 visual field sensitivity estimation device; 11 control unit; 12 storage unit; 13 operation unit; 14 output unit; 15 interface unit; 21 accepting unit; 22 retinal layer thickness feature quantity information extraction unit; 23 visual field sensitivity feature quantity information extraction unit; 24 pair feature quantity information matrix generation unit; 25 combined feature quantity information matrix generation unit; 26 parameter estimation unit; 27, 27′ estimation processing unit; 28 output unit; 31 training processing unit.

Claims (7)

  1.  A visual field sensitivity estimation device comprising:
     means for accepting provided information containing, for each information provider, at least one of information on the thickness of the retinal layers of the eye and information on visual field sensitivity, and information on the retinal layer thickness of a prediction target person who is the subject of visual field prediction;
     means for extracting retinal layer thickness feature quantity information for each information provider and for the prediction target person, based on the retinal layer thickness information contained in the accepted provided information and the retinal layer thickness information of the prediction target person;
     means for extracting visual field sensitivity feature quantity information for each information provider, based on the visual field sensitivity information contained in the accepted provided information;
     means for generating a pair feature quantity information matrix containing visual field sensitivity feature quantity information and retinal layer thickness feature quantity information based on the retinal layer thickness information and the visual field sensitivity information contained in target pair information, the target pair information being at least a part of the accepted provided information for each information provider that contains both retinal layer thickness information and visual field sensitivity information;
     means for setting the visual field sensitivity feature quantity information of the prediction target person as missing information, obtaining a feature quantity information sequence containing the prediction target person's retinal layer thickness feature quantity information and the visual field sensitivity feature quantity information set as missing information, and concatenating it to the pair feature quantity information matrix to generate a combined pair feature quantity information matrix; and
     means for estimating parameters of a latent variable model that reconstructs the combined pair feature quantity information matrix,
     wherein the visual field sensitivity information set as the missing information is estimated using the combined pair feature quantity information matrix reconstructed based on the estimated parameters of the latent variable model, and is output as visual field sensitivity information.
  2.  The visual field sensitivity estimation device according to claim 1,
     wherein the means for generating the pair feature quantity information matrix takes as the target pair information that provided information, among the accepted provided information for each information provider, which contains both retinal layer thickness information and visual field sensitivity information and whose retinal layer thickness information belongs, under a predetermined clustering process, to the same cluster as the retinal layer thickness information of the prediction target person.
  3.  The visual field sensitivity estimation device according to claim 1 or 2,
     wherein the means for estimating the parameters of the latent variable model estimates the parameters of the latent variable model that reconstructs the combined pair feature quantity information matrix using a structural non-negative matrix factorization operation.
  4.  A visual field sensitivity estimation device comprising:
     means for accepting provided information containing, for each information provider, information on the thickness of the retinal layers of the eye and information on visual field sensitivity, and for holding a learning model object in a machine-trained state, trained with the retinal layer thickness information as learning data and the visual field sensitivity information of the corresponding information provider as teacher data;
     means for accepting information on the retinal layer thickness of a prediction target person who is the subject of visual field prediction; and
     generation means for feeding the prediction target person's retinal layer thickness feature quantity information to the learning model object as input and generating estimated visual field sensitivity data for that input,
     wherein the estimated data are output as visual field sensitivity information.
  5.  The visual field sensitivity estimation device according to claim 4,
     wherein the learning model object includes a multilayer neural network, and at least a first layer into which the learning data are input directly is a convolutional layer.
  6.  A method for controlling a visual field sensitivity estimation device, comprising:
     a step of inputting to the visual field sensitivity estimation device provided information containing, for each information provider, at least one of information on the thickness of the retinal layers of the eye and information on visual field sensitivity, and information on the retinal layer thickness of a prediction target person who is the subject of visual field prediction;
     a step in which the visual field sensitivity estimation device extracts retinal layer thickness feature quantity information for each information provider and for the prediction target person, based on the retinal layer thickness information contained in the accepted provided information and the retinal layer thickness information of the prediction target person;
     a step in which the visual field sensitivity estimation device extracts visual field sensitivity feature quantity information for each information provider, based on the visual field sensitivity information contained in the accepted provided information;
     a step in which the visual field sensitivity estimation device generates a pair feature quantity information matrix containing visual field sensitivity feature quantity information and retinal layer thickness feature quantity information based on the retinal layer thickness information and the visual field sensitivity information contained in target pair information, the target pair information being at least a part of the accepted provided information for each information provider that contains both retinal layer thickness information and visual field sensitivity information;
     a step in which the visual field sensitivity estimation device sets the visual field sensitivity feature quantity information of the prediction target person as missing information, obtains a feature quantity information sequence containing the prediction target person's retinal layer thickness feature quantity information and the visual field sensitivity feature quantity information set as missing information, and concatenates it to the pair feature quantity information matrix to generate a combined pair feature quantity information matrix; and
     a step in which the visual field sensitivity estimation device estimates parameters of a latent variable model that reconstructs the combined pair feature quantity information matrix,
     wherein the visual field sensitivity information set as the missing information is estimated using the parameters of the latent variable model reconstructed based on the estimated parameters of the latent variable model, and is output as visual field sensitivity information.
  7.  A program causing a computer to function as:
     means for accepting provided information containing, for each information provider, at least one of information on the thickness of the retinal layers of the eye and information on visual field sensitivity, and information on the retinal layer thickness of a prediction target person who is the subject of visual field prediction;
     means for extracting retinal layer thickness feature quantity information for each information provider and for the prediction target person, based on the retinal layer thickness information contained in the accepted provided information and the retinal layer thickness information of the prediction target person;
     means for extracting visual field sensitivity feature quantity information for each information provider, based on the visual field sensitivity information contained in the accepted provided information;
     means for generating a pair feature quantity information matrix containing visual field sensitivity feature quantity information and retinal layer thickness feature quantity information based on the retinal layer thickness information and the visual field sensitivity information contained in target pair information, the target pair information being at least a part of the accepted provided information for each information provider that contains both retinal layer thickness information and visual field sensitivity information;
     means for setting the visual field sensitivity feature quantity information of the prediction target person as missing information, obtaining a feature quantity information sequence containing the prediction target person's retinal layer thickness feature quantity information and the visual field sensitivity feature quantity information set as missing information, and concatenating it to the pair feature quantity information matrix to generate a combined pair feature quantity information matrix; and
     means for estimating parameters of a latent variable model that reconstructs the combined pair feature quantity information matrix,
     the program causing the visual field sensitivity information set as the missing information to be estimated using the combined pair feature quantity information matrix reconstructed based on the estimated parameters of the latent variable model, and to be output as visual field sensitivity information.
PCT/JP2017/028491 2016-11-02 2017-08-04 Visual field sensitivity estimation device, method for controlling visual field sensitivity estimation device, and program WO2018083853A1 (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
JP2018548560A JPWO2018083853A1 (en) 2016-11-02 2017-08-04 Field-of-view sensitivity estimation apparatus, method of controlling field-of-view sensitivity estimation apparatus, and program

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
JP2016215556 2016-11-02
JP2016-215556 2016-11-02

Publications (1)

Publication Number Publication Date
WO2018083853A1 true WO2018083853A1 (en) 2018-05-11

Family

ID=62075951

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/JP2017/028491 WO2018083853A1 (en) 2016-11-02 2017-08-04 Visual field sensitivity estimation device, method for controlling visual field sensitivity estimation device, and program

Country Status (2)

Country Link
JP (1) JPWO2018083853A1 (en)
WO (1) WO2018083853A1 (en)

Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2013501553A (en) * 2009-08-10 2013-01-17 カール ツァイス メディテック アクチエンゲゼルシャフト Combination analysis of glaucoma
JP2015142768A (en) * 2015-03-30 2015-08-06 株式会社ニデック Ophthalmologic apparatus


Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2021000224A (en) * 2019-06-20 2021-01-07 国立大学法人 東京大学 Information processing device, information processing method, and program
JP7343145B2 (en) 2019-06-20 2023-09-12 国立大学法人 東京大学 Information processing device, information processing method, and program
WO2021043980A1 (en) * 2019-09-06 2021-03-11 Carl Zeiss Meditec, Inc. Machine learning methods for creating structure-derived visual field priors
CN112933600A (en) * 2021-03-09 2021-06-11 超参数科技(深圳)有限公司 Virtual object control method and device, computer equipment and storage medium

Also Published As

Publication number Publication date
JPWO2018083853A1 (en) 2019-09-19

Similar Documents

Publication Publication Date Title
Ren et al. Reconstructing seen image from brain activity by visually-guided cognitive representation and adversarial learning
Vernon et al. Modeling first impressions from highly variable facial images
CN107679466B (en) Information output method and device
US8861815B2 (en) Systems and methods for modeling and processing functional magnetic resonance image data using full-brain vector auto-regressive model
JP6994588B2 (en) Face feature extraction model training method, face feature extraction method, equipment, equipment and storage medium
Nicolle et al. Facial action unit intensity prediction via hard multi-task metric learning for kernel regression
JP2019525786A (en) System and method for automatically detecting, locating, and semantic segmentation of anatomical objects
CN107507153B (en) Image denoising method and device
CN108135520B (en) Generating natural language representations of psychological content from functional brain images
JP2017510927A (en) Face image verification method and face image verification system based on reference image
WO2018083853A1 (en) Visual field sensitivity estimation device, method for controlling visual field sensitivity estimation device, and program
JP2019091454A (en) Data analysis processing device and data analysis processing program
Sabrol et al. Intensity based feature extraction for tomato plant disease recognition by classification using decision tree
JPWO2016009569A1 (en) Attribute factor analysis method, apparatus, and program
WO2022109096A1 (en) Digital imaging and learning systems and methods for analyzing pixel data of an image of a hair region of a user's head to generate one or more user-specific recommendations
Shete et al. Tasselgan: An application of the generative adversarial model for creating field-based maize tassel data
CN110414541A (en) The method, equipment and computer readable storage medium of object for identification
US11538552B2 (en) System and method for contrastive network analysis and visualization
Chen et al. A new hypothesis on facial beauty perception
US20220164658A1 (en) Method, device, and computer program
Qiang et al. Functional brain network identification and fMRI augmentation using a VAE-GAN framework
JP6033724B2 (en) Image display system, server, and diagnostic image display device
Dai et al. The Recognition and Implementation of Handwritten Character based on Deep Learning.
Maul et al. Cybernetics of vision systems: Toward an understanding of putative functions of the outer retina
Siddiquee et al. A2B-GAN: Utilizing Unannotated Anomalous Images for Anomaly Detection in Medical Image Analysis

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 17866750

Country of ref document: EP

Kind code of ref document: A1

ENP Entry into the national phase

Ref document number: 2018548560

Country of ref document: JP

Kind code of ref document: A

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 17866750

Country of ref document: EP

Kind code of ref document: A1