WO2019229912A1 - Information processing device, information processing method, information processing program, and microscope - Google Patents

Information processing device, information processing method, information processing program, and microscope

Info

Publication number
WO2019229912A1
Authority
WO
WIPO (PCT)
Prior art keywords
unit
point cloud
information
input
point
Prior art date
Application number
PCT/JP2018/020864
Other languages
French (fr)
Japanese (ja)
Inventor
亘 友杉
定繁 石田
秀太朗 大西
五月 大島
Original Assignee
株式会社ニコン (Nikon Corporation)
Priority date
Filing date
Publication date
Application filed by 株式会社ニコン (Nikon Corporation)
Priority to PCT/JP2018/020864
Publication of WO2019229912A1

Classifications

    • G PHYSICS
    • G01 MEASURING; TESTING
    • G01N INVESTIGATING OR ANALYSING MATERIALS BY DETERMINING THEIR CHEMICAL OR PHYSICAL PROPERTIES
    • G01N21/00 Investigating or analysing materials by the use of optical means, i.e. using sub-millimetre waves, infrared, visible or ultraviolet light
    • G01N21/62 Systems in which the material investigated is excited whereby it emits light or causes a change in wavelength of the incident light
    • G01N21/63 Systems in which the material investigated is excited whereby it emits light or causes a change in wavelength of the incident light optically excited
    • G01N21/64 Fluorescence; Phosphorescence
    • G PHYSICS
    • G02 OPTICS
    • G02B OPTICAL ELEMENTS, SYSTEMS OR APPARATUS
    • G02B21/00 Microscopes
    • G02B21/36 Microscopes arranged for photographic purposes or projection purposes or digital imaging or video purposes including associated control and data processing arrangements
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis

Definitions

  • The present invention relates to an information processing apparatus, an information processing method, an information processing program, and a microscope.
  • STORM, PALM, etc. are known as super-resolution microscopes.
  • In STORM, a fluorescent substance is activated, and the activated fluorescent substance is irradiated with excitation light to acquire a fluorescence image (see Patent Document 1 below).
  • According to a first aspect of the present invention, there is provided an information processing apparatus including: a display control unit that displays a point cloud image on a display unit; an input information acquisition unit that acquires input information input by an input unit; and a processing unit that extracts a part of the point group from the point group included in the point cloud image based on the input information acquired by the input information acquisition unit, wherein the display control unit displays an extracted point cloud image based on the part of the point group extracted by the processing unit.
  • According to a second aspect, there is provided a microscope including: the information processing apparatus according to the first aspect; an illumination optical system that illuminates the sample with activation light that activates part of a fluorescent substance contained in the sample and with excitation light that excites at least part of the activated fluorescent substance; an observation optical system that forms an image of light from the sample; an imaging unit that captures the image formed by the observation optical system; and an image processing unit that calculates position information of the fluorescent substance based on the imaging result and generates a point cloud using the calculated position information.
  • According to a third aspect, there is provided an information processing method including: displaying a point cloud image on a display unit; acquiring input information input by an input unit; extracting a part of the point group from the point group included in the point cloud image based on the input information; and displaying an extracted point cloud image based on the extracted part of the point group on the display unit.
  • According to a fourth aspect, there is provided an information processing program that causes a computer to execute: displaying a point cloud image on a display unit; acquiring input information input by an input unit; extracting a part of the point group from the point group included in the point cloud image based on the input information; and displaying an extracted point cloud image based on the extracted part of the point group on the display unit.
  • FIG. 1 is a diagram illustrating an information processing apparatus according to the first embodiment.
  • the information processing apparatus 1 according to the embodiment generates an image (point cloud image) using the point cloud data DG and displays the image on the display device 2. Further, the information processing apparatus 1 processes point cloud data DG (data group).
  • the point cloud data DG is a plurality of N-dimensional data D1.
  • N is an arbitrary integer of 2 or more.
  • the N-dimensional data D1 is data (eg, vector data) in which N values are combined.
  • point cloud data DG is three-dimensional data in which coordinate values (eg, x1, y1, z1) in a three-dimensional space are combined. In the following description, it is assumed that the above N is 3.
  • N may be 2 or 4 or more.
  • point cloud data DG is m pieces of N-dimensional data. m is an arbitrary integer of 2 or more.
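  • As an illustration only (not taken from the patent), the point cloud data DG with N = 3 can be pictured as an m x 3 array, each row holding one piece of N-dimensional data D1; a minimal sketch in Python:

```python
import numpy as np

# Hypothetical point cloud data DG: m = 1000 points, N = 3 (x, y, z per row).
rng = np.random.default_rng(0)
DG = rng.uniform(0.0, 10.0, size=(1000, 3))

print(DG.shape)   # (1000, 3) -> m pieces of N-dimensional data
print(DG[0])      # one N-dimensional data D1, e.g. [x1, y1, z1]
```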
  • the point cloud image is an image generated using the point cloud data DG.
  • the point cloud data DG is three-dimensional data in which coordinate values (eg, x1, y1, z1) in a three-dimensional space are set as one set
  • the image is an image displaying points at each coordinate position.
  • the shape of the displayed point is not limited to a circle, and may be another shape such as an ellipse or a rectangle.
  • Point cloud data is sometimes simply referred to as a point cloud.
  • a plurality of points on the point cloud image are appropriately referred to as a point cloud.
  • the point cloud data DG is supplied to the information processing device 1 from, for example, a device external to the information processing device 1 (hereinafter referred to as an external device).
  • the external device is, for example, a microscope main body 51 shown later in FIG.
  • the external device may not be the microscope main body 51.
  • The external device may be a CT scanner that detects a value at each point inside the object, or a measurement device that measures the shape of the object.
  • the information processing apparatus 1 may generate point cloud data DG based on data supplied from an external device, and process the generated point cloud data DG.
  • the information processing apparatus 1 executes processing based on input information that a user inputs using a graphical user interface (referred to as GUI in this specification as appropriate).
  • the information processing device 1 is connected to a display device 2 (display unit).
  • the display device 2 is, for example, a liquid crystal display.
  • the information processing apparatus 1 supplies image data to the display device 2 and causes the display device 2 to display the image.
  • the display device 2 is an external device attached to the information processing device 1, but may be a part of the information processing device 1.
  • the information processing device 1 is connected to an input device 3 (input unit).
  • the input device 3 is an input interface that can be operated by a user.
  • the input device 3 includes, for example, at least one of a mouse, a keyboard, a touch pad, and a trackball.
  • the input device 3 detects an operation by the user and supplies the detection result to the information processing device 1 as input information input by the user.
  • the input device 3 is a mouse.
  • the information processing device 1 causes the display device 2 to display a pointer.
  • the information processing apparatus 1 acquires mouse movement information and click information indicating the presence or absence of a click from the input apparatus 3 as input information detected by the input apparatus 3.
  • the information processing apparatus 1 moves the pointer on the screen of the display device 2 based on the mouse movement information.
  • the information processing apparatus 1 executes processing assigned to the position of the pointer and click information (eg, left click, right click, drag, double click) based on the click information.
  • the input device 3 is, for example, a device externally attached to the information processing device 1, but may be a part of the information processing device 1 (for example, a built-in touch pad). Further, the input device 3 may be a touch panel integrated with the display device 2 or the like.
  • the information processing apparatus 1 includes, for example, a computer.
  • the information processing apparatus 1 includes an operating system unit 5 (hereinafter referred to as an OS unit 5), a GUI unit 6, a processing unit 7, and a storage unit 8.
  • the information processing apparatus 1 executes various processes according to the program stored in the storage unit 8.
  • the OS unit 5 provides an interface to the outside and the inside of the information processing apparatus 1.
  • the OS unit 5 controls the supply of image data to the display device 2.
  • the OS unit 5 acquires input information from the input device 3.
  • the OS unit 5 supplies input information to an application that manages an active GUI screen in the display device 2.
  • the GUI unit 6 includes an input control unit 11 and an output control unit 12.
  • the input control unit 11 is an input information acquisition unit that acquires input information input by the input unit (input device 3).
  • the output control unit 12 is a display control unit that displays a point cloud image on the display unit (display device 2).
  • the output control unit 12 causes the display device 2 to display a GUI screen (GUI screen W shown in FIG. 2 and the like later).
  • the GUI screen is a window provided by an application.
  • Information constituting the GUI screen (hereinafter referred to as GUI information) is stored in the storage unit 8, for example.
  • the output control unit 12 reads the GUI information from the storage unit 8 and supplies the GUI information to the OS unit 5.
  • the OS unit 5 causes the display device 2 to display a GUI screen based on the GUI information supplied from the output control unit 12. In this way, the output control unit 12 supplies the GUI information to the OS unit 5 to display the GUI screen on the display device 2.
  • the input control unit 11 acquires input information input by the user using the GUI screen. For example, the input control unit 11 acquires mouse movement information and click information as input information from the OS unit 5. When the click information indicates that there has been a click operation, the input control unit 11 causes the process assigned to the click information to be executed based on the coordinates of the pointer on the GUI screen obtained from the mouse movement information.
  • the input control unit 11 causes the output control unit 12 to execute processing for displaying the menu.
  • Information representing the menu is included in the GUI information, and the output control unit 12 causes the display device 2 to display the menu via the OS unit 5 based on the GUI information.
  • a left click is detected on the GUI screen.
  • the input control unit 11 specifies the position of the pointer on the GUI screen based on the movement information of the mouse, and determines whether there is a button at the specified pointer position.
  • the input control unit 11 causes a process assigned to this button to be executed if there is a button at the position of the pointer.
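  • As a rough illustration only (not the patent's implementation), the button hit-test and dispatch described above might look like the following, where each button is a screen rectangle paired with a callback:

```python
# Hypothetical sketch: buttons defined by a rectangle and the process assigned to them.
buttons = {
    "Analyze data": ((100, 40, 220, 60), lambda: print("start extraction process")),
}

def on_left_click(pointer_x, pointer_y):
    """Find the button under the pointer, if any, and execute its assigned process."""
    for name, ((x0, y0, x1, y1), callback) in buttons.items():
        if x0 <= pointer_x <= x1 and y0 <= pointer_y <= y1:
            callback()          # execute the process assigned to this button
            return name
    return None                 # no button at the pointer position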
  • Based on the input information acquired by the input information acquisition unit (input control unit 11), the processing unit 7 extracts a part of the point group (hereinafter referred to as a point set) from the point group included in the point cloud image.
  • the input information is information related to the point cloud specified in the point cloud image.
  • The processing unit 7 divides the point group included in the point cloud image into a plurality of point groups (a plurality of subsets), and extracts a part of the point groups based on the feature amount of each divided point group (subset) or its similarity to the designated point group.
  • the processing unit 7 includes a clustering unit 9 and a classifier 10.
  • the clustering unit 9 divides the point group included in the point group image into a plurality of point groups.
  • a point cloud obtained by dividing a point cloud included in a point cloud image is referred to as a subset.
  • The clustering unit 9 divides (classifies) the point cloud data DG into a plurality of subsets based on the distribution of the plurality of N-dimensional data D1. For example, the clustering unit 9 randomly selects N-dimensional data D1 from the point cloud data DG. Further, the clustering unit 9 counts the number of other N-dimensional data D1 existing in a predetermined area centered on the selected N-dimensional data D1. When the clustering unit 9 determines that the counted number of N-dimensional data D1 is equal to or greater than a threshold, it determines that the selected N-dimensional data D1 and the other N-dimensional data D1 existing in the predetermined region belong to the same subset.
  • The clustering unit 9 classifies the N-dimensional data D1 included in the point cloud data DG into a plurality of non-overlapping subsets or noise. For example, the clustering unit 9 assigns an identification number to each of the plurality of subsets and, for the N-dimensional data D1 belonging to a subset, stores the N-dimensional data D1 (or its identification number) and the identification number of the subset to which it belongs in the storage unit 8 in association with each other. In addition, the clustering unit 9 adds a flag indicating noise, for example, to the N-dimensional data D1 classified as noise. The clustering unit 9 may delete the N-dimensional data D1 determined to be noise from the point cloud data DG.
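  • The neighbour-counting rule above resembles density-based clustering. A minimal Python sketch of this step, using scikit-learn's DBSCAN as one possible stand-in (the radius and the minimum point count are illustrative values, not taken from the patent):

```python
import numpy as np
from sklearn.cluster import DBSCAN

def cluster_point_cloud(DG, radius=0.5, min_points=10):
    """Divide point cloud data DG (m x N array) into subsets.

    Points whose neighbourhood of the given radius contains at least
    min_points points are grouped into a subset; the remaining points
    are flagged as noise (label -1), mirroring the counting rule above.
    """
    labels = DBSCAN(eps=radius, min_samples=min_points).fit_predict(DG)
    subsets = {k: DG[labels == k] for k in set(labels) if k != -1}
    noise = DG[labels == -1]
    return subsets, noise
```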
  • the classifier 10 executes an extraction process for extracting a part of the point cloud (point set) from the point cloud data DG.
  • the information processing apparatus 1 acquires, as input information, information that specifies an extraction target by using the GUI as described above.
  • the information defining the extraction target is, for example, N-dimensional data distribution (hereinafter referred to as target distribution) corresponding to the point set to be extracted.
  • the input control unit 11 causes the processing unit 7 to execute the process assigned to the input information. For example, when the input information indicating the target distribution is acquired, the input control unit 11 identifies the distribution specified by the input information. Then, the input control unit 11 causes the processing unit 7 to execute an extraction process for extracting a point set of a distribution similar to the target distribution as a process assigned to the input information.
  • the processing unit 7 extracts a point set from the point cloud data DG including a plurality of N-dimensional data based on the distribution specified by the input control unit 11.
  • the classifier 10 classifies a point set that satisfies a predetermined condition from the point cloud data DG.
  • the classifier 10 executes a process (hereinafter referred to as a classification process) for classifying a point set that satisfies a condition that the similarity to the target distribution is equal to or greater than a predetermined value as the predetermined condition.
  • the processing unit 7 extracts a part of the point group (point set) from the point cloud data DG when the classifier 10 executes the classification process.
  • processing of the GUI unit 6 and the processing unit 7 in the extraction processing will be described with reference to FIGS.
  • FIG. 2 is a diagram showing a GUI screen according to the first embodiment.
  • the GUI screen W is displayed in the display area 2A of the display device 2 (see FIG. 1).
  • the GUI screen W is displayed in a part of the display area 2A, but may be displayed in full screen in the display area 2A.
  • the GUI screen W in FIG. 2 includes a window W1, a window W2, a window W3, and a window W4.
  • the point cloud image P1 is displayed in the window W1.
  • the point cloud image P1 is an image representing the distribution of the plurality of N-dimensional data D1 shown in FIG.
  • the N-dimensional data D1 is three-dimensional data, and one N-dimensional data D1 is represented by one point.
  • one N-dimensional data D1 shown in FIG. 1 is (x1, y1, z1), and is represented by a point in the point cloud image P1 where the X coordinate is x1, the Y coordinate is y1, and the Z coordinate is z1. Is done.
  • When the information processing apparatus 1 receives a command for opening the point cloud data DG (a command for displaying the point cloud data DG) from the user based on the input information, the information processing apparatus 1 generates data of the point cloud image P1.
  • the output control unit 12 supplies the data of the generated point cloud image P1 to the OS unit 5, and the OS unit 5 displays the data on the window W1 of the GUI screen W.
  • The information processing apparatus 1 may remove noise in the point cloud data DG. For example, when the point cloud data DG is obtained by detecting an object, the information processing apparatus 1 may determine the N-dimensional data D1 that is estimated not to constitute the structure of the object to be detected to be noise and exclude it from the processing target. For example, the information processing apparatus 1 may count the number of other N-dimensional data D1 existing in a space of a predetermined radius centered on a first N-dimensional data D1 (data point), and determine the first N-dimensional data to be noise when the counted number is less than a threshold value. The information processing apparatus 1 may generate the point cloud image P1 based on the point cloud data DG from which noise has been removed.
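  • A minimal sketch of this radius-based noise test (the radius and the neighbour threshold are illustrative values, not from the patent):

```python
import numpy as np
from scipy.spatial import cKDTree

def remove_noise(DG, radius=0.5, min_neighbors=5):
    """Drop points whose neighbourhood of the given radius contains fewer than
    min_neighbors other points, and return the cleaned point cloud."""
    tree = cKDTree(DG)
    # query_ball_point returns, for each point, the indices of all points
    # within `radius` (including the point itself), so subtract 1.
    counts = np.array([len(idx) - 1 for idx in tree.query_ball_point(DG, radius)])
    return DG[counts >= min_neighbors]
```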
  • the window W2 is output to the GUI screen W by the output control unit 12 when it is detected that the pointer P is right-clicked with the pointer P being placed on the GUI screen W, for example.
  • In the window W2, processing options regarding the point cloud data DG are displayed as options of input information. [Analyze data], [Some operation 1], [Some operation 2], and [Some operation 3] are displayed in the window W2 of FIG. 2. These options are, for example, buttons to which commands are assigned.
  • [Analyze data] is selected as the processing option.
  • the selected option is displayed with emphasis over the other buttons.
  • [Analyze data] is displayed in a larger font than other options in window W2 (eg, [Some operation 1]).
  • The selected option ([Analyze data]) is also displayed with a mark (for example, a check mark in the figure) indicating that it is being selected.
  • The input control unit 11 acquires information on the option selected using the GUI screen W among the input information options. For example, when it is detected that a left click is performed in a state where the pointer P is arranged on [Analyze data], the input control unit 11 acquires the content of the process assigned to [Analyze data]. The content of the process assigned to each option is defined in the GUI information, and the input control unit 11 collates the input information with the GUI information and acquires the content of the process corresponding to this input information. Then, the input control unit 11 causes the process assigned to [Analyze data] to be executed. [Analyze data] is assigned a process for starting the extraction process.
  • the window W3 is generated.
  • the input control unit 11 causes the output control unit 12 to output the window W3.
  • the information on the window W3 is included in the GUI information, and the output control unit 12 acquires the information on the window W3 from the GUI information stored in the storage unit 8.
  • the output control unit 12 supplies information on the window W3 to the OS unit 5, and the OS unit 5 causes the display device 2 to display the window W3.
  • [Some operation 1], [Some operation 2], and [Some operation 3] are assigned other processes (eg, opening a file, outputting the result, and ending the application).
  • the GUI unit 6 may not provide at least one option of [Some operation 1], [Some operation 2], and [Some operation 3]. Further, the GUI unit 6 may provide other options of [Some operation 1], [Some operation 2], and [Some operation 3].
  • [Example] is selected as an option for the distribution designation method. The selected option is displayed with emphasis over the other buttons. For example, [Example] is displayed in a larger font than other options of the window W3 (eg, [Already prepared]).
  • A mark (for example, a check mark) indicating that selection is being performed is displayed together with the selected option ([Example]).
  • the input control unit 11 acquires information on options selected using the GUI screen W among the options of input information. For example, when it is detected that the left click is performed in a state where the pointer P is placed on [Example], the contents of the process assigned to [Example] are acquired. The content of the process assigned to the option is defined in the GUI information, and the input control unit 11 collates the input information with the GUI information and acquires the content of the process corresponding to this input information. Then, the input control unit 11 causes the process assigned to [Example] to be executed. [Example] is assigned a process of displaying options for selecting a distribution from predetermined candidates.
  • a window W4 is generated.
  • the input control unit 11 causes the output control unit 12 to output the window W4.
  • the information on the window W4 is included in the GUI information, and the output control unit 12 acquires the information on the window W4 from the GUI information stored in the storage unit 8.
  • the output control unit 12 supplies information on the window W4 to the OS unit 5, and the OS unit 5 causes the display device 2 to display the window W4.
  • distribution candidate categories are displayed as input information options.
  • [Geometric shape] and [Biological objects] are displayed as distribution candidate categories. These options are, for example, buttons to which commands are assigned.
  • The input information is information related to a geometric shape (information specifying [Geometric shape]).
  • [Geometric shape] is selected as a category of distribution candidates.
  • the selected option is displayed more emphasized than the other buttons. For example, [Geometric shape] is displayed in a larger font than [Biological objects].
  • The selected [Geometric shape] is displayed with a mark (for example, a check mark) indicating that it is being selected.
  • the input control unit 11 acquires information on options selected using the GUI screen W among the options of input information. For example, when it is detected that the left click is performed in a state where the pointer P is placed on [Geometric shape], the contents of the process assigned to [Geometric shape] are acquired. The content of the process assigned to the option is defined in the GUI information, and the input control unit 11 collates the input information with the GUI information and acquires the content of the process corresponding to this input information. Then, the input control unit 11 causes the process assigned to [Geometric shape] to be executed. [Geometric shape] is assigned a process of displaying a geometric candidate representing a distribution as a predetermined candidate.
  • a window W5 is generated.
  • the input control unit 11 causes the output control unit 12 to output the window W5.
  • the information on the window W5 is included in the GUI information, and the output control unit 12 acquires the information on the window W5 from the GUI information stored in the storage unit 8.
  • the output control unit 12 supplies information on the window W5 to the OS unit 5, and the OS unit 5 causes the display device 2 to display the window W5.
  • geometric shape candidates representing a distribution are displayed as input information options.
  • [Sphere], [Ellipsoid], [Star], [Etc ...] are displayed as geometric shape candidates. These options are, for example, buttons to which commands are assigned.
  • [Ellipsoid] is selected as a geometric candidate.
  • the selected option ([Ellipsoid]) is displayed with emphasis over the other buttons. For example, [Ellipsoid] is displayed in a larger font than [Sphere].
  • The selected [Ellipsoid] is also displayed with a mark (for example, a check mark) indicating that it is being selected.
  • the input control unit 11 acquires information on options selected using the GUI screen W among the options of input information. For example, when it is detected that the left click is performed in a state where the pointer P is placed on [Ellipsoid], the contents of the process assigned to [Ellipsoid] are acquired. The content of the process assigned to the option is defined in the GUI information, and the input control unit 11 collates the input information with the GUI information and acquires the content of the process corresponding to this input information. Then, the input control unit 11 causes the process assigned to [Ellipsoid] to be executed. [Ellipsoid] is assigned a process for designating the distribution of data points that fall within an ellipsoid as the target distribution for extraction.
  • [Sphere] indicates that the target distribution is a spherical distribution.
  • [Star] indicates that the target distribution falls within a star shape.
  • [Etc ...] indicates that another geometric shape is designated as the target distribution. For example, when [Etc ...] is selected, the user can designate the geometric shape as a target distribution by, for example, reading data defining the geometric shape.
  • the input control unit 11 causes the processing unit 7 to perform an extraction process using a distribution that falls within an ellipsoid as a target distribution.
  • the processing unit 7 extracts N-dimensional data belonging to a subset whose outer shape is approximated by an ellipsoid from the point cloud data DG.
  • Information related to the size of the geometric shape may also be settable.
  • FIG. 3 is a diagram illustrating processing by the processing unit according to the first embodiment.
  • symbol Ka is a target distribution.
  • the processing unit 7 (clustering unit 9) divides the point group included in the point group image into a plurality of point groups (subsets).
  • the codes Kb1 to Kb6 in FIG. 3 are distributions corresponding to the point groups (subsets) divided by the clustering unit 9.
  • Reference numerals Kb1 to Kb6 are distributions of the N-dimensional data D1 in a part of the space (eg, ROI) selected (cut out) from the data space in which the point cloud data DG is accommodated.
  • the processing unit 7 extracts a part of the point group (point set) based on the feature quantity of the divided point group (subset) and the geometric feature quantity.
  • the classifier 10 calculates the feature amount of the subset divided by the clustering unit 9 and compares (matches) with the feature amount (eg, geometric feature amount) specified by the input information.
  • the classifier 10 classifies the subset as a point set when the feature amount of the subset divided by the clustering unit 9 matches the feature amount (for example, the geometric feature amount) specified by the input information.
  • the feature amount may be the size of the structure.
  • the size is a size (absolute value) or a relative size (relative value) in real space.
  • The processing unit 7 may divide the point group included in the point cloud image into a plurality of point groups (subsets) and extract a part of the point group based on the size of the shape represented by each divided point group and the size specified by the input information.
  • The classifier 10 may classify the point set based on the similarity between the point group (target distribution) specified by the input information and the distribution of points corresponding to each subset. For example, the classifier 10 calculates the degree of similarity of the distribution Kb1, the distribution Kb2, ... with the target distribution Ka. For example, the classifier 10 calculates the similarity Q1 between the distribution Kb1 and the target distribution Ka. For example, the similarity Q1 is the value obtained by subtracting from 1 the square root of the sum of the squared distances (norms) between the N-dimensional data D1 selected from the distribution Kb1 and the corresponding N-dimensional data D1 selected from the target distribution Ka, divided by the number of data.
  • the similarity Q1 may be a correlation coefficient between the distribution Kb1 and the target distribution Ka, for example. The same applies to the similarity Q2, the similarity Q3,.
  • the processing unit 7 may convert the distribution Kb1 and calculate the similarity between the converted distribution Kb1 and the target distribution Ka.
  • The transformation includes, for example, at least one of a translation, a rotation, a linear transformation, a scale transformation, and a transformation combining two or more of these transformations (eg, an affine transformation).
  • the type of conversion may be determined in advance or may be set according to input information from the user.
  • the classifier 10 determines whether or not to extract the distribution Kb1, the distribution Kb2,... By comparing the calculated similarity with a threshold value. For example, the classifier 10 determines that the N-dimensional data D1 belonging to the distribution Kb1 is extracted from the point group data DG when the similarity Q1 between the distribution Kb1 and the target distribution Ka is equal to or greater than a threshold value. Further, when the similarity Q1 between the distribution Kb1 and the target distribution Ka is less than the threshold, the classifier 10 determines that the N-dimensional data D1 belonging to the distribution Kb1 is not extracted from the point cloud data DG. The classifier 10 extracts a set of N-dimensional data D1 determined to be extracted as a partial point group (point set). The classifier 10 causes the storage unit 8 to store the extracted point set information as a processing result.
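  • A minimal sketch of this similarity and threshold decision, assuming the subset distribution and the target distribution have already been paired point-for-point (the threshold value is illustrative only):

```python
import numpy as np

def similarity(dist_kb, target_ka):
    """Similarity Q between a subset distribution Kb and the target distribution Ka,
    following the paired-distance formula above: Q = 1 - sqrt(mean squared distance).
    Both arrays are assumed to hold the same number of corresponding points
    (in practice the subset could first be aligned, eg by an affine transform)."""
    d2 = np.sum((dist_kb - target_ka) ** 2, axis=1)   # squared norm per point pair
    return 1.0 - np.sqrt(d2.mean())

THRESHOLD = 0.8   # illustrative value only

def is_extracted(dist_kb, target_ka, threshold=THRESHOLD):
    """Extract the subset only when its similarity to the target is >= threshold."""
    return similarity(dist_kb, target_ka) >= threshold
```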
  • the classifier 10 may classify the point set as follows.
  • As a condition for classifying the point set, a condition that the geometric shape ([Geometric shape]) is an ellipsoid ([Ellipsoid]) is specified.
  • The following processing is performed on each point (each N-dimensional data D1) included in the point cloud data DG: when the number of points existing within a region of a certain radius centered on the point is equal to or greater than a predetermined value, the points in this region are defined as one subset (lump).
  • the processing unit 7 calculates the feature amount of the subset classified by the clustering unit 9. For example, the processing unit 7 calculates a ratio between the major axis length and the minor axis length of the outer shape of the structure represented by the subset as the feature amount.
  • The classifier 10 classifies (extracts), as a point set, a subset in which the feature amount calculated by the processing unit 7 satisfies the above classification condition. For example, when the condition that the geometric shape ([Geometric shape]) is an ellipsoid ([Ellipsoid]) is specified as the condition for classification, the classifier 10 classifies the subset as a sphere when the ratio of the major-axis length to the minor-axis length calculated as the feature amount by the processing unit 7 falls within a predetermined range (eg, from 0.9 to 1.1), and classifies the subset as an ellipsoid when the ratio is outside the predetermined range (eg, less than 0.9 or greater than 1.1).
  • The classifier 10 may also classify based on parameters (eg, feature quantities) other than similarity.
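  • A minimal sketch of this feature-quantity route, estimating the major/minor axis ratio of a subset from the eigenvalues of its covariance matrix (one possible way to obtain the ratio, not stated in the patent; the 0.9 to 1.1 range follows the example above):

```python
import numpy as np

def axis_ratio(subset):
    """Ratio of the longest to the shortest principal axis of the subset,
    estimated from its covariance matrix (assumes a non-degenerate subset)."""
    cov = np.cov(subset, rowvar=False)
    eigvals = np.sort(np.linalg.eigvalsh(cov))    # ascending eigenvalues
    return np.sqrt(eigvals[-1] / eigvals[0])      # major axis / minor axis length

def classify_shape(subset, lo=0.9, hi=1.1):
    """Sphere if the major/minor ratio falls inside [lo, hi], otherwise ellipsoid."""
    r = axis_ratio(subset)
    return "sphere" if lo <= r <= hi else "ellipsoid"
```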
  • the display control unit displays an extracted point cloud image based on a part of the point cloud (point set) extracted by the processing unit 7.
  • the extracted point cloud image is an image representing a part of a point cloud (point set) extracted from the point cloud image.
  • the output control unit 12 causes the extraction point cloud image based on the extraction result by the processing unit 7 to be output to the GUI screen W.
  • FIG. 4 is a diagram showing an extracted point cloud image according to the first embodiment.
  • the output control unit 12 outputs the distribution of the subset extracted by the processing unit 7 to the GUI screen W as the extracted point cloud image P2.
  • The display control unit (output control unit 12) may display the part of the point group (point set) extracted by the processing unit 7 in the extracted point cloud image P2 so that one or both of its color and brightness differ from those in the point cloud image P1.
  • For example, the display control unit (output control unit 12) may display the part of the point group (point set) extracted by the processing unit 7 in a color different from that of the other point groups, or with a brightness different from that of the other point groups.
  • the display control unit (output control unit 12) may display only a part of the point group (point set) extracted by the processing unit 7 in the extracted point group image.
  • the display control unit (output control unit 12) may display the extracted point cloud image excluding the point cloud other than the point set from the point cloud included in the point cloud image.
  • the input control unit 11 causes the output control unit 12 to execute the process of outputting the extracted point cloud image P2 when the process of outputting the extracted point cloud image P2 is designated by the input information input by the user.
  • the output control unit 12 supplies data of the extracted point cloud image P2 generated using the processing result stored in the storage unit 8 to the OS unit 5.
  • the OS unit 5 outputs the data of the extracted point cloud image P2 to the display device 2 and displays the extracted point cloud image P2 on the GUI screen W.
  • The extracted point cloud image P2 shows the result of the extraction process when an ellipsoid is designated as the geometric shape representing the target distribution Ka, as described with reference to FIGS. 2 and 3.
  • the extracted point cloud image P2 includes a distribution Kc of N-dimensional data D1 belonging to a subset determined to have an outer shape similar to an ellipsoid.
  • The extracted point cloud image P2 is an image in which the subsets indicated by triangles or rectangles are excluded from the point cloud image P1 in FIG. 2. Note that the extracted subsets change depending on the threshold value used to determine whether or not the outline of a subset is similar to an ellipsoid. This threshold value may be changeable according to input information input by the user.
  • FIG. 5 is a diagram showing an extracted point cloud image according to the first embodiment.
  • the extracted point cloud image P3 in FIG. 5 corresponds to the processing result of the extraction processing when the threshold value for determining whether or not the outer shape of the subset is similar to an ellipsoid is changed.
  • The similarity threshold for extracting the point set corresponding to the extracted point cloud image P3 in FIG. 5 is set higher than the similarity threshold for extracting the point set corresponding to the extracted point cloud image P2 in FIG. 4.
  • The extracted point cloud image P3 in FIG. 5 is an image in which the distributions lacking part of an ellipsoidal shape, which remained in the extracted point cloud image P2 in FIG. 4, have been further excluded.
  • As a result, the number of point sets (distributions Kc of extracted N-dimensional data D1) included in the extracted point cloud image P3 in FIG. 5 is less than the number of point sets (distributions Kc of extracted N-dimensional data D1) included in the extracted point cloud image P2 in FIG. 4.
  • the information processing apparatus 1 may not remove noise from the point cloud data DG. For example, at least a part of the noise is excluded from the extracted point cloud image P2 by determining that the noise is not similar to the target distribution Ka.
  • FIG. 6 is a flowchart illustrating the information processing method according to the first embodiment.
  • In step S1, the output control unit 12 causes the display unit (display device 2) to output the GUI screen W.
  • In step S2, the information processing apparatus 1 acquires the point cloud data DG.
  • In step S3a, the information processing apparatus 1 removes noise from the point cloud data DG.
  • In step S3b, the clustering unit 9 classifies subsets from the point cloud data DG from which noise has been removed (performs clustering processing on the point cloud data DG).
  • In step S4, the information processing apparatus 1 generates the point cloud image P1 based on the point cloud data DG from which noise has been removed.
  • At least a part of the processing from step S2 to step S4 can be executed at an arbitrary timing before the processing of step S5 described below. For example, at least a part of the process from step S2 to step S4 may be executed before the start of the process of step S1, or may be executed in parallel with the process of step S1, and the end of the process of step S1. It may be performed later.
  • In step S5, the output control unit 12 causes the point cloud image P1 generated in step S4 to be output to the GUI screen W.
  • In step S6, the input control unit 11 acquires input information using the GUI screen W.
  • For example, the input control unit 11 acquires information regarding a point cloud specified in the point cloud image as input information.
  • For example, a feature amount is specified by the input information as an extraction condition. When the input information is a geometric shape (eg, [Geometric shape] and [Ellipsoid] in FIG. 2), the input information specifies the feature quantity of the ellipsoid (eg, the ratio of the major-axis length to the minor-axis length).
  • In step S7, the processing unit 7 extracts a partial set (point set) based on the input information.
  • In step S8, the classifier 10 calculates the feature quantities of the clustered subsets and compares the feature quantity of each subset with the feature quantity based on the input information. For example, when the input information is a geometric shape (eg, [Geometric shape] and [Ellipsoid] in FIG. 2), the processing unit 7 fits the shape represented by the subset to an ellipsoid and calculates the ratio of the major-axis length to the minor-axis length as a feature quantity of the subset.
  • The classifier 10 compares the feature quantity of the geometric shape based on the input information (eg, a ratio of the major axis to the minor axis of 0.9 or less or 1.1 or more) with the feature quantity of the subset.
  • The classifier 10 classifies the subset as a partial point group (point set) when the feature quantity of the subset satisfies a predetermined relationship with the feature quantity based on the input information.
  • For example, when the feature quantity of the subset matches the feature quantity of the ellipsoid based on the input information (eg, a ratio of the major axis to the minor axis of 0.9 or less or 1.1 or more), the classifier 10 classifies this subset as an ellipsoid.
  • the processing unit 7 stores the extracted point set information in the storage unit 8.
  • In step S10, the output control unit 12 outputs the extraction result.
  • the output control unit 12 causes the extraction point group image P2 representing the extraction result by the processing unit 7 to be output to the GUI screen W.
  • the output control unit 12 may not output the extraction result by the processing unit 7 to the GUI screen W.
  • the output control unit 12 may cause the device (eg, printer) other than the display device 2 to output the extraction result by the processing unit 7.
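  • As a rough sketch only, the data-processing part of this flow (steps S2 through S10, leaving out the GUI display and input steps) can be composed from the hypothetical helpers sketched earlier (remove_noise, cluster_point_cloud, classify_shape); the parameter values are illustrative:

```python
def analyze_point_cloud(DG, target_shape="ellipsoid", radius=0.5, min_points=10):
    """End-to-end sketch of steps S2-S10, using the helper functions
    sketched in the preceding code examples (assumed to be defined)."""
    cleaned = remove_noise(DG, radius, min_neighbors=min_points)     # step S3a: noise removal
    subsets, _ = cluster_point_cloud(cleaned, radius, min_points)    # step S3b: clustering
    point_sets = {k: s for k, s in subsets.items()                   # steps S7-S9: extraction
                  if classify_shape(s) == target_shape}
    return point_sets                                                 # step S10: output/display
```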
  • As described above, in the information processing method according to the present embodiment, the point cloud image is displayed on the display unit, the input information input by the input unit is acquired, a part of the point group is extracted from the point group included in the point cloud image based on the input information, and an extracted point cloud image based on the extracted part of the point group is displayed on the display unit.
  • FIG. 7 is a diagram showing a GUI screen according to the first embodiment.
  • In the example of FIG. 7, the input information is the type of structure (information specifying [Biological objects]).
  • In the window W4 in FIG. 2, [Geometric shape] is selected as the distribution candidate category, but in the window W4 in FIG. 7, [Biological objects] is selected.
  • [Biological objects] indicates that the type of structure corresponding to the point set extracted by the processing unit 7 is specified as a distribution candidate category.
  • [Biological objects] is assigned a process of displaying candidate types of structures as predetermined candidates.
  • the window W6 is generated.
  • the process for generating the window W6 is the same as the process for generating the window W5 described with reference to FIG.
  • candidates for the type of structure to be extracted are displayed as choices of input information.
  • [Clathrin], [Mitochondria], and [Tubulin] are displayed as candidates for the type of structure to be extracted.
  • [Clathrin] is assigned a process for specifying clathrin as an extraction target.
  • the processing unit 7 (clustering unit 9) divides the point group included in the point group image into a plurality of point groups (subsets).
  • the processing unit 7 (classifier 10) extracts a part of the point group based on the feature amount of the point group (subset) divided by the clustering unit 9 and the feature amount of the structure.
  • the storage unit 8 stores information on the feature amount of the structure.
  • the information regarding the feature amount of the structure is information that defines the shape of the structure (eg, clathrin), for example.
  • the information on the feature amount of the structure is information defining the distribution of the N-dimensional data D1 corresponding to the shape of the structure (eg, clathrin), for example.
  • The input control unit 11 designates a distribution corresponding to the shape of clathrin as the distribution of the N-dimensional data D1 in the point set to be extracted, and causes the processing unit 7 to execute the extraction process.
  • the processing unit 7 reads the distribution information corresponding to the shape of the clathrin from the storage unit 8 and executes the extraction process.
  • a process for designating mitochondria as an extraction target is assigned to [Mitochondria].
  • [Tubulin] is assigned a process for specifying tubulin as an extraction target. The process when [Mitochondria] or [Tubulin] is selected is the same as the process when [Clathrin] is selected.
  • The point set can also be extracted by selecting [Input trained data] or [Targeting] instead of specifying conditions (eg, the type of structure or its shape) from predetermined candidates.
  • FIG. 8 is a diagram showing a GUI screen according to the first embodiment.
  • In the window W3 in FIG. 2, [Example] is selected as the option for the distribution designation method, but in the window W3 in FIG. 8, [Targeting] is selected.
  • [Targeting] indicates that a method for specifying a distribution by a graphic drawn by the user on the GUI screen W is selected as a distribution specifying method.
  • [Targeting] is assigned a process of displaying candidates for a method of drawing a graphic on the GUI screen W.
  • the window W7 is generated.
  • the process for generating the window W7 is the same as the process for generating the window W4 described with reference to FIG.
  • [Rectangular domain] and [Draw curve] are displayed as candidates for a method of drawing a graphic on the GUI screen W.
  • [Rectangular domain] is selected.
  • [Rectangular domain] indicates that the distribution of the N-dimensional data D1 inside the specified area is specified as the target distribution by specifying the rectangular parallelepiped area.
  • the input control unit 11 displays a rectangular parallelepiped area AR1 at the position of the pointer P when it is detected that the left click is performed with the pointer P placed on the point cloud image P1.
  • When it is detected that the pointer P is placed on a side of the area AR1 and dragged in a direction crossing that side, the input control unit 11 expands or contracts the area AR1 in the moving direction of the pointer P and displays the resized area AR1. In this way, the user can expand and contract the area AR1 in each of the X direction, the Y direction, and the Z direction.
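  • A minimal sketch of selecting the N-dimensional data D1 inside the rectangular parallelepiped area AR1 once its extent has been fixed (the helper name and coordinates are hypothetical):

```python
import numpy as np

def points_in_box(DG, lo, hi):
    """Return the points of DG inside the axis-aligned box [lo, hi]
    (the rectangular parallelepiped area AR1); lo and hi are length-3
    arrays of the minimum and maximum X, Y, Z coordinates."""
    lo, hi = np.asarray(lo), np.asarray(hi)
    mask = np.all((DG >= lo) & (DG <= hi), axis=1)
    return DG[mask]
```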
  • the input control unit 11 displays the point cloud image P1 whose viewpoint has been changed.
  • the information processing apparatus 1 executes rendering processing based on the direction and amount of movement of the pointer P by dragging, and generates a point cloud image P1 with a changed viewpoint.
  • the information processing apparatus 1 can also display the point cloud image P1 by zooming (eg, zooming in or zooming out) based on the input information.
  • the input control unit 11 causes the output control unit 12 to execute a process of displaying the point cloud image P1 whose viewpoint has been changed. In this way, the user can appropriately expand and contract the area AR1 while viewing the point cloud image P1 from different directions, and specify the area AR1 so as to surround a desired subset.
  • FIG. 9 is a diagram showing a GUI screen according to the first embodiment.
  • [Targeting] is selected as the distribution designation method.
  • [Draw curve] is selected as a candidate for the method of drawing a graphic on the GUI screen W.
  • [Draw curve] indicates that the user draws a free curve while moving the pointer P, and designates a distribution surrounded by the free curve as a target distribution.
  • When it is detected that the pointer P is dragged with the pointer P placed on the point cloud image P1, the input control unit 11 displays a curve P4 corresponding to the locus of the pointer P starting from the position of the pointer P at the start of the drag.
  • the input control unit 11 sets the position of the pointer P when the drag is released as the end point of the curve P4.
  • the input control unit 11 determines whether or not the curve P4 drawn by the user includes a closed curve.
  • When the curve P4 does not include a closed curve, the input control unit 11 adjusts the curve P4 so that it includes a closed curve, for example, by interpolation processing.
  • a three-dimensional region can be specified by using the point cloud image P1 whose viewpoint has been changed, as in the case where [Rectangular domain] is selected.
  • the input control unit 11 designates the distribution of the N-dimensional data D1 inside the closed curve included in the curve P4 as the target distribution.
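  • A minimal sketch of designating the data inside the closed curve, using a 2-D point-in-polygon test on the current view plane (the helper name and the projection onto X-Y are assumptions, not from the patent):

```python
import numpy as np
from matplotlib.path import Path

def points_in_closed_curve(DG, curve_xy):
    """Return the points of DG whose X-Y projection lies inside the closed
    curve P4 drawn by the user; curve_xy is a list of (x, y) vertices along
    the curve. Only a 2-D test on the current view plane is sketched here."""
    polygon = Path(curve_xy)
    mask = polygon.contains_points(DG[:, :2])
    return DG[mask]
```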
  • FIGS. 10 and 11 are diagrams showing a GUI screen according to the first embodiment.
  • [Input trained data] is selected as an option for the distribution designation method.
  • [Input trained data] is an option indicating that information defining a distribution extracted by the processing unit is read as a distribution designation method. For example, a process of reading information obtained by machine learning (described later with reference to FIGS. 16 to 20) is assigned to [Input trained data].
  • The input control unit 11 causes the output control unit 12 to display the window W8 when it determines that [Input trained data] is selected according to the user's input information.
  • [Drop here] in the window W8 indicates that a file including information defining the target distribution is designated by drag and drop.
  • The input control unit 11 displays a window W9 indicating the hierarchy of files managed by the OS unit 5. Alternatively, the window W9 may be displayed by the output control unit 12. In this case, the user can specify a file including information defining the target distribution in the window W9.
  • The information processing apparatus 1 may read a file that defines extraction conditions, instead of reading a learning result file using [Input trained data], and extract a point set based on the definition. For example, the information processing apparatus 1 may extract a point set by reading a file defining a geometric feature quantity as a file defining an extraction condition via [Etc ...] in the window W5 in FIG. 2.
  • the input information options described with reference to FIGS. 2 and 7 to 11 can be changed as appropriate.
  • the GUI unit 6 may not provide some of the input information options described with reference to FIGS. 2 and 7 to 11. Further, the GUI unit 6 may provide options different from the input information options described with reference to FIGS. 2 and 7 to 11.
  • The processing unit 7 may select a partial region (eg, an ROI) from the space in which the point cloud data DG is defined, and calculate the similarity between the distribution of the N-dimensional data D1 in the selected region and the specified distribution.
  • the classifier 10 may classify the selected region as a subset when the calculated similarity is greater than or equal to a threshold value.
  • The processing unit 7 may extract the subsets by changing the position of the partial region and repeating the process of determining whether the partial region corresponds to a subset to be extracted.
  • In the second embodiment, the information processing apparatus 1 has the same configuration as that shown in FIG. 1. In the present embodiment, the configuration of the information processing apparatus 1 is described with reference to FIG. 1 as appropriate.
  • the output control unit 12 displays the subset classified by the clustering unit 9 as a distribution candidate specified by the user using the input information.
  • FIG. 12 is a diagram illustrating a GUI screen output by the output control unit based on the subset classified by the clustering unit according to the second embodiment.
  • the GUI screen W in FIG. 12 includes a window W10.
  • The symbols Kd in the window W10 each indicate the distribution of the N-dimensional data D1 in a subset classified by the clustering unit 9.
  • the output control unit 12 may display the subset identification number assigned by the clustering unit 9 and the distribution Kd together.
  • The information processing apparatus 1 can display the distribution of at least a part of the point cloud data DG by changing the viewpoint, as described with reference to FIGS. 8 and 9. Similarly, in FIG. 12, the information processing apparatus 1 can display the distribution of the N-dimensional data D1 for each subset by changing the viewpoint. For example, when the input control unit 11 detects that the pointer P is dragged in a state where the pointer P is arranged on a subset distribution Kd, the input control unit 11 displays the distribution Kd by changing the viewpoint.
  • FIG. 13 is a diagram showing a distribution designation method using the GUI screen according to the second embodiment.
  • the user can select each of the plurality of distributions Kd displayed in the window W10 based on the input information.
  • The user can specify, by input information, whether or not a selected distribution Kd (hereinafter referred to as distribution Kd1) is to be an extraction target.
  • The input control unit 11 determines, based on the user's input information, which distribution has been selected.
  • the input control unit 11 displays the selected distribution Kd1 so as to be distinguishable from other distributions Kd.
  • the input control unit 11 displays a frame (indicated by a thick line in FIG. 13) surrounding the distribution Kd1 in a color or brightness different from the frame surrounding the other distribution Kd.
  • the user can specify a distribution to be extracted (hereinafter referred to as an extraction target distribution) by input information.
  • the input control unit 11 determines that the distribution Kd1 is designated as the extraction target distribution when it is detected that the left click is performed in a state where the pointer P is arranged on the selected distribution Kd1.
  • the symbols Kd2 and Kd3 represent distributions determined to be designated as the extraction target distribution.
  • the input control unit 11 displays the distribution Kd2 and the distribution Kd3 so as to be distinguishable from other distributions Kd.
  • the input control unit 11 displays a frame (indicated by a two-dot chain line in FIG. 13) surrounding the distribution Kd2 in a color or brightness different from that of the frame surrounding the other distribution Kd.
  • the user can specify a distribution to be excluded from extraction (hereinafter referred to as an extraction exclusion distribution) by input information.
  • the input control unit 11 determines that the distribution Kd1 is designated as the extraction exclusion distribution when it is detected that the right-click is performed in a state where the pointer P is placed on the selected distribution Kd1.
  • The distribution Kd determined to be designated as the extraction exclusion distribution is represented by the symbol Kd4.
  • the input control unit 11 displays the distribution Kd4 so as to be distinguishable from other distributions Kd.
  • the input control unit 11 displays a frame (indicated by a dotted line in FIG. 13) surrounding the distribution Kd4 in a color or brightness different from the frame surrounding the other distribution Kd.
  • the processing unit 7 extracts a point set from the point group based on the similarity between the distribution specified by the input control unit 11 and the distribution of N-dimensional data in the subset classified by the clustering unit 9.
  • FIG. 15 is a diagram illustrating processing by the processing unit according to the second embodiment.
  • the processing unit 7 calculates the similarity between each of the distributions Kd (Kd2, Kd3) determined to be designated as the extraction target distribution and each of the subset distributions Kd classified by the clustering unit 9.
  • the distribution Kd for which the degree of similarity with the distribution Kd (Kd2, Kd3) determined to be designated as the extraction target distribution is calculated is represented by the symbol Kd5.
  • the distribution Kd (Kd2, Kd3) determined to be designated as the extraction target distribution has a maximum similarity with itself, and may be excluded from the partner distribution Kd5 for calculating the similarity.
  • the processing unit 7 extracts a distribution whose similarity is equal to or greater than a threshold for at least one of the distributions Kd (Kd2, Kd3) determined to be designated as the extraction target distribution.
  • one of the distributions Kd5 is represented by a symbol Kd51.
  • the processing unit 7 extracts the distribution Kd51 when one or both of the similarity Q11 between the distribution Kd2 and the distribution Kd5 (Kd51) and the similarity Q21 between the distribution Kd3 and the distribution Kd51 are equal to or greater than a threshold value.
  • The processing unit 7 may determine that the distribution Kd51 is not similar to the target distribution when only one of the similarity Q11 and the similarity Q21 is less than the threshold. Further, the processing unit 7 may determine whether or not the distribution is similar to the target distribution based on a value (eg, an average value) calculated from the similarity Q11 and the similarity Q21. In FIG. 15, two distributions Kd2 and Kd3 are shown as target distributions, but the number of target distributions may be one or three or more.
  • for the distribution Kd4 (see FIG. 13) to be excluded from the extraction, the processing unit 7 calculates the degree of similarity with the distributions Kd5 in the same manner as for the distributions (Kd2, Kd3) determined to be designated as the extraction target distribution.
  • the processing unit 7 determines that a distribution Kd5 whose similarity to the distribution Kd4 is equal to or greater than the threshold is dissimilar to the target distribution.
  • the processing unit 7 excludes the distribution determined to be dissimilar to the target distribution from the extraction.
  • the processing unit 7 calculates the degree of similarity of each distribution Kd4 with each of the plurality of distributions Kd5.
  • the processing unit 7 may determine whether or not the distribution is dissimilar to the target distribution based on a plurality of values calculated as the degrees of similarity between each distribution Kd4 and the plurality of distributions Kd5. For example, the processing unit 7 may determine that the distribution is not similar to the target distribution when the maximum value of the plurality of values is equal to or greater than a threshold value. Further, the processing unit 7 may determine that the distribution is not similar to the target distribution when the minimum value of the plurality of values is equal to or greater than a threshold value. Further, the processing unit 7 may determine that the distribution is not similar to the target distribution when a value (eg, an average value) calculated from the plurality of values is equal to or greater than a threshold value.
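As a rough illustration of the similarity-based extraction and exclusion described above, the following sketch assumes that each clustered distribution is summarized by a small feature vector and that similarity is measured by cosine similarity; the helper names, the threshold value, and the choice of similarity measure are assumptions made only for this example, not the method fixed by this disclosure.

```python
import numpy as np

def cosine_similarity(a, b):
    """Similarity between two distribution summaries (illustrative choice)."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def extract_point_sets(subset_summaries, target_summaries, exclusion_summaries, threshold=0.9):
    """Keep subsets similar to at least one extraction-target distribution and
    not similar to any extraction-exclusion distribution (hypothetical helper)."""
    extracted = []
    for s in subset_summaries:
        like_target = any(cosine_similarity(s, t) >= threshold for t in target_summaries)
        like_excluded = any(cosine_similarity(s, e) >= threshold for e in exclusion_summaries)
        if like_target and not like_excluded:
            extracted.append(s)
    return extracted

# Toy summaries: each vector stands for the distribution of one clustered subset (Kd5).
subsets = [np.array([1.0, 0.1]), np.array([0.1, 1.0]), np.array([0.9, 0.2])]
targets = [np.array([1.0, 0.0])]      # e.g. designated target distributions Kd2, Kd3
excluded = [np.array([0.0, 1.0])]     # e.g. designated exclusion distribution Kd4
print(len(extract_point_sets(subsets, targets, excluded)))  # -> 2 with this toy data
```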
  • FIG. 16 is a flowchart illustrating an information processing method according to the second embodiment.
  • the processing from step S1 to step S3 is the same as the processing described in the first embodiment.
  • the clustering unit 9 classifies the point cloud data DG into a subset.
  • the process of step S3 may be executed as part of the process of step S11.
  • the clustering unit 9 may classify noise when classifying the N-dimensional data D1 included in the point cloud data DG into a subset.
  • the process of step S3 need not be part of the process of step S11, and need not be executed.
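The clustering method is not specified above; as one hedged example, DBSCAN classifies points into subsets and labels isolated points as noise, which matches the behavior of classifying noise while forming subsets. The parameter values below are arbitrary toy choices.

```python
import numpy as np
from sklearn.cluster import DBSCAN

# Toy 3-D point cloud data DG: two dense blobs plus a few isolated noise points.
rng = np.random.default_rng(0)
blob_a = rng.normal(loc=(0.0, 0.0, 0.0), scale=0.05, size=(100, 3))
blob_b = rng.normal(loc=(1.0, 1.0, 1.0), scale=0.05, size=(100, 3))
noise = rng.uniform(-2.0, 3.0, size=(10, 3))
points = np.vstack([blob_a, blob_b, noise])

# DBSCAN assigns a cluster label to every point; label -1 marks noise.
labels = DBSCAN(eps=0.2, min_samples=5).fit_predict(points)

# Each non-noise label corresponds to one subset of the point cloud data.
subsets = [points[labels == k] for k in sorted(set(labels)) if k != -1]
print(len(subsets), "subsets,", int(np.sum(labels == -1)), "noise points")
```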
  • the processing in step S4 and the processing in step S5 are the same as the processing described in the first embodiment.
  • in step S6, the input control unit 11 acquires input information using the GUI screen W.
  • in step S6a, the output control unit 12 displays, on the GUI screen W, the distribution of the N-dimensional data D1 in each subset classified by the clustering unit 9 in step S11 (see FIGS. 12 and 13).
  • in step S11, the input control unit 11 specifies the distribution designated by the input information (see FIG. 13).
  • in step S12, the processing unit 7 extracts a subset based on the distribution specified in step S11 (see FIG. 15).
  • in step S13, which is part of step S12, the processing unit 7 calculates the similarity between the distribution of each clustered subset and the specified distribution.
  • the distribution of the clustered subset is a distribution of N-dimensional data included in the subset classified by the clustering unit 9 in step S3b. Further, the specified distribution is the distribution specified by the input control unit 11 as the distribution specified by the input information in step S11.
  • the classifier 10 classifies a subset whose similarity is equal to or higher than a threshold as a partial point group (point set).
  • the classifier 10 extracts (classifies) the subset as a point set similar to the distribution specified in step S11 when the similarity calculated in step S14 is equal to or greater than the threshold.
  • the processing unit 7 stores the extracted point set information in the storage unit 8.
  • FIG. 16 is a diagram illustrating an information processing apparatus according to the third embodiment.
  • the information processing apparatus 1 includes a machine learning unit 15.
  • the information processing apparatus 1 generates the classifier 10 by the machine learning unit 15.
  • based on the input information acquired by the input control unit 11, the machine learning unit 15 generates, by machine learning, an index (eg, determination criterion, evaluation function) used when the processing unit 7 extracts a point set from the point cloud data DG.
  • Examples of machine learning methods include Neural network (eg, Deep learning), support vector machine, regression forest, and the like.
  • the machine learning unit 15 executes machine learning by combining one or two or more of the above-described machine learning methods or other machine learning methods.
  • the input information acquisition unit acquires teacher data for the machine learning unit 15 as input information. As described with reference to FIG. 13, the input control unit 11 acquires, as input information, information representing the target distributions (eg, distribution Kd2 and distribution Kd3) to be extracted by the processing unit 7.
  • the machine learning unit 15 executes machine learning using the target distribution obtained from the input information acquired by the input control unit 11 as teacher data.
  • the processing unit 7 extracts a part of the point group (point set) from the point cloud data DG based on the index generated by the machine learning unit 15.
  • the above teacher data includes information that defines a part of the point group (point set) to be extracted by the processing unit 7 (information indicating the distribution to be extracted; correct-answer teacher data). Further, the teacher data includes information that defines a point group that the processing unit 7 excludes from the extraction (information indicating a distribution that is not to be extracted; incorrect-answer teacher data).
  • the user can input information representing a distribution to be extracted and information representing a distribution not to be extracted.
  • FIG. 17 is a diagram illustrating processing for designating a distribution. As described with reference to FIG. 8, the user can specify an area using the GUI screen W.
  • in FIG. 17, an area AR3 is represented by a two-dot chain line and an area AR4 is represented by a dotted line.
  • by designating the area AR3, the user specifies the distributions included in the area AR3 (represented by the symbols Ke1, Ke2, and Ke3), and the input control unit 11 identifies these distributions as the distributions to be extracted.
  • the distributions included in the area AR4 in FIG. 17 correspond to information defining a point group to be excluded from extraction by the processing unit 7, and form a distribution group (group G2) identified by the input control unit 11 as distributions that are not to be extracted.
  • the distribution information included in the group G2 can be used as teacher data representing an incorrect answer. Note that the user may designate one or both of the distribution to be extracted and the distribution not to be extracted by selecting a candidate from a list (see FIG. 13).
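A minimal sketch of how the designated distributions could be turned into teacher data, assuming each distribution is already summarized as a feature vector; the function name and the feature values are hypothetical.

```python
import numpy as np

def build_teacher_data(extract_features, exclude_features):
    """Stack correct-answer examples (label 1) and incorrect-answer examples
    (label 0) into one training set (hypothetical helper)."""
    X = np.vstack([extract_features, exclude_features])
    y = np.concatenate([np.ones(len(extract_features)),     # to be extracted (e.g. Ke1-Ke3)
                        np.zeros(len(exclude_features))])    # not to be extracted (e.g. group G2)
    return X, y

# Invented feature vectors standing in for the designated distributions.
ke = np.array([[2.0, 0.8], [2.1, 0.9], [1.9, 0.7]])   # distributions inside area AR3
kf = np.array([[0.5, 0.2], [0.4, 0.3], [0.6, 0.1]])   # distributions inside area AR4
X, y = build_teacher_data(ke, kf)
print(X.shape, y)
```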
  • FIG. 18 is a diagram illustrating processing by the machine learning unit and the processing unit according to the third embodiment.
  • the machine learning unit 15 calculates a feature amount for each of the distributions Ke1 to Ke3 selected as the target distribution extracted by the processing unit 7.
  • the types of feature amounts are, for example, the size of the space occupied by the distribution, the number density of the N-dimensional data D1 in the distribution, the curvature of the space occupied by the distribution, and the like.
  • the machine learning unit 15 calculates a plurality of types of feature amounts, for example.
  • the feature amounts calculated by the machine learning unit 15 are represented by "feature amount 1" and "feature amount 2" in FIG. 18.
  • the machine learning unit 15 derives a relationship that the feature amount 1 and the feature amount 2 satisfy. For example, the machine learning unit 15 derives an area AR2 in which the feature amount 2 for the feature amount 1 of each of the plurality of distributions (Ke1 to Ke3) is located. The machine learning unit 15 generates information (for example, a function) representing the area AR2 as an index for extracting a point set from the point cloud data DG, and causes the storage unit 8 to store the information representing the area AR2 as the result of the machine learning.
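The feature amounts and the form of the area AR2 are only exemplified above (size of the occupied space, number density, curvature, a function); the sketch below uses two simple stand-ins (bounding-box size and number density) and derives AR2 as a padded box around the feature points of the target distributions. All names and the box-shaped region are illustrative assumptions, not the learned index itself.

```python
import numpy as np

def feature_amounts(points):
    """Two illustrative feature amounts for one subset of 3-D points:
    feature 1 = size of the occupied space (bounding-box diagonal),
    feature 2 = number density (points per unit bounding-box volume)."""
    extent = points.max(axis=0) - points.min(axis=0)
    size = float(np.linalg.norm(extent))
    volume = float(np.prod(np.maximum(extent, 1e-9)))
    return np.array([size, len(points) / volume])

def derive_area_ar2(target_subsets, margin=0.1):
    """Derive a box-shaped stand-in for the area AR2 enclosing the feature
    points of the distributions selected as extraction targets."""
    feats = np.array([feature_amounts(s) for s in target_subsets])
    lo, hi = feats.min(axis=0), feats.max(axis=0)
    pad = margin * (hi - lo + 1e-9)
    return lo - pad, hi + pad

def inside_ar2(points, ar2):
    """Classifier-style check: does this subset's feature point lie in AR2?"""
    lo, hi = ar2
    f = feature_amounts(points)
    return bool(np.all(f >= lo) and np.all(f <= hi))

rng = np.random.default_rng(1)
targets = [rng.normal(0.0, 0.05, size=(200, 3)) for _ in range(3)]  # like Ke1-Ke3
sparse = rng.normal(0.0, 0.30, size=(50, 3))                        # clearly different shape
ar2 = derive_area_ar2(targets)
print(inside_ar2(targets[0], ar2), inside_ar2(sparse, ar2))  # -> True False with this toy data
```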
  • the processing unit 7 reads out the result of the machine learning by the machine learning unit 15 from the storage unit 8 and executes the extraction process. For example, when the user selects [Input trained data] as [Target] on the GUI screen W shown in FIGS. 10 and 11, the user designates the information representing the area AR2 stored in the storage unit 8. The processing unit 7 reads the information designated by the user as the information representing the area AR2, and executes the extraction process.
  • the processing unit 7 calculates the feature amount 1 and the feature amount 2 for the distribution Kd51 of the N-dimensional data D1 in the subset classified by the clustering unit 9.
  • the classifier 10 determines whether or not the feature amount 2 for the feature amount 1 of the distribution Kd51 exists in the area AR2.
  • the classifier 10 determines that the distribution Kd51 is similar to the target distribution when the feature amount 2 for the feature amount 1 of the distribution Kd51 exists in the area AR2.
  • the classifier 10 extracts (classifies) the distribution Kd51 determined to be similar to the target distribution as a point set.
  • the processing unit 7 calculates the feature amount 1 and the feature amount 2 for the distribution Kd52 of the N-dimensional data D1 in the subset classified by the clustering unit 9.
  • the classifier 10 determines whether or not the feature amount 2 for the feature amount 1 of the distribution Kd52 exists in the area AR2.
  • the classifier 10 determines that the distribution Kd52 is not similar to the target distribution (is dissimilar) when the feature amount 2 for the feature amount 1 of the distribution Kd52 does not exist in the area AR2.
  • the classifier 10 does not extract the distribution Kd52 that is determined not to be similar to the target distribution as a point set.
  • FIG. 19 is a diagram illustrating processing by the machine learning unit according to the third embodiment.
  • the machine learning unit 15 performs machine learning based on a distribution (Ke1 to Ke3) selected as a target distribution to be extracted and a distribution (Kf1 to Kf3) selected as a distribution not to be extracted.
  • the machine learning unit 15 derives the area AR2 so that the feature amount 2 for the feature amount 1 of the distributions to be extracted (Ke1 to Ke3) exists within the area AR2 and the feature amount 2 for the feature amount 1 of the distributions not to be extracted (Kf1 to Kf3) does not exist within the area AR2.
  • the machine learning unit 15 may perform machine learning using the distributions (Kf1 to Kf3) selected as distributions not to be extracted, without using the distributions (Ke1 to Ke3) selected as distributions to be extracted.
  • the machine learning unit 15 derives the area AR2 so that the feature quantity 2 for the feature quantity 1 of the distribution (Kf1 to Kf3) that is not extracted does not exist in the area AR2.
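Since support vector machines are listed above as one possible machine learning method, the following sketch trains an SVM on hypothetical feature amounts using the correct-answer examples (Ke1 to Ke3) and incorrect-answer examples (Kf1 to Kf3) as teacher data; the feature values are invented for illustration and do not come from this disclosure.

```python
import numpy as np
from sklearn.svm import SVC

# Invented feature amounts (feature 1, feature 2) used as teacher data:
# label 1 = distributions to be extracted (Ke1-Ke3),
# label 0 = distributions not to be extracted (Kf1-Kf3).
X_train = np.array([[2.0, 0.8], [2.1, 0.9], [1.9, 0.7],
                    [0.5, 0.2], [0.4, 0.3], [0.6, 0.1]])
y_train = np.array([1, 1, 1, 0, 0, 0])

clf = SVC(kernel="rbf", gamma="scale")
clf.fit(X_train, y_train)

# Classify the feature amounts of two clustered subsets (like Kd51 and Kd52).
X_new = np.array([[2.05, 0.85], [0.45, 0.25]])
print(clf.predict(X_new))  # -> [1 0] with this toy data
```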
  • FIG. 20 is a flowchart illustrating an information processing method according to the third embodiment.
  • the machine learning unit 15 performs machine learning based on the distribution specified in step S7 (see FIG. 18A).
  • in step S22, the processing unit 7 extracts a point set based on the result of the machine learning in step S21. At that time, in step S22a, the processing unit 7 calculates the feature amount of the distribution of each subset clustered by the clustering unit 9.
  • in step S22b, the classifier 10 classifies the point set based on the learning result and the feature amount.
  • the classification unit 7A reads the information representing the area AR2 (see FIG. 18A, FIG. 19A, and FIG. 19B) from the storage unit 8 as the learning result, and determines whether or not the subset has the target distribution depending on whether or not the position of the feature amount calculated in step S22a is within the area AR2.
  • the information processing apparatus 1 may include a surface generation unit that generates a surface representing the shape of the subset.
  • the surface generation unit generates a scalar field based on N-dimensional data included in the point cloud data DG, and generates a contour surface of the scalar field as a surface representing the shape of the subset.
  • the processing unit 7 may extract a part of the point group (point set) from the point group based on the surface generated by the surface generation unit.
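The scalar field and contour surface are not specified in detail above; one possible sketch builds the scalar field as a smoothed 3-D histogram of the points and extracts an iso-surface with marching cubes. The libraries, grid size, and iso-level are assumptions made only for this example.

```python
import numpy as np
from scipy.ndimage import gaussian_filter
from skimage import measure

# Toy subset of 3-D points (one blob) inside the unit cube.
rng = np.random.default_rng(2)
points = rng.normal(loc=0.5, scale=0.08, size=(500, 3))

# Scalar field: smoothed 3-D histogram of the point positions.
hist, _ = np.histogramdd(points, bins=(32, 32, 32), range=[(0, 1)] * 3)
field = gaussian_filter(hist, sigma=1.5)

# Contour (iso) surface of the scalar field as a surface representing the subset shape.
verts, faces, normals, values = measure.marching_cubes(field, level=field.max() * 0.3)
print(verts.shape, faces.shape)
```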
  • FIG. 21 is a diagram illustrating an information processing apparatus according to the fourth embodiment.
  • the information processing apparatus 1 according to the present embodiment includes a calculation unit 17.
  • the computing unit 17 performs computation using a part of the point group (point set) extracted by the processing unit 7.
  • the calculation unit 17 calculates one or both of the surface area and the volume of the shape represented by a part of the point group (point set) extracted by the processing unit 7.
  • the calculation unit 17 applies the distribution of the N-dimensional data in the point set extracted by the processing unit 7 to a function representing an ellipsoid, and calculates the coefficient of this function.
  • the calculation unit 17 calculates the major axis and the minor axis of the ellipsoid using the calculated coefficients.
  • the calculation unit 17 calculates the surface area by substituting the calculated major axis and minor axis into the formula for the surface area of an ellipsoid.
  • the calculation unit 17 calculates the volume by substituting the calculated major axis and minor axis into the formula for the volume of an ellipsoid.
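The surface-area and volume formulas are not spelled out above; the sketch below assumes the fitted ellipsoid is a prolate spheroid (one major and one minor axis), for which closed-form expressions exist. The semi-axis values are hypothetical.

```python
import math

def spheroid_volume(a, b):
    """Volume of a prolate spheroid with semi-major axis a and semi-minor axis b."""
    return 4.0 / 3.0 * math.pi * a * b * b

def spheroid_surface_area(a, b):
    """Surface area of a prolate spheroid (a >= b); reduces to a sphere when a == b."""
    if math.isclose(a, b):
        return 4.0 * math.pi * a * a
    e = math.sqrt(1.0 - (b * b) / (a * a))   # eccentricity
    return 2.0 * math.pi * b * b * (1.0 + (a / (b * e)) * math.asin(e))

# Hypothetical semi-axes obtained from the fitted ellipsoid coefficients.
a, b = 2.0, 1.0
print(spheroid_volume(a, b), spheroid_surface_area(a, b))
```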
  • the calculation unit 17 counts (calculates) the number of point sets extracted by the processing unit 7.
  • FIG. 22 is a diagram illustrating a GUI screen output by the output control unit based on the calculation result of the calculation unit according to the fourth embodiment.
  • the GUI screen W in FIG. 22 includes a window W11. In the window W11, a list of point sets is displayed as the calculation result image P5. An identification number ([Target No.] in the figure) is assigned to each point set based on the number of point sets counted by the calculation unit 17. [Volume] in the figure is the value calculated by the calculation unit 17 as the volume of the shape corresponding to each point set.
  • [Surface Area] in the figure is a value calculated by the calculation unit 17 as the surface area of the shape corresponding to the point set.
  • [X], [Y], and [Z] in the figure are coordinates representing a point set (for example, the position of the center of gravity).
  • the calculation result image P5 is displayed on the GUI screen W together with the extracted point cloud image P3, for example.
  • when the input control unit 11 detects that a left click is performed in a state where the pointer P is placed on a point set in the extracted point cloud image P3, the point set Kg on which the pointer P is placed and the calculation result of the calculation unit 17 relating to the point set Kg in the calculation result image P5 are displayed with emphasis.
  • likewise, when the input control unit 11 detects that a left click is performed in a state where the pointer P is placed on a row in the calculation result image P5, the calculation result of the calculation unit 17 in the row on which the pointer P is placed and the point set Kg of the extracted point cloud image P3 corresponding to that row may be highlighted.
  • FIG. 23 is a diagram illustrating N-dimensional data according to the embodiment.
  • the point cloud data DG in FIG. 23 is voxel data obtained by CT scan or the like.
  • the voxel data is four-dimensional data in which three-dimensional coordinate values (x, y, z) of each cell Cv and a value (v) given to the cell Cv are combined.
  • the cell Cv1 has a three-dimensional coordinate of (4, 2, 3) and a cell value v of 4.
  • the N-dimensional data D1 corresponding to the cell Cv1 is represented by (4, 2, 3, 4).
  • when the N-dimensional data D1 is processed by the information processing apparatus 1, for example, cells whose cell value v satisfies a predetermined condition are first extracted (filtered), as shown in FIG. 23B. In FIG. 23B, cells having a value v of 5 or more are extracted. Then, as shown in FIG. 23C, assuming that a point is arranged at the center position of each extracted cell, three-dimensional point cloud data is obtained, and a partial region can be extracted by the information processing apparatus 1 as described in the above embodiments.
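A minimal sketch of the filtering and conversion described for FIG. 23, assuming unit-sized cells whose centers are offset by 0.5 from the integer cell coordinates; the threshold v >= 5 follows the example above, and everything else is illustrative.

```python
import numpy as np

# Toy voxel data: four-dimensional rows (x, y, z, v), like the cell example (4, 2, 3, 4).
voxels = np.array([
    [4, 2, 3, 4],
    [1, 1, 1, 7],
    [2, 1, 1, 5],
    [3, 3, 3, 2],
])

# Filtering: keep cells whose value v satisfies the condition (here v >= 5).
kept = voxels[voxels[:, 3] >= 5]

# Place a point at the center of each kept cell (unit cell size assumed),
# which yields three-dimensional point cloud data.
points = kept[:, :3].astype(float) + 0.5
print(points)
```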
  • N may be an integer of 4 or more
  • the information processing apparatus 1 may process N-dimensional data without performing the above filtering.
  • the information processing apparatus 1 may represent the value v of the above cell with luminance or color on the GUI screen W.
  • N values included in one N-dimensional data item may be displayed separately across a plurality of windows. For example, in the case of four-dimensional data, the distribution of two-dimensional data in which two values selected from the four values are grouped may be displayed in one window, and the distribution of two-dimensional data in which the remaining two values are grouped may be displayed in a separate window.
  • in the above description, the input device 3 is a mouse; however, the input device 3 may be a device other than a mouse (eg, a keyboard).
  • the user may execute at least a part of the processing of the information processing apparatus 1 by operating a keyboard and inputting a command to the command line.
  • the input information of the user includes information on the key pressed on the keyboard.
  • the information processing apparatus 1 includes, for example, a computer system.
  • the information processing device 1 reads an information processing program stored in the storage unit 8 (storage device) and executes various processes according to the information processing program.
  • this information processing program, for example, causes a computer to execute: displaying a point cloud image on a display unit; acquiring input information input by the input unit; extracting a part of the point group from the point group included in the point cloud image based on the input information; and displaying an extracted point cloud image based on the extracted part of the point group on the display unit.
  • the information processing program may be provided by being recorded on a computer-readable storage medium (eg, non-transitory recording medium, non-transitory tangible medium).
  • FIG. 24 is a diagram illustrating a microscope according to the embodiment.
  • the microscope 50 includes a microscope main body 51, the information processing apparatus 1 described in the above embodiment, and a control device 52.
  • the control device 52 includes a control unit 53 that controls each unit of the microscope main body 51 and an image processing unit 54. At least a part of the control device 52 may be provided in the microscope main body 51 (may be incorporated). In addition, the control unit 53 controls the information processing apparatus 1. At least a part of the control device 52 may be provided in the information processing device 1 (may be incorporated).
  • the microscope main body 51 detects a sample.
  • the microscope 50 is, for example, a fluorescence microscope, and the microscope main body 51 detects an image of fluorescence emitted from a sample containing a fluorescent substance.
  • the microscope according to the embodiment is a super-resolution microscope such as STORM or PALM, for example.
  • STORM activates a fluorescent material, and irradiates the activated fluorescent material with excitation light, thereby acquiring a plurality of fluorescent images.
  • Data of a plurality of fluorescent images is input to the image processing unit 54.
  • the image processing unit 54 calculates the position information of the fluorescent substance in each fluorescence image, and generates point cloud data DG using the calculated plurality of position information.
  • the image processing unit 54 generates a point cloud image representing the point cloud data DG.
  • the image processing unit 54 calculates the two-dimensional position information of the fluorescent substance, and generates point group data DG including a plurality of two-dimensional data.
  • the image processing unit 54 calculates three-dimensional position information of the fluorescent material, and generates point group data DG including a plurality of three-dimensional data.
  • FIG. 25 is a diagram showing a microscope main body according to the embodiment.
  • the microscope main body 51 can be used for both fluorescence observation of a sample labeled with one type of fluorescent substance (eg, a reporter dye) and fluorescence observation of a sample labeled with two or more types of fluorescent substances.
  • the microscope main body 51 can generate a three-dimensional super-resolution image.
  • the microscope main body 51 has a mode for generating a two-dimensional super-resolution image and a mode for generating a three-dimensional super-resolution image, and can switch between the two modes.
  • the sample may include living cells (live cells), may include cells fixed using a tissue fixing solution such as a formaldehyde solution, or may be tissue.
  • the fluorescent substance may be a fluorescent dye such as a cyanine dye or a fluorescent protein.
  • the fluorescent dye includes a reporter dye that emits fluorescence when receiving excitation light in an activated state (hereinafter referred to as an activated state).
  • the fluorescent dye may include an activator dye that receives activation light and activates the reporter dye. If the fluorescent dye does not contain an activator dye, the reporter dye receives an activation light and becomes activated.
  • fluorescent dyes are, for example, a dye pair in which two kinds of cyanine dyes are combined (eg, a Cy3-Cy5 dye pair (Cy3 and Cy5 are registered trademarks), a Cy2-Cy5 dye pair (Cy2 and Cy5 are registered trademarks), a Cy3-Alexa Fluor 647 dye pair (Cy3 and Alexa Fluor are registered trademarks)) and one kind of dye (eg, Alexa Fluor 647 (Alexa Fluor is a registered trademark)).
  • the fluorescent protein include PA-GFP and Dronpa.
  • the microscope main body 51 includes a stage 102, a light source device 103, an illumination optical system 104, a first observation optical system 105, an imaging unit 106, an image processing unit 54, and a control device 52.
  • the control device 52 includes a control unit 53 that comprehensively controls each unit of the microscope main body 51.
  • the image processing unit 54 is provided in the control device 52, for example.
  • the stage 102 holds the sample W to be observed.
  • the stage 102 can place the sample W on the upper surface thereof, for example.
  • the stage 102 may have a mechanism for moving the sample W like an XY stage, or may not have a mechanism for moving the sample W like a desk.
  • the microscope main body 51 may not include the stage 102.
  • the light source device 103 includes an activation light source 110a, an excitation light source 110b, a shutter 111a, and a shutter 111b.
  • the activation light source 110a emits activation light L that activates a part of the fluorescent material contained in the sample W.
  • the fluorescent material contains a reporter dye and does not contain an activator dye.
  • the reporter dye of the fluorescent substance is in an activated state capable of emitting fluorescence when irradiated with the activation light L.
  • the fluorescent substance may include a reporter dye and an activator dye. In this case, the activator dye activates the reporter dye when it receives the activation light L.
  • the fluorescent substance may be a fluorescent protein such as PA-GFP or Dronpa.
  • the excitation light source 110b emits excitation light L1 that excites at least a part of the fluorescent material activated in the sample W.
  • the fluorescent material emits fluorescence or is inactivated when the excitation light L1 is irradiated in the activated state.
  • when the fluorescent material is irradiated with the activation light L in an inactivated state (hereinafter referred to as an inactivated state), the fluorescent material is activated again.
  • the activation light source 110a and the excitation light source 110b include, for example, a solid light source such as a laser light source, and each emits laser light having a wavelength corresponding to the type of fluorescent material.
  • the emission wavelength of the activation light source 110a and the emission wavelength of the excitation light source 110b are selected from, for example, about 405 nm, about 457 nm, about 488 nm, about 532 nm, about 561 nm, about 640 nm, and about 647 nm.
  • the emission wavelength of the activation light source 110a is about 405 nm and the emission wavelength of the excitation light source 110b is a wavelength selected from about 488 nm, about 561 nm, and about 647 nm.
  • the shutter 111a is controlled by the control unit 53, and can switch between a state in which the activation light L from the activation light source 110a passes and a state in which the activation light L is blocked.
  • the shutter 111b is controlled by the control unit 53, and can switch between a state in which the excitation light L1 from the excitation light source 110b passes and a state in which the excitation light L1 is blocked.
  • the light source device 103 includes a mirror 112, a dichroic mirror 113, an acoustooptic device 114, and a lens 115.
  • the mirror 112 is provided on the emission side of the excitation light source 110b, for example.
  • the excitation light L1 from the excitation light source 110b is reflected by the mirror 112 and enters the dichroic mirror 113.
  • the dichroic mirror 113 is provided, for example, on the emission side of the activation light source 110a.
  • the dichroic mirror 113 has a characteristic that the activation light L is transmitted and the excitation light L1 is reflected.
  • the activation light L transmitted through the dichroic mirror 113 and the excitation light L1 reflected by the dichroic mirror 113 enter the acoustooptic device 114 through the same optical path.
  • the acoustooptic element 114 is, for example, an acoustooptic filter.
  • the acoustooptic device 114 is controlled by the control unit 53 and can adjust the light intensity of the activation light L and the light intensity of the excitation light L1.
  • the acoustooptic element 114 is controlled by the control unit 53 and can switch between a state in which the activation light L and the excitation light L1 pass through the acoustooptic element 114 (hereinafter referred to as a light transmission state) and a state in which the activation light L and the excitation light L1 are blocked by, or reduced in intensity by, the acoustooptic element 114 (hereinafter referred to as a light shielding state).
  • the control unit 53 controls the acoustooptic device 114 so that the activation light L and the excitation light L1 are irradiated simultaneously. Further, when the fluorescent material includes a reporter dye and an activator dye, the control unit 53 controls the acoustooptic device 114 so as to irradiate the excitation light L1 after the activation light L is irradiated, for example.
  • the lens 115 is, for example, a coupler, and condenses the activation light L and the excitation light L1 from the acoustooptic device 114 on the light guide member 116.
  • the microscope main body 51 may not include at least a part of the light source device 103.
  • the light source device 103 is unitized, and may be provided in the microscope main body 51 so as to be replaceable (attachable or removable).
  • the light source device 103 may be attached to the microscope main body 51 when observing with the microscope 50.
  • the illumination optical system 104 irradiates the activation light L that activates a part of the fluorescent substance contained in the sample W and the excitation light L1 that excites at least a part of the activated fluorescent substance.
  • the illumination optical system 104 irradiates the sample W with the activation light L and the excitation light L1 from the light source device 103.
  • the illumination optical system 104 includes a light guide member 116, a lens 117, a lens 118, a filter 119, a dichroic mirror 120, and an objective lens 121.
  • the light guide member 116 is an optical fiber, for example, and guides the activation light L and the excitation light L1 to the lens 117.
  • the lens 117 is a collimator, for example, and converts the activation light L and the excitation light L1 into parallel light.
  • the lens 118 condenses, for example, the activation light L and the excitation light L1 at the position of the pupil plane of the objective lens 121.
  • the filter 119 has a characteristic of transmitting the activation light L and the excitation light L1 and blocking at least a part of light of other wavelengths.
  • the dichroic mirror 120 has a characteristic that the activation light L and the excitation light L1 are reflected, and light (for example, fluorescence) in a predetermined wavelength band out of the light from the sample W is transmitted.
  • the light from the filter 119 is reflected by the dichroic mirror 120 and enters the objective lens 121.
  • the sample W is disposed on the front focal plane of the objective lens 121 during observation.
  • the activation light L and the excitation light L1 are applied to the sample W by the illumination optical system 104 as described above.
  • the illumination optical system 104 described above is an example, and can be changed as appropriate. For example, a part of the illumination optical system 104 described above may be omitted.
  • the illumination optical system 104 may include at least a part of the light source device 103.
  • the illumination optical system 104 may include an aperture stop, an illumination field stop, and the like.
  • the first observation optical system 105 forms an image of light from the sample W.
  • the first observation optical system 105 forms an image of fluorescence from the fluorescent material contained in the sample W.
  • the first observation optical system 105 includes an objective lens 121, a dichroic mirror 120, a filter 124, a lens 125, an optical path switching member 126, a lens 127, and a lens 128.
  • the first observation optical system 105 shares the objective lens 121 and the dichroic mirror 120 with the illumination optical system 104.
  • the optical path between the sample W and the imaging unit 106 is indicated by a solid line.
  • Fluorescence from the sample W enters the filter 124 through the objective lens 121 and the dichroic mirror 120.
  • the filter 124 has a characteristic that light in a predetermined wavelength band out of the light from the sample W selectively passes.
  • the filter 124 blocks, for example, illumination light, external light, stray light, etc. reflected by the sample W.
  • the filter 124 is unitized with, for example, the filter 119 and the dichroic mirror 120, and the filter unit 23 is provided in a replaceable manner.
  • the filter unit 23 is exchanged according to the wavelength of light emitted from the light source device 103 (for example, the wavelength of the activation light L, the wavelength of the excitation light L1), the wavelength of fluorescence emitted from the sample W, and the like.
  • a single filter unit corresponding to a plurality of excitation and fluorescence wavelengths may be used.
  • the light that has passed through the filter 124 enters the optical path switching member 126 through the lens 125.
  • the light emitted from the lens 125 passes through the optical path switching member 126 and then forms an intermediate image on the intermediate image surface 105b.
  • the optical path switching member 126 is a prism, for example, and is provided so as to be able to be inserted into and removed from the optical path of the first observation optical system 105.
  • the optical path switching member 126 is inserted into and removed from the optical path of the first observation optical system 105 by a drive unit (not shown) controlled by the control unit 53, for example.
  • the optical path switching member 126 guides the fluorescence from the sample W to the optical path toward the imaging unit 106 by internal reflection.
  • the lens 127 converts fluorescence emitted from the intermediate image (fluorescence that has passed through the intermediate image surface 105b) into parallel light, and the lens 128 condenses the light that has passed through the lens 127.
  • the first observation optical system 105 includes an astigmatism optical system (for example, a cylindrical lens 129).
  • the cylindrical lens 129 acts on at least part of the fluorescence from the sample W and generates astigmatism with respect to at least part of the fluorescence. That is, an astigmatism optical system such as the cylindrical lens 129 generates astigmatism by generating astigmatism with respect to at least a part of the fluorescence. This astigmatism is used to calculate the position of the fluorescent material in the depth direction of the sample W (the optical axis direction of the objective lens 121).
  • the cylindrical lens 129 is detachably provided in the optical path between the sample W and the imaging unit 106 (for example, the imaging device 140).
  • the cylindrical lens 129 can be inserted into and removed from the optical path between the lens 127 and the lens 128.
  • the cylindrical lens 129 is disposed in this optical path in a mode for generating a three-dimensional super-resolution image, and is retracted from this optical path in a mode for generating a two-dimensional super-resolution image.
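How the depth position is computed from the astigmatism is not detailed here; a common approach in astigmatism-based localization, shown below purely as a hedged sketch, compares the fitted spot widths along x and y against a calibration curve. The calibration curve and all numerical values are made up for illustration.

```python
import numpy as np

# Made-up calibration: width ratio of an astigmatic spot versus known z positions (nm).
calib_z = np.linspace(-400.0, 400.0, 81)
calib_ratio = np.tanh(calib_z / 200.0)   # monotonic stand-in for a measured curve

def z_from_widths(wx, wy):
    """Estimate depth from the astigmatic spot widths by looking up the
    calibration curve of (wx - wy) / (wx + wy) versus z (illustrative only)."""
    ratio = (wx - wy) / (wx + wy)
    return float(np.interp(ratio, calib_ratio, calib_z))

print(z_from_widths(wx=320.0, wy=260.0))  # spot wider along x -> positive z under this calibration
```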
  • the microscope main body 51 includes the second observation optical system 130.
  • the second observation optical system 130 is used for setting an observation range.
  • the second observation optical system 130 includes, in order from the sample W toward the viewpoint Vp of the observer, an objective lens 121, a dichroic mirror 120, a filter 124, a lens 125, a mirror 131, a lens 132, a mirror 133, a lens 134, a lens 135, a mirror 136, and a lens 137.
  • the second observation optical system 130 shares the configuration from the objective lens 121 to the lens 125 with the first observation optical system 105.
  • the light from the sample W passes through the lens 125 and then enters the mirror 131 in a state where the optical path switching member 126 is retracted from the optical path of the first observation optical system 105.
  • the light reflected by the mirror 131 is incident on the mirror 133 via the lens 132, is reflected by the mirror 133, and then enters the mirror 136 via the lens 134 and the lens 135.
  • the light reflected by the mirror 136 enters the viewpoint Vp through the lens 137.
  • the second observation optical system 130 forms an intermediate image of the sample W in the optical path between the lens 135 and the lens 137.
  • the lens 137 is an eyepiece, for example, and the observer can set an observation range by observing the intermediate image.
  • the imaging unit 106 captures an image formed by the first observation optical system 105.
  • the imaging unit 106 includes an imaging element 140 and a control unit 141.
  • the image sensor 140 is, for example, a CMOS image sensor, but may be a CCD image sensor or the like.
  • the image sensor 140 has, for example, a structure having a plurality of pixels arranged two-dimensionally and a photoelectric conversion element such as a photodiode disposed in each pixel.
  • the imaging element 140 reads out the electric charge accumulated in the photoelectric conversion element by a reading circuit.
  • the image sensor 140 converts the read electric charges into digital data, and outputs data in a digital format (eg, image data) in which pixel positions and gradation values are associated with each other.
  • the control unit 141 operates the image sensor 140 based on a control signal input from the control unit 53 of the control device 52, and outputs captured image data to the control device 52. Further, the control unit 141 outputs the charge accumulation period and the charge read period to the control device 52.
  • the control device 52 includes a control unit 53 that collectively controls each unit of the microscope main body 51.
  • the control unit 53 supplies the acoustooptic device 114 with a control signal for switching between a light transmission state in which light from the light source device 103 passes and a light shielding state in which light from the light source device 103 is blocked, based on a signal (imaging timing information) indicating the charge accumulation period and the charge read period supplied from the control unit 141.
  • the acoustooptic device 114 switches between a light transmission state and a light shielding state based on this control signal.
  • the control unit 53 controls the acoustooptic device 114 to control a period in which the activation light L is irradiated on the sample W and a period in which the activation light L is not irradiated on the sample W. Further, the control unit 53 controls the acoustooptic device 114 to control a period during which the sample W is irradiated with the excitation light L1 and a period during which the sample W is not irradiated with the excitation light L1. The control unit 53 controls the acoustooptic device 114 to control the light intensity of the activation light L and the light intensity of the excitation light L1 that are irradiated on the sample W.
  • the control unit 141 may supply a signal (information on the imaging timing) indicating the charge accumulation period and the charge read period to control the acoustooptic device 114 so as to switch between the light shielding state and the light transmission state.
  • the control unit 53 controls the imaging unit 106 to cause the imaging device 140 to perform imaging.
  • the control unit 53 acquires an imaging result (captured image data) from the imaging unit 106.
  • the image processing unit 54 calculates the position information of the fluorescent substance in each fluorescence image by calculating the center of gravity of the fluorescence image shown in the captured image, and uses the calculated plurality of position information to obtain the point cloud data DG. Generate.
  • in the case of two-dimensional STORM, for example, the image processing unit 54 calculates the two-dimensional position information of the fluorescent substance and generates point cloud data DG including a plurality of two-dimensional data.
  • the image processing unit 54 calculates three-dimensional position information of the fluorescent material, and generates point group data DG including a plurality of three-dimensional data.
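A minimal sketch of the center-of-gravity calculation attributed to the image processing unit 54 above: the intensity-weighted centroid of a fluorescence spot gives one position, and collecting such positions over many frames yields the point cloud data DG. Spot detection is skipped here and the region of interest is assumed known; all names and values are illustrative.

```python
import numpy as np

def centroid_2d(roi, origin):
    """Intensity-weighted center of gravity of a small image region (ROI)."""
    ys, xs = np.mgrid[0:roi.shape[0], 0:roi.shape[1]]
    total = roi.sum()
    cx = (roi * xs).sum() / total
    cy = (roi * ys).sum() / total
    return origin[0] + cx, origin[1] + cy   # (x, y) in image coordinates

# Toy frame with one fluorescence spot centered near pixel (x=12, y=8).
frame = np.zeros((24, 24))
frame[7:10, 11:14] = [[1, 2, 1], [2, 8, 2], [1, 2, 1]]

# A real pipeline would detect spots first; here the ROI location is assumed known.
roi = frame[6:11, 10:15]
x, y = centroid_2d(roi, origin=(10, 6))
print(x, y)  # x = 12.0, y = 8.0; collecting such positions builds the point cloud data DG
```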
  • the image processing unit 54 outputs the point cloud data DG to the information processing apparatus 1 shown in FIG.
  • the information processing apparatus 1 processes point cloud data DG obtained from the detection result of the microscope main body 51.
  • the control device 52 acquires the imaging result (data of the captured image) from the imaging unit 106, outputs the acquired imaging result to the information processing device 1, and the information processing device 1 generates the point cloud data DG.
  • the information processing apparatus 1 calculates the position information of the fluorescent substance in each fluorescence image, and generates the point cloud data DG using the calculated plurality of position information.
  • the information processing apparatus 1 generates a point cloud image representing the point cloud data DG.
  • the information processing apparatus 1 calculates the two-dimensional position information of the fluorescent material, and generates point group data DG including a plurality of two-dimensional data.
  • the information processing apparatus 1 calculates three-dimensional position information of the fluorescent material and generates point group data DG including a plurality of three-dimensional data.
  • the observation method includes detecting a sample, displaying a point cloud image obtained by detecting the sample on the display unit, acquiring input information input by the input unit, and inputting Extracting a part of the point cloud from the point cloud included in the point cloud image based on the information, and causing the display unit to display the extracted point cloud image based on the extracted part of the point cloud.
  • the control device 52 controls the microscope body 51
  • the microscope body 51 detects the sample W by detecting an image of fluorescence emitted from the sample containing the fluorescent material.
  • the control device 52 controls the information processing device 1 and causes the output control unit 12 to output the GUI screen W to the display device 2.
  • control device 52 controls the information processing device 1 and causes the output control unit 12 to output the point cloud image P1 to the GUI screen W.
  • control device 52 controls the information processing device 1 so that the input control unit 11 acquires input information that the user inputs using the GUI screen W.
  • control device 52 controls the information processing device 1 to specify the distribution specified by the input information by the input control unit 11.
  • control device 52 controls the information processing device 1 and extracts a point set from the point cloud data DG including a plurality of N-dimensional data D1 by the processing unit 7 based on the distribution specified by the input control unit 11.
  • control device 52 includes, for example, a computer system.
  • the control device 52 reads the observation program stored in the storage unit (storage device) and executes various processes according to the program.
  • this observation program causes a computer to execute: detecting a sample; displaying a point cloud image obtained by detecting the sample on a display unit; acquiring input information input by the input unit; extracting a part of the point cloud from the point cloud included in the point cloud image based on the input information; and displaying an extracted point cloud image based on the extracted part of the point cloud on the display unit.
  • This observation program may be provided by being recorded on a computer-readable storage medium (eg, non-transitory recording medium, non-transitory tangible medium).
  • control device 52 may be provided in the information processing device 1.
  • the information processing apparatus 1 may be an aspect in which a computer executes various processes according to an information processing program, and at least a part of the control device 52 may be an aspect in which the same computer as the information processing apparatus 1 executes various processes according to an observation program.
  • DESCRIPTION OF SYMBOLS 1 ... Information processing apparatus, 7 ... Processing part, 8 ... Memory

Abstract

[Problem] To facilitate point cloud handling. [Solution] This information processing device comprises a display control unit for displaying a point cloud image on a display unit, an input information acquisition unit for acquiring input information input by an input unit, and a processing unit for extracting a portion of the point cloud included in the point cloud image on the basis of the input information acquired by the input information acquisition unit. The display control unit displays, on the display unit, an extracted point cloud image based on the portion of the point cloud extracted by the processing unit.

Description

Information processing apparatus, information processing method, information processing program, and microscope
The present invention relates to an information processing apparatus, an information processing method, an information processing program, and a microscope.
For example, STORM, PALM, and the like are known as super-resolution microscopes. In STORM, a fluorescent substance is activated, and the activated fluorescent substance is irradiated with excitation light to acquire a fluorescence image (see Patent Document 1 below).
US Patent Application Publication No. 2008/0182336
According to a first aspect of the present invention, there is provided an information processing apparatus comprising: a display control unit that displays a point cloud image on a display unit; an input information acquisition unit that acquires input information input by an input unit; and a processing unit that extracts a part of the point group from the point group included in the point cloud image based on the input information acquired by the input information acquisition unit, wherein the display control unit displays, on the display unit, an extracted point cloud image based on the part of the point group extracted by the processing unit.
According to a second aspect of the present invention, there is provided a microscope comprising: the information processing apparatus of the first aspect; an optical system that illuminates activation light that activates a part of a fluorescent substance contained in a sample; an illumination optical system that illuminates excitation light that excites at least a part of the activated fluorescent substance; an observation optical system that forms an image of light from the sample; an imaging unit that captures the image formed by the observation optical system; and an image processing unit that calculates position information of the fluorescent substance based on the result captured by the imaging unit and generates a point cloud using the calculated position information.
According to a third aspect of the present invention, there is provided an information processing method comprising: displaying a point cloud image on a display unit; acquiring input information input by an input unit; extracting a part of the point group from the point group included in the point cloud image based on the input information; and displaying, on the display unit, an extracted point cloud image based on the extracted part of the point group.
According to a fourth aspect of the present invention, there is provided an information processing program that causes a computer to execute: displaying a point cloud image on a display unit; acquiring input information input by an input unit; extracting a part of the point group from the point group included in the point cloud image based on the input information; and displaying, on the display unit, an extracted point cloud image based on the extracted part of the point group.
A diagram showing an information processing apparatus according to the first embodiment.
A diagram showing a GUI screen according to the first embodiment.
A diagram showing processing by a processing unit according to the first embodiment.
A diagram showing an extracted point cloud image according to the first embodiment.
A diagram showing an extracted point cloud image according to the first embodiment.
A flowchart showing an information processing method according to the first embodiment.
A diagram showing a GUI screen according to the first embodiment.
A diagram showing a GUI screen according to the first embodiment.
A diagram showing a GUI screen according to the first embodiment.
A diagram showing a GUI screen according to the first embodiment.
A diagram showing a GUI screen according to the first embodiment.
A diagram showing a GUI screen output by an output control unit based on subsets classified by a clustering unit according to the second embodiment.
A diagram showing a method of designating a distribution using a GUI screen according to the second embodiment.
A diagram showing processing by a processing unit according to the second embodiment.
A flowchart showing an information processing method according to the second embodiment.
A diagram showing an information processing apparatus according to the third embodiment.
A diagram showing processing for designating a distribution in the third embodiment.
A diagram showing processing by a machine learning unit and a processing unit according to the third embodiment.
A diagram showing processing by the machine learning unit according to the third embodiment.
A flowchart showing an information processing method according to the third embodiment.
A diagram showing an information processing apparatus according to the fourth embodiment.
A diagram showing a GUI screen output by an output control unit based on a calculation result of a calculation unit according to the fourth embodiment.
A diagram showing N-dimensional data according to the embodiment.
A diagram showing a microscope according to the embodiment.
A diagram showing a microscope main body according to the embodiment.
[First Embodiment]
A first embodiment will be described. FIG. 1 is a diagram illustrating an information processing apparatus according to the first embodiment. The information processing apparatus 1 according to the embodiment generates an image (point cloud image) using the point cloud data DG and displays the image on the display device 2. Further, the information processing apparatus 1 processes point cloud data DG (data group). The point cloud data DG is a plurality of N-dimensional data D1. N is an arbitrary integer of 2 or more. The N-dimensional data D1 is data (eg, vector data) in which N values are combined. For example, in FIG. 1, point cloud data DG is three-dimensional data in which coordinate values (eg, x1, y1, z1) in a three-dimensional space are combined. In the following description, it is assumed that the above N is 3. N may be 2 or 4 or more. In FIG. 1, point cloud data DG is m pieces of N-dimensional data. m is an arbitrary integer of 2 or more.
Further, the point cloud image is an image generated using the point cloud data DG. For example, when the point cloud data DG is three-dimensional data in which coordinate values (eg, x1, y1, z1) in a three-dimensional space are set as one set, the point cloud image is an image displaying points at the respective coordinate positions. Note that the size of the displayed points can be changed as appropriate. The shape of the displayed point is not limited to a circle, and may be another shape such as an ellipse or a rectangle. Point cloud data is sometimes simply referred to as a point cloud. In the present specification, a plurality of points on the point cloud image are appropriately referred to as a point cloud.
The point cloud data DG is supplied to the information processing apparatus 1 from, for example, a device external to the information processing apparatus 1 (hereinafter referred to as an external device). The external device is, for example, a microscope main body 51 shown later in FIG. 24. The external device may not be the microscope main body 51. For example, the external device may be a CT scan that detects a value at each point inside an object, or a measurement device that measures the shape of an object. In addition, the information processing apparatus 1 may generate the point cloud data DG based on data supplied from the external device, and process the generated point cloud data DG.
The information processing apparatus 1 executes processing based on input information that a user inputs using a graphical user interface (abbreviated as GUI in this specification as appropriate). The information processing apparatus 1 is connected to a display device 2 (display unit). The display device 2 is, for example, a liquid crystal display. The information processing apparatus 1 supplies image data to the display device 2 and causes the display device 2 to display the image. The display device 2 is, for example, an external device attached to the information processing apparatus 1, but may be a part of the information processing apparatus 1.
The information processing apparatus 1 is connected to an input device 3 (input unit). The input device 3 is an input interface that can be operated by the user. The input device 3 includes, for example, at least one of a mouse, a keyboard, a touch pad, and a trackball. The input device 3 detects an operation by the user and supplies the detection result to the information processing apparatus 1 as input information input by the user.
In the following description, it is assumed that the input device 3 is a mouse. When the input device 3 is a mouse, the information processing apparatus 1 causes the display device 2 to display a pointer. The information processing apparatus 1 acquires, as input information detected by the input device 3, mouse movement information and click information indicating the presence or absence of a click from the input device 3. The information processing apparatus 1 moves the pointer on the screen of the display device 2 based on the mouse movement information. Further, the information processing apparatus 1 executes processing assigned to the position of the pointer and the click information (eg, left click, right click, drag, double click) based on the click information. The input device 3 is, for example, a device externally attached to the information processing apparatus 1, but may be a part of the information processing apparatus 1 (eg, a built-in touch pad). Further, the input device 3 may be a touch panel integrated with the display device 2 or the like.
 The information processing device 1 includes, for example, a computer. The information processing device 1 includes an operating system unit 5 (hereinafter referred to as the OS unit 5), a GUI unit 6, a processing unit 7, and a storage unit 8. The information processing device 1 executes various processes according to a program stored in the storage unit 8. The OS unit 5 provides interfaces to the outside and the inside of the information processing device 1. For example, the OS unit 5 controls the supply of image data to the display device 2. The OS unit 5 also acquires input information from the input device 3. The OS unit 5 supplies the input information to, for example, the application that manages the active GUI screen on the display device 2.
 The GUI unit 6 includes an input control unit 11 and an output control unit 12. The input control unit 11 is an input information acquisition unit that acquires input information input by the input unit (input device 3). The output control unit 12 is a display control unit that displays a point cloud image on the display unit (display device 2). The output control unit 12 causes the display device 2 to display a GUI screen (the GUI screen W shown later in FIG. 2 and other figures). The GUI screen is, for example, a window provided by an application. Information constituting the GUI screen (hereinafter referred to as GUI information) is stored, for example, in the storage unit 8. The output control unit 12 reads the GUI information from the storage unit 8 and supplies the GUI information to the OS unit 5. The OS unit 5 causes the display device 2 to display the GUI screen based on the GUI information supplied from the output control unit 12. In this way, the output control unit 12 causes the display device 2 to display the GUI screen by supplying the GUI information to the OS unit 5.
 The input control unit 11 acquires input information that the user inputs using the GUI screen. For example, the input control unit 11 acquires mouse movement information and click information as input information from the OS unit 5. When the click information indicates that a click operation has occurred, the input control unit 11 causes the process assigned to that click information to be executed based on the coordinates of the pointer on the GUI screen obtained from the mouse movement information.
 For example, suppose a right click is detected on the GUI screen, and that the process assigned to the right click is a process for displaying a menu. In this case, the input control unit 11 causes the output control unit 12 to execute the process for displaying the menu. Information representing the menu is included in the GUI information, and the output control unit 12 causes the display device 2 to display the menu via the OS unit 5 based on the GUI information.
 Also, suppose a left click is detected on the GUI screen. When a left click occurs while the pointer is placed on a button of the GUI screen, it is predetermined that the process assigned to that button is executed. The input control unit 11 identifies the position of the pointer on the GUI screen based on the mouse movement information and determines whether there is a button at the identified pointer position. When a left click is detected and there is a button at the pointer position, the input control unit 11 causes the process assigned to that button to be executed.
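 As a concrete illustration of the button hit test just described, the following is a minimal sketch in Python. The Button class, its rectangular geometry, and the callback mechanism are assumptions introduced here for illustration; the actual GUI framework used by the information processing device 1 is not specified in this description.

```python
# Minimal sketch: decide whether a left click at the pointer position hits a
# button, and if so execute the process assigned to that button.
from dataclasses import dataclass
from typing import Callable, List, Optional, Tuple

@dataclass
class Button:
    x: float            # top-left corner of the button on the GUI screen (assumed geometry)
    y: float
    width: float
    height: float
    on_click: Callable[[], None]   # process assigned to this button (hypothetical callback)

def find_button(buttons: List[Button], pointer: Tuple[float, float]) -> Optional[Button]:
    """Return the button under the pointer, or None if there is none."""
    px, py = pointer
    for b in buttons:
        if b.x <= px <= b.x + b.width and b.y <= py <= b.y + b.height:
            return b
    return None

def handle_left_click(buttons: List[Button], pointer: Tuple[float, float]) -> None:
    """Execute the process assigned to the button at the pointer position, if any."""
    button = find_button(buttons, pointer)
    if button is not None:
        button.on_click()
```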
 Based on the input information acquired by the input information acquisition unit (input control unit 11), the processing unit 7 extracts a part of the point group (hereinafter referred to as a point set) from the point group included in the point cloud image. The input information is information related to a point group designated in the point cloud image. The processing unit 7 divides the point group included in the point cloud image into a plurality of point groups (a plurality of subsets), and extracts a part of the point group based on the feature amount or similarity between the divided point groups (subsets) and the designated point group.
 The processing unit 7 includes a clustering unit 9 and a classifier 10. The clustering unit 9 divides the point group included in the point cloud image into a plurality of point groups. In the following description, a point group obtained by dividing the point group included in the point cloud image is referred to as a subset. The clustering unit 9 divides (classifies) the point cloud data DG into a plurality of subsets based on the distribution of the plurality of pieces of N-dimensional data D1. For example, the clustering unit 9 randomly selects a piece of N-dimensional data D1 from the point cloud data DG. The clustering unit 9 then counts the number of other pieces of N-dimensional data D1 existing within a predetermined region centered on the selected N-dimensional data D1. When the clustering unit 9 determines that the counted number of pieces of N-dimensional data D1 is equal to or greater than a threshold, it determines that the selected N-dimensional data D1 and the other N-dimensional data D1 existing within the predetermined region belong to one subset.
 The clustering unit 9 classifies the N-dimensional data D1 included in the point cloud data DG into a plurality of non-overlapping subsets or into noise. For example, the clustering unit 9 assigns identification numbers to the plurality of subsets, and, for each piece of N-dimensional data D1 belonging to a subset, stores in the storage unit 8 the N-dimensional data D1 (or its identification number) in association with the identification number of the subset to which it belongs. The clustering unit 9 also attaches, for example, a flag indicating noise to the N-dimensional data D1 classified as noise. The clustering unit 9 may delete the N-dimensional data D1 determined to be noise from the point cloud data DG.
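 The subset/noise classification described above is a density-based rule: a point and its neighbors within a fixed radius form a subset when the neighbor count reaches a threshold, and isolated points become noise. The following is a minimal sketch, assuming this rule can be approximated by a DBSCAN-style clustering; the radius and count values, and the file name in the usage comment, are illustrative assumptions and are not taken from the description.

```python
# Minimal sketch of the subset/noise classification, approximated with DBSCAN.
# eps plays the role of the predetermined radius, min_samples the count threshold.
import numpy as np
from sklearn.cluster import DBSCAN

def cluster_point_cloud(points: np.ndarray, radius: float = 0.1, threshold: int = 10) -> np.ndarray:
    """points: (M, N) array of N-dimensional data D1.
    Returns an array of labels; label >= 0 is a subset identification number, -1 marks noise."""
    return DBSCAN(eps=radius, min_samples=threshold).fit_predict(points)

# Usage (hypothetical file name):
# points = np.load("point_cloud_dg.npy")
# labels = cluster_point_cloud(points)
# subsets = {i: points[labels == i] for i in set(labels) if i != -1}   # subset id -> points
# noise_mask = labels == -1                                            # flagged as noise
```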
 The classifier 10 executes an extraction process for extracting a part of the point group (a point set) from the point cloud data DG. The information processing device 1 acquires, as input information, information that specifies the extraction target through the GUI described above. The information specifying the extraction target is, for example, a distribution of N-dimensional data corresponding to the point set to be extracted (hereinafter referred to as the target distribution). The input control unit 11 causes the processing unit 7 to execute the process assigned to the input information. For example, when input information indicating the target distribution is acquired, the input control unit 11 identifies the distribution designated by the input information. Then, as the process assigned to the input information, the input control unit 11 causes the processing unit 7 to execute an extraction process for extracting point sets whose distribution is similar to the target distribution. The processing unit 7 extracts point sets from the point cloud data DG, which includes a plurality of pieces of N-dimensional data, based on the distribution identified by the input control unit 11.
 The classifier 10 classifies, from the point cloud data DG, point sets that satisfy a predetermined condition. As the predetermined condition, the classifier 10 executes a process (hereinafter, classification process) of classifying point sets whose similarity to the target distribution is equal to or greater than a predetermined value. The processing unit 7 extracts a part of the point group (point sets) from the point cloud data DG through the classifier 10 executing this classification process. Hereinafter, the processing of the GUI unit 6 and the processing unit 7 in the extraction process will be described with reference to FIGS. 2 to 5.
 FIG. 2 is a diagram showing a GUI screen according to the first embodiment. The GUI screen W is displayed in a display area 2A of the display device 2 (see FIG. 1). In FIG. 2, the GUI screen W is displayed in a part of the display area 2A, but it may be displayed in full screen in the display area 2A. The GUI screen W in FIG. 2 includes a window W1, a window W2, a window W3, and a window W4.
 A point cloud image P1 is displayed in the window W1. The point cloud image P1 is an image representing the distribution of the plurality of pieces of N-dimensional data D1 shown in FIG. 1. In the present embodiment, the N-dimensional data D1 is three-dimensional data, and one piece of N-dimensional data D1 is represented by one point. For example, one piece of N-dimensional data D1 shown in FIG. 1 is (x1, y1, z1), and it is represented in the point cloud image P1 by the point whose X coordinate is x1, whose Y coordinate is y1, and whose Z coordinate is z1. When the information processing device 1 receives from the user, as input information, a command to open the point cloud data DG (a command to display the point cloud data DG), the information processing device 1 generates data of the point cloud image P1. The output control unit 12 supplies the data of the generated point cloud image P1 to the OS unit 5, and the OS unit 5 causes it to be displayed in the window W1 of the GUI screen W.
 The information processing device 1 may remove noise from the point cloud data DG. For example, when the point cloud data DG is obtained by detecting an object, the information processing device 1 may treat as noise the N-dimensional data D1 that is estimated not to constitute the structure of the object to be detected, and exclude it from the processing target. For example, the information processing device 1 may count the number of other pieces of N-dimensional data D1 existing within a space of a predetermined radius centered on a first piece of N-dimensional data D1 (a data point), and determine that the first N-dimensional data is noise when the counted number is less than a threshold. The information processing device 1 may generate the point cloud image P1 based on the point cloud data DG from which noise has been removed.
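 A minimal sketch of the radius-count noise test described above follows; the radius and the neighbor-count threshold are assumed to be tunable parameters, and the values shown are illustrative rather than taken from the description.

```python
# Minimal sketch: drop points whose neighborhood within `radius` contains fewer
# than `min_neighbors` other points.
import numpy as np
from scipy.spatial import cKDTree

def remove_noise(points: np.ndarray, radius: float = 0.05, min_neighbors: int = 5):
    """Return (filtered points, boolean keep mask)."""
    tree = cKDTree(points)
    # query_ball_point returns, for each point, the indices of all points within radius;
    # the point itself is included in its own neighborhood, hence the +1 below.
    counts = np.array([len(idx) for idx in tree.query_ball_point(points, r=radius)])
    keep = counts >= (min_neighbors + 1)
    return points[keep], keep
```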
 The window W2 is output to the GUI screen W by the output control unit 12 when, for example, a right click is detected while the pointer P is placed on the GUI screen W. In the window W2, for example, processing options regarding the point cloud data DG are displayed as choices of input information. In the window W2 of FIG. 2, [Analyze data], [Some operation 1], [Some operation 2], and [Some operation 3] are displayed as processing options. These options are, for example, buttons to which commands are assigned.
 In FIG. 2, [Analyze data] is selected as the processing option. The selected option is displayed with more emphasis than the other buttons. For example, [Analyze data] is displayed in a larger font than the other options in the window W2 (e.g., [Some operation 1]). In addition, a mark indicating that it is being selected (e.g., [→] in the figure) is displayed together with the selected option ([Analyze data]). Here, the process assigned to the option selected in FIG. 2 is described first. The processes assigned to the options not selected in FIG. 2 will be described later with reference to FIGS. 7 to 11 and other figures as appropriate.
 The input control unit 11 acquires, among the choices of input information, the information of the option selected using the GUI screen W. For example, when it is detected that a left click has occurred while the pointer P is placed on [Analyze data], the input control unit 11 acquires the content of the process assigned to [Analyze data]. The content of the process assigned to each option is defined in the GUI information, and the input control unit 11 collates the input information with the GUI information and acquires the content of the process corresponding to the input information. The input control unit 11 then causes the process assigned to [Analyze data] to be executed. A process for starting the extraction process is assigned to [Analyze data].
 When [Analyze data] is selected by the user's input information, the window W3 is generated. The input control unit 11 causes the output control unit 12 to output the window W3. The information of the window W3 is included in the GUI information, and the output control unit 12 acquires the information of the window W3 from the GUI information stored in the storage unit 8. The output control unit 12 supplies the information of the window W3 to the OS unit 5, and the OS unit 5 causes the display device 2 to display the window W3.
 Other processes (e.g., opening a file, outputting a result, terminating the application) are assigned to [Some operation 1], [Some operation 2], and [Some operation 3], respectively. The GUI unit 6 does not have to provide at least one of the options [Some operation 1], [Some operation 2], and [Some operation 3]. The GUI unit 6 may also provide options other than [Some operation 1], [Some operation 2], and [Some operation 3].
 In the window W3, options for the method of designating a distribution are displayed as choices of input information. In the window W3 of FIG. 2, [Already prepared], [Example], [Input trained data], and [Targeting] are displayed as options for the method of designating a distribution. These options are, for example, buttons to which commands are assigned. In FIG. 2, [Example] is selected as the method of designating a distribution. The selected option is displayed with more emphasis than the other buttons. For example, [Example] is displayed in a larger font than the other options in the window W3 (e.g., [Already prepared]). In addition, a mark indicating that it is being selected (e.g., [→]) is displayed together with the selected option ([Example]).
 The input control unit 11 acquires, among the choices of input information, the information of the option selected using the GUI screen W. For example, when it is detected that a left click has occurred while the pointer P is placed on [Example], the input control unit 11 acquires the content of the process assigned to [Example]. The content of the process assigned to each option is defined in the GUI information, and the input control unit 11 collates the input information with the GUI information and acquires the content of the process corresponding to the input information. The input control unit 11 then causes the process assigned to [Example] to be executed. A process for displaying options for selecting a distribution from predetermined candidates is assigned to [Example].
 When [Example] is selected by the user's input information, the window W4 is generated. The input control unit 11 causes the output control unit 12 to output the window W4. The information of the window W4 is included in the GUI information, and the output control unit 12 acquires the information of the window W4 from the GUI information stored in the storage unit 8. The output control unit 12 supplies the information of the window W4 to the OS unit 5, and the OS unit 5 causes the display device 2 to display the window W4.
 In the window W4, categories of distribution candidates are displayed as choices of input information. In the window W4 of FIG. 2, [Geometric shape] and [Biological objects] are displayed as categories of distribution candidates. These options are, for example, buttons to which commands are assigned. In FIG. 2, the input information is information related to a geometric shape (information designating [Geometric shape]). In FIG. 2, [Geometric shape] is selected as the category of distribution candidates. The selected option ([Geometric shape]) is displayed with more emphasis than the other buttons. For example, [Geometric shape] is displayed in a larger font than [Biological objects]. In addition, a mark indicating that it is being selected (e.g., [→]) is displayed together with the selected [Geometric shape].
 The input control unit 11 acquires, among the choices of input information, the information of the option selected using the GUI screen W. For example, when it is detected that a left click has occurred while the pointer P is placed on [Geometric shape], the input control unit 11 acquires the content of the process assigned to [Geometric shape]. The content of the process assigned to each option is defined in the GUI information, and the input control unit 11 collates the input information with the GUI information and acquires the content of the process corresponding to the input information. The input control unit 11 then causes the process assigned to [Geometric shape] to be executed. A process for displaying, as predetermined candidates, candidate geometric shapes representing distributions is assigned to [Geometric shape].
 When [Geometric shape] is selected by the user's input information, a window W5 is generated. The input control unit 11 causes the output control unit 12 to output the window W5. The information of the window W5 is included in the GUI information, and the output control unit 12 acquires the information of the window W5 from the GUI information stored in the storage unit 8. The output control unit 12 supplies the information of the window W5 to the OS unit 5, and the OS unit 5 causes the display device 2 to display the window W5.
 In the window W5, candidate geometric shapes representing distributions are displayed as choices of input information. In the window W5 of FIG. 2, [Sphere], [Ellipsoid], [Star], and [Etc...] are displayed as candidate geometric shapes. These options are, for example, buttons to which commands are assigned. In FIG. 2, [Ellipsoid] is selected as the candidate geometric shape. The selected option ([Ellipsoid]) is displayed with more emphasis than the other buttons. For example, [Ellipsoid] is displayed in a larger font than [Sphere]. In addition, a mark indicating that it is being selected (e.g., [→]) is displayed together with the selected [Ellipsoid].
 The input control unit 11 acquires, among the choices of input information, the information of the option selected using the GUI screen W. For example, when it is detected that a left click has occurred while the pointer P is placed on [Ellipsoid], the input control unit 11 acquires the content of the process assigned to [Ellipsoid]. The content of the process assigned to each option is defined in the GUI information, and the input control unit 11 collates the input information with the GUI information and acquires the content of the process corresponding to the input information. The input control unit 11 then causes the process assigned to [Ellipsoid] to be executed. A process for designating, as the target distribution for extraction, a distribution of data points that fits within an ellipsoid is assigned to [Ellipsoid].
 [Sphere] indicates that the target distribution is a distribution that fits within a sphere. [Star] indicates that the target distribution is a distribution that fits within a star shape. [Etc...] indicates that some other geometric shape is designated as the target distribution. For example, when the user selects [Etc...], the user can designate a geometric shape as the target distribution by, for example, loading data that defines that geometric shape.
 When [Ellipsoid] is selected by the user's input information, the input control unit 11 causes the processing unit 7 to execute an extraction process in which a distribution that fits within an ellipsoid is the target distribution. The processing unit 7 extracts, from the point cloud data DG, the N-dimensional data belonging to subsets whose outer shape is approximated by an ellipsoid. In addition to the information related to the geometric shape, information related to the size of the geometric shape may also be settable.
 FIG. 3 is a diagram showing processing by the processing unit according to the first embodiment. In FIG. 3, reference sign Ka denotes the target distribution. The processing unit 7 (clustering unit 9) divides the point group included in the point cloud image into a plurality of point groups (subsets). Reference signs Kb1 to Kb6 in FIG. 3 denote distributions corresponding to the point groups (subsets) into which the clustering unit 9 has divided the point group. Each of Kb1 to Kb6 is a distribution of N-dimensional data D1 in a partial space (e.g., an ROI) selected (cut out) from the data space containing the point cloud data DG.
 The processing unit 7 (classifier 10) extracts a part of the point group (point sets) based on the feature amounts of the divided point groups (subsets) and the feature amount of the geometric shape. The classifier 10 calculates the feature amount of each subset divided by the clustering unit 9 and compares (collates) it with the feature amount designated by the input information (e.g., the feature amount of the geometric shape). The classifier 10 classifies a subset as a point set when the feature amount of that subset matches the feature amount designated by the input information (e.g., the feature amount of the geometric shape). The above feature amount may be the size of a structure. The size is a size in real space (an absolute value) or a relative size (a relative value). The processing unit 7 may divide the point group included in the point cloud image into a plurality of point groups (subsets) and extract a part of the point group based on the size of the shape represented by each divided point group and the size designated by the input information.
 The classifier 10 may also classify point sets based on the similarity between the point group designated by the input information (the target distribution) and the distribution of points corresponding to each subset. For example, the classifier 10 calculates, for each of the distributions Kb1, Kb2, ..., the similarity to the target distribution Ka. For example, the classifier 10 calculates the similarity Q1 between the distribution Kb1 and the target distribution Ka. The similarity Q1 is, for example, the value obtained by summing the squared distances (norms) between the N-dimensional data D1 selected from the distribution Kb1 and the N-dimensional data D1 selected from the target distribution Ka, dividing the sum by the number of data points, and subtracting the square root of the result from 1. The similarity Q1 may also be, for example, a correlation coefficient between the distribution Kb1 and the target distribution Ka. The same applies to the similarity Q2, the similarity Q3, and so on.
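 Written out, the similarity described above is Q1 = 1 - sqrt((1/n) * Σ ||a_i - b_i||^2), where a_i are points selected from the distribution Kb1 and b_i are points selected from the target distribution Ka. The following is a minimal sketch, assuming the two distributions are sampled so that points can be paired one-to-one; how the pairs are chosen is not specified in the description and is an assumption of this sketch.

```python
# Minimal sketch of the similarity Q described above.
import numpy as np

def similarity(subset: np.ndarray, target: np.ndarray) -> float:
    """subset, target: (n, N) arrays of paired N-dimensional points.
    Returns 1 - sqrt(mean squared distance) over the paired points."""
    sq_dist = np.sum((subset - target) ** 2, axis=1)   # squared norm of each pair
    return 1.0 - np.sqrt(np.mean(sq_dist))

# As mentioned in the text, a correlation coefficient may be used instead, e.g.:
# np.corrcoef(subset.ravel(), target.ravel())[0, 1]
```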
 The processing unit 7 may transform the distribution Kb1 and calculate the similarity between the transformed distribution Kb1 and the target distribution Ka. The transformation includes, for example, at least one of a translation, a rotation, a linear transformation, a scale transformation, and a transformation combining two or more of these (e.g., an affine transformation). When the processing unit 7 transforms the distribution Kb1, the type of transformation may be predetermined or may be settable by input information from the user.
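 As one example of such a transformation, the following sketch normalizes a distribution by translating it to its centroid and rescaling it to unit RMS radius before the similarity is evaluated; restricting the transformation to translation and isotropic scaling is an assumption made here for brevity (a rotation or general affine search is omitted).

```python
# Minimal sketch: centroid translation plus isotropic scale normalization
# applied to a distribution before computing its similarity to the target.
import numpy as np

def normalize(points: np.ndarray) -> np.ndarray:
    """Translate the points to their centroid and scale them to unit RMS radius."""
    centered = points - points.mean(axis=0)
    scale = np.sqrt((centered ** 2).sum(axis=1).mean())
    return centered / scale if scale > 0 else centered

# similarity(normalize(subset_kb1), normalize(target_ka))  # see the previous sketch
```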
 For each of the distributions Kb1, Kb2, ..., the classifier 10 compares the calculated similarity with a threshold to determine whether to extract it. For example, when the similarity Q1 between the distribution Kb1 and the target distribution Ka is equal to or greater than the threshold, the classifier 10 determines that the N-dimensional data D1 belonging to the distribution Kb1 is to be extracted from the point cloud data DG. When the similarity Q1 between the distribution Kb1 and the target distribution Ka is less than the threshold, the classifier 10 determines that the N-dimensional data D1 belonging to the distribution Kb1 is not to be extracted from the point cloud data DG. The classifier 10 extracts the set of N-dimensional data D1 determined to be extracted as a part of the point group (a point set). The classifier 10 causes the storage unit 8 to store the information of the extracted point sets as a processing result.
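 Putting the pieces together, the threshold test described above can be sketched as follows; the `subsets` mapping and the `similarity_to_target` callable are assumptions carried over from the earlier sketches, and the threshold value is illustrative.

```python
# Minimal sketch: keep every subset whose similarity to the target distribution
# is at or above the threshold.
import numpy as np
from typing import Callable, Dict

def extract_point_sets(subsets: Dict[int, np.ndarray],
                       similarity_to_target: Callable[[np.ndarray], float],
                       threshold: float = 0.8) -> Dict[int, np.ndarray]:
    """Return the ids and points of the subsets determined to be extracted."""
    extracted = {}
    for subset_id, points in subsets.items():
        q = similarity_to_target(points)
        if q >= threshold:            # extracted as a point set
            extracted[subset_id] = points
        # else: below the threshold, not extracted
    return extracted
```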
 The classifier 10 may also classify point sets in the following manner. Here, as shown in FIG. 2, it is assumed that the condition for classifying point sets is that the geometric shape ([Geometric shape]) is an ellipsoid ([Ellipsoid]). The clustering unit 9 performs the following processing on each point (each piece of N-dimensional data D1) included in the point cloud data DG: when the number of points existing within a region of a fixed radius centered on a point reaches a predetermined value, the points within that region are treated as one subset (cluster).
 The processing unit 7 calculates the feature amount of each subset classified by the clustering unit 9. For example, as the feature amount, the processing unit 7 calculates the ratio between the length of the major axis and the length of the minor axis of the outer shape of the structure represented by the subset. The classifier 10 classifies (extracts), as a point set, a subset whose feature amount calculated by the processing unit 7 satisfies the above classification condition. For example, when the condition that the geometric shape ([Geometric shape]) is an ellipsoid ([Ellipsoid]) is designated as the classification condition, the classifier 10 classifies a subset as a sphere when the major-to-minor axis ratio calculated by the processing unit 7 as the feature amount is within a predetermined range (e.g., 0.9 or more and 1.1 or less), and classifies the subset as an ellipsoid when the ratio is outside the predetermined range (e.g., greater than 0 and less than 0.9, or greater than 1.1). In this way, the classifier 10 may classify based on a parameter other than the similarity (e.g., a feature amount).
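 A minimal sketch of this feature-amount classification follows. The major and minor axis lengths are estimated here from the square roots of the covariance eigenvalues of the subset (a PCA-style estimate); the actual fitting method is not specified in the description, so this estimate is an assumption, while the 0.9 to 1.1 band follows the example values in the text.

```python
# Minimal sketch: classify a subset as "sphere" or "ellipsoid" from the ratio
# of its major axis length to its minor axis length.
import numpy as np

def axis_ratio(points: np.ndarray) -> float:
    """Estimate (major axis length) / (minor axis length) of a 3-D subset."""
    centered = points - points.mean(axis=0)
    cov = np.cov(centered, rowvar=False)
    eigvals = np.sort(np.linalg.eigvalsh(cov))          # ascending, all >= 0
    major, minor = np.sqrt(eigvals[-1]), np.sqrt(eigvals[0])
    return major / minor if minor > 0 else float("inf")

def classify_shape(points: np.ndarray, lo: float = 0.9, hi: float = 1.1) -> str:
    """Return 'sphere' when the ratio lies in [lo, hi], otherwise 'ellipsoid'."""
    r = axis_ratio(points)
    return "sphere" if lo <= r <= hi else "ellipsoid"
```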
 The display control unit (output control unit 12) displays an extracted point cloud image based on the part of the point group (point sets) extracted by the processing unit 7. The extracted point cloud image is an image representing the part of the point group (point sets) extracted from the point cloud image.
 The output control unit 12 causes an extracted point cloud image based on the extraction result of the processing unit 7 to be output to the GUI screen W. FIG. 4 is a diagram showing an extracted point cloud image according to the first embodiment. The output control unit 12 causes the distribution of the subsets extracted by the processing unit 7 to be output to the GUI screen W as an extracted point cloud image P2. In the extracted point cloud image P2, the display control unit (output control unit 12) may display the part of the point group (point sets) extracted by the processing unit 7 so that one or both of its color and brightness differ from those in the point cloud image P1. The display control unit (output control unit 12) may display the part of the point group (point sets) extracted by the processing unit 7 in a color different from that of the other point groups, or with a brightness different from that of the other point groups. The display control unit (output control unit 12) may display only the part of the point group (point sets) extracted by the processing unit 7 in the extracted point cloud image. The display control unit (output control unit 12) may display an extracted point cloud image from which the point groups other than the point sets, among the point groups included in the point cloud image, have been removed.
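 As an illustration of displaying the extracted point sets in a different color from the rest of the point cloud, the following sketch uses matplotlib as a stand-in for the GUI screen W; in the device described here the display path actually runs through the output control unit 12 and the OS unit 5, so this is only a visualization aid, and the color choices are arbitrary.

```python
# Minimal sketch: render the extracted point set in a different color from the
# remaining points of the point cloud.
import numpy as np
import matplotlib.pyplot as plt
from mpl_toolkits.mplot3d import Axes3D  # noqa: F401  (needed on older matplotlib)

def show_extraction(all_points: np.ndarray, extracted_mask: np.ndarray) -> None:
    """all_points: (M, 3) array; extracted_mask: boolean mask of extracted points."""
    fig = plt.figure()
    ax = fig.add_subplot(111, projection="3d")
    rest = all_points[~extracted_mask]
    picked = all_points[extracted_mask]
    ax.scatter(rest[:, 0], rest[:, 1], rest[:, 2], s=1, c="lightgray")   # other point groups
    ax.scatter(picked[:, 0], picked[:, 1], picked[:, 2], s=1, c="red")   # extracted point sets
    plt.show()
```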
 When the process of outputting the extracted point cloud image P2 is designated by the input information input by the user, the input control unit 11 causes the output control unit 12 to execute the process of outputting the extracted point cloud image P2. The output control unit 12 supplies the data of the extracted point cloud image P2, generated using the processing result stored in the storage unit 8, to the OS unit 5. The OS unit 5 outputs the data of the extracted point cloud image P2 to the display device 2 and causes the extracted point cloud image P2 to be displayed on the GUI screen W.
 The extracted point cloud image P2 shows the result of the extraction process when an ellipsoid is designated as the geometric shape representing the target distribution Ka, as described with reference to FIGS. 2 and 3. The extracted point cloud image P2 includes the distributions Kc of the N-dimensional data D1 belonging to the subsets whose outer shape has been determined to be similar to an ellipsoid. The extracted point cloud image P2 is an image from which the subsets indicated by triangles or rectangles in the point cloud image P1 of FIG. 2 have been excluded. The subsets that are extracted change depending on the threshold used when determining whether the outer shape of a subset is similar to an ellipsoid. This threshold may be changeable by input information input from the user.
 FIG. 5 is a diagram showing an extracted point cloud image according to the first embodiment. The extracted point cloud image P3 in FIG. 5 corresponds to the result of the extraction process when the threshold used to determine whether the outer shape of a subset is similar to an ellipsoid is changed. The similarity threshold used when extracting the point sets corresponding to the extracted point cloud image P3 in FIG. 5 is set higher than the similarity threshold used when extracting the point sets corresponding to the extracted point cloud image P2 in FIG. 4. The extracted point cloud image P3 in FIG. 5 is an image from which the incomplete ellipses and the highly flattened ellipses included in the extracted point cloud image P2 in FIG. 4 have been excluded. The number of point sets (distributions Kc of the extracted N-dimensional data D1) included in the extracted point cloud image P3 in FIG. 5 is smaller than the number of point sets (distributions Kc of the extracted N-dimensional data D1) included in the extracted point cloud image P2 in FIG. 4.
 Although it has been described that the information processing device 1 removes noise from the point cloud data DG, the information processing device 1 does not have to remove noise from the point cloud data DG. For example, at least a part of the noise is excluded from the extracted point cloud image P2 by being determined to be dissimilar to the target distribution Ka.
 Next, an information processing method according to the present embodiment will be described based on the operation of the information processing device 1 described above. For each part of the information processing device 1 and the processing of each part, refer to FIGS. 2 to 5 as appropriate. FIG. 6 is a flowchart showing the information processing method according to the first embodiment.
 In step S1, the output control unit 12 causes the display unit (display device 2) to output the GUI screen W. In step S2, the information processing device 1 acquires the point cloud data DG. In step S3a, the information processing device 1 removes noise from the point cloud data DG. In step S3b, the clustering unit 9 classifies subsets from the point cloud data DG from which noise has been removed (executes the clustering process on the point cloud data DG). In step S4, the information processing device 1 generates the point cloud image P1 based on the point cloud data DG from which noise has been removed. At least a part of the processing from step S2 to step S4 can be executed at an arbitrary timing before the processing of step S5 described next. For example, at least a part of the processing from step S2 to step S4 may be executed before the processing of step S1 starts, in parallel with the processing of step S1, or after the processing of step S1 has finished.
 In step S5, the output control unit 12 causes the point cloud image P1 generated in step S4 to be output to the GUI screen W. In step S6, the input control unit 11 acquires input information entered using the GUI screen W. The input control unit 11 acquires, as input information, information related to a point group designated in the point cloud image. Here, it is assumed that a feature amount is designated by the input information as the extraction condition. When the input information is a geometric shape (e.g., [Geometric shape] [Ellipsoid] in FIG. 2), the feature amount of an ellipsoid (e.g., the ratio of the major axis length to the minor axis length) is designated by the input information.
 In step S7, the processing unit 7 extracts a part of the point group (point sets) based on the input information. In step S8, the classifier 10 calculates the feature amounts of the clustered subsets and compares the feature amount of each subset with the feature amount based on the input information. For example, when the input information is a geometric shape (e.g., [Geometric shape] [Ellipsoid] in FIG. 2), the processing unit 7 fits the shape represented by the subset to an ellipsoid and calculates the ratio of its major axis length to its minor axis length as the feature amount of the subset. The classifier 10 then compares the feature amount of the geometric shape based on the input information (e.g., a major-to-minor axis ratio of 0.9 or less, or 1.1 or more) with the feature amount of the subset. In step S9, when the feature amount of a subset satisfies a predetermined relationship with the feature amount based on the input information, the classifier 10 classifies that subset as a part of the point group (a point set). For example, the classifier 10 classifies the subset as an ellipsoid when the feature amount of the subset matches the feature amount of the ellipsoid based on the input information (e.g., a major-to-minor axis ratio of 0.9 or less, or 1.1 or more). The processing unit 7 stores the information of the extracted point sets in the storage unit 8.
 In step S10, the output control unit 12 causes the extraction result to be output. For example, the output control unit 12 causes the extracted point cloud image P2 representing the extraction result of the processing unit 7 to be output to the GUI screen W. The output control unit 12 does not have to output the extraction result of the processing unit 7 to the GUI screen W. For example, the output control unit 12 may cause a device other than the display device 2 (e.g., a printer) to output the extraction result of the processing unit 7.
 As described above, the information processing method according to the embodiment includes displaying a point cloud image on a display unit, acquiring input information input by an input unit, extracting a part of the point group from the point group included in the point cloud image based on the input information, and displaying an extracted point cloud image based on the extracted part of the point group on the display unit.
 Next, the processes assigned to the options not selected in FIG. 2 will be described. FIG. 7 is a diagram showing a GUI screen according to the first embodiment. In FIG. 7, the input information is the type of a structure (information designating [Biological objects]). In the window W4 of FIG. 2, [Geometric shape] was selected as the category of distribution candidates, whereas in the window W4 of FIG. 7, [Biological objects] is selected. [Biological objects] indicates that the type of structure corresponding to the point sets to be extracted by the processing unit 7 is designated as the category of distribution candidates. A process for displaying, as predetermined candidates, candidate types of structures is assigned to [Biological objects].
 When [Biological objects] is selected by the user's input information, a window W6 is generated. The process of generating the window W6 is the same as the process of generating the window W5 described with reference to FIG. 2. In the window W6, candidates for the type of structure to be extracted are displayed as choices of input information. In the window W6 of FIG. 7, [Clathrin], [Mitochondria], and [Tubulin] are displayed as candidates for the type of structure to be extracted. A process for designating clathrin as the extraction target is assigned to [Clathrin].
 The processing unit 7 (clustering unit 9) divides the point group included in the point cloud image into a plurality of point groups (subsets). The processing unit 7 (classifier 10) extracts a part of the point group based on the feature amounts of the point groups (subsets) divided by the clustering unit 9 and the feature amount of the structure. Information on the feature amount of the structure is stored in the storage unit 8. The information on the feature amount of the structure is, for example, information that defines the shape of the structure (e.g., clathrin). The information on the feature amount of the structure is, for example, information that defines the distribution of N-dimensional data D1 corresponding to the shape of the structure (e.g., clathrin).
 When [Clathrin] is selected, the input control unit 11 designates the distribution corresponding to the shape of clathrin as the distribution of the N-dimensional data D1 of the point sets to be extracted, and causes the processing unit 7 to execute the extraction process. The processing unit 7 reads the distribution information corresponding to the shape of clathrin from the storage unit 8 and executes the extraction process. Similarly, a process for designating mitochondria as the extraction target is assigned to [Mitochondria], and a process for designating tubulin as the extraction target is assigned to [Tubulin]. The processing when [Mitochondria] or [Tubulin] is selected is the same as the processing when [Clathrin] is selected. When the user wants to select something other than the pre-registered structure candidates (e.g., clathrin, tubulin, mitochondria), the user can designate the conditions for extracting point sets (e.g., the type or shape of the structure) by selecting [Input trained data] or [Targeting].
 FIG. 8 is a diagram showing a GUI screen according to the first embodiment. In the window W3 of FIG. 2, [Example] was selected as the method of designating a distribution, whereas in the window W3 of FIG. 8, [Targeting] is selected. [Targeting] indicates that, as the method of designating a distribution, the method of designating the distribution by a figure that the user draws on the GUI screen W is selected. A process for displaying candidate methods of drawing a figure on the GUI screen W is assigned to [Targeting].
 When [Targeting] is selected by the user's input information, a window W7 is generated. The process of generating the window W7 is the same as the process of generating the window W4 described with reference to FIG. 2. In the window W7, [Rectangular domain] and [Draw curve] are displayed as candidate methods of drawing a figure on the GUI screen W. In FIG. 8, [Rectangular domain] is selected.
 [Rectangular domain] indicates that, by designating a rectangular parallelepiped region, the distribution of the N-dimensional data D1 inside the designated region is designated as the target distribution. For example, when it is detected that a left click has occurred while the pointer P is placed on the point cloud image P1, the input control unit 11 causes a rectangular parallelepiped region AR1 to be displayed at the position of the pointer P. When it is detected that a drag has occurred in the direction crossing a side of the region AR1 while the pointer P is placed on that side, the input control unit 11 expands or contracts the region AR1 in the moving direction of the pointer P and causes the region AR1 to be displayed. In this way, the user can expand and contract the region AR1 in each of the X direction, the Y direction, and the Z direction.
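 A minimal sketch of collecting the N-dimensional data inside the region AR1 follows, assuming the region is axis-aligned and given by its minimum and maximum corner coordinates.

```python
# Minimal sketch: select the points of the point cloud lying inside an
# axis-aligned box-shaped region.
import numpy as np

def points_in_box(points: np.ndarray, box_min: np.ndarray, box_max: np.ndarray):
    """Return (points inside the box, boolean mask)."""
    mask = np.all((points >= box_min) & (points <= box_max), axis=1)
    return points[mask], mask

# target_distribution, _ = points_in_box(points,
#                                        np.array([0.0, 0.0, 0.0]),
#                                        np.array([1.0, 2.0, 1.0]))   # illustrative corners
```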
 For example, when it is detected that a drag has occurred while the pointer P is placed outside the point cloud image P1 in the window W1, the input control unit 11 causes a point cloud image P1 with a changed viewpoint to be displayed. For example, the information processing device 1 executes rendering processing based on the direction and amount of movement of the pointer P by the drag, and generates the point cloud image P1 with the changed viewpoint. The information processing device 1 can also zoom the point cloud image P1 (e.g., zoom in, zoom out) based on the input information and display it. The input control unit 11 causes the output control unit 12 to execute the process of displaying the point cloud image P1 with the changed viewpoint. In this way, the user can expand and contract the region AR1 as appropriate while viewing the point cloud image P1 from different directions, and can designate the region AR1 so as to surround a desired subset.
 FIG. 9 is a diagram showing a GUI screen according to the first embodiment. In the window W3 of FIG. 9, [Targeting] is selected as the method of designating a distribution. In addition, [Draw curve] is selected as the candidate method of drawing a figure on the GUI screen W. [Draw curve] indicates that the user draws a free curve while moving the pointer P and designates the distribution enclosed by the free curve as the target distribution.
 For example, when it is detected that a drag has occurred while the pointer P is placed on the point cloud image P1, the input control unit 11 causes a curve P4 corresponding to the trajectory of the pointer P, starting from the position of the pointer P at the start of the drag, to be displayed. When it is detected that the drag has been released, the input control unit 11 sets the position of the pointer P at the time the drag was released as the end point of the curve P4. The input control unit 11 determines, for example, whether the curve P4 drawn by the user includes a closed curve. When the input control unit 11 determines that the curve P4 does not include a closed curve, it adjusts the curve P4, for example by interpolation processing, so that the curve P4 includes a closed curve. When [Draw curve] is selected, a three-dimensional region can be designated by using point cloud images P1 with changed viewpoints, as in the case where [Rectangular domain] is selected. The input control unit 11 designates the distribution of the N-dimensional data D1 inside the closed curve included in the curve P4 as the target distribution.
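 A minimal sketch of the free-curve selection follows, with two assumptions not stated in the description: the inside/outside test is performed on the 2-D screen projection of the data points, and "adjusting the curve so that it includes a closed curve" is approximated by simply connecting the end point back to the start point rather than by a specific interpolation method.

```python
# Minimal sketch: close the drawn curve if needed, then select the projected
# data points that lie inside the closed curve.
import numpy as np
from matplotlib.path import Path

def close_curve(curve_xy: np.ndarray) -> np.ndarray:
    """Append the start point when the drawn curve is not already closed."""
    if not np.allclose(curve_xy[0], curve_xy[-1]):
        curve_xy = np.vstack([curve_xy, curve_xy[0]])
    return curve_xy

def points_inside_curve(projected_xy: np.ndarray, curve_xy: np.ndarray) -> np.ndarray:
    """Boolean mask of projected (screen-space) data points inside the closed curve."""
    return Path(close_curve(curve_xy)).contains_points(projected_xy)
```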
 FIGS. 10 and 11 are diagrams showing a GUI screen according to the first embodiment. In the window W3 in FIG. 10, [Input trained data] is selected as the option for the distribution designation method. [Input trained data] is an option indicating that, as the distribution designation method, information defining the distribution to be extracted by the processing unit is read. [Input trained data] is assigned, for example, a process of reading information obtained by machine learning (described later with reference to FIGS. 16 to 20).
 When the input control unit 11 determines that [Input trained data] is selected based on the user's input information, it causes the output control unit 12 to display the window W8. [Drop here] in the window W8 indicates that a file containing information defining the target distribution is designated by drag and drop. As shown in FIG. 11, when the input control unit 11 determines that [Input trained data] is selected based on the user's input information, it may cause the output control unit 12 to display a window W9 showing the hierarchy of files managed by the OS unit 5. In this case, the user can designate, in the window W9, a file containing the information that defines the target distribution. Instead of reading a learning-result file via [Input trained data], the information processing apparatus 1 may read a file defining extraction conditions and extract a point set based on that definition. For example, via "Etc.." in the window W5 in FIG. 2, the information processing apparatus 1 may read a file defining geometric feature amounts as the file defining the extraction conditions, and extract the point set.
 The input information options described with reference to FIG. 2 and FIGS. 7 to 11 can be changed as appropriate. For example, the GUI unit 6 may omit some of the input information options described with reference to FIG. 2 and FIGS. 7 to 11, or may provide options different from them.
 The information processing apparatus 1 does not have to include the clustering unit 9. In this case, the processing unit 7 may select a partial region (e.g., an ROI) from the space in which the point cloud data DG is defined, and calculate the similarity between the distribution of the N-dimensional data D1 in the selected region and the designated distribution. The classifier 10 may classify the selected region as a subset when the calculated similarity is equal to or greater than a threshold value. The processing unit 7 may extract subsets by changing the position of the partial region and repeating the process of determining whether the partial region corresponds to a subset to be extracted.
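 The following is a rough sketch of such an extraction without a clustering unit: a cubic ROI is slid over the point cloud and kept wherever its local distribution is similar enough to the designated distribution. The similarity measure is not fixed in this description; here, purely as an assumption, each ROI is summarized by a normalized occupancy histogram and compared by cosine similarity, and all function names are hypothetical.

```python
import numpy as np

def roi_histogram(points, center, size, bins=8):
    """Normalized occupancy histogram of the points falling inside a cubic ROI."""
    lo, hi = center - size / 2.0, center + size / 2.0
    inside = points[np.all((points >= lo) & (points <= hi), axis=1)]
    hist, _ = np.histogramdd(inside, bins=bins, range=list(zip(lo, hi)))
    total = hist.sum()
    return hist.ravel() / total if total > 0 else hist.ravel()

def cosine_similarity(a, b):
    denom = np.linalg.norm(a) * np.linalg.norm(b)
    return float(a @ b / denom) if denom > 0 else 0.0

def scan_rois(points, target_hist, size, step, threshold=0.8):
    """Slide the ROI over the data space and keep centers whose local
    distribution is similar enough to the designated distribution."""
    mins, maxs = points.min(axis=0), points.max(axis=0)
    hits = []
    for x in np.arange(mins[0], maxs[0], step):
        for y in np.arange(mins[1], maxs[1], step):
            for z in np.arange(mins[2], maxs[2], step):
                center = np.array([x, y, z])
                sim = cosine_similarity(roi_histogram(points, center, size), target_hist)
                if sim >= threshold:
                    hits.append((center, sim))
    return hits
```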
[Second Embodiment]
 Next, a second embodiment will be described. In the present embodiment, the same components as those in the above-described embodiments are denoted by the same reference numerals, and their description is omitted or simplified. The information processing apparatus 1 according to the present embodiment has the same configuration as that shown in FIG. 1, but the processing performed by the GUI unit 6 differs from that of the first embodiment. For the configuration of the information processing apparatus 1 in the present embodiment, reference is made to FIG. 1 as appropriate.
 The output control unit 12 displays the subsets classified by the clustering unit 9 as candidates for the distribution that the user designates by the input information. FIG. 12 is a diagram showing a GUI screen output by the output control unit based on the subsets classified by the clustering unit according to the second embodiment. The GUI screen W in FIG. 12 includes a window W10. Each symbol Kd in the window W10 indicates the distribution of the N-dimensional data D1 in a subset classified by the clustering unit 9. The output control unit 12 may display the identification number assigned to each subset by the clustering unit 9 together with the distribution Kd.
 As described with reference to FIGS. 8 and 9, the information processing apparatus 1 can display the distribution of at least part of the point cloud data DG from a changed viewpoint. Likewise, in FIG. 12, the information processing apparatus 1 can display the distribution of the N-dimensional data D1 of each subset from a changed viewpoint. For example, when the input control unit 11 detects that a drag has been performed with the pointer P placed on the distribution Kd of a subset, it displays that distribution Kd from a changed viewpoint.
 FIG. 13 is a diagram showing a distribution designation method using the GUI screen according to the second embodiment. In the present embodiment, the user can select each of the plurality of distributions Kd displayed in the window W10 by the input information. For the selected distribution Kd (hereinafter referred to as the distribution Kd1), the user can designate by the input information whether it is to be an extraction target. For example, when it is detected that a left click has been performed with the pointer P placed on the distribution Kd1 of a subset, the input control unit 11 determines that this distribution Kd1 is the distribution selected by the user's input information. The input control unit 11 displays the selected distribution Kd1 so as to be distinguishable from the other distributions Kd, for example by displaying the frame surrounding the distribution Kd1 (shown by a thick line in FIG. 13) in a color or brightness different from the frames surrounding the other distributions Kd.
 In the present embodiment, the user can designate, by the input information, a distribution to be an extraction target (hereinafter referred to as an extraction target distribution). For example, when it is detected that a left click has been performed with the pointer P placed on the selected distribution Kd1, the input control unit 11 determines that this distribution Kd1 has been designated as an extraction target distribution. The symbols Kd2 and Kd3 represent distributions determined to have been designated as extraction target distributions. The input control unit 11 displays the distributions Kd2 and Kd3 so as to be distinguishable from the other distributions Kd, for example by displaying the frame surrounding the distribution Kd2 (shown by a two-dot chain line in FIG. 13) in a color or brightness different from the frames surrounding the other distributions Kd.
 In the present embodiment, the user can designate, by the input information, a distribution to be excluded from extraction (hereinafter referred to as an extraction exclusion distribution). For example, when it is detected that a right click has been performed with the pointer P placed on the selected distribution Kd1, the input control unit 11 determines that this distribution Kd1 has been designated as an extraction exclusion distribution. Here, a distribution Kd determined to have been designated as an extraction exclusion distribution is represented by the symbol Kd4. The input control unit 11 displays the distribution Kd4 so as to be distinguishable from the other distributions Kd, for example by displaying the frame surrounding the distribution Kd4 (shown by a dotted line in FIG. 13) in a color or brightness different from the frames surrounding the other distributions Kd.
 The processing unit 7 extracts a point set from the point group based on the similarity between the distribution specified by the input control unit 11 and the distribution of the N-dimensional data in each subset classified by the clustering unit 9. FIG. 15 is a diagram showing processing by the processing unit according to the second embodiment. For each of the distributions Kd (Kd2, Kd3) determined to have been designated as extraction target distributions, the processing unit 7 calculates the similarity with each of the subset distributions Kd classified by the clustering unit 9. In the following description, a distribution Kd for which the similarity with the distributions Kd (Kd2, Kd3) designated as extraction target distributions is calculated is represented by the symbol Kd5. Since the similarity of a distribution Kd (Kd2, Kd3) designated as an extraction target distribution with itself takes the maximum value, such a distribution may be excluded from the distributions Kd5 with which the similarity is calculated. The processing unit 7 extracts, for example, a distribution whose similarity with at least one of the distributions Kd (Kd2, Kd3) designated as extraction target distributions is equal to or greater than a threshold value. Here, one of the distributions Kd5 is represented by the symbol Kd51. The processing unit 7 extracts the distribution Kd51 when one or both of the similarity Q11 between the distribution Kd2 and the distribution Kd5 (Kd51) and the similarity Q21 between the distribution Kd3 and the distribution Kd51 are equal to or greater than the threshold value.
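 A short sketch of this similarity-based extraction is given below. The description does not fix the similarity measure, so the sketch assumes, for illustration only, that each subset is summarized by a normalized spatial histogram and that histograms are compared by cosine similarity; the function names are hypothetical.

```python
import numpy as np

def subset_histogram(points, bins=8):
    """Normalized occupancy histogram of one subset, computed in its bounding box."""
    hist, _ = np.histogramdd(points, bins=bins)
    total = hist.sum()
    return hist.ravel() / total if total > 0 else hist.ravel()

def cosine_similarity(a, b):
    denom = np.linalg.norm(a) * np.linalg.norm(b)
    return float(a @ b / denom) if denom > 0 else 0.0

def extract_similar_subsets(subsets, targets, threshold=0.8):
    """Keep every clustered subset Kd5 whose similarity with at least one
    extraction target distribution (Kd2, Kd3, ...) reaches the threshold."""
    target_hists = [subset_histogram(t) for t in targets]
    extracted = []
    for subset in subsets:
        h = subset_histogram(subset)
        sims = [cosine_similarity(h, th) for th in target_hists]   # Q11, Q21, ...
        if sims and max(sims) >= threshold:
            extracted.append(subset)
    return extracted
```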
 When only one of the similarity Q11 and the similarity Q21 is less than the threshold value, the processing unit 7 may determine that the distribution Kd51 is a distribution that is not similar to the target distribution. The processing unit 7 may also determine whether a distribution is similar to the target distribution based on a value (e.g., an average value) calculated from the similarity Q11 and the similarity Q21. Although FIG. 15 shows two target distributions, the distribution Kb2 and the distribution Kb3, the number of target distributions may be one, or may be three or more.
 The processing unit 7 also calculates, for the distribution Kd4 to be excluded from extraction (see FIG. 13), the similarity with each distribution Kd5, in the same way as for the distributions (Kd2, Kd3) determined to be extraction target distributions. The processing unit 7 determines that a distribution Kd5 whose similarity with the distribution Kd4 is equal to or greater than a threshold value is dissimilar to the target distribution, and excludes the distribution determined to be dissimilar to the target distribution from the extraction. When a plurality of distributions Kd4 are designated, the processing unit 7 calculates, for each distribution Kd4, the similarity with each of the plurality of distributions Kd5. The processing unit 7 may determine whether a distribution is dissimilar to the target distribution based on the plurality of values calculated as the similarities between each distribution Kd4 and the plurality of distributions Kd5. For example, the processing unit 7 may determine that a distribution is dissimilar to the target distribution when the maximum value of the plurality of values is equal to or greater than a threshold value, when the minimum value of the plurality of values is equal to or greater than a threshold value, or when a value calculated from the plurality of values (e.g., an average value) is equal to or greater than a threshold value.
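 The aggregation of the similarities to the exclusion distributions can be sketched as follows, under the same hypothetical histogram/cosine-similarity assumption as above; whether the maximum, minimum, or mean is used is a design choice that this description leaves open.

```python
import numpy as np

def is_excluded(subset_hist, exclusion_hists, threshold=0.8, mode="max"):
    """Decide whether a subset is dissimilar to the target (i.e. excluded) based on
    its similarities with the exclusion distributions Kd4, aggregated by max/min/mean."""
    sims = np.array([
        float(subset_hist @ h / (np.linalg.norm(subset_hist) * np.linalg.norm(h)))
        for h in exclusion_hists
    ])
    aggregated = {"max": sims.max(), "min": sims.min(), "mean": sims.mean()}[mode]
    return aggregated >= threshold
```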
 Next, an information processing method according to the present embodiment will be described based on the operation of the information processing apparatus 1 described above. FIG. 16 is a flowchart showing the information processing method according to the second embodiment. The processing from step S1 to step S3 is the same as the processing described with reference to FIG. 6. In step S11, after the processing of step S2, the clustering unit 9 classifies the point cloud data DG into subsets. The processing of step S3 may be executed as part of the processing of step S11; for example, the clustering unit 9 may classify noise when classifying the N-dimensional data D1 included in the point cloud data DG into subsets. The processing of step S3 need not be part of the processing of step S11, and need not be executed.
 The processing of step S4 and the processing of step S5 are the same as the processing described with reference to FIG. 6. In step S6, the input control unit 11 acquires input information using the GUI screen W. At that time, in step S6a, the output control unit 12 displays, on the GUI screen W, the distribution of the N-dimensional data D1 in each subset classified by the clustering unit 9 in step S11 (see FIGS. 12 and 13). In step S11, the input control unit 11 specifies the distribution designated by the input information (see FIG. 13). In step S12, the processing unit 7 extracts subsets based on the distribution specified in step S11 (see FIG. 15). In step S13 of step S12, the processing unit 7 calculates the similarity between the distribution of a clustered subset and the specified distribution. The distribution of the clustered subset is the distribution of the N-dimensional data included in the subset classified by the clustering unit 9 in step S3b, and the specified distribution is the distribution specified by the input control unit 11 in step S11 as the distribution designated by the input information. Then, in step S14 of step S12, the classifier 10 classifies a subset whose similarity is equal to or greater than a threshold value as a part of the point group (a point set). The classifier 10 extracts (classifies) the subset as a point set similar to the distribution specified in step S11 when the similarity calculated in step S13 is equal to or greater than the threshold value. The processing unit 7 stores the information on the extracted point set in the storage unit 8.
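 The overall flow of clustering followed by similarity-based classification can be sketched as follows. The clustering method is not fixed by this description; DBSCAN is used here only as one plausible choice (its noise label -1 also illustrates the optional noise classification mentioned above), and the similarity function is passed in as a parameter.

```python
import numpy as np
from sklearn.cluster import DBSCAN

def cluster_point_cloud(points, eps=0.05, min_samples=10):
    """Classify the point cloud into subsets; label -1 marks points treated as noise."""
    labels = DBSCAN(eps=eps, min_samples=min_samples).fit_predict(points)
    return {lab: points[labels == lab] for lab in set(labels) if lab != -1}

def classify_subsets(subsets, target_points, similarity_fn, threshold=0.8):
    """Steps corresponding to S13/S14: keep the clustered subsets whose distribution
    is similar enough to the distribution specified by the user."""
    extracted = {}
    for lab, pts in subsets.items():
        if similarity_fn(pts, target_points) >= threshold:
            extracted[lab] = pts
    return extracted
```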
[Third Embodiment]
 Next, a third embodiment will be described. In the present embodiment, the same components as those in the above-described embodiments are denoted by the same reference numerals, and their description is omitted or simplified. FIG. 16 is a diagram showing an information processing apparatus according to the third embodiment. The information processing apparatus 1 according to the present embodiment includes a machine learning unit 15, and generates the classifier 10 by means of the machine learning unit 15. Based on the input information acquired by the input control unit 11, the machine learning unit 15 generates, by machine learning, an index (e.g., a determination criterion, an evaluation function) used when the processing unit 7 extracts a point set from the point cloud data DG. Examples of machine learning methods include neural networks (e.g., deep learning), support vector machines, and regression forests. The machine learning unit 15 executes machine learning by using one of these or other machine learning methods, or by combining two or more of them.
 The input information acquisition unit (input control unit 11) acquires teacher data for the machine learning unit 15 as the input information. As described with reference to FIG. 13, the input control unit 11 acquires, as the input information, information representing the target distributions (e.g., the distribution Kb2 and the distribution Kb3) to be extracted by the processing unit 7. The machine learning unit 15 executes machine learning using, as teacher data, the target distributions obtained from the input information acquired by the input control unit 11. The processing unit 7 extracts a part of the point group (a point set) from the point cloud data DG based on the index generated by the machine learning unit 15.
 The above teacher data includes information defining the part of the point group (point set) to be extracted by the processing unit 7 (information representing distributions to be extracted, i.e., correct-answer teacher data). The teacher data also includes information defining point groups that the processing unit 7 excludes from the extraction (information representing distributions not to be extracted, i.e., incorrect-answer teacher data). In the present embodiment, the user can input both information representing distributions to be extracted and information representing distributions not to be extracted. FIG. 17 is a diagram showing processing for designating distributions. As described with reference to FIG. 8, the user can designate regions using the GUI screen W. In FIG. 17, the symbol AR3 (shown by a two-dot chain line) is a region designated by the user as a region containing distributions to be extracted, and the symbol AR4 (shown by a dotted line) is a region designated by the user as a region containing distributions not to be extracted.
 By specifying the distributions contained in the region AR3, the input control unit 11 specifies the distributions designated by the user as distributions to be extracted (represented by the symbols Ke1, Ke2, and Ke3). The symbol G1 in FIG. 17 corresponds to the information defining the part of the point group to be extracted by the processing unit 7, and is the group of distributions specified by the input control unit 11 as distributions to be extracted. The information on the distributions included in the group G1 can be used as teacher data representing correct answers. Likewise, by specifying the distributions contained in the region AR4, the input control unit 11 specifies the distributions designated by the user as distributions not to be extracted (represented by the symbols Kf1, Kf2, and Kf3). The symbol G2 in FIG. 17 corresponds to the information defining the point groups that the processing unit 7 excludes from the extraction, and is the group of distributions specified by the input control unit 11 as distributions not to be extracted. The information on the distributions included in the group G2 can be used as teacher data representing incorrect answers. The user may also designate one or both of the distributions to be extracted and the distributions not to be extracted by selecting candidates from the list (see FIG. 13).
 Next, machine learning and extraction processing using its learning result will be described. First, an example using the group G1 (correct-answer teacher data) will be described with reference to FIG. 18. Then, an example using the group G1 (correct-answer teacher data) and the group G2 (incorrect-answer teacher data) will be described with reference to FIG. 19(A), and an example using the group G2 (incorrect-answer teacher data) will be described with reference to FIG. 19(B).
 FIG. 18 is a diagram showing processing by the machine learning unit and the processing unit according to the third embodiment. The machine learning unit 15 calculates feature amounts for each of the distributions Ke1 to Ke3 selected as target distributions to be extracted by the processing unit 7. The types of feature amounts are, for example, the size of the space occupied by the distribution, the number density of the N-dimensional data D1 in the distribution, and the curvature of the space occupied by the distribution. The machine learning unit 15 calculates, for example, a plurality of types of feature amounts; the feature amounts calculated by the machine learning unit 15 are represented as "feature amount 1" and "feature amount 2" in FIG. 18.
 The machine learning unit 15 derives a relationship satisfied by the feature amount 1 and the feature amount 2. For example, the machine learning unit 15 derives, from the plurality of distributions (Ke1 to Ke3), a region AR2 in which the feature amount 2 is located with respect to the feature amount 1. The machine learning unit 15 generates information representing the region AR2 (e.g., a function) as the index used when extracting a point set from the point cloud data DG, and stores the information representing the region AR2 in the storage unit 8 as the result of the machine learning.
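 A sketch of deriving such a region from the positive examples alone is shown below. The specific feature amounts (spatial extent and number density) and the use of a one-class support vector machine as the model of the region AR2 are assumptions for illustration; the description only requires that some region in feature space be derived and stored.

```python
import numpy as np
from sklearn.svm import OneClassSVM

def distribution_features(points):
    """Two example feature amounts for one distribution: spatial extent (feature 1)
    and number density (feature 2). Other features, such as curvature, are possible."""
    extent = np.prod(points.max(axis=0) - points.min(axis=0))   # size of occupied space
    density = len(points) / extent if extent > 0 else 0.0        # number density
    return np.array([extent, density])

def learn_region_ar2(target_distributions):
    """Learn a region AR2 in feature space enclosing the features of the
    distributions (Ke1 to Ke3) selected as extraction targets."""
    features = np.array([distribution_features(d) for d in target_distributions])
    return OneClassSVM(kernel="rbf", gamma="scale", nu=0.1).fit(features)

def in_region_ar2(model, distribution):
    """True when the feature point of a clustered subset falls inside AR2."""
    return model.predict([distribution_features(distribution)])[0] == 1
```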
 The processing unit 7 reads the result of the machine learning by the machine learning unit 15 from the storage unit 8 and executes the extraction processing. For example, when the user selects [Input trained data] under [Target] on the GUI screen W shown in FIGS. 10 and 11, the user designates the information representing the region AR2 stored in the storage unit 8. The processing unit 7 reads the information designated by the user as the information representing the region AR2, and executes the extraction processing.
 The processing unit 7 calculates the feature amount 1 and the feature amount 2 for the distribution Kd51 of the N-dimensional data D1 in a subset classified by the clustering unit 9. The classifier 10 determines whether the feature amount 2 with respect to the feature amount 1 of the distribution Kd51 lies in the region AR2. When it does, the classifier 10 determines that the distribution Kd51 is similar to the target distribution, and extracts (classifies) the distribution Kd51 determined to be similar to the target distribution as a point set.
 The processing unit 7 also calculates the feature amount 1 and the feature amount 2 for the distribution Kd52 of the N-dimensional data D1 in a subset classified by the clustering unit 9. The classifier 10 determines whether the feature amount 2 with respect to the feature amount 1 of the distribution Kd52 lies in the region AR2. When it does not, the classifier 10 determines that the distribution Kd52 is not similar (is dissimilar) to the target distribution, and does not extract the distribution Kd52 as a point set.
 FIG. 19 is a diagram showing processing by the machine learning unit according to the third embodiment. In FIG. 19(A), the machine learning unit 15 executes machine learning based on the distributions selected as target distributions to be extracted (Ke1 to Ke3) and the distributions selected as distributions not to be extracted (Kf1 to Kf3). The machine learning unit 15 derives the region AR2 so that the feature amount 2 with respect to the feature amount 1 of each distribution to be extracted (Ke1 to Ke3) falls within the region AR2, and the feature amount 2 with respect to the feature amount 1 of each distribution not to be extracted (Kf1 to Kf3) does not lie in the region AR2.
 In FIG. 19(B), the machine learning unit 15 executes machine learning using the distributions selected as distributions not to be extracted (Kf1 to Kf3) and without using the distributions selected as distributions to be extracted (Ke1 to Ke3). The machine learning unit 15 derives the region AR2 so that the feature amount 2 with respect to the feature amount 1 of each distribution not to be extracted (Kf1 to Kf3) does not lie in the region AR2.
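 When both correct-answer and incorrect-answer teacher data are available (the case of FIG. 19(A)), the boundary of the region can also be learned as an ordinary two-class classifier. The sketch below uses a support vector machine, one of the methods named above, together with the same hypothetical feature function as in the earlier sketch; it is an illustration under assumptions, not the learning procedure fixed by this description.

```python
import numpy as np
from sklearn.svm import SVC

def learn_boundary(extract_distributions, exclude_distributions, feature_fn):
    """Learn a decision boundary separating feature points of distributions to be
    extracted (label 1) from those of distributions to be excluded (label 0)."""
    X = np.array([feature_fn(d) for d in extract_distributions + exclude_distributions])
    y = np.array([1] * len(extract_distributions) + [0] * len(exclude_distributions))
    return SVC(kernel="rbf", gamma="scale").fit(X, y)

def is_target(model, distribution, feature_fn):
    """True when the subset's feature point falls on the 'extract' side of the boundary."""
    return model.predict([feature_fn(distribution)])[0] == 1
```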
 Next, an information processing method according to the present embodiment will be described based on the operation of the information processing apparatus 1 described above. FIG. 20 is a flowchart showing the information processing method according to the third embodiment. Processes that are the same as those described with reference to FIG. 6 or FIG. 16 are denoted by the same reference numerals, and their description is omitted. In step S21, the machine learning unit 15 executes machine learning based on the distributions specified in step S7 (see FIG. 18(A)). In step S22, the processing unit 7 extracts point sets based on the result of the machine learning in step S21. At that time, in step S22a, the processing unit 7 calculates the feature amounts of the distributions of the subsets clustered by the clustering unit 9. Then, in step S22b, the classifier 10 classifies the point sets based on the learning result and the feature amounts. The classifier 10 reads, as the learning result, the information representing the region AR2 (see FIGS. 18(A), 19(A), and 19(B)) from the storage unit 8, and determines whether each subset has the target distribution depending on whether the position of the feature amounts calculated in step S22a lies within the region AR2.
 The information processing apparatus 1 may include a surface generation unit that generates a surface representing the shape of a subset. This surface generation unit generates, for example, a scalar field based on the N-dimensional data included in the point cloud data DG, and generates an isosurface of the scalar field as the surface representing the shape of the subset. The processing unit 7 may extract a part of the point group (a point set) from the point group based on the surface generated by the surface generation unit.
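 One way to realize such a surface generation unit is to build a density (scalar) field from the points and extract an isosurface from it. The kernel density estimate, the grid resolution, and the marching-cubes extraction in the sketch below are assumptions for illustration, not the method fixed by this description.

```python
import numpy as np
from scipy.stats import gaussian_kde
from skimage.measure import marching_cubes

def subset_isosurface(points, grid_size=32, level_ratio=0.5):
    """Build a scalar field (kernel density estimate) over the subset's bounding box
    and return the vertices/faces of the isosurface at a fraction of the peak density."""
    kde = gaussian_kde(points.T)
    lo, hi = points.min(axis=0), points.max(axis=0)
    axes = [np.linspace(l, h, grid_size) for l, h in zip(lo, hi)]
    xx, yy, zz = np.meshgrid(*axes, indexing="ij")
    field = kde(np.vstack([xx.ravel(), yy.ravel(), zz.ravel()])).reshape(xx.shape)
    # vertices are returned in grid-index coordinates; rescale to data units if needed
    verts, faces, _, _ = marching_cubes(field, level=level_ratio * field.max())
    return verts, faces
```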
[Fourth Embodiment]
 Next, a fourth embodiment will be described. In the present embodiment, the same components as those in the above-described embodiments are denoted by the same reference numerals, and their description is omitted or simplified. FIG. 21 is a diagram showing an information processing apparatus according to the fourth embodiment. The information processing apparatus 1 according to the present embodiment includes a calculation unit 17. The calculation unit 17 performs calculations using the part of the point group (the point set) extracted by the processing unit 7. For example, the calculation unit 17 calculates one or both of the surface area and the volume of the shape represented by the part of the point group (the point set) extracted by the processing unit 7.
 For example, suppose that the target distribution is designated as an ellipsoid. In this case, the calculation unit 17 fits the distribution of the N-dimensional data in the point set extracted by the processing unit 7 to a function representing an ellipsoid, and calculates the coefficients of this function. The calculation unit 17 calculates, for example, the major axis and the minor axis of the ellipsoid using the calculated coefficients, calculates the surface area by substituting the calculated major axis and minor axis into the formula for the surface area of an ellipsoid, and calculates the volume by substituting them into the formula for the volume of an ellipsoid. The calculation unit 17 also counts (calculates) the number of point sets extracted by the processing unit 7.
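 A simplified sketch of these calculations is given below. It estimates the semi-axes from the covariance of the extracted points, assuming the points fill the ellipsoid roughly uniformly (for a uniform solid ellipsoid the variance along a principal axis with semi-axis a is a²/5), and uses the exact volume formula together with the Knud Thomsen approximation for the surface area. The actual fitting procedure and formulas used by the apparatus may differ.

```python
import numpy as np

def ellipsoid_measures(points):
    """Estimate the semi-axes of an ellipsoid fitted to a point set and return
    (semi_axes, volume, approximate surface area)."""
    centered = points - points.mean(axis=0)
    eigvals = np.linalg.eigvalsh(np.cov(centered.T))   # principal variances
    a, b, c = np.sqrt(5.0 * eigvals)                   # semi-axes of a uniform solid ellipsoid
    volume = 4.0 / 3.0 * np.pi * a * b * c
    p = 1.6075                                         # Knud Thomsen approximation exponent
    surface = 4.0 * np.pi * (((a * b) ** p + (a * c) ** p + (b * c) ** p) / 3.0) ** (1.0 / p)
    return (a, b, c), volume, surface
```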
 The output control unit 12 causes the GUI screen W to output a calculation result image representing the results calculated by the calculation unit 17 based on the extraction results of the processing unit 7. FIG. 22 is a diagram showing a GUI screen output by the output control unit based on the calculation results of the calculation unit according to the fourth embodiment. The GUI screen W in FIG. 22 includes a window W11, in which a list of point sets is displayed as the calculation result image P5. Identification numbers ([Target No.] in the figure) are assigned to the point sets based on the number of point sets counted by the calculation unit 17. [Volume] in the figure is the value calculated by the calculation unit 17 as the volume of the shape corresponding to each point set, and [Surface Area] is the value calculated by the calculation unit 17 as the surface area of the shape corresponding to each point set. [X], [Y], and [Z] in the figure are coordinates representative of each point set (e.g., the position of its center of gravity).
 The calculation result image P5 is displayed on the GUI screen W together with, for example, the extracted point cloud image P3. For example, when it is detected that a left click has been performed with the pointer P placed on a point set in the extracted point cloud image P3, the input control unit 11 highlights the point set Kg on which the pointer P is placed in the extracted point cloud image P3 and the calculation results of the calculation unit 17 relating to that point set Kg in the calculation result image P5. When it is detected that a left click has been performed with the pointer P placed on a row in the calculation result image P5, the input control unit 11 may highlight the calculation results of the calculation unit 17 in the row on which the pointer P is placed and the point set Kg in the extracted point cloud image P3 corresponding to that row.
 In the above-described embodiments, the N-dimensional data has been described as three-dimensional data, but the N-dimensional data need not be three-dimensional data. FIG. 23 is a diagram showing N-dimensional data according to an embodiment. The point cloud data DG in FIG. 23 is voxel data obtained by a CT scan or the like. The voxel data is four-dimensional data in which the three-dimensional coordinate values (x, y, z) of each cell Cv and the value (v) given to the cell Cv form one set. For example, the cell Cv1 has three-dimensional coordinates (4, 2, 3) and a cell value v of 4, and the N-dimensional data D1 corresponding to this cell Cv1 is represented by (4, 2, 3, 4).
 When such N-dimensional data D1 is processed by the information processing apparatus 1, cells whose cell value v satisfies a predetermined condition are extracted (filtered), for example, as shown in FIG. 23(B). In FIG. 23(B), cells whose value v is 5 or more are extracted. Then, as shown in FIG. 23(C), three-dimensional point cloud data is obtained by regarding a point as being placed at the center position of each extracted cell, and the information processing apparatus 1 can extract partial regions as described in the above embodiments.
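 A minimal sketch of this voxel-to-point-cloud conversion is shown below. It assumes the voxel data is held as rows of (x, y, z, v) with unit-sized cells indexed by their lower corner, so 0.5 is added to reach each cell center; the threshold of 5 follows the example in FIG. 23(B).

```python
import numpy as np

def voxels_to_point_cloud(voxel_data, threshold=5):
    """Filter the cells whose value v meets the threshold and place a point at the
    center of each remaining cell, yielding three-dimensional point cloud data."""
    voxel_data = np.asarray(voxel_data, dtype=float)   # rows of (x, y, z, v)
    kept = voxel_data[voxel_data[:, 3] >= threshold]
    return kept[:, :3] + 0.5                           # cell centers for unit cells

# usage sketch with the cell from FIG. 23: coordinates (4, 2, 3), value 4 -> filtered out
cells = np.array([[4, 2, 3, 4], [1, 1, 2, 6], [0, 3, 3, 7]])
points = voxels_to_point_cloud(cells)
```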
 When N is an integer of 4 or more, the information processing apparatus 1 may process the N-dimensional data without performing the above filtering. For example, the information processing apparatus 1 may represent the above cell value v by brightness or color on the GUI screen W. The N values included in one piece of N-dimensional data may also be represented separately in a plurality of windows. For example, in the case of four-dimensional data, the distribution of two-dimensional data formed by pairing two values selected from the four values may be shown in one window, and the distribution of two-dimensional data formed by pairing the remaining two values may be shown in another window.
 In the above-described embodiments, the input device 3 has been described as a mouse, but it may include a device other than a mouse (e.g., a keyboard). For example, the user may cause the information processing apparatus 1 to execute at least part of its processing by operating the keyboard and entering commands on a command line. In this case, the user's input information includes information on the keys pressed on the keyboard.
 In the above-described embodiments, the information processing apparatus 1 (information processing unit) includes, for example, a computer system. The information processing apparatus 1 reads an information processing program stored in the storage unit 8 (storage device) and executes various kinds of processing in accordance with this information processing program. This information processing program causes a computer, for example, to display a point cloud image on a display unit, to acquire input information entered via an input unit, to extract a part of the point group from the point group included in the point cloud image based on the input information, and to display an extracted point cloud image based on the extracted part of the point group on the display unit. This information processing program may be provided by being recorded on a computer-readable storage medium (e.g., a non-transitory recording medium, non-transitory tangible media).
[Microscope]
 Next, a microscope according to an embodiment will be described. In the present embodiment, the same components as those in the above-described embodiments are denoted by the same reference numerals, and their description is omitted or simplified. FIG. 24 is a diagram showing a microscope according to the embodiment. The microscope 50 includes a microscope main body 51, the information processing apparatus 1 described in the above embodiments, and a control device 52. The control device 52 includes a control unit 53 that controls each unit of the microscope main body 51, and an image processing unit 54. At least part of the control device 52 may be provided in (built into) the microscope main body 51. The control unit 53 also controls the information processing apparatus 1. At least part of the control device 52 may be provided in (built into) the information processing apparatus 1.
 The microscope main body 51 detects a sample. The microscope 50 is, for example, a fluorescence microscope, and the microscope main body 51 detects an image of fluorescence emitted from a sample containing a fluorescent substance. The microscope according to the embodiment is, for example, a super-resolution microscope such as a STORM or PALM microscope. In STORM, a fluorescent substance is activated, and the activated fluorescent substance is irradiated with excitation light, whereby a plurality of fluorescence images are acquired. The data of the plurality of fluorescence images are input to the image processing unit 54. The image processing unit 54 calculates position information of the fluorescent substance in each fluorescence image, and generates the point cloud data DG using the plurality of calculated pieces of position information. The image processing unit 54 also generates a point cloud image representing the point cloud data DG. In the case of two-dimensional STORM, the image processing unit 54 calculates two-dimensional position information of the fluorescent substance and generates point cloud data DG including a plurality of pieces of two-dimensional data. In the case of three-dimensional STORM, the image processing unit 54 calculates three-dimensional position information of the fluorescent substance and generates point cloud data DG including a plurality of pieces of three-dimensional data.
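 A highly simplified sketch of how position information might be obtained from the fluorescence images is shown below: bright spots in each frame are thresholded and localized by an intensity-weighted centroid, and the localizations are accumulated into point cloud data. Actual single-molecule localization processing for STORM (e.g., sub-pixel Gaussian fitting) is considerably more involved; the function names and the fixed threshold are illustrative assumptions.

```python
import numpy as np
from scipy import ndimage

def localize_spots(image, threshold):
    """Return (y, x) centroids of bright connected regions in one fluorescence frame."""
    mask = image > threshold
    labels, n = ndimage.label(mask)
    # intensity-weighted centroid of each labelled spot
    return np.array(ndimage.center_of_mass(image, labels, list(range(1, n + 1))))

def build_point_cloud(frames, threshold):
    """Accumulate the localizations over many frames into 2-D point cloud data."""
    per_frame = [localize_spots(f, threshold) for f in frames]
    per_frame = [p for p in per_frame if p.size]
    return np.vstack(per_frame) if per_frame else np.empty((0, 2))
```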
 FIG. 25 is a diagram showing the microscope main body according to the embodiment. The microscope main body 51 can be used both for fluorescence observation of a sample labeled with one kind of fluorescent substance and for fluorescence observation of a sample labeled with two or more kinds of fluorescent substances. In the present embodiment, it is assumed that one kind of fluorescent substance (e.g., a reporter dye) is used for labeling. The microscope 50 can also generate a three-dimensional super-resolution image. For example, the microscope 50 has a mode for generating a two-dimensional super-resolution image and a mode for generating a three-dimensional super-resolution image, and can switch between the two modes.
 The sample may contain living cells (live cells), may contain cells fixed using a tissue fixing solution such as a formaldehyde solution, or may be a tissue or the like. The fluorescent substance may be a fluorescent dye such as a cyanine dye, or may be a fluorescent protein. The fluorescent dye includes a reporter dye that emits fluorescence when it receives excitation light in an activated state (hereinafter referred to as the activated state). The fluorescent dye may also include an activator dye that receives activation light and puts the reporter dye into the activated state. When the fluorescent dye does not include an activator dye, the reporter dye itself receives the activation light and enters the activated state. The fluorescent dye is, for example, a dye pair in which two kinds of cyanine dyes are combined (e.g., a Cy3-Cy5 dye pair (Cy3 and Cy5 are registered trademarks), a Cy2-Cy5 dye pair (Cy2 and Cy5 are registered trademarks), or a Cy3-Alexa Fluor 647 dye pair (Cy3 and Alexa Fluor are registered trademarks)), or a single dye (e.g., Alexa Fluor 647 (Alexa Fluor is a registered trademark)). The fluorescent protein is, for example, PA-GFP or Dronpa.
 The microscope main body 51 includes a stage 102, a light source device 103, an illumination optical system 104, a first observation optical system 105, an imaging unit 106, the image processing unit 54, and the control device 52. The control device 52 includes the control unit 53, which comprehensively controls each unit of the microscope main body 51. The image processing unit 54 is provided, for example, in the control device 52.
 The stage 102 holds the sample W to be observed. The stage 102 can, for example, hold the sample W placed on its upper surface. The stage 102 may have a mechanism for moving the sample W, like an XY stage, or may have no mechanism for moving the sample W, like a simple table. The microscope main body 51 need not include the stage 102.
 The light source device 103 includes an activation light source 110a, an excitation light source 110b, a shutter 111a, and a shutter 111b. The activation light source 110a emits activation light L that activates a part of the fluorescent substance contained in the sample W. Here, it is assumed that the fluorescent substance contains a reporter dye and does not contain an activator dye. The reporter dye of the fluorescent substance enters the activated state, in which it can emit fluorescence, when irradiated with the activation light L. The fluorescent substance may contain both a reporter dye and an activator dye; in this case, the activator dye puts the reporter dye into the activated state when it receives the activation light L. The fluorescent substance may also be a fluorescent protein such as PA-GFP or Dronpa.
 The excitation light source 110b emits excitation light L1 that excites at least a part of the fluorescent substance activated in the sample W. When irradiated with the excitation light L1 in the activated state, the fluorescent substance either emits fluorescence or is inactivated. When irradiated with the activation light L in the inactivated state (hereinafter referred to as the inactivated state), the fluorescent substance again enters the activated state.
 The activation light source 110a and the excitation light source 110b each include a solid-state light source such as a laser light source, and each emit laser light of a wavelength corresponding to the kind of fluorescent substance. The emission wavelength of the activation light source 110a and the emission wavelength of the excitation light source 110b are selected, for example, from about 405 nm, about 457 nm, about 488 nm, about 532 nm, about 561 nm, about 640 nm, and about 647 nm. Here, it is assumed that the emission wavelength of the activation light source 110a is about 405 nm and that the emission wavelength of the excitation light source 110b is a wavelength selected from about 488 nm, about 561 nm, and about 647 nm.
 The shutter 111a is controlled by the control unit 53 and can switch between a state in which the activation light L from the activation light source 110a is allowed to pass and a state in which the activation light L is blocked. The shutter 111b is controlled by the control unit 53 and can switch between a state in which the excitation light L1 from the excitation light source 110b is allowed to pass and a state in which the excitation light L1 is blocked.
 The light source device 103 also includes a mirror 112, a dichroic mirror 113, an acousto-optic element 114, and a lens 115. The mirror 112 is provided, for example, on the emission side of the excitation light source 110b. The excitation light L1 from the excitation light source 110b is reflected by the mirror 112 and enters the dichroic mirror 113. The dichroic mirror 113 is provided, for example, on the emission side of the activation light source 110a, and has the characteristic of transmitting the activation light L and reflecting the excitation light L1. The activation light L transmitted through the dichroic mirror 113 and the excitation light L1 reflected by the dichroic mirror 113 enter the acousto-optic element 114 through the same optical path.
 The acousto-optic element 114 is, for example, an acousto-optic filter. The acousto-optic element 114 is controlled by the control unit 53 and can adjust the light intensity of the activation light L and the light intensity of the excitation light L1. The acousto-optic element 114 is also controlled by the control unit 53 so that, for each of the activation light L and the excitation light L1, it can switch between a state in which the light passes through the acousto-optic element 114 (hereinafter referred to as the light-passing state) and a state in which the light is blocked by the acousto-optic element 114 or its intensity is reduced (hereinafter referred to as the light-blocking state). For example, when the fluorescent substance contains a reporter dye and does not contain an activator dye, the control unit 53 controls the acousto-optic element 114 so that the activation light L and the excitation light L1 are emitted simultaneously. When the fluorescent substance contains a reporter dye and an activator dye, the control unit 53 controls the acousto-optic element 114 so that, for example, the excitation light L1 is emitted after the activation light L is emitted. The lens 115 is, for example, a coupler, and condenses the activation light L and the excitation light L1 from the acousto-optic element 114 onto the light guide member 116.
 The microscope main body 51 need not include at least part of the light source device 103. For example, the light source device 103 may be unitized and provided so as to be exchangeable (attachable and detachable) with respect to the microscope main body 51. For example, the light source device 103 may be attached to the microscope main body 51 at the time of observation with the microscope 50.
 The illumination optical system 104 irradiates the sample W with the activation light L, which activates a part of the fluorescent substance contained in the sample W, and the excitation light L1, which excites at least a part of the activated fluorescent substance. The illumination optical system 104 irradiates the sample W with the activation light L and the excitation light L1 from the light source device 103. The illumination optical system 104 includes a light guide member 116, a lens 117, a lens 118, a filter 119, a dichroic mirror 120, and an objective lens 121.
 The light guide member 116 is, for example, an optical fiber, and guides the activation light L and the excitation light L1 to the lens 117. In FIG. 25 and the like, the optical path from the exit end of the light guide member 116 to the sample W is shown by a dotted line. The lens 117 is, for example, a collimator, and converts the activation light L and the excitation light L1 into parallel light. The lens 118 condenses, for example, the activation light L and the excitation light L1 at the position of the pupil plane of the objective lens 121. The filter 119 has, for example, the characteristic of transmitting the activation light L and the excitation light L1 and blocking at least part of light of other wavelengths. The dichroic mirror 120 has the characteristic of reflecting the activation light L and the excitation light L1 and transmitting light in a predetermined wavelength band (e.g., fluorescence) out of the light from the sample W. The light from the filter 119 is reflected by the dichroic mirror 120 and enters the objective lens 121. The sample W is placed on the front focal plane of the objective lens 121 during observation.
 The activation light L and the excitation light L1 are irradiated onto the sample W by the illumination optical system 104 described above. The illumination optical system 104 described above is an example and can be changed as appropriate. For example, a part of the illumination optical system 104 described above may be omitted. The illumination optical system 104 may include at least a part of the light source device 103. The illumination optical system 104 may also include an aperture stop, an illumination field stop, and the like.
 The first observation optical system 105 forms an image of light from the sample W. Here, the first observation optical system 105 forms an image of fluorescence from the fluorescent substance contained in the sample W. The first observation optical system 105 includes the objective lens 121, the dichroic mirror 120, a filter 124, a lens 125, an optical path switching member 126, a lens 127, and a lens 128. The first observation optical system 105 shares the objective lens 121 and the dichroic mirror 120 with the illumination optical system 104. In FIG. 1 and other figures, the optical path between the sample W and the imaging unit 106 is indicated by a solid line.
 The fluorescence from the sample W enters the filter 124 through the objective lens 121 and the dichroic mirror 120. The filter 124 has a characteristic of selectively passing light in a predetermined wavelength band out of the light from the sample W. The filter 124 blocks, for example, illumination light reflected by the sample W, external light, and stray light. The filter 124 is unitized with, for example, the filter 119 and the dichroic mirror 120, and this filter unit 23 is provided so as to be exchangeable. The filter unit 23 may be exchanged according to, for example, the wavelength of the light emitted from the light source device 103 (e.g., the wavelength of the activation light L, the wavelength of the excitation light L1) and the wavelength of the fluorescence emitted from the sample W, or a single filter unit corresponding to a plurality of excitation and fluorescence wavelengths may be used.
 The light that has passed through the filter 124 enters the optical path switching member 126 via the lens 125. The light emitted from the lens 125 passes through the optical path switching member 126 and then forms an intermediate image on the intermediate image plane 105b. The optical path switching member 126 is, for example, a prism, and is provided so that it can be inserted into and removed from the optical path of the first observation optical system 105. The optical path switching member 126 is inserted into and removed from the optical path of the first observation optical system 105 by, for example, a drive unit (not shown) controlled by the control unit 53. When inserted in the optical path of the first observation optical system 105, the optical path switching member 126 guides the fluorescence from the sample W, by internal reflection, to the optical path toward the imaging unit 106.
 The lens 127 converts the fluorescence emitted from the intermediate image (the fluorescence that has passed through the intermediate image plane 105b) into parallel light, and the lens 128 condenses the light that has passed through the lens 127. The first observation optical system 105 includes an astigmatism optical system (e.g., a cylindrical lens 129). The cylindrical lens 129 acts on at least a part of the fluorescence from the sample W and generates astigmatism in at least a part of the fluorescence. That is, the astigmatism optical system such as the cylindrical lens 129 introduces astigmatism, and thereby an astigmatic difference, into at least a part of the fluorescence. This astigmatism is used to calculate the position of the fluorescent substance in the depth direction of the sample W (the optical axis direction of the objective lens 121).
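 One common way to use such astigmatism for depth estimation, not necessarily the method of the present embodiment, is to measure how elongated each fluorescence spot is along x and y and to map the width ratio to an axial position through a previously measured calibration curve. The sketch below assumes NumPy and an entirely hypothetical calibration table; spot detection and the calibration measurement themselves are outside the sketch.

```python
import numpy as np

def spot_widths(roi):
    """Estimate the x and y widths (standard deviations) of one
    fluorescence spot from its intensity-weighted second moments.
    `roi` is a small 2-D array cropped around the spot."""
    roi = roi.astype(float) - roi.min()
    total = roi.sum()
    ys, xs = np.indices(roi.shape)
    cx = (xs * roi).sum() / total
    cy = (ys * roi).sum() / total
    wx = np.sqrt(((xs - cx) ** 2 * roi).sum() / total)
    wy = np.sqrt(((ys - cy) ** 2 * roi).sum() / total)
    return wx, wy

def z_from_astigmatism(wx, wy, calib_z, calib_ratio):
    """Map the width ratio wx/wy to an axial position by interpolating a
    measured calibration curve (calib_z: known z positions, calib_ratio:
    wx/wy measured at those positions)."""
    ratio = wx / wy
    order = np.argsort(calib_ratio)
    return np.interp(ratio, np.asarray(calib_ratio)[order],
                     np.asarray(calib_z)[order])

# Example with a synthetic elongated spot and a made-up calibration curve.
yy, xx = np.mgrid[0:15, 0:15]
spot = np.exp(-((xx - 7) ** 2 / (2 * 3.0 ** 2) + (yy - 7) ** 2 / (2 * 1.5 ** 2)))
wx, wy = spot_widths(spot)
calib_z = [-400, -200, 0, 200, 400]        # nm, hypothetical
calib_ratio = [0.5, 0.7, 1.0, 1.4, 2.0]    # wx/wy, hypothetical
print(f"z ~ {z_from_astigmatism(wx, wy, calib_z, calib_ratio):.0f} nm (hypothetical calibration)")
```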
 The cylindrical lens 129 is provided so that it can be inserted into and removed from the optical path between the sample W and the imaging unit 106 (e.g., the imaging element 140). For example, the cylindrical lens 129 can be inserted into and removed from the optical path between the lens 127 and the lens 128. The cylindrical lens 129 is disposed in this optical path in the mode that generates a three-dimensional super-resolution image and is retracted from this optical path in the mode that generates a two-dimensional super-resolution image.
 In the present embodiment, the microscope main body 51 includes a second observation optical system 130. The second observation optical system 130 is used, for example, for setting the observation range. The second observation optical system 130 includes, in order from the sample W toward the observer's viewpoint Vp, the objective lens 121, the dichroic mirror 120, the filter 124, the lens 125, a mirror 131, a lens 132, a mirror 133, a lens 134, a lens 135, a mirror 136, and a lens 137.
 The second observation optical system 130 shares the configuration from the objective lens 121 to the lens 125 with the first observation optical system 105. The light from the sample W passes through the lens 125 and then enters the mirror 131 when the optical path switching member 126 is retracted from the optical path of the first observation optical system 105. The light reflected by the mirror 131 enters the mirror 133 via the lens 132 and, after being reflected by the mirror 133, enters the mirror 136 via the lens 134 and the lens 135. The light reflected by the mirror 136 enters the viewpoint Vp via the lens 137. The second observation optical system 130 forms, for example, an intermediate image of the sample W in the optical path between the lens 135 and the lens 137. The lens 137 is, for example, an eyepiece, and the observer can set the observation range and the like by observing this intermediate image.
 The imaging unit 106 captures the image formed by the first observation optical system 105. The imaging unit 106 includes an imaging element 140 and a control unit 141. The imaging element 140 is, for example, a CMOS image sensor, but may be a CCD image sensor or the like. The imaging element 140 has, for example, a plurality of two-dimensionally arranged pixels, with a photoelectric conversion element such as a photodiode disposed in each pixel. The imaging element 140 reads out, for example, the charge accumulated in the photoelectric conversion elements with a readout circuit, converts the read-out charge into digital data, and outputs data in digital form (e.g., image data) in which pixel positions and gradation values are associated with each other. The control unit 141 operates the imaging element 140 based on a control signal input from the control unit 53 of the control device 52 and outputs captured-image data to the control device 52. The control unit 141 also outputs the charge accumulation period and the charge readout period to the control device 52.
 The control device 52 includes a control unit 53 that collectively controls each part of the microscope main body 51. Based on the signal indicating the charge accumulation period and the charge readout period (imaging timing information) supplied from the control unit 141, the control unit 53 supplies the acousto-optic element 114 with a control signal for switching between the light-passing state, in which the light from the light source device 103 passes, and the light-blocking state, in which the light from the light source device 103 is blocked. The acousto-optic element 114 switches between the light-passing state and the light-blocking state based on this control signal. The control unit 53 controls the acousto-optic element 114 to control the period during which the sample W is irradiated with the activation light L and the period during which it is not. The control unit 53 likewise controls the acousto-optic element 114 to control the period during which the sample W is irradiated with the excitation light L1 and the period during which it is not. The control unit 53 also controls the acousto-optic element 114 to control the light intensity of the activation light L and the light intensity of the excitation light L1 irradiated onto the sample W.
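 As an illustration of this synchronization, the sketch below gates the excitation channel from the camera's timing information: the channel is put in the light-passing state while charge is being accumulated and in the light-blocking state during readout. The event format and the callback are assumptions made for the example, not the interface of the control unit 53 or of the imaging element 140.

```python
def gate_excitation(timing_events, set_excitation):
    """Drive the light-passing / light-blocking state of the excitation
    channel from imaging timing information.  `timing_events` is an
    assumed sequence of ("accumulate" | "readout", duration_s) tuples;
    `set_excitation(True/False)` would switch the acousto-optic element
    in a real controller."""
    for phase, duration_s in timing_events:
        set_excitation(phase == "accumulate")
        # A real controller would hold this state for `duration_s` seconds.

events = [("accumulate", 0.010), ("readout", 0.005)] * 3
gate_excitation(events, lambda on: print("excitation", "passing" if on else "blocked"))
```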
 Note that, instead of the control unit 53, the control unit 141 may control the acousto-optic element 114 by supplying it with a control signal for switching between the light-blocking state and the light-passing state based on the signal indicating the charge accumulation period and the charge readout period (imaging timing information).
 The control unit 53 controls the imaging unit 106 and causes the imaging element 140 to perform imaging. The control unit 53 acquires the imaging result (captured-image data) from the imaging unit 106. The image processing unit 54 calculates position information of the fluorescent substance for each fluorescence image by calculating the centroid of the fluorescence image appearing in the captured image, and generates the point cloud data DG using the plurality of calculated pieces of position information. In the case of two-dimensional STORM, the image processing unit 54 calculates two-dimensional position information of the fluorescent substance and generates point cloud data DG including a plurality of pieces of two-dimensional data. In the case of three-dimensional STORM, the image processing unit 54 calculates three-dimensional position information of the fluorescent substance and generates point cloud data DG including a plurality of pieces of three-dimensional data.
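 A minimal sketch of this centroid-based localization is shown below, assuming the fluorescence spots have already been detected and cropped into small regions of interest; the detection step and any sub-pixel refinement are omitted, so this only illustrates how a point cloud can be accumulated from many captured frames and is not the disclosed implementation of the image processing unit 54.

```python
import numpy as np

def localize_centroid(roi, origin=(0, 0)):
    """Intensity-weighted centroid (x, y) of one fluorescence spot.
    `roi` is a small 2-D crop around the spot; `origin` is the (x0, y0)
    pixel position of the crop within the full frame."""
    roi = roi.astype(float) - roi.min()
    ys, xs = np.indices(roi.shape)
    total = roi.sum()
    return (origin[0] + (xs * roi).sum() / total,
            origin[1] + (ys * roi).sum() / total)

def build_point_cloud(rois_per_frame):
    """Accumulate localizations from many captured frames into 2-D point
    cloud data; each element of `rois_per_frame` is a list of
    (crop, origin) pairs for the spots detected in one frame."""
    points = [localize_centroid(roi, origin)
              for frame in rois_per_frame
              for roi, origin in frame]
    return np.array(points)          # one row per localized fluorophore

# Tiny example: one frame with a single synthetic spot near pixel (12, 8).
yy, xx = np.mgrid[0:9, 0:9]
spot = np.exp(-((xx - 4.3) ** 2 + (yy - 3.6) ** 2) / 4.0)
print(build_point_cloud([[(spot, (8, 5))]]))   # roughly [[12.3, 8.6]]
```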
 The image processing unit 54 outputs the point cloud data DG to the information processing apparatus 1 shown in FIG. 24. The information processing apparatus 1 processes the point cloud data DG obtained from the detection result of the microscope main body 51. Alternatively, the control device 52 may acquire the imaging result (captured-image data) from the imaging unit 106 and output the acquired imaging result to the information processing apparatus 1, and the information processing apparatus 1 may generate the point cloud data DG. In this case, the information processing apparatus 1 calculates position information of the fluorescent substance for each fluorescence image and generates the point cloud data DG using the plurality of calculated pieces of position information. The information processing apparatus 1 also generates a point cloud image representing the point cloud data DG. In the case of two-dimensional STORM, the information processing apparatus 1 calculates two-dimensional position information of the fluorescent substance and generates point cloud data DG including a plurality of pieces of two-dimensional data. In the case of three-dimensional STORM, the information processing apparatus 1 calculates three-dimensional position information of the fluorescent substance and generates point cloud data DG including a plurality of pieces of three-dimensional data.
 The observation method according to the present embodiment includes: detecting a sample; displaying, on the display unit, a point cloud image obtained by detecting the sample; acquiring input information input by the input unit; extracting a part of the point cloud from the point cloud included in the point cloud image based on the input information; and displaying, on the display unit, an extracted point cloud image based on the extracted part of the point cloud. For example, the control device 52 controls the microscope main body 51 so that the microscope main body 51 detects the sample W by detecting the image of fluorescence emitted from the sample containing the fluorescent substance. The control device 52 also controls the information processing apparatus 1 so that the output control unit 12 outputs the GUI screen W to the display device 2. The control device 52 also controls the information processing apparatus 1 so that the output control unit 12 outputs the point cloud image P1 to the GUI screen W. The control device 52 also controls the information processing apparatus 1 so that the input control unit 11 acquires the input information that the user inputs using the GUI screen W. The control device 52 also controls the information processing apparatus 1 so that the input control unit 11 specifies the distribution designated by the input information. The control device 52 further controls the information processing apparatus 1 so that, based on the distribution specified by the input control unit 11, the processing unit 7 extracts a point set from the point cloud data DG including the plurality of pieces of N-dimensional data D1.
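 As one way to picture the extraction step in this observation method, the sketch below clusters the point cloud and keeps the clusters whose simple shape features resemble those of a user-designated reference point cloud. DBSCAN from scikit-learn and the two hand-picked features (point count and radius of gyration) are choices made for the example only; the embodiment does not prescribe these particular algorithms or thresholds.

```python
import numpy as np
from sklearn.cluster import DBSCAN

def cluster_features(points):
    """Crude per-cluster features: number of points and radius of gyration."""
    center = points.mean(axis=0)
    radius = np.sqrt(((points - center) ** 2).sum(axis=1).mean())
    return np.array([len(points), radius])

def extract_similar_clusters(point_cloud, reference_points, eps=30.0,
                             min_samples=10, tolerance=0.5):
    """Split `point_cloud` (an (n, N) array) into clusters and return the
    points of every cluster whose features are within `tolerance` (relative
    difference) of the features of the user-designated `reference_points`."""
    labels = DBSCAN(eps=eps, min_samples=min_samples).fit_predict(point_cloud)
    ref = cluster_features(reference_points)
    kept = []
    for label in set(labels) - {-1}:                 # -1 marks noise points
        cluster = point_cloud[labels == label]
        feat = cluster_features(cluster)
        if np.all(np.abs(feat - ref) / ref <= tolerance):
            kept.append(cluster)
    return np.vstack(kept) if kept else np.empty((0, point_cloud.shape[1]))
```

 In this sketch a cluster is kept when both of its features lie within 50 percent of the reference values; an implementation could instead rank clusters by a learned similarity score, in line with the machine-learning variant described in this document.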
 In the present embodiment, the control device 52 includes, for example, a computer system. The control device 52 reads an observation program stored in a storage unit (storage device) and executes various kinds of processing in accordance with this program. This observation program causes a computer to execute: control for detecting a sample; displaying, on the display unit, a point cloud image obtained by detecting the sample; acquiring input information input by the input unit; extracting a part of the point cloud from the point cloud included in the point cloud image based on the input information; and displaying, on the display unit, an extracted point cloud image based on the extracted part of the point cloud. This observation program may be provided recorded on a computer-readable storage medium (e.g., a non-transitory tangible medium). At least a part of the control device 52 may be provided in the information processing apparatus 1. For example, the information processing apparatus 1 may be an aspect in which a computer executes various kinds of processing in accordance with an information processing program, and at least a part of the control device 52 may be an aspect in which the same computer as the information processing apparatus 1 executes various kinds of processing in accordance with the observation program.
 Note that the technical scope of the present invention is not limited to the aspects described in the above embodiments and the like. One or more of the requirements described in the above embodiments and the like may be omitted. The requirements described in the above embodiments and the like may be combined as appropriate. To the extent permitted by law, the disclosures of all documents cited in the above embodiments and the like are incorporated by reference as a part of this description.
DESCRIPTION OF SYMBOLS: 1 ... information processing apparatus, 7 ... processing unit, 8 ... storage unit, 11 ... input control unit, 12 ... output control unit, 13 ... clustering unit, 15 ... machine learning unit, 16 ... surface generation unit, 17 ... calculation unit, 50 ... microscope, 51 ... microscope main body, W ... GUI screen

Claims (21)

  1.  An information processing apparatus comprising:
      a display control unit that displays a point cloud image on a display unit;
      an input information acquisition unit that acquires input information input by an input unit; and
      a processing unit that extracts a part of the point cloud from the point cloud included in the point cloud image based on the input information acquired by the input information acquisition unit,
      wherein the display control unit causes the display unit to display an extracted point cloud image based on the part of the point cloud extracted by the processing unit.
  2.  The information processing apparatus according to claim 1, wherein the input information is information on a point cloud designated in the point cloud image.
  3.  The information processing apparatus according to claim 2, wherein the processing unit divides the point cloud included in the point cloud image into a plurality of point clouds, calculates a similarity between each of the divided point clouds and the designated point cloud, and extracts the part of the point cloud based on the similarity.
  4.  The information processing apparatus according to claim 1, wherein the input information is information on a geometric shape.
  5.  The information processing apparatus according to claim 4, wherein the processing unit divides the point cloud included in the point cloud image into a plurality of point clouds and extracts the part of the point cloud based on feature amounts of the divided point clouds and a feature amount of the geometric shape.
  6.  The information processing apparatus according to claim 1, wherein the input information is a type of structure.
  7.  The information processing apparatus according to claim 6, wherein the processing unit divides the point cloud included in the point cloud image into a plurality of point clouds and extracts the part of the point cloud based on feature amounts of the divided point clouds and a feature amount of the structure.
  8.  The information processing apparatus according to any one of claims 1 to 7, wherein the processing unit divides the point cloud included in the point cloud image into a plurality of point clouds and extracts the part of the point cloud based on the size of a shape represented by each of the divided point clouds and a size designated by the input information.
  9.  The information processing apparatus according to any one of claims 1 to 8, wherein the display control unit displays, in the extracted point cloud image, the part of the point cloud extracted by the processing unit such that one or both of its color and brightness differ from those in the point cloud image.
  10.  The information processing apparatus according to any one of claims 1 to 9, wherein the display control unit displays only the part of the point cloud extracted by the processing unit in the extracted point cloud image.
  11.  The information processing apparatus according to any one of claims 1 to 10, further comprising a machine learning unit that generates, by machine learning based on the input information acquired by the input information acquisition unit, an index used when the processing unit extracts the part of the point cloud.
  12.  The information processing apparatus according to claim 11, wherein the input information acquisition unit acquires, as the input information, teacher data used by the machine learning unit.
  13.  The information processing apparatus according to claim 12, wherein the teacher data includes information defining the part of the point cloud to be extracted by the processing unit.
  14.  The information processing apparatus according to claim 12 or 13, wherein the teacher data includes information defining a point cloud to be excluded from extraction by the processing unit.
  15.  The information processing apparatus according to any one of claims 1 to 14, further comprising a calculation unit that performs a calculation using the part of the point cloud extracted by the processing unit.
  16.  The information processing apparatus according to claim 15, wherein the calculation unit calculates one or both of a surface area and a volume of a shape represented by the part of the point cloud extracted by the processing unit.
  17.  The information processing apparatus according to claim 15 or 16, wherein the calculation unit calculates the number of the part of the point cloud extracted by the processing unit.
  18.  The information processing apparatus according to any one of claims 15 to 17, wherein the display control unit displays a calculation result of the calculation unit on the display unit.
  19.  A microscope comprising:
      the information processing apparatus according to any one of claims 1 to 18;
      an optical system that illuminates activation light for activating a part of a fluorescent substance contained in a sample;
      an illumination optical system that illuminates excitation light for exciting at least a part of the activated fluorescent substance;
      an observation optical system that forms an image of light from the sample;
      an imaging unit that captures the image formed by the observation optical system; and
      an image processing unit that calculates position information of the fluorescent substance based on a result of imaging by the imaging unit and generates the point cloud using the calculated position information.
  20.  An information processing method comprising:
      displaying a point cloud image on a display unit;
      acquiring input information input by an input unit;
      extracting a part of the point cloud from the point cloud included in the point cloud image based on the input information; and
      displaying, on the display unit, an extracted point cloud image based on the extracted part of the point cloud.
  21.  An information processing program that causes a computer to execute:
      displaying a point cloud image on a display unit;
      acquiring input information input by an input unit;
      extracting a part of the point cloud from the point cloud included in the point cloud image based on the input information; and
      displaying, on the display unit, an extracted point cloud image based on the extracted part of the point cloud.
PCT/JP2018/020864 2018-05-30 2018-05-30 Information processing device, information processing method, information processing program, and microscope WO2019229912A1 (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
PCT/JP2018/020864 WO2019229912A1 (en) 2018-05-30 2018-05-30 Information processing device, information processing method, information processing program, and microscope

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
PCT/JP2018/020864 WO2019229912A1 (en) 2018-05-30 2018-05-30 Information processing device, information processing method, information processing program, and microscope

Publications (1)

Publication Number Publication Date
WO2019229912A1 true WO2019229912A1 (en) 2019-12-05

Family

ID=68698049

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/JP2018/020864 WO2019229912A1 (en) 2018-05-30 2018-05-30 Information processing device, information processing method, information processing program, and microscope

Country Status (1)

Country Link
WO (1) WO2019229912A1 (en)

Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2014109555A (en) * 2012-12-04 2014-06-12 Nippon Telegr & Teleph Corp <Ntt> Point group analysis processing apparatus, point group analysis processing method and program
WO2014155715A1 (en) * 2013-03-29 2014-10-02 株式会社日立製作所 Object recognition device, object recognition method, and program
JP2016118502A (en) * 2014-12-22 2016-06-30 日本電信電話株式会社 Point group analysis processor, method, and program
US20170251191A1 (en) * 2016-02-26 2017-08-31 Yale University Systems, methods, and computer-readable media for ultra-high resolution 3d imaging of whole cells

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
KAWATA, Y. ET AL.: "A GUI visualization system for airborne LiDAR image data to reconstruct 3D city model", PROC. SPIE, vol. 9643, 15 October 2015 (2015-10-15), XP060062245, DOI: 10.1117/12.2193067 *

Cited By (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US11727626B2 (en) 2019-01-22 2023-08-15 Fyusion, Inc. Damage detection from multi-view visual data
US11748907B2 (en) 2019-01-22 2023-09-05 Fyusion, Inc. Object pose estimation in visual data
US11783443B2 (en) 2019-01-22 2023-10-10 Fyusion, Inc. Extraction of standardized images from a single view or multi-view capture
US11562474B2 (en) 2020-01-16 2023-01-24 Fyusion, Inc. Mobile multi-camera multi-view capture
US11776142B2 (en) 2020-01-16 2023-10-03 Fyusion, Inc. Structuring visual data
US20220284544A1 (en) * 2021-03-02 2022-09-08 Fyusion, Inc. Vehicle undercarriage imaging
US11605151B2 (en) * 2021-03-02 2023-03-14 Fyusion, Inc. Vehicle undercarriage imaging
US11893707B2 (en) 2021-03-02 2024-02-06 Fyusion, Inc. Vehicle undercarriage imaging
WO2023032086A1 (en) * 2021-09-01 2023-03-09 株式会社Fuji Machine tool
US11972556B2 (en) 2022-12-19 2024-04-30 Fyusion, Inc. Mobile multi-camera multi-view capture

Similar Documents

Publication Publication Date Title
WO2019229912A1 (en) Information processing device, information processing method, information processing program, and microscope
JP6947841B2 (en) Augmented reality microscope for pathology
JP2021515240A (en) Augmented reality microscope for pathology with overlay of quantitative biomarker data
WO2017150194A1 (en) Image processing device, image processing method, and program
CN112106107A (en) Focus weighted machine learning classifier error prediction for microscope slice images
KR20220012214A (en) Artificial Intelligence Processing Systems and Automated Pre-Diagnostic Workflows for Digital Pathology
JP7176697B2 (en) Cell evaluation system and method, cell evaluation program
US10330912B2 (en) Image processing device and image processing method
JP2009512927A (en) Image processing method
CA3002902C (en) Systems and methods of unmixing images with varying acquisition properties
KR102580984B1 (en) Image processing method, program and recording medium
JP4997255B2 (en) Cell image analyzer
CN108475429A (en) The system and method for the segmentation of three-dimensional MIcrosope image
US10921252B2 (en) Image processing apparatus and method of operating image processing apparatus
CN108604375B (en) System and method for image analysis of multi-dimensional data
Mickler et al. Drop swarm analysis in dispersions with incident-light and transmitted-light illumination
JP2013109119A (en) Microscope controller and program
JP4271054B2 (en) Cell image analyzer
JP2014063019A (en) Image capturing and analyzing device, method of controlling the same, and program for the same
US7221784B2 (en) Method and arrangement for microscopy
US10690902B2 (en) Image processing device and microscope system
US11428920B2 (en) Information processing device, information processing method, information processing program, and microscope for displaying a plurality of surface images
US20220050996A1 (en) Augmented digital microscopy for lesion analysis
Waithe et al. Object detection networks and augmented reality for cellular detection in fluorescence microscopy acquisition and analysis
Grimes Image processing and analysis methods in quantitative endothelial cell biology

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 18920906

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 18920906

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: JP