WO2022244172A1 - Information processing device, information processing system, information processing method, and information processing program - Google Patents

Information processing device, information processing system, information processing method, and information processing program

Info

Publication number
WO2022244172A1
Authority
WO
WIPO (PCT)
Prior art keywords
point group
detecting
information
image
information processing
Prior art date
Application number
PCT/JP2021/019117
Other languages
French (fr)
Japanese (ja)
Inventor
壮平 大澤
Original Assignee
Mitsubishi Electric Corporation
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Mitsubishi Electric Corporation
Priority to PCT/JP2021/019117 priority Critical patent/WO2022244172A1/en
Priority to JP2023519163A priority patent/JP7330420B2/en
Priority to TW110140038A priority patent/TW202247094A/en
Publication of WO2022244172A1 publication Critical patent/WO2022244172A1/en

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T19/00 Manipulating 3D models or images for computer graphics

Definitions

  • the present disclosure relates to an information processing device, an information processing system, an information processing method, and an information processing program.
  • AR (Augmented Reality) technology is known.
  • a point group based on an image obtained by imaging an object is matched with a previously acquired point group.
  • Matching involves global registration and local registration.
  • Non-Patent Document 1 describes global registration.
  • A method using the technology of Non-Patent Document 1 is conceivable for the global registration stage of matching.
  • the target object itself does not move, but objects that exist in a factory or similar environment may be moved. For example, objects existing around the target object (e.g., shelves and desks) are moved. Also, for example, objects are moved by the target object itself: if the target object is a belt conveyor, the belt conveyor moves the objects placed on it. Objects in such an environment therefore do not stay at fixed positions, and a method based on the technology of Non-Patent Document 1, which is effective when the variation is small, may not be desirable when the variation is large.
  • the purpose of this disclosure is to perform appropriate registration even for information based on an environment that fluctuates greatly.
  • the information processing device has an acquisition unit that acquires position information indicating a first position, point cloud information, an image obtained by imaging an object, and specified position information indicating a specified position specified by a user in the image;
  • a detection unit that detects a plurality of feature points of the object as a first point group based on the image, detects a plurality of edges of the object based on the first point group, detects, from among the detected edges, a first edge existing within a preset range from the specified position, detects a plurality of edges of the object based on the point cloud information indicating a second point group that is a plurality of feature points of the object, and detects, from among the detected edges, a second edge existing within the range from the first position; and a processing execution unit that executes a first process of aligning the specified position with the first position, executes a second process of aligning the first edge with the second edge, and aligns the first point group with the second point group by executing the first process and the second process.
  • FIG. 1 is a diagram showing an overview of matching according to Embodiment 1;
  • FIG. 2 is a diagram showing hardware included in the terminal device according to Embodiment 1;
  • FIG. 3 is a block diagram showing functions of the terminal device according to Embodiment 1;
  • FIG. 4 is a flowchart showing an example of processing executed by the terminal device according to Embodiment 1;
  • FIG. 5 is a diagram showing an example of global registration according to Embodiment 1;
  • FIG. 6 is a block diagram showing functions of the terminal device according to Embodiment 2;
  • FIG. 7 is a diagram showing a specific example of detection processing according to Embodiment 2;
  • FIG. 8 is a flowchart showing an example of processing executed by the terminal device according to Embodiment 2;
  • FIG. 9 is a block diagram showing functions of the terminal device according to Embodiment 3;
  • FIG. 10 is a diagram showing an example of detection processing according to Embodiment 3;
  • FIG. 11 is a flowchart showing an example of processing executed by the terminal device according to Embodiment 3;
  • FIG. 12 is a flowchart showing an example of processing executed by the terminal device according to a modification of Embodiment 3;
  • FIG. 13 is a block diagram showing functions of the terminal device according to Embodiment 4;
  • FIG. 14 is a diagram showing an example of detection processing according to Embodiment 4;
  • FIG. 15 is a flowchart showing an example of processing executed by the terminal device according to Embodiment 4;
  • FIG. 16 is a diagram showing an example of an information processing system according to Embodiment 5;
  • FIG. 1 is a diagram showing an overview of matching according to the first embodiment.
  • FIG. 1 shows a terminal device 100 and a belt conveyor 200.
  • the terminal device 100 is a smart phone, a tablet terminal, or the like.
  • the terminal device 100 is also called an information processing device.
  • the terminal device 100 is a device that executes an information processing method.
  • the terminal device 100 is a device used by a user.
  • the belt conveyor 200 is an example of an object. Objects present on the belt conveyor 200 are moved. That is, the objects existing on the belt conveyor 200 do not exist at fixed positions.
  • a user uses the terminal device 100 to capture an image of the belt conveyor 200, which is an object, in the factory.
  • a point group based on an image obtained by imaging the belt conveyor 200 is matched with a previously obtained point group. Matching involves global registration and local registration.
  • a point group is a collection of a plurality of feature points. For example, if the matching is successful, the CG (Computer Graphics) of the belt conveyor 200 is displayed on the display of the terminal device 100 .
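  • As background, the two-stage matching described above (global registration followed by local registration) can be sketched as follows. This is a minimal illustration using the open-source Open3D library, which this publication does not name; the feature-based global registration shown here corresponds to the conventional approach that the embodiments below replace with position and edge based alignment, and all thresholds are illustrative assumptions.

```python
# Hypothetical sketch of conventional global + local registration with Open3D.
import open3d as o3d

def match_point_clouds(captured_path, reference_path, voxel=0.05):
    src = o3d.io.read_point_cloud(captured_path)   # point group from the camera image
    ref = o3d.io.read_point_cloud(reference_path)  # previously acquired point group

    src_d, ref_d = src.voxel_down_sample(voxel), ref.voxel_down_sample(voxel)
    for pc in (src_d, ref_d):
        pc.estimate_normals(o3d.geometry.KDTreeSearchParamHybrid(radius=voxel * 2, max_nn=30))

    def fpfh(pc):
        return o3d.pipelines.registration.compute_fpfh_feature(
            pc, o3d.geometry.KDTreeSearchParamHybrid(radius=voxel * 5, max_nn=100))

    # Global registration: rough alignment from feature correspondences.
    rough = o3d.pipelines.registration.registration_ransac_based_on_feature_matching(
        src_d, ref_d, fpfh(src_d), fpfh(ref_d), True, voxel * 1.5,
        o3d.pipelines.registration.TransformationEstimationPointToPoint(False), 3,
        [o3d.pipelines.registration.CorrespondenceCheckerBasedOnDistance(voxel * 1.5)],
        o3d.pipelines.registration.RANSACConvergenceCriteria(100000, 0.999))

    # Local registration: ICP refinement starting from the rough estimate.
    icp = o3d.pipelines.registration.registration_icp(
        src, ref, voxel, rough.transformation,
        o3d.pipelines.registration.TransformationEstimationPointToPoint())
    return icp.transformation, icp.fitness  # fitness: fraction of matched points
```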
  • FIG. 2 illustrates hardware included in the terminal device according to the first embodiment.
  • the terminal device 100 has a processor 101 , a volatile memory device 102 , a nonvolatile memory device 103 , an imaging device 104 and a display 105 .
  • the processor 101 controls the terminal device 100 as a whole.
  • the processor 101 is a CPU (Central Processing Unit), FPGA (Field Programmable Gate Array), or the like.
  • Processor 101 may be a multiprocessor.
  • the terminal device 100 may have a processing circuit.
  • the processing circuit may be a single circuit or multiple circuits.
  • the volatile memory device 102 is the main memory device of the terminal device 100 .
  • the volatile memory device 102 is RAM (Random Access Memory).
  • the nonvolatile storage device 103 is an auxiliary storage device of the terminal device 100 .
  • the nonvolatile memory device 103 is a HDD (Hard Disk Drive) or an SSD (Solid State Drive).
  • the imaging device 104 images an object.
  • imaging device 104 is a camera.
  • a display 105 displays information.
  • FIG. 3 is a block diagram showing functions of the terminal device according to the first embodiment.
  • the terminal device 100 has a storage unit 110 , an acquisition unit 120 , a detection unit 130 and a process execution unit 140 .
  • the storage unit 110 may be implemented as a storage area secured in the volatile storage device 102 or the nonvolatile storage device 103 .
  • a part or all of the acquisition unit 120, the detection unit 130, and the processing execution unit 140 may be realized by a processing circuit. Also, part or all of the acquisition unit 120, the detection unit 130, and the processing execution unit 140 may be implemented as modules of a program executed by the processor 101.
  • For example, a program executed by the processor 101 is also called an information processing program.
  • the information processing program is recorded on a recording medium.
  • the storage unit 110 may store the position information 111 and the point group information 112.
  • the position information 111 is information indicating a position. This position is also referred to as the first position.
  • the positions are two-dimensional coordinates or three-dimensional coordinates.
  • the point cloud information 112 is information indicating a point cloud that is a plurality of feature points of the object.
  • the point group is also called a second point group.
  • Point cloud information 112 may include the position information 111. Therefore, the position indicated by the position information 111 and the point group indicated by the point group information 112 can be expressed in the same coordinate system.
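  • A small sketch of how this stored information could be laid out is shown below, under the assumption that positions are three-dimensional coordinates and the point group is an N x 3 array; the type names are illustrative and not taken from the publication.

```python
# Illustrative data layout; because the point cloud information can embed the
# position information, both are expressed in one shared coordinate system.
from dataclasses import dataclass
import numpy as np

@dataclass
class PositionInfo:         # position information 111 (the first position)
    xyz: np.ndarray         # shape (3,), e.g. the position of an AR object or AR marker

@dataclass
class PointCloudInfo:       # point cloud information 112 (the second point group)
    points: np.ndarray      # shape (N, 3), feature points of the object
    position: PositionInfo  # embedded first position, same coordinate system
```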
  • the position information 111 may be information indicating the position of an AR object. In other words, the position information 111 may be information indicating the position of an AR marker.
  • the acquisition unit 120 acquires the position information 111 and the point group information 112. For example, the acquisition unit 120 acquires the position information 111 and the point group information 112 from the storage unit 110 .
  • the position information 111 and the point cloud information 112 may be stored in an external device (for example, cloud server).
  • the acquisition unit 120 acquires the position information 111 and the point group information 112 from the external device.
  • the object is the belt conveyor 200 .
  • FIG. 4 is a flowchart illustrating an example of processing executed by the terminal device according to Embodiment 1.
  • the acquisition unit 120 acquires an image obtained by imaging the belt conveyor 200 with the imaging device 104 .
  • the image may be a two-dimensional image or a three-dimensional image.
  • the image is displayed on the display 105 .
  • a user taps an arbitrary position.
  • the tapped position is a specified position specified by the user in the image.
  • Information indicating the specified position is called specified position information.
  • the designated position information may be called tap position information. Note that, for example, when the position information 111 indicates the position of the AR object, the user taps the position of the AR object.
  • Step S12 The acquisition unit 120 acquires designated position information.
  • Step S13 Based on the image, the detection unit 130 detects a plurality of characteristic points of the belt conveyor 200 as a point group.
  • the point group is also called a first point group.
  • the detection unit 130 detects a plurality of edges of the belt conveyor 200 based on the detected point group. For example, the detection unit 130 uses the method described in Non-Patent Document 2 to detect the plurality of edges.
  • the detection unit 130 detects one edge existing within a preset range from the specified position indicated by the specified position information from among the plurality of edges. In other words, the detection unit 130 detects one edge existing in the vicinity of the designated position from among the plurality of edges.
  • the edge is also called the first edge.
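  • A minimal sketch of this edge selection is given below; it assumes the detected edges are represented as pairs of 3D endpoints and that the specified position has been expressed in the same coordinates, which the publication does not state explicitly.

```python
# Hypothetical sketch of step S14's edge selection: keep the edge whose distance to
# the specified (tapped) position is smallest and within a preset range.
import numpy as np

def point_to_segment_distance(p, a, b):
    ab, ap = b - a, p - a
    t = np.clip(np.dot(ap, ab) / max(np.dot(ab, ab), 1e-12), 0.0, 1.0)
    return np.linalg.norm(p - (a + t * ab))

def select_nearby_edge(edges, specified_position, preset_range):
    """Return the edge closest to the specified position, or None if none is in range."""
    best, best_dist = None, preset_range
    for a, b in edges:  # each edge as a pair of endpoints
        d = point_to_segment_distance(specified_position, a, b)
        if d <= best_dist:
            best, best_dist = (a, b), d
    return best
```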
  • Step S15 The detection unit 130 detects a plurality of edges of the belt conveyor 200 based on the point group information 112.
  • a method of detecting a plurality of edges is the same as in step S14.
  • the detection unit 130 detects one edge existing within a preset range from the position indicated by the position information 111 from among the plurality of edges. In other words, the detection unit 130 detects one edge existing near the position indicated by the position information 111 . This edge is also called a second edge.
  • Step S16 The process executing unit 140 executes a process of aligning the designated position indicated by the designated position information with the position indicated by the position information 111.
  • This processing is also referred to as first processing.
  • the process executing unit 140 executes a process of aligning the edge detected in step S14 with the edge detected in step S15.
  • This process is also called a second process.
  • the process execution unit 140 performs the first process and the second process to match the point cloud detected in step S13 with the point cloud indicated by the point cloud information 112 . Further, when the coordinate systems are different between the first process and the second process, the process execution unit 140 adjusts them to the same coordinate system.
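  • One way the first and second processes could be combined into a single rigid transform is sketched below: translate the first point group so that the specified position coincides with the first position, then rotate about that point so that the first edge's direction coincides with the second edge's direction. This composition and the edge-direction representation are interpretations of step S16, not details taken from the publication; a rotation about the shared edge axis remains undetermined and is left to local registration.

```python
# Hypothetical sketch of composing the first process (point alignment) and the
# second process (edge alignment) of step S16 into one rigid transform.
import numpy as np

def rotation_between(u, v):
    """Rotation matrix turning unit direction u onto unit direction v (Rodrigues)."""
    u, v = u / np.linalg.norm(u), v / np.linalg.norm(v)
    axis, c = np.cross(u, v), float(np.dot(u, v))
    s = np.linalg.norm(axis)
    if s < 1e-12:
        if c > 0:
            return np.eye(3)
        # u and v are opposite: rotate pi about any axis perpendicular to u.
        perp = np.cross(u, [1.0, 0.0, 0.0])
        if np.linalg.norm(perp) < 1e-6:
            perp = np.cross(u, [0.0, 1.0, 0.0])
        k = perp / np.linalg.norm(perp)
        K = np.array([[0, -k[2], k[1]], [k[2], 0, -k[0]], [-k[1], k[0], 0]])
        return np.eye(3) + 2.0 * (K @ K)
    k = axis / s
    K = np.array([[0, -k[2], k[1]], [k[2], 0, -k[0]], [-k[1], k[0], 0]])
    return np.eye(3) + s * K + (1.0 - c) * (K @ K)

def global_alignment(first_points, specified_pos, first_edge_dir, first_pos, second_edge_dir):
    R = rotation_between(first_edge_dir, second_edge_dir)    # second process
    return (first_points - specified_pos) @ R.T + first_pos  # first + second process
```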
  • Step S17 The process executing unit 140 executes local registration.
  • Step S18 The process execution unit 140 determines whether or not the point group detected in step S13 and the point group indicated by the point group information 112 match. In other words, the process execution unit 140 determines whether or not the two point groups have been successfully matched. Note that the match does not have to be a perfect match. For example, when the matching rate is 80%, the processing execution unit 140 determines that the two point groups match. If the condition is satisfied, the process proceeds to step S19. If the condition is not met, the process ends.
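  • Steps S17 and S18 could look like the following sketch, which refines the alignment with ICP and then computes a matching rate as the share of points with a close neighbour; the use of Open3D and the distance threshold are assumptions, and the 80% figure is the example given above.

```python
# Hypothetical sketch of local registration (step S17) and the match check (step S18).
import numpy as np
import open3d as o3d

def local_registration_and_check(first_pts, second_pts, init_transform,
                                 dist_thresh=0.02, match_ratio=0.80):
    src = o3d.geometry.PointCloud(o3d.utility.Vector3dVector(first_pts))
    ref = o3d.geometry.PointCloud(o3d.utility.Vector3dVector(second_pts))

    icp = o3d.pipelines.registration.registration_icp(
        src, ref, dist_thresh, init_transform,
        o3d.pipelines.registration.TransformationEstimationPointToPoint())
    src.transform(icp.transformation)

    # Matching rate: fraction of first-point-group points whose nearest neighbour
    # in the second point group lies within dist_thresh.
    tree = o3d.geometry.KDTreeFlann(ref)
    hits = sum(1 for p in np.asarray(src.points)
               if tree.search_radius_vector_3d(p, dist_thresh)[0] > 0)
    rate = hits / max(len(src.points), 1)
    return rate >= match_ratio, rate
```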
  • Step S19 The processing execution unit 140 executes processing for displaying the display information associated with the point cloud information 112 on the display 105.
  • the display information is the CG of the belt conveyor 200 . Thereby, the display information is displayed on the display 105 .
  • FIG. 5 is a diagram showing an example of global registration according to the first embodiment.
  • Point cloud information 112 in FIG. 5 is shown in a state that includes the position 113 indicated by the position information 111. Also, the point cloud information 112 need not include a point cloud of the objects existing on the belt conveyor 200.
  • the acquisition unit 120 acquires an image 300 obtained by imaging the belt conveyor 200 with the imaging device 104 . Acquisition unit 120 acquires specified position information.
  • Based on the image 300, the detection unit 130 detects a plurality of characteristic points of the belt conveyor 200 as a point group. Based on the detected point group, the detection unit 130 detects an edge 302 existing in the vicinity of the designated position 301 indicated by the designated position information. Based on the point group information 112, the detection unit 130 detects an edge 114 that exists near the position 113 indicated by the position information 111.
  • the processing execution unit 140 executes processing for aligning the specified position 301 indicated by the specified position information with the position 113 indicated by the position information 111 .
  • the processing execution unit 140 executes processing for aligning the edge 302 with the edge 114. By executing these processes, the processing execution unit 140 matches the point cloud detected based on the image 300 with the point cloud indicated by the point cloud information 112.
  • the terminal device 100 can perform appropriate alignment even for an image based on an environment with large fluctuations.
  • the point cloud may be detected as follows. First, the acquisition unit 120 acquires information from the infrared sensor. The detection unit 130 detects the point cloud based on the information acquired from the infrared sensor. Note that the point group is also referred to as a first point group.
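  • The sketch below illustrates how such a point group could be obtained from a depth or infrared sensor by back-projecting each pixel through a pinhole camera model; the intrinsic parameters and the metric scale of the depth map are assumptions, since the publication only states that information from an infrared sensor may be used.

```python
# Hypothetical sketch: first point group from a depth map instead of the image.
import numpy as np

def depth_to_point_cloud(depth, fx, fy, cx, cy):
    """depth: (H, W) array of metric depths; returns an (N, 3) point group."""
    h, w = depth.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    z = depth
    x = (u - cx) * z / fx
    y = (v - cy) * z / fy
    pts = np.stack([x, y, z], axis=-1).reshape(-1, 3)
    return pts[pts[:, 2] > 0]  # drop pixels with no depth reading
```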
  • Embodiment 2 Next, Embodiment 2 will be described. In Embodiment 2, mainly matters different from Embodiment 1 will be described. In the second embodiment, descriptions of items common to the first embodiment are omitted. In the first embodiment, the case of acquiring the specified position information indicating the position specified by the user has been described. In the second embodiment, a case of automatically detecting a position to be aligned with the position information 111 will be described.
  • FIG. 6 is a block diagram showing functions of the terminal device according to the second embodiment.
  • the terminal device 100a has an acquisition unit 120a, a detection unit 130a, and a process execution unit 140a.
  • the position information 111 indicates the position of the characteristic part of the belt conveyor 200 . This position is also referred to as the first position.
  • the acquisition unit 120a differs from the acquisition unit 120 in that it does not acquire the designated position information.
  • the detection unit 130a detects the position of the characteristic part of the belt conveyor 200 based on the image. FIG. 7 illustrates the detection process.
  • FIG. 7 is a diagram illustrating a specific example of detection processing according to the second embodiment.
  • the terminal device 100a generates an image 310 by capturing an image of the belt conveyor 200.
  • Image 310 includes a switch 311 and a signal 312.
  • the detection unit 130 a detects the position of the characteristic part of the belt conveyor 200 .
  • the detection unit 130a detects the position of the characteristic part of the belt conveyor 200 using general object recognition technology.
  • For example, the detected characteristic location is the switch 311.
  • Also, for example, the detected characteristic location is the signal 312.
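  • The publication only says that general object recognition technology is used for this detection; the sketch below uses OpenCV template matching as a simple stand-in, and the template image of the characteristic location (for example, the switch) is an assumption.

```python
# Hypothetical sketch: locate a characteristic location (e.g., the switch 311) in the image.
import cv2

def detect_feature_location(image_bgr, template_bgr, min_score=0.7):
    """Return the (x, y) centre of the best template match, or None if the score is low."""
    result = cv2.matchTemplate(image_bgr, template_bgr, cv2.TM_CCOEFF_NORMED)
    _, max_val, _, max_loc = cv2.minMaxLoc(result)
    if max_val < min_score:
        return None
    th, tw = template_bgr.shape[:2]
    return (max_loc[0] + tw // 2, max_loc[1] + th // 2)
```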
  • FIG. 8 is a flowchart illustrating an example of processing executed by the terminal device according to the second embodiment.
  • the acquisition unit 120 a acquires an image obtained by imaging the belt conveyor 200 with the imaging device 104 .
  • the image may be a two-dimensional image or a three-dimensional image.
  • the detection unit 130a detects one characteristic position of the belt conveyor 200 based on the image.
  • Step S23 Based on the image, the detection unit 130a detects a plurality of characteristic points of the belt conveyor 200 as a point group.
  • the point group is also called a first point group.
  • the detection unit 130a detects a plurality of edges of the belt conveyor 200 based on the detected point group.
  • the detection unit 130a detects one edge existing within a preset range from the position of the characteristic location from among the plurality of edges. In other words, the detection unit 130a detects one edge existing in the vicinity of the position of the characteristic location from among the plurality of edges.
  • the edge is also called the first edge.
  • the detection unit 130 a detects a plurality of edges of the belt conveyor 200 based on the point group information 112 .
  • the detection unit 130a detects one edge existing within a preset range from the position indicated by the position information 111 (that is, the position of the characteristic part of the belt conveyor 200) from among the plurality of edges. In other words, the detection unit 130a detects one edge that exists in the vicinity of the position indicated by the position information 111.
  • This edge is also called a second edge.
  • Step S26 The process execution unit 140a executes a process of aligning the position of the feature location with the position indicated by the position information 111. This processing is also referred to as first processing.
  • the process executing unit 140a executes a process of aligning the edge detected in step S24 with the edge detected in step S25. This process is also called a second process.
  • the process execution unit 140a matches the point cloud detected in step S23 with the point cloud indicated by the point cloud information 112 by performing the first process and the second process.
  • Step S27 The process executing unit 140a executes local registration.
  • Step S28 The process execution unit 140a determines whether or not the point group detected in step S23 and the point group indicated by the point group information 112 match. If the condition is satisfied, the process proceeds to step S29. If the condition is not met, the process ends. (Step S29) The processing execution unit 140a executes processing for displaying display information associated with the point cloud information 112 on the display 105.
  • the terminal device 100a automatically detects the position of the characteristic location. Therefore, in the second embodiment, unlike in the first embodiment, the user does not have to perform the designation operation. According to Embodiment 2, the terminal device 100a can therefore reduce the burden on the user.
  • Embodiment 3 Next, Embodiment 3 will be described.
  • In the third embodiment, mainly matters different from the first embodiment will be described.
  • descriptions of matters common to the first embodiment are omitted.
  • In the first embodiment, the case of detecting edges using an image has been described.
  • In the third embodiment, a case of detecting lines using tracking technology will be described.
  • FIG. 9 is a block diagram showing functions of the terminal device according to the third embodiment.
  • the terminal device 100b has an acquisition unit 120b, a detection unit 130b, and a process execution unit 140b. Main functions of the detection unit 130b will be described. Detailed functions of the acquisition unit 120b, the detection unit 130b, and the processing execution unit 140b will be described later.
  • the detection unit 130b detects the trajectory of the object. In other words, the detection unit 130b detects the movement trajectory of the object. FIG. 10 illustrates the detection process.
  • FIG. 10 is a diagram showing an example of detection processing according to the third embodiment.
  • FIG. 10 shows the belt conveyor 200 moving an object 320.
  • the detection unit 130b detects the trajectory of the object 320 using tracking technology. For example, the detection unit 130 b detects the trajectory of the object 320 by tracking the corner 321 of the object 320 .
  • the detection unit 130b detects the straight line 322 based on the trajectory.
  • the detection unit 130b may detect the straight line 322 based on the trajectories of a plurality of objects. Furthermore, when the trajectory is a curve, the detection unit 130b detects the curve based on the trajectory.
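  • A minimal sketch of this trajectory-based line detection is given below: a corner of the moving object is tracked across video frames with Lucas-Kanade optical flow and a line is fitted to the tracked positions. OpenCV is an assumed tool choice and the parameters are illustrative; the publication does not specify the tracking method.

```python
# Hypothetical sketch of Embodiment 3's detection: track a corner, fit a line to its trajectory.
import cv2
import numpy as np

def trajectory_line(frames_gray):
    """frames_gray: list of grayscale frames; returns (direction, point) of the fitted line."""
    prev = frames_gray[0]
    pts = cv2.goodFeaturesToTrack(prev, maxCorners=1, qualityLevel=0.3, minDistance=7)
    track = [pts[0, 0].copy()]
    for frame in frames_gray[1:]:
        nxt, status, _ = cv2.calcOpticalFlowPyrLK(prev, frame, pts, None)
        if status[0, 0] == 0:
            break  # the tracked corner was lost
        pts, prev = nxt, frame
        track.append(pts[0, 0].copy())

    track = np.asarray(track, dtype=np.float32)    # trajectory of the tracked corner
    vx, vy, x0, y0 = cv2.fitLine(track, cv2.DIST_L2, 0, 0.01, 0.01).flatten()
    return np.array([vx, vy]), np.array([x0, y0])  # direction and a point on the line
```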
  • FIG. 11 is a flowchart illustrating an example of processing executed by the terminal device according to the third embodiment;
  • the acquisition unit 120b acquires an image obtained by imaging the belt conveyor 200 that moves the object. Also, the image is displayed on the display 105 . A user taps an arbitrary place. The tapped place is the specified position specified by the user. Information indicating the specified position is called specified position information.
  • Step S32 The acquisition unit 120b acquires designated position information.
  • Step S33 Based on the image, the detection unit 130b detects a plurality of characteristic points of the belt conveyor 200 as a point group. Specifically, the detection unit 130b detects a plurality of characteristic points of the belt conveyor 200 as a point group based on one of the plurality of images that constitute the captured video. The point group is also called a first point group.
  • Step S34 The detection unit 130b detects the trajectory of the object on the belt conveyor 200 based on the image.
  • the detection unit 130b detects lines based on the trajectory. The line is also called the first line.
  • the detection unit 130 b detects a plurality of edges of the belt conveyor 200 based on the point group information 112 .
  • the detection unit 130b detects one edge existing within a preset range from the position indicated by the position information 111 from among the plurality of edges. In other words, the detection unit 130b detects one edge that exists in the vicinity of the position indicated by the position information 111.
  • This edge is also called a second edge.
  • Step S36 The process executing unit 140b executes a process of matching the specified position indicated by the specified position information and the position indicated by the position information 111.
  • This processing is also referred to as first processing.
  • the process executing unit 140b executes a process of matching the line detected in step S34 and the edge detected in step S35. This process is also called a second process.
  • the process execution unit 140b matches the point cloud detected in step S33 with the point cloud indicated by the point cloud information 112 by executing the first process and the second process.
  • Step S37 The process executing unit 140b executes local registration.
  • Step S38 The process execution unit 140b determines whether or not the point group detected in step S33 and the point group indicated by the point group information 112 match. If the condition is satisfied, the process proceeds to step S39. If the condition is not met, the process ends. (Step S ⁇ b>39 ) The processing execution unit 140 b executes processing for displaying display information associated with the point cloud information 112 on the display 105 .
  • the terminal device 100b can perform appropriate alignment even for images based on an environment with large fluctuations.
  • Modification of Embodiment 3 A modification of the third embodiment describes a case where the second embodiment and the third embodiment are combined.
  • In the modification of the third embodiment, the reference numerals of the second embodiment are used.
  • FIG. 12 is a flowchart showing an example of processing executed by a terminal device according to a modification of the third embodiment.
  • the process of FIG. 12 differs from the process of FIG. 8 in that steps S21a to S24a and S26a are executed. Therefore, in FIG. 12, steps S21a to S24a and S26a will be explained. Further, description of processes other than steps S21a to S24a and S26a is omitted.
  • Step S21a The acquisition unit 120a acquires an image obtained by imaging the belt conveyor 200 that moves the object.
  • Step S22a The detection unit 130a detects the position of the characteristic part of the belt conveyor 200 based on the image.
  • Based on the image, the detection unit 130a detects a plurality of feature points of the belt conveyor 200 as a point group. Specifically, the detection unit 130a detects a plurality of feature points of the belt conveyor 200 as a point group based on one of the plurality of images that constitute the captured video. The point group is also called a first point group.
  • the detection unit 130a detects the trajectory of the object on the belt conveyor 200 based on the image.
  • the detection unit 130a detects a line based on the trajectory. The line is also called the first line.
  • Step S26a The process execution unit 140a executes a process of matching the position of the characteristic location and the position indicated by the position information 111. This processing is also referred to as first processing.
  • the process executing unit 140a executes a process of matching the line detected in step S24a and the edge detected in step S25. This process is also called a second process.
  • the process execution unit 140a matches the point cloud detected in step S23a with the point cloud indicated by the point cloud information 112 by executing the first process and the second process.
  • Embodiment 4 Next, Embodiment 4 will be described. In Embodiment 4, mainly matters different from Embodiment 1 will be described. In the fourth embodiment, descriptions of items common to the first embodiment are omitted. In Embodiment 1, alignment between points is performed. In the fourth embodiment, a case in which points are not aligned will be described.
  • FIG. 13 is a block diagram showing functions of the terminal device according to the fourth embodiment.
  • the terminal device 100c has an acquisition unit 120c, a detection unit 130c, and a process execution unit 140c.
  • the acquisition unit 120 c differs from the acquisition unit 120 in that it does not acquire the position information 111 . Therefore, the storage unit 110 does not have to store the position information 111 .
  • Main functions of the detection unit 130c will be described.
  • the detection unit 130c detects a plane area of the object based on the point group. FIG. 14 illustrates the detection process.
  • FIG. 14 is a diagram showing an example of detection processing according to the fourth embodiment.
  • FIG. 14 shows an image 330 generated by the terminal device 100c.
  • the detection unit 130c detects a point cloud based on the image 330.
  • the detection unit 130c detects the plane area 331 of the belt conveyor 200 based on the point group.
  • It is desirable that the plane area be a characteristic plane area.
  • the planar regions may be triangles, hexagons, circles, and the like.
  • the detection unit 130c may detect a three-dimensional area in the object based on the point group.
  • the three-dimensional shape is desirably a characteristic shape.
  • the plane area detected based on the point group is also called a first area.
  • a three-dimensional region detected based on the point group is also referred to as a first region.
  • the detection unit 130c detects a plurality of planar regions of the object based on the point group information 112 when planar regions are detected based on the image. Further, when a three-dimensional region is detected based on the image, the detection unit 130 c detects a plurality of three-dimensional regions in the object based on the point group information 112 .
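  • The sketch below shows one way such planar regions could be extracted from a point group, using RANSAC plane segmentation as provided by Open3D; the library choice, thresholds, and the number of extracted planes are assumptions rather than details of the publication.

```python
# Hypothetical sketch: extract several planar regions from a point group by RANSAC.
import open3d as o3d

def detect_planes(points_nx3, max_planes=3, dist=0.01, min_points=200):
    pcd = o3d.geometry.PointCloud(o3d.utility.Vector3dVector(points_nx3))
    planes, rest = [], pcd
    for _ in range(max_planes):
        if len(rest.points) < min_points:
            break
        model, inliers = rest.segment_plane(distance_threshold=dist,
                                            ransac_n=3, num_iterations=1000)
        planes.append((model, rest.select_by_index(inliers)))  # (a, b, c, d) and plane points
        rest = rest.select_by_index(inliers, invert=True)      # continue on the remainder
    return planes
```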
  • FIG. 15 is a flowchart illustrating an example of processing executed by the terminal device according to the fourth embodiment.
  • the acquisition unit 120 c acquires an image obtained by imaging the belt conveyor 200 with the imaging device 104 .
  • the image may be a two-dimensional image or a three-dimensional image.
  • Based on the image, the detection unit 130c detects a plurality of characteristic points of the belt conveyor 200 as a point group.
  • the point group is also called a first point group.
  • Step S43 The detection unit 130c detects the plane area of the belt conveyor 200 based on the detected point group.
  • Step S44 Based on the point group information 112, the detection unit 130c detects multiple areas that are multiple plane areas of the belt conveyor 200.
  • Step S45 The process execution unit 140c identifies the planar area detected in step S43 from among the plurality of areas (that is, the plurality of planar areas). This processing is also referred to as first processing.
  • Step S46 The process execution unit 140c executes a process of matching the plane area detected in step S43 and the plane area specified in step S45. This process is also called a second process.
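  • Steps S45 and S46 could be sketched as below: the plane detected from the image is identified among the candidate planes by comparing a simple descriptor (point count and spread, an assumed criterion), and the two planes are then aligned by matching their normals and centroids. This is an interpretation; the publication does not specify how the corresponding plane is identified.

```python
# Hypothetical sketch of identifying the corresponding plane and aligning the two planes.
import numpy as np

def rotate_a_to_b(a, b):
    """Rotation taking unit vector a onto unit vector b (assumes a and b are not opposite)."""
    a, b = a / np.linalg.norm(a), b / np.linalg.norm(b)
    v, c = np.cross(a, b), float(np.dot(a, b))
    K = np.array([[0, -v[2], v[1]], [v[2], 0, -v[0]], [-v[1], v[0], 0]])
    return np.eye(3) + K + (K @ K) / (1.0 + c)

def plane_descriptor(pts):
    centre = pts.mean(axis=0)
    return np.array([len(pts), np.linalg.norm(pts - centre, axis=1).mean()])

def identify_and_align(first_plane_pts, candidate_planes):
    """candidate_planes: list of (normal, points) detected from the second point group."""
    d0 = plane_descriptor(first_plane_pts)
    normal2, pts2 = min(candidate_planes,
                        key=lambda p: np.linalg.norm(plane_descriptor(p[1]) - d0))  # first process
    # Second process: normal of the first plane via PCA, then rotate and translate onto the match.
    n1 = np.linalg.svd(first_plane_pts - first_plane_pts.mean(axis=0))[2][-1]
    R = rotate_a_to_b(n1, normal2 / np.linalg.norm(normal2))
    t = pts2.mean(axis=0) - R @ first_plane_pts.mean(axis=0)
    return R, t
```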
  • the processing execution unit 140c matches the point cloud detected in step S42 with the point cloud indicated by the point cloud information 112 by executing the second processing.
  • Step S47 The process executing unit 140c executes local registration.
  • Step S48 The process execution unit 140c determines whether or not the point group detected in step S42 and the point group indicated by the point group information 112 match. If the condition is satisfied, the process proceeds to step S49. If the condition is not met, the process ends. (Step S49) The processing execution unit 140c executes processing for displaying display information associated with the point group information 112 on the display 105.
  • the terminal device 100c can perform appropriate alignment even for an image based on an environment with large fluctuations.
  • Embodiment 5 Next, Embodiment 5 will be described. In Embodiment 5, mainly matters different from Embodiments 1 to 4 will be described. Further, in the fifth embodiment, descriptions of matters common to the first to fourth embodiments are omitted.
  • FIG. 16 is a diagram illustrating an example of an information processing system according to a fifth embodiment;
  • the information processing system includes an information processing device 400 and a terminal device 500 .
  • the information processing device 400 and the terminal device 500 communicate via a network.
  • the information processing device 400 is a device that executes an information processing method.
  • the information processing device 400 has a processor, a volatile memory device, and a non-volatile memory device. Further, the information processing device 400 may have a processing circuit.
  • the terminal device 500 has an imaging device for imaging an object and a display.
  • In Embodiments 1 to 4, cases where the terminal devices 100, 100a, 100b, and 100c perform the processing have been described.
  • the information processing device 400 has the functions of the terminal devices 100, 100a, 100b, and 100c.
  • the information processing apparatus 400 acquires position information 111 indicating a first position, point cloud information 112, an image obtained by imaging a target object with the imaging device, and specified position information indicating a specified position specified by the user in the image.
  • the information processing device 400 acquires the position information 111 indicating the first position and the point group information 112 from a volatile or nonvolatile storage device.
  • the information processing device 400 acquires the image and the designated position information from the terminal device 500 .
  • the information processing device 400 detects a plurality of feature points of the object as a first point group based on the image.
  • the information processing device 400 detects multiple edges of the object based on the first point group.
  • the information processing apparatus 400 detects a first edge existing within a preset range from the specified position from among the plurality of detected edges.
  • the information processing device 400 detects multiple edges of the object based on the point group information 112 indicating the second point group, which is the multiple feature points of the object.
  • the information processing device 400 detects a second edge existing within a range from the first position from among the plurality of detected edges.
  • the information processing device 400 executes a first process of aligning the specified position with the first position.
  • the information processing device 400 executes a second process of aligning the first edge and the second edge.
  • the information processing apparatus 400 aligns the first point group and the second point group by executing the first process and the second process.
  • the information processing apparatus 400 acquires position information 111 indicating a first position, which is the position of a characteristic part of an object, point cloud information 112, and an image obtained by imaging the object with the imaging device. The information processing device 400 detects the first position based on the image.
  • the information processing device 400 detects a plurality of feature points of the object as a first point group based on the image.
  • the information processing device 400 detects multiple edges of the object based on the first point group.
  • the information processing apparatus 400 detects a first edge existing within a preset range from the detected first position from among the detected edges.
  • the information processing device 400 detects multiple edges of the object based on the point group information 112 indicating the second point group, which is the multiple feature points of the object.
  • the information processing device 400 detects a second edge existing within a range from the first position indicated by the position information 111 from among the detected edges.
  • the information processing device 400 executes a first process of aligning the detected first position with the first position indicated by the position information 111 .
  • the information processing device 400 executes a second process of aligning the first edge and the second edge.
  • the information processing apparatus 400 aligns the first point group and the second point group by executing the first process and the second process.
  • the information processing device 400 acquires the point cloud information 112 and an image obtained by imaging the target object.
  • the information processing device 400 detects a plurality of feature points of the object as a first point group based on the image.
  • the information processing apparatus 400 detects a first area, which is a planar area of the object or a three-dimensional area in the object, based on the first point group.
  • Based on the point group information 112 indicating the second point group, which is a plurality of characteristic points of the object, the information processing apparatus 400 detects a plurality of regions that are a plurality of planar regions of the object or a plurality of three-dimensional regions in the object.
  • the information processing device 400 executes a first process of identifying a first area from among the plurality of areas.
  • the information processing device 400 executes a second process of matching the first region of the first point group and the first region of the second point group.
  • the information processing apparatus 400 aligns the first point group and the second point group by executing the first process and the second process.
  • the information processing apparatus 400 executes local registration, and if the first point group and the second point group match, the information processing apparatus 400 executes processing for displaying the display information associated with the point group information 112 on the display. Specifically, the information processing device 400 transmits the display information and an instruction to display the display information to the terminal device 500. Thereby, the display information is displayed on the display of the terminal device 500.
  • the information processing device 400 has the functions of the terminal devices 100, 100a, 100b, and 100c.
  • the information processing device 400 can achieve the same effects as those described in the first to fourth embodiments.
  • 100, 100a, 100b, 100c terminal device; 101 processor; 102 volatile storage device; 103 nonvolatile storage device; 104 imaging device; 105 display; 110 storage unit; 111 position information; 112 point group information; 113 position; 114 edge; 120, 120a, 120b, 120c acquisition unit; 130, 130a, 130b, 130c detection unit; 140, 140a, 140b, 140c process execution unit; 200 belt conveyor; 300 image; 301 designated position; 302 edge; 310 image; 311 switch; 312 signal; 320 object; 321 corner; 322 straight line; 330 image; 331 plane area; 400 information processing device; 500 terminal device.

Landscapes

  • Engineering & Computer Science (AREA)
  • Computer Graphics (AREA)
  • Computer Hardware Design (AREA)
  • General Engineering & Computer Science (AREA)
  • Software Systems (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Length Measuring Devices By Optical Means (AREA)
  • Image Processing (AREA)
  • Image Analysis (AREA)

Abstract

An information processing device (100) comprises: an acquisition unit (120) that acquires position information (111) indicating a first position, point cloud information (112), an image obtained by imaging an object, and specified position information indicating a specified position; a detection unit (130) that detects a first point cloud on the basis of an image, detects a plurality of edges of an object on the basis of the first point cloud, detects a first edge from among the plurality of detected edges, detects a plurality of edges of an object on the basis of point cloud information (112) indicating a second point cloud, which serves as a plurality of feature points of the object, and detects a second edge from among the detected plurality of edges; and a process execution unit (140) that executes a first process for matching a specified position and a first position, and executes a second process for matching the first edge and the second edge so as to match the first point cloud and the second point cloud.

Description

Information processing device, information processing system, information processing method, and information processing program
The present disclosure relates to an information processing device, an information processing system, an information processing method, and an information processing program.
AR (Augmented Reality) technology is known. For example, in AR, a point group based on an image obtained by imaging an object is matched with a previously acquired point group. Matching involves global registration and local registration. For example, Non-Patent Document 1 describes global registration.
By the way, there are cases where devices installed in factories and the like are targeted. Then, matching is performed between the point group based on the image obtained by imaging the object and the point group acquired in advance. A method using the technology of Non-Patent Document 1 is conceivable for the global registration stage of matching.
Here, the target object itself does not move, but there are cases where things that exist in factories and the like are moved. For example, objects existing around the target object (e.g., shelves, desks) are moved. Also, for example, objects are moved by the target object. For example, if the target object is a belt conveyor, the belt conveyor moves the objects on the belt conveyor. Thus, objects that exist in an environment such as a factory may not exist at fixed positions. That is, the variation in the position of objects existing in an environment such as a factory is large. The method using the technique of Non-Patent Document 1 is effective when the variation is small. Therefore, when the variation is large, using the technique of Non-Patent Document 1 may not be desirable.
The purpose of this disclosure is to perform appropriate registration even for information based on an environment that fluctuates greatly.
An information processing device according to one aspect of the present disclosure is provided. The information processing device has: an acquisition unit that acquires position information indicating a first position, point cloud information, an image obtained by imaging an object, and specified position information indicating a specified position specified by a user in the image; a detection unit that detects a plurality of feature points of the object as a first point group based on the image, detects a plurality of edges of the object based on the first point group, detects, from among the detected edges, a first edge existing within a preset range from the specified position, detects a plurality of edges of the object based on the point cloud information indicating a second point group that is a plurality of feature points of the object, and detects, from among the detected edges, a second edge existing within the range from the first position; and a processing execution unit that executes a first process of aligning the specified position with the first position, executes a second process of aligning the first edge with the second edge, and aligns the first point group with the second point group by executing the first process and the second process.
According to the present disclosure, appropriate alignment can be performed even with information based on an environment with large fluctuations.
FIG. 1 is a diagram showing an overview of matching according to Embodiment 1.
FIG. 2 is a diagram showing hardware included in the terminal device according to Embodiment 1.
FIG. 3 is a block diagram showing functions of the terminal device according to Embodiment 1.
FIG. 4 is a flowchart showing an example of processing executed by the terminal device according to Embodiment 1.
FIG. 5 is a diagram showing an example of global registration according to Embodiment 1.
FIG. 6 is a block diagram showing functions of the terminal device according to Embodiment 2.
FIG. 7 is a diagram showing a specific example of detection processing according to Embodiment 2.
FIG. 8 is a flowchart showing an example of processing executed by the terminal device according to Embodiment 2.
FIG. 9 is a block diagram showing functions of the terminal device according to Embodiment 3.
FIG. 10 is a diagram showing an example of detection processing according to Embodiment 3.
FIG. 11 is a flowchart showing an example of processing executed by the terminal device according to Embodiment 3.
FIG. 12 is a flowchart showing an example of processing executed by the terminal device according to a modification of Embodiment 3.
FIG. 13 is a block diagram showing functions of the terminal device according to Embodiment 4.
FIG. 14 is a diagram showing an example of detection processing according to Embodiment 4.
FIG. 15 is a flowchart showing an example of processing executed by the terminal device according to Embodiment 4.
FIG. 16 is a diagram showing an example of an information processing system according to Embodiment 5.
Embodiments will be described below with reference to the drawings. The following embodiments are merely examples, and various modifications are possible within the scope of the present disclosure.
Embodiment 1.
FIG. 1 is a diagram showing an overview of matching according to the first embodiment. FIG. 1 shows a terminal device 100 and a belt conveyor 200.
For example, the terminal device 100 is a smart phone, a tablet terminal, or the like. The terminal device 100 is also called an information processing device. The terminal device 100 is a device that executes an information processing method. Also, the terminal device 100 is a device used by a user.
The belt conveyor 200 is an example of an object. Objects present on the belt conveyor 200 are moved. That is, the objects existing on the belt conveyor 200 do not exist at fixed positions.
A user uses the terminal device 100 to capture an image of the belt conveyor 200, which is an object, in the factory. A point group based on an image obtained by imaging the belt conveyor 200 is matched with a previously obtained point group. Matching involves global registration and local registration. A point group is a collection of a plurality of feature points. For example, if the matching is successful, the CG (Computer Graphics) of the belt conveyor 200 is displayed on the display of the terminal device 100.
In the following description, the global registration performed by the terminal device 100 will be mainly described.
Next, hardware included in the terminal device 100 will be described.
FIG. 2 is a diagram showing hardware included in the terminal device according to the first embodiment. The terminal device 100 has a processor 101, a volatile memory device 102, a nonvolatile memory device 103, an imaging device 104, and a display 105.
The processor 101 controls the terminal device 100 as a whole. For example, the processor 101 is a CPU (Central Processing Unit), an FPGA (Field Programmable Gate Array), or the like. The processor 101 may be a multiprocessor. Moreover, the terminal device 100 may have a processing circuit. The processing circuit may be a single circuit or multiple circuits.
The volatile memory device 102 is the main memory device of the terminal device 100. For example, the volatile memory device 102 is RAM (Random Access Memory). The nonvolatile memory device 103 is an auxiliary storage device of the terminal device 100. For example, the nonvolatile memory device 103 is an HDD (Hard Disk Drive) or an SSD (Solid State Drive).
The imaging device 104 images an object. For example, the imaging device 104 is a camera. The display 105 displays information.
Next, functions of the terminal device 100 will be described.
FIG. 3 is a block diagram showing functions of the terminal device according to the first embodiment. The terminal device 100 has a storage unit 110, an acquisition unit 120, a detection unit 130, and a process execution unit 140.
The storage unit 110 may be implemented as a storage area secured in the volatile memory device 102 or the nonvolatile memory device 103.
A part or all of the acquisition unit 120, the detection unit 130, and the process execution unit 140 may be realized by a processing circuit. Also, part or all of the acquisition unit 120, the detection unit 130, and the process execution unit 140 may be implemented as modules of a program executed by the processor 101. For example, a program executed by the processor 101 is also called an information processing program. For example, the information processing program is recorded on a recording medium.
The storage unit 110 may store the position information 111 and the point group information 112. The position information 111 is information indicating a position. This position is also referred to as the first position. The position is given as two-dimensional coordinates or three-dimensional coordinates. The point cloud information 112 is information indicating a point cloud that is a plurality of feature points of the object. The point group is also called a second point group. The point cloud information 112 may include the position information 111. Therefore, the position indicated by the position information 111 and the point group indicated by the point group information 112 can be expressed in the same coordinate system.
Also, the position information 111 may be information indicating the position of an AR object. In other words, the position information 111 may be information indicating the position of an AR marker.
The acquisition unit 120 acquires the position information 111 and the point group information 112. For example, the acquisition unit 120 acquires the position information 111 and the point group information 112 from the storage unit 110. Here, the position information 111 and the point cloud information 112 may be stored in an external device (for example, a cloud server). When the position information 111 and the point group information 112 are stored in an external device, the acquisition unit 120 acquires the position information 111 and the point group information 112 from the external device.
Other functions of the acquisition unit 120 will be described later. Also, functions of the detection unit 130 and the process execution unit 140 will be described later. Also, in the following description, the object is the belt conveyor 200.
Next, processing executed by the terminal device 100 will be described using a flowchart.
FIG. 4 is a flowchart illustrating an example of processing executed by the terminal device according to Embodiment 1.
(Step S11) The acquisition unit 120 acquires an image obtained by imaging the belt conveyor 200 with the imaging device 104. The image may be a two-dimensional image or a three-dimensional image.
Also, the image is displayed on the display 105. A user taps an arbitrary position. The tapped position is a specified position specified by the user in the image. Information indicating the specified position is called specified position information. The specified position information may also be called tap position information.
Note that, for example, when the position information 111 indicates the position of the AR object, the user taps the position of the AR object.
 (ステップS12)取得部120は、指定位置情報を取得する。
 (ステップS13)検出部130は、当該画像に基づいて、ベルトコンベア200の複数の特徴点を、点群として検出する。当該点群は、第1の点群とも言う。
(Step S12) The acquisition unit 120 acquires designated position information.
(Step S13) Based on the image, the detection unit 130 detects a plurality of characteristic points of the belt conveyor 200 as a point group. The point group is also called a first point group.
(Step S14) The detection unit 130 detects a plurality of edges of the belt conveyor 200 based on the detected point group. For example, the detection unit 130 detects the plurality of edges using the method described in Non-Patent Document 2.
The detection unit 130 then detects, from among the plurality of edges, one edge that exists within a preset range from the designated position indicated by the designated position information. In other words, the detection unit 130 detects one edge that exists in the vicinity of the designated position. This edge is also referred to as the first edge.
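As an illustration only (not part of the disclosed embodiment), the following sketch shows one way such a nearest-edge selection could be implemented, assuming each detected edge is represented by its two endpoints and the designated position is expressed in the same coordinate system; the function name and the distance threshold are hypothetical.

    import numpy as np

    def select_edge_near_position(edges, position, max_distance):
        """Return the index of the edge whose segment lies closest to `position`,
        or None if no edge is within `max_distance` (the preset range).
        `edges` is a list of (p0, p1) endpoint pairs given as coordinate arrays."""
        best_index, best_distance = None, max_distance
        position = np.asarray(position, dtype=float)
        for index, (p0, p1) in enumerate(edges):
            p0, p1 = np.asarray(p0, dtype=float), np.asarray(p1, dtype=float)
            segment = p1 - p0
            length_sq = float(segment @ segment)
            # Project the designated position onto the segment and clamp to its ends.
            t = 0.0 if length_sq == 0.0 else float(np.clip((position - p0) @ segment / length_sq, 0.0, 1.0))
            distance = float(np.linalg.norm(position - (p0 + t * segment)))
            if distance <= best_distance:
                best_index, best_distance = index, distance
        return best_index

The same helper can be reused in step S15 by passing the position indicated by the position information 111 instead of the tapped position.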
(Step S15) The detection unit 130 detects a plurality of edges of the belt conveyor 200 based on the point group information 112. The method of detecting the plurality of edges is the same as in step S14.
The detection unit 130 then detects, from among the plurality of edges, one edge that exists within a preset range from the position indicated by the position information 111. In other words, the detection unit 130 detects one edge that exists in the vicinity of the position indicated by the position information 111. This edge is also referred to as the second edge.
(Step S16) The processing execution unit 140 executes processing for aligning the designated position indicated by the designated position information with the position indicated by the position information 111. This processing is also referred to as the first processing.
The processing execution unit 140 executes processing for aligning the edge detected in step S14 with the edge detected in step S15. This processing is also referred to as the second processing.
By executing the first processing and the second processing, the processing execution unit 140 aligns the point group detected in step S13 with the point group indicated by the point group information 112.
If different coordinate systems are used in the first processing and the second processing, the processing execution unit 140 converts them into the same coordinate system.
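To make the first and second processing concrete, the following minimal sketch (an illustration under assumptions, not the claimed implementation) computes a rigid transform that rotates the direction of the first edge onto the direction of the second edge and then translates the designated position onto the first position, and applies it to the first point group. Three-dimensional coordinates are assumed, and all names are hypothetical.

    import numpy as np

    def rotation_aligning(u, v):
        """Rotation matrix that rotates unit vector u onto unit vector v."""
        u = u / np.linalg.norm(u)
        v = v / np.linalg.norm(v)
        c = float(u @ v)
        if c > 1.0 - 1e-9:          # directions already aligned
            return np.eye(3)
        if c < -1.0 + 1e-9:         # opposite directions: rotate 180 deg about an orthogonal axis
            axis = np.cross(u, [1.0, 0.0, 0.0])
            if np.linalg.norm(axis) < 1e-9:
                axis = np.cross(u, [0.0, 1.0, 0.0])
            axis = axis / np.linalg.norm(axis)
            return 2.0 * np.outer(axis, axis) - np.eye(3)
        w = np.cross(u, v)
        k = np.array([[0.0, -w[2], w[1]],
                      [w[2], 0.0, -w[0]],
                      [-w[1], w[0], 0.0]])
        return np.eye(3) + k + (k @ k) / (1.0 + c)

    def global_registration(first_points, designated_pos, first_edge_dir, stored_pos, second_edge_dir):
        """Rotate the first edge direction onto the second edge direction, then
        translate so that the designated position lands on the stored first position."""
        r = rotation_aligning(np.asarray(first_edge_dir, float), np.asarray(second_edge_dir, float))
        t = np.asarray(stored_pos, float) - r @ np.asarray(designated_pos, float)
        return (np.asarray(first_points, float) @ r.T) + t, r, t

The transform returned here would serve as the initial pose that the subsequent local registration refines.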
(Step S17) The processing execution unit 140 executes local registration.
(Step S18) The processing execution unit 140 determines whether the point group detected in step S13 and the point group indicated by the point group information 112 match. In other words, the processing execution unit 140 determines whether the two point groups are successfully matched. The match does not have to be perfect; for example, if 80% of the points agree, the processing execution unit 140 determines that the two point groups match.
If this condition is satisfied, the processing proceeds to step S19. If the condition is not satisfied, the processing ends.
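As one possible reading of steps S17 and S18 (a sketch under assumptions, not the embodiment's prescribed implementation), local registration can be an ICP-style refinement and the match decision a nearest-neighbour inlier ratio compared against the 80% threshold mentioned above. The tolerance value and function names are hypothetical, and 3D points are assumed.

    import numpy as np

    def nearest_distances(src, dst):
        # Brute-force nearest neighbour (O(N*M) memory; fine for a small sketch).
        d = np.linalg.norm(src[:, None, :] - dst[None, :, :], axis=2)
        return d.min(axis=1), d.argmin(axis=1)

    def icp_step(src, dst):
        """One point-to-point ICP step: Kabsch alignment on nearest-neighbour pairs."""
        _, idx = nearest_distances(src, dst)
        pairs = dst[idx]
        mu_s, mu_d = src.mean(axis=0), pairs.mean(axis=0)
        h = (src - mu_s).T @ (pairs - mu_d)
        u, _, vt = np.linalg.svd(h)
        d = np.sign(np.linalg.det(vt.T @ u.T))
        r = vt.T @ np.diag([1.0, 1.0, d]) @ u.T   # reflection-free rotation (3D assumed)
        t = mu_d - r @ mu_s
        return (src @ r.T) + t

    def match_ratio(src, dst, tol):
        """Fraction of src points that have a dst point within `tol`."""
        dists, _ = nearest_distances(src, dst)
        return float((dists <= tol).mean())

For example, after a few calls to icp_step, a result of match_ratio(src, dst, tol) >= 0.8 would correspond to the matching criterion described for step S18.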
(Step S19) The processing execution unit 140 executes processing for displaying, on the display 105, the display information associated with the point group information 112. The display information is, for example, CG of the belt conveyor 200.
As a result, the display information is displayed on the display 105.
Next, global registration will be described in detail.
FIG. 5 is a diagram showing an example of global registration according to Embodiment 1. The point group information 112 in FIG. 5 includes the position 113 indicated by the position information 111. The point group information 112 does not have to include the point group of an object present on the belt conveyor 200.
The acquisition unit 120 acquires an image 300 obtained by the imaging device 104 imaging the belt conveyor 200. The acquisition unit 120 also acquires the designated position information.
The detection unit 130 detects a plurality of feature points of the belt conveyor 200 as a point group based on the image 300. Based on the detected point group, the detection unit 130 detects an edge 302 existing in the vicinity of the designated position 301 indicated by the designated position information.
Based on the point group information 112, the detection unit 130 detects an edge 114 existing in the vicinity of the position 113 indicated by the position information 111.
The processing execution unit 140 executes processing for aligning the designated position 301 indicated by the designated position information with the position 113 indicated by the position information 111. The processing execution unit 140 also executes processing for aligning the edge 302 with the edge 114. By executing these processes, the processing execution unit 140 aligns the point group detected based on the image 300 with the point group indicated by the point group information 112.
According to Embodiment 1, the terminal device 100 can perform appropriate registration even for an image based on an environment with large fluctuations.
The above description covers the case where the point group is detected based on an image. The point group may instead be detected as follows. First, the acquisition unit 120 acquires information from an infrared sensor. The detection unit 130 then detects the point group based on the information acquired from the infrared sensor. This point group is also referred to as the first point group.
Embodiment 2.
Next, Embodiment 2 will be described. The description of Embodiment 2 focuses mainly on matters that differ from Embodiment 1, and omits matters common to Embodiment 1.
In Embodiment 1, the case of acquiring designated position information indicating a position designated by the user was described. In Embodiment 2, a case where the position to be aligned with the position information 111 is detected automatically will be described.
FIG. 6 is a block diagram showing the functions of the terminal device according to Embodiment 2. The terminal device 100a has an acquisition unit 120a, a detection unit 130a, and a processing execution unit 140a.
In Embodiment 2, the position information 111 indicates the position of a characteristic part of the belt conveyor 200. This position is also referred to as the first position.
The acquisition unit 120a differs from the acquisition unit 120 in that it does not acquire designated position information.
The detection unit 130a detects the position of the characteristic part of the belt conveyor 200 based on the image. An example of this detection processing is given below.
FIG. 7 is a diagram showing a specific example of the detection processing according to Embodiment 2. The terminal device 100a generates an image 310 by imaging the belt conveyor 200. The image 310 includes a switch 311 and a signal 312.
The detection unit 130a detects the position of a characteristic part of the belt conveyor 200. For example, the detection unit 130a detects the position of the characteristic part using general object recognition technology. The detected characteristic part is, for example, the switch 311 or the signal 312.
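The disclosure does not specify a particular recognition algorithm. Purely as an illustration, the sketch below locates a known characteristic part (such as the switch 311) by normalized template matching with OpenCV, which is one simple stand-in for the "general object recognition technology" mentioned above; the function name, the template image, and the score threshold are hypothetical.

    import cv2

    def detect_feature_position(image_bgr, template_bgr, score_threshold=0.8):
        """Locate a known characteristic part by template matching.
        Returns the centre pixel of the best match, or None if the score is too low."""
        result = cv2.matchTemplate(image_bgr, template_bgr, cv2.TM_CCOEFF_NORMED)
        _, max_val, _, max_loc = cv2.minMaxLoc(result)
        if max_val < score_threshold:
            return None
        h, w = template_bgr.shape[:2]
        return (max_loc[0] + w // 2, max_loc[1] + h // 2)

In practice a learned detector could be substituted here; only the returned position is needed by the subsequent steps.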
Other functions of the detection unit 130a will be described later, as will the functions of the processing execution unit 140a.
Next, processing executed by the terminal device 100a will be described with reference to a flowchart.
FIG. 8 is a flowchart illustrating an example of the processing executed by the terminal device according to Embodiment 2.
(Step S21) The acquisition unit 120a acquires an image obtained by the imaging device 104 imaging the belt conveyor 200. The image may be a two-dimensional image or a three-dimensional image.
(Step S22) The detection unit 130a detects the position of one characteristic part of the belt conveyor 200 based on the image.
(Step S23) The detection unit 130a detects a plurality of feature points of the belt conveyor 200 as a point group based on the image. This point group is also referred to as the first point group.
(Step S24) The detection unit 130a detects a plurality of edges of the belt conveyor 200 based on the detected point group.
The detection unit 130a then detects, from among the plurality of edges, one edge that exists within a preset range from the position of the characteristic part. In other words, the detection unit 130a detects one edge that exists in the vicinity of the position of the characteristic part. This edge is also referred to as the first edge.
(Step S25) The detection unit 130a detects a plurality of edges of the belt conveyor 200 based on the point group information 112.
The detection unit 130a then detects, from among the plurality of edges, one edge that exists within a preset range from the position indicated by the position information 111 (that is, the position of the characteristic part of the belt conveyor 200). In other words, the detection unit 130a detects one edge that exists in the vicinity of the position indicated by the position information 111. This edge is also referred to as the second edge.
(Step S26) The processing execution unit 140a executes processing for aligning the position of the characteristic part with the position indicated by the position information 111. This processing is also referred to as the first processing.
The processing execution unit 140a executes processing for aligning the edge detected in step S24 with the edge detected in step S25. This processing is also referred to as the second processing.
By executing the first processing and the second processing, the processing execution unit 140a aligns the point group detected in step S23 with the point group indicated by the point group information 112.
(Step S27) The processing execution unit 140a executes local registration.
(Step S28) The processing execution unit 140a determines whether the point group detected in step S23 and the point group indicated by the point group information 112 match. If this condition is satisfied, the processing proceeds to step S29. If the condition is not satisfied, the processing ends.
(Step S29) The processing execution unit 140a executes processing for displaying, on the display 105, the display information associated with the point group information 112.
According to Embodiment 2, the terminal device 100a automatically detects the position of the characteristic part. Therefore, in Embodiment 2, unlike in Embodiment 1, the user does not have to perform the designation operation. Thus, according to Embodiment 2, the terminal device 100a can reduce the burden on the user.
Embodiment 3.
Next, Embodiment 3 will be described. The description of Embodiment 3 focuses mainly on matters that differ from Embodiment 1, and omits matters common to Embodiment 1.
In Embodiment 1, the case of detecting edges using an image was described. In Embodiment 3, a case of detecting a line using a tracking technique will be described.
FIG. 9 is a block diagram showing the functions of the terminal device according to Embodiment 3. The terminal device 100b has an acquisition unit 120b, a detection unit 130b, and a processing execution unit 140b.
Here, the main function of the detection unit 130b is described; the detailed functions of the acquisition unit 120b, the detection unit 130b, and the processing execution unit 140b will be described later.
The detection unit 130b detects the trajectory of an object. In other words, the detection unit 130b detects the movement trajectory of the object. An example of this detection processing is given below.
FIG. 10 is a diagram showing an example of the detection processing according to Embodiment 3. FIG. 10 shows the belt conveyor 200 moving an object 320. The detection unit 130b detects the trajectory of the object 320 using a tracking technique. For example, the detection unit 130b detects the trajectory of the object 320 by tracking a corner 321 of the object 320. The detection unit 130b detects a straight line 322 based on the trajectory.
The detection unit 130b may also detect the straight line 322 based on the trajectories of a plurality of objects. Furthermore, when the trajectory is curved, the detection unit 130b detects a curve based on the trajectory.
Next, processing executed by the terminal device 100b will be described with reference to a flowchart.
FIG. 11 is a flowchart illustrating an example of the processing executed by the terminal device according to Embodiment 3.
(Step S31) The acquisition unit 120b acquires a video obtained by imaging the belt conveyor 200 moving an object.
The video is displayed on the display 105. The user taps an arbitrary place. The tapped place is the designated position designated by the user. Information indicating the designated position is called designated position information.
(Step S32) The acquisition unit 120b acquires the designated position information.
(Step S33) The detection unit 130b detects a plurality of feature points of the belt conveyor 200 as a point group based on the video. Specifically, the detection unit 130b detects the plurality of feature points of the belt conveyor 200 as a point group based on one of the plurality of images constituting the video. This point group is also referred to as the first point group.
(Step S34) The detection unit 130b detects the trajectory of the object on the belt conveyor 200 based on the video. The detection unit 130b detects a line based on the trajectory. This line is also referred to as the first line.
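One simple way to realize step S34 (a hedged sketch, not the embodiment's prescribed method) is to track the corner across video frames with any feature tracker and then fit a straight line to the tracked positions, for example by principal component analysis; the residual can be inspected to decide whether a curve model would fit the trajectory better. The function name below is hypothetical.

    import numpy as np

    def fit_line_to_trajectory(trajectory):
        """Fit a straight line (anchor point + unit direction) to tracked positions.
        `trajectory` is an (N, 2) or (N, 3) array of the corner's positions over time."""
        points = np.asarray(trajectory, dtype=float)
        anchor = points.mean(axis=0)
        centered = points - anchor
        _, _, vt = np.linalg.svd(centered, full_matrices=False)
        direction = vt[0]                     # principal axis of the trajectory
        # Maximum perpendicular distance from the fitted line; large values hint at a curved path.
        projections = np.outer(centered @ direction, direction)
        residual = float(np.linalg.norm(centered - projections, axis=1).max())
        return anchor, direction, residual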
(Step S35) The detection unit 130b detects a plurality of edges of the belt conveyor 200 based on the point group information 112.
The detection unit 130b then detects, from among the plurality of edges, one edge that exists within a preset range from the position indicated by the position information 111. In other words, the detection unit 130b detects one edge that exists in the vicinity of the position indicated by the position information 111. This edge is also referred to as the second edge.
(Step S36) The processing execution unit 140b executes processing for aligning the designated position indicated by the designated position information with the position indicated by the position information 111. This processing is also referred to as the first processing.
The processing execution unit 140b executes processing for aligning the line detected in step S34 with the edge detected in step S35. This processing is also referred to as the second processing.
By executing the first processing and the second processing, the processing execution unit 140b aligns the point group detected in step S33 with the point group indicated by the point group information 112.
(Step S37) The processing execution unit 140b executes local registration.
(Step S38) The processing execution unit 140b determines whether the point group detected in step S33 and the point group indicated by the point group information 112 match. If this condition is satisfied, the processing proceeds to step S39. If the condition is not satisfied, the processing ends.
(Step S39) The processing execution unit 140b executes processing for displaying, on the display 105, the display information associated with the point group information 112.
According to Embodiment 3, the terminal device 100b can perform appropriate registration even for a video based on an environment with large fluctuations.
Modification of Embodiment 3.
In a modification of Embodiment 3, a case where Embodiment 2 and Embodiment 3 are combined will be described. The reference numerals of Embodiment 2 are used.
FIG. 12 is a flowchart showing an example of the processing executed by the terminal device according to the modification of Embodiment 3. The processing of FIG. 12 differs from that of FIG. 8 in that steps S21a to S24a and S26a are executed. Therefore, only steps S21a to S24a and S26a are described here, and the description of the other steps is omitted.
(Step S21a) The acquisition unit 120a acquires a video obtained by imaging the belt conveyor 200 moving an object.
(Step S22a) The detection unit 130a detects the position of the characteristic part of the belt conveyor 200 based on the video.
(Step S23a) The detection unit 130a detects a plurality of feature points of the belt conveyor 200 as a point group based on the video. Specifically, the detection unit 130a detects the plurality of feature points of the belt conveyor 200 as a point group based on one of the plurality of images constituting the video. This point group is also referred to as the first point group.
(Step S24a) The detection unit 130a detects the trajectory of the object on the belt conveyor 200 based on the video. The detection unit 130a detects a line based on the trajectory. This line is also referred to as the first line.
(Step S26a) The processing execution unit 140a executes processing for aligning the position of the characteristic part with the position indicated by the position information 111. This processing is also referred to as the first processing.
The processing execution unit 140a executes processing for aligning the line detected in step S24a with the edge detected in step S25. This processing is also referred to as the second processing.
By executing the first processing and the second processing, the processing execution unit 140a aligns the point group detected in step S23a with the point group indicated by the point group information 112.
According to the modification of Embodiment 3, appropriate registration is possible even for a video based on an environment with large fluctuations.
Embodiment 4.
Next, Embodiment 4 will be described. The description of Embodiment 4 focuses mainly on matters that differ from Embodiment 1, and omits matters common to Embodiment 1.
In Embodiment 1, points were aligned with each other. In Embodiment 4, a case where points are not aligned with each other will be described.
FIG. 13 is a block diagram showing the functions of the terminal device according to Embodiment 4. The terminal device 100c has an acquisition unit 120c, a detection unit 130c, and a processing execution unit 140c.
The acquisition unit 120c differs from the acquisition unit 120 in that it does not acquire the position information 111. Therefore, the storage unit 110 does not have to store the position information 111.
The main function of the detection unit 130c is as follows. The detection unit 130c detects a planar region of the object based on the point group. An example of this detection processing is given below.
FIG. 14 is a diagram showing an example of the detection processing according to Embodiment 4. FIG. 14 shows an image 330 generated by the terminal device 100c. The detection unit 130c detects a point group based on the image 330. The detection unit 130c detects a planar region 331 of the belt conveyor 200 based on the point group.
The planar region is desirably a distinctive planar region; for example, it may be a triangle, a hexagon, or a circle. The detection unit 130c may instead detect a three-dimensionally shaped region in the object based on the point group. The three-dimensional shape is desirably a distinctive shape. The planar region detected based on the point group is also referred to as the first region. Likewise, the three-dimensionally shaped region detected based on the point group is also referred to as the first region.
When a planar region is detected based on the image, the detection unit 130c detects a plurality of planar regions of the object based on the point group information 112. When a three-dimensionally shaped region is detected based on the image, the detection unit 130c detects a plurality of three-dimensionally shaped regions in the object based on the point group information 112.
Other functions of the detection unit 130c will be described later, as will the functions of the processing execution unit 140c.
Next, processing executed by the terminal device 100c will be described with reference to a flowchart.
FIG. 15 is a flowchart illustrating an example of the processing executed by the terminal device according to Embodiment 4.
(Step S41) The acquisition unit 120c acquires an image obtained by the imaging device 104 imaging the belt conveyor 200. The image may be a two-dimensional image or a three-dimensional image.
(Step S42) The detection unit 130c detects a plurality of feature points of the belt conveyor 200 as a point group based on the image. This point group is also referred to as the first point group.
(Step S43) The detection unit 130c detects a planar region of the belt conveyor 200 based on the detected point group.
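Step S43 leaves the plane-detection method open. The following sketch (an assumption-laden illustration, not the disclosed algorithm) finds the dominant plane in the detected point group with a basic RANSAC loop; the iteration count and inlier tolerance are hypothetical parameters.

    import numpy as np

    def detect_plane_ransac(points, iterations=200, tolerance=0.01, seed=0):
        """Detect the dominant plane in a point cloud as (normal, offset) with RANSAC.
        A point x lies on the plane when normal . x + offset == 0 (within `tolerance`)."""
        rng = np.random.default_rng(seed)
        points = np.asarray(points, dtype=float)
        best_model, best_inliers = None, None
        for _ in range(iterations):
            sample = points[rng.choice(len(points), size=3, replace=False)]
            normal = np.cross(sample[1] - sample[0], sample[2] - sample[0])
            norm = np.linalg.norm(normal)
            if norm < 1e-9:                   # degenerate (collinear) sample, try again
                continue
            normal = normal / norm
            offset = -float(normal @ sample[0])
            inliers = np.abs(points @ normal + offset) <= tolerance
            if best_inliers is None or inliers.sum() > best_inliers.sum():
                best_model, best_inliers = (normal, offset), inliers
        return best_model, best_inliers

The inlier points returned here can stand in for the planar region 331 used in the following steps.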
(Step S44) The detection unit 130c detects a plurality of regions that are planar regions of the belt conveyor 200 based on the point group information 112.
(Step S45) The processing execution unit 140c identifies the planar region detected in step S43 from among the plurality of regions (that is, the plurality of planar regions). This processing is also referred to as the first processing.
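To illustrate how the first processing of step S45 might pick the corresponding region (a minimal sketch under the assumption that each region is available as an array of its member points; the similarity measure shown is a simple stand-in for a fuller shape comparison, and all names are hypothetical):

    import numpy as np

    def pick_matching_region(query_region, candidate_regions):
        """Return the index of the stored region most similar to the detected one,
        using closeness of planar extent (sorted side lengths of the bounding box)."""
        def extent(region):
            pts = np.asarray(region, dtype=float)
            return np.sort(pts.max(axis=0) - pts.min(axis=0))[::-1]
        q = extent(query_region)
        diffs = [np.linalg.norm(extent(c) - q) for c in candidate_regions]
        return int(np.argmin(diffs))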
(Step S46) The processing execution unit 140c executes processing for aligning the planar region detected in step S43 with the planar region identified in step S45. This processing is also referred to as the second processing.
By executing the second processing, the processing execution unit 140c aligns the point group detected in step S42 with the point group indicated by the point group information 112.
(Step S47) The processing execution unit 140c executes local registration.
(Step S48) The processing execution unit 140c determines whether the point group detected in step S42 and the point group indicated by the point group information 112 match. If this condition is satisfied, the processing proceeds to step S49. If the condition is not satisfied, the processing ends.
(Step S49) The processing execution unit 140c executes processing for displaying, on the display 105, the display information associated with the point group information 112.
According to Embodiment 4, the terminal device 100c can perform appropriate registration even for an image based on an environment with large fluctuations.
Embodiment 5.
Next, Embodiment 5 will be described. The description of Embodiment 5 focuses mainly on matters that differ from Embodiments 1 to 4, and omits matters common to Embodiments 1 to 4.
FIG. 16 is a diagram showing an example of the information processing system according to Embodiment 5. The information processing system includes an information processing device 400 and a terminal device 500. The information processing device 400 and the terminal device 500 communicate via a network.
The information processing device 400 is a device that executes the information processing method. The information processing device 400 has a processor, a volatile storage device, and a non-volatile storage device. The information processing device 400 may also have processing circuitry.
The terminal device 500 has an imaging device for imaging the object and a display.
In Embodiments 1 to 4, cases where the terminal devices 100, 100a, 100b, and 100c perform the processing were described. The information processing device 400 has the functions of the terminal devices 100, 100a, 100b, and 100c.
For example, the information processing device 400 acquires the position information 111 indicating the first position, the point group information 112, an image obtained by the imaging device imaging the object, and designated position information indicating a designated position designated by the user in the image. For example, the information processing device 400 acquires the position information 111 and the point group information 112 from the volatile storage device or the non-volatile storage device, and acquires the image and the designated position information from the terminal device 500. The information processing device 400 detects a plurality of feature points of the object as a first point group based on the image, detects a plurality of edges of the object based on the first point group, and detects, from among the detected edges, a first edge existing within a preset range from the designated position. The information processing device 400 detects a plurality of edges of the object based on the point group information 112 indicating a second point group that is a plurality of feature points of the object, and detects, from among the detected edges, a second edge existing within the range from the first position. The information processing device 400 executes first processing for aligning the designated position with the first position, and second processing for aligning the first edge with the second edge. By executing the first processing and the second processing, the information processing device 400 aligns the first point group with the second point group.
Also, for example, the information processing device 400 acquires the position information 111 indicating the first position, which is the position of a characteristic part of the object, the point group information 112, and an image obtained by the imaging device imaging the object. The information processing device 400 detects the first position based on the image, detects a plurality of feature points of the object as a first point group based on the image, detects a plurality of edges of the object based on the first point group, and detects, from among the detected edges, a first edge existing within a preset range from the detected first position. The information processing device 400 detects a plurality of edges of the object based on the point group information 112 indicating a second point group that is a plurality of feature points of the object, and detects, from among the detected edges, a second edge existing within the range from the first position indicated by the position information 111. The information processing device 400 executes first processing for aligning the detected first position with the first position indicated by the position information 111, and second processing for aligning the first edge with the second edge. By executing the first processing and the second processing, the information processing device 400 aligns the first point group with the second point group.
Also, for example, the information processing device 400 acquires the point group information 112 and an image obtained by imaging the object. The information processing device 400 detects a plurality of feature points of the object as a first point group based on the image. Based on the first point group, the information processing device 400 detects a first region that is a planar region of the object or a three-dimensionally shaped region in the object. Based on the point group information 112 indicating a second point group that is a plurality of feature points of the object, the information processing device 400 detects a plurality of regions that are a plurality of planar regions of the object or a plurality of three-dimensionally shaped regions of the object. The information processing device 400 executes first processing for identifying the first region from among the plurality of regions, and second processing for aligning the first region of the first point group with the first region of the second point group. By executing the first processing and the second processing, the information processing device 400 aligns the first point group with the second point group.
Also, for example, the information processing device 400 executes local registration, and when the first point group and the second point group match, it executes processing for displaying the display information associated with the point group information 112 on the display. Specifically, the information processing device 400 transmits the display information and an instruction to display it to the terminal device 500. As a result, the display information is displayed on the display of the terminal device 500.
In this way, the information processing device 400 has the functions of the terminal devices 100, 100a, 100b, and 100c.
According to Embodiment 5, the information processing device 400 can achieve the same effects as those described in Embodiments 1 to 4.
The features of the embodiments described above can be combined with one another as appropriate.
100, 100a, 100b, 100c terminal device; 101 processor; 102 volatile storage device; 103 non-volatile storage device; 104 imaging device; 105 display; 110 storage unit; 111 position information; 112 point group information; 113 position; 114 edge; 120, 120a, 120b, 120c acquisition unit; 130, 130a, 130b, 130c detection unit; 140, 140a, 140b, 140c processing execution unit; 200 belt conveyor; 300 image; 301 designated position; 302 edge; 310 image; 311 switch; 312 signal; 320 object; 321 corner; 322 straight line; 330 image; 331 planar region; 400 information processing device; 500 terminal device.

Claims (17)

1. An information processing device comprising:
an acquisition unit that acquires position information indicating a first position, point group information, an image obtained by imaging an object, and designated position information indicating a designated position designated by a user in the image;
a detection unit that detects a plurality of feature points of the object as a first point group based on the image, detects a plurality of edges of the object based on the first point group, detects, from among the detected edges, a first edge existing within a preset range from the designated position, detects a plurality of edges of the object based on the point group information indicating a second point group that is a plurality of feature points of the object, and detects, from among the detected edges, a second edge existing within the range from the first position; and
a processing execution unit that executes first processing for aligning the designated position with the first position, executes second processing for aligning the first edge with the second edge, and aligns the first point group with the second point group by executing the first processing and the second processing.
2. The information processing device according to claim 1, wherein the first position is a position of an AR object.
3. The information processing device according to claim 1 or 2, wherein
the acquisition unit acquires a video obtained by imaging the object while the object moves an object,
the detection unit detects the first point group based on the video, detects a trajectory of the moved object based on the video, and detects a first line based on the trajectory, and
the processing execution unit aligns the first line with the second edge in the second processing.
4. An information processing device comprising:
an acquisition unit that acquires position information indicating a first position that is a position of a characteristic part of an object, point group information, and an image obtained by imaging the object;
a detection unit that detects the first position based on the image, detects a plurality of feature points of the object as a first point group based on the image, detects a plurality of edges of the object based on the first point group, detects, from among the detected edges, a first edge existing within a preset range from the detected first position, detects a plurality of edges of the object based on the point group information indicating a second point group that is a plurality of feature points of the object, and detects, from among the detected edges, a second edge existing within the range from the first position indicated by the position information; and
a processing execution unit that executes first processing for aligning the detected first position with the first position indicated by the position information, executes second processing for aligning the first edge with the second edge, and aligns the first point group with the second point group by executing the first processing and the second processing.
5. The information processing device according to claim 4, wherein
the acquisition unit acquires a video obtained by imaging the object while the object moves an object,
the detection unit detects the first position based on the video, detects the first point group based on the video, detects a trajectory of the moved object based on the video, and detects a first line based on the trajectory, and
the processing execution unit aligns the first line with the second edge in the second processing.
6. An information processing device comprising:
an acquisition unit that acquires point group information and an image obtained by imaging an object;
a detection unit that detects a plurality of feature points of the object as a first point group based on the image, detects, based on the first point group, a first region that is a planar region of the object or a three-dimensionally shaped region in the object, and detects, based on the point group information indicating a second point group that is a plurality of feature points of the object, a plurality of regions that are a plurality of planar regions of the object or a plurality of three-dimensionally shaped regions of the object; and
a processing execution unit that executes first processing for identifying the first region from among the plurality of regions, executes second processing for aligning the first region of the first point group with the first region of the second point group, and aligns the first point group with the second point group by executing the second processing.
7. The information processing device according to any one of claims 1 to 6, wherein
the acquisition unit acquires information from an infrared sensor, and
the detection unit detects the first point group based on the information acquired from the infrared sensor.
8. The information processing device according to any one of claims 1 to 7, further comprising:
an imaging device that images the object; and
a display,
wherein the processing execution unit executes local registration and, when the first point group and the second point group match, executes processing for displaying display information associated with the point group information on the display.
9. An information processing system comprising:
a terminal device having an imaging device that images an object and a display; and
an information processing device,
wherein the information processing device
acquires position information indicating a first position, point group information, an image obtained by the imaging device imaging the object, and designated position information indicating a designated position designated by a user in the image,
detects a plurality of feature points of the object as a first point group based on the image,
detects a plurality of edges of the object based on the first point group,
detects, from among the detected edges, a first edge existing within a preset range from the designated position,
detects a plurality of edges of the object based on the point group information indicating a second point group that is a plurality of feature points of the object,
detects, from among the detected edges, a second edge existing within the range from the first position,
executes first processing for aligning the designated position with the first position,
executes second processing for aligning the first edge with the second edge, and
aligns the first point group with the second point group by executing the first processing and the second processing.
10. An information processing system comprising:
a terminal device having an imaging device that images an object and a display; and
an information processing device,
wherein the information processing device
acquires position information indicating a first position that is a position of a characteristic part of the object, point group information, and an image obtained by the imaging device imaging the object,
detects the first position based on the image,
detects a plurality of feature points of the object as a first point group based on the image,
detects a plurality of edges of the object based on the first point group,
detects, from among the detected edges, a first edge existing within a preset range from the detected first position,
detects a plurality of edges of the object based on the point group information indicating a second point group that is a plurality of feature points of the object,
detects, from among the detected edges, a second edge existing within the range from the first position indicated by the position information,
executes first processing for aligning the detected first position with the first position indicated by the position information,
executes second processing for aligning the first edge with the second edge, and
aligns the first point group with the second point group by executing the first processing and the second processing.
11. An information processing system comprising:
a terminal device having an imaging device that images an object and a display; and
an information processing device,
wherein the information processing device
acquires point group information and an image obtained by imaging the object,
detects a plurality of feature points of the object as a first point group based on the image,
detects, based on the first point group, a first region that is a planar region of the object or a three-dimensionally shaped region in the object,
detects, based on the point group information indicating a second point group that is a plurality of feature points of the object, a plurality of regions that are a plurality of planar regions of the object or a plurality of three-dimensionally shaped regions of the object,
executes first processing for identifying the first region from among the plurality of regions,
executes second processing for aligning the first region of the first point group with the first region of the second point group, and
aligns the first point group with the second point group by executing the first processing and the second processing.
12. An information processing method comprising, by an information processing device:
acquiring position information indicating a first position, point group information, an image obtained by imaging an object, and designated position information indicating a designated position designated by a user in the image;
detecting a plurality of feature points of the object as a first point group based on the image;
detecting a plurality of edges of the object based on the first point group;
detecting, from among the detected edges, a first edge existing within a preset range from the designated position;
detecting a plurality of edges of the object based on the point group information indicating a second point group that is a plurality of feature points of the object;
detecting, from among the detected edges, a second edge existing within the range from the first position;
executing first processing for aligning the designated position with the first position;
executing second processing for aligning the first edge with the second edge; and
aligning the first point group with the second point group by executing the first processing and the second processing.
13. An information processing method comprising, by an information processing device:
acquiring position information indicating a first position that is a position of a characteristic part of an object, point group information, and an image obtained by imaging the object;
detecting the first position based on the image;
detecting a plurality of feature points of the object as a first point group based on the image;
detecting a plurality of edges of the object based on the first point group;
detecting, from among the detected edges, a first edge existing within a preset range from the detected first position;
detecting a plurality of edges of the object based on the point group information indicating a second point group that is a plurality of feature points of the object;
detecting, from among the detected edges, a second edge existing within the range from the first position indicated by the position information;
executing first processing for aligning the detected first position with the first position indicated by the position information;
executing second processing for aligning the first edge with the second edge; and
aligning the first point group with the second point group by executing the first processing and the second processing.
  14.  An information processing method performed by an information processing device, the method comprising:
     acquiring point group information and an image obtained by imaging an object;
     detecting a plurality of feature points of the object based on the image as a first point group;
     detecting, based on the first point group, a first region that is a planar region of the object or a three-dimensional region in the object;
     detecting, based on the point group information indicating a second point group that is a plurality of feature points of the object, a plurality of regions that are a plurality of planar regions of the object or a plurality of three-dimensional regions in the object;
     executing a first process of identifying the first region from among the plurality of regions;
     executing a second process of aligning the first region of the first point group with the first region of the second point group; and
     aligning the first point group with the second point group by executing the second process.
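The claim above leaves open how the first region is identified among the plurality of regions detected from the second point group. One minimal sketch, assuming the regions are planar and given as NumPy point arrays, compares plane normals and bounding-box extents; plane_normal and identify_matching_region are hypothetical names, and the equal weighting of the two criteria is an arbitrary choice for illustration.

```python
import numpy as np

def plane_normal(points):
    """Unit normal of a roughly planar point set: the direction of least variance (via SVD)."""
    centered = points - points.mean(axis=0)
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    return vt[-1]

def identify_matching_region(first_region, candidate_regions):
    """First process: among the regions detected from the second point group, pick the
    one whose normal direction and bounding-box extent best match the first region."""
    n1 = plane_normal(first_region)
    extent1 = first_region.max(axis=0) - first_region.min(axis=0)

    def score(region):
        normal_term = 1.0 - abs(float(np.dot(n1, plane_normal(region))))  # 0 when parallel
        extent_term = float(np.linalg.norm(extent1 - (region.max(axis=0) - region.min(axis=0))))
        return normal_term + extent_term

    return min(candidate_regions, key=score)
```

Once the corresponding region is identified, the second process can align the two regions with a standard point-set registration (for example, ICP restricted to the points of the matched regions).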
  15.  An information processing program that causes an information processing device to execute processes of:
     acquiring position information indicating a first position, point group information, an image obtained by imaging an object, and specified position information indicating a specified position specified by a user in the image;
     detecting a plurality of feature points of the object based on the image as a first point group;
     detecting a plurality of edges of the object based on the first point group;
     detecting, from among the detected edges, a first edge existing within a preset range from the specified position;
     detecting a plurality of edges of the object based on the point group information, the point group information indicating a second point group that is a plurality of feature points of the object;
     detecting, from among the detected edges, a second edge existing within the range from the first position;
     executing a first process of aligning the specified position with the first position;
     executing a second process of aligning the first edge with the second edge; and
     aligning the first point group with the second point group by executing the first process and the second process.
  16.  An information processing program that causes an information processing device to execute processes of:
     acquiring position information indicating a first position that is a position of a characteristic part of an object, point group information, and an image obtained by imaging the object;
     detecting the first position based on the image;
     detecting a plurality of feature points of the object based on the image as a first point group;
     detecting a plurality of edges of the object based on the first point group;
     detecting, from among the detected edges, a first edge existing within a preset range from the detected first position;
     detecting a plurality of edges of the object based on the point group information, the point group information indicating a second point group that is a plurality of feature points of the object;
     detecting, from among the detected edges, a second edge existing within the range from the first position indicated by the position information;
     executing a first process of aligning the detected first position with the first position indicated by the position information;
     executing a second process of aligning the first edge with the second edge; and
     aligning the first point group with the second point group by executing the first process and the second process.
  17.  An information processing program that causes an information processing device to execute processes of:
     acquiring point group information and an image obtained by imaging an object;
     detecting a plurality of feature points of the object based on the image as a first point group;
     detecting, based on the first point group, a first region that is a planar region of the object or a three-dimensional region in the object;
     detecting, based on the point group information indicating a second point group that is a plurality of feature points of the object, a plurality of regions that are a plurality of planar regions of the object or a plurality of three-dimensional regions in the object;
     executing a first process of identifying the first region from among the plurality of regions;
     executing a second process of aligning the first region of the first point group with the first region of the second point group; and
     aligning the first point group with the second point group by executing the second process.
PCT/JP2021/019117 2021-05-20 2021-05-20 Information processing device, information processing system, information processing method, and information processing program WO2022244172A1 (en)

Priority Applications (3)

Application Number Priority Date Filing Date Title
PCT/JP2021/019117 WO2022244172A1 (en) 2021-05-20 2021-05-20 Information processing device, information processing system, information processing method, and information processing program
JP2023519163A JP7330420B2 (en) 2021-05-20 2021-05-20 Information processing device, information processing system, information processing method, and information processing program
TW110140038A TW202247094A (en) 2021-05-20 2021-10-28 Information processing device, information processing system, information processing method, and information processing program

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
PCT/JP2021/019117 WO2022244172A1 (en) 2021-05-20 2021-05-20 Information processing device, information processing system, information processing method, and information processing program

Publications (1)

Publication Number Publication Date
WO2022244172A1 true WO2022244172A1 (en) 2022-11-24

Family

ID=84141516

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/JP2021/019117 WO2022244172A1 (en) 2021-05-20 2021-05-20 Information processing device, information processing system, information processing method, and information processing program

Country Status (3)

Country Link
JP (1) JP7330420B2 (en)
TW (1) TW202247094A (en)
WO (1) WO2022244172A1 (en)

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2011191171A (en) * 2010-03-15 2011-09-29 Omron Corp Image processing device and image processing method
JP2018090099A (en) * 2016-12-02 2018-06-14 東日本旅客鉄道株式会社 System to patrol facility and method to patrol facility
JP2018169824A (en) * 2017-03-30 2018-11-01 株式会社パスコ Road facilities management support device and road facilities management support program
JP2019095876A (en) * 2017-11-20 2019-06-20 三菱電機株式会社 Three-dimensional point group display device, three-dimensional point group display system, method for displaying three-dimensional point group, three-dimensional point group display program, and recording medium
JP2020035448A (en) * 2018-08-30 2020-03-05 バイドゥ オンライン ネットワーク テクノロジー (ベイジン) カンパニー リミテッド Method, apparatus, device, storage medium for generating three-dimensional scene map

Also Published As

Publication number Publication date
TW202247094A (en) 2022-12-01
JP7330420B2 (en) 2023-08-21
JPWO2022244172A1 (en) 2022-11-24

Similar Documents

Publication Publication Date Title
WO2020259481A1 (en) Positioning method and apparatus, electronic device, and readable storage medium
CN108805917B (en) Method, medium, apparatus and computing device for spatial localization
KR101725478B1 (en) Method for displaying augmented reality of based 3d point cloud cognition, apparatus and system for executing the method
US11051000B2 (en) Method for calibrating cameras with non-overlapping views
Gao et al. Robust RGB-D simultaneous localization and mapping using planar point features
US9373174B2 (en) Cloud based video detection and tracking system
JP2013508794A (en) Method for providing a descriptor as at least one feature of an image and method for matching features
US11715236B2 (en) Method and system for re-projecting and combining sensor data for visualization
US10769441B2 (en) Cluster based photo navigation
US10096123B2 (en) Method and device for establishing correspondence between objects in a multi-image source environment
US20180020203A1 (en) Information processing apparatus, method for panoramic image display, and non-transitory computer-readable storage medium
US10600202B2 (en) Information processing device and method, and program
US9135715B1 (en) Local feature cameras for structure from motion (SFM) problems with generalized cameras
WO2022244172A1 (en) Information processing device, information processing system, information processing method, and information processing program
JP2016021097A (en) Image processing device, image processing method, and program
US11617024B2 (en) Dual camera regions of interest display
Lee et al. Robust multithreaded object tracker through occlusions for spatial augmented reality
KR101935969B1 (en) Method and apparatus for detection of failure of object tracking and retracking based on histogram
US20230206468A1 (en) Tracking device, tracking method, and recording medium
KR20180128332A (en) Method for determining information related to filming location and apparatus for performing the method
Maidi et al. Natural feature tracking on a mobile handheld tablet
CN112070175A (en) Visual odometer method, device, electronic equipment and storage medium
Tan et al. User detection in real-time panoramic view through image synchronization using multiple camera in cloud
JP2018198030A (en) Information processor and program
Eddington II Markerless Affine Region Tracking and Augmentation Using MSER and SIFT

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application
    Ref document number: 21940786
    Country of ref document: EP
    Kind code of ref document: A1
ENP Entry into the national phase
    Ref document number: 2023519163
    Country of ref document: JP
    Kind code of ref document: A
NENP Non-entry into the national phase
    Ref country code: DE
122 Ep: pct application non-entry in european phase
    Ref document number: 21940786
    Country of ref document: EP
    Kind code of ref document: A1