WO2017163384A1 - Data processing device, data processing method, and data processing program - Google Patents
- Publication number
- WO2017163384A1 (PCT/JP2016/059480)
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- image
- point
- points
- unit
- cloud data
- Prior art date
Classifications
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T19/00—Manipulating 3D models or images for computer graphics
Definitions
- the present invention relates to an image processing technique.
- the AR (Augmented Reality) display system generates a subject development image from 3D (3 dimensional) shape data and texture data of a subject, checks the image feature points of the subject development image, and tracks the subject.
- Patent Document 1 discloses an AR display system including an image input unit, a development view feature representation unit, a database, a database feature representation unit, a collation unit, and a display unit.
- the image input unit inputs an RGB image obtained by photographing with a camera or the like.
- the development view feature expression unit generates a development image from the 3D model and texture of the subject input via the image input unit, extracts image feature points, and calculates an image feature amount.
- the database stores in advance images taken from arbitrary positions (coordinates and orientations) with respect to various objects.
- the database feature representation unit reads an image from the database, extracts image feature points, and calculates a local descriptor.
- the collation unit compares the local descriptors of the image feature points calculated by the development view feature representation unit and the database feature representation unit to identify the most similar image in the database, and obtains the position of the camera with respect to the object at the time of shooting.
- the display unit is, for example, a display device.
- in the AR display system of Patent Document 1, there is a problem that a large amount of images must be accumulated in a database in advance. Further, the AR display system of Patent Document 1 has a problem that a developed image must be generated at high speed from a 3D model and a texture.
- the main object of the present invention is to solve the above-mentioned problems, and to speed up AR display without accumulating images in a database in advance and without generating a developed image.
- a point cloud data acquisition unit for acquiring point cloud data that represents a three-dimensional shape of an object and is composed of a plurality of points, each of which is set with three-dimensional coordinates;
- an association unit for extracting, from the plurality of points of the point cloud data, a point corresponding to an image feature point included in a captured image of the object, and associating the three-dimensional coordinates set for the extracted point with the image feature point.
- according to the present invention, the three-dimensional coordinates of the points corresponding to the image feature points in the captured image are associated with those image feature points. Since only the three-dimensional coordinates of the image feature points need to be held, the data amount can be significantly reduced.
- the search can therefore be performed at high speed.
- the three-dimensional shape of the object is handled by the point cloud data, it is not necessary to generate a developed image or to store the RGB image in the database in advance, and the AR display can be speeded up.
- FIG. 1 is a diagram illustrating a functional configuration example of the AR display device according to the first embodiment.
- FIG. 2 is a flowchart showing an operation example of the AR display device according to the first embodiment.
- FIG. 3 is a diagram illustrating a functional configuration example of the AR editing device according to the second embodiment.
- FIG. 4 is a flowchart showing an operation example of the AR editing device according to the second embodiment.
- FIG. 5 is a diagram illustrating a functional configuration example of the AR display device according to the third embodiment.
- FIG. 6 is a flowchart showing an operation example of the AR display device according to the third embodiment.
- FIG. 7 is a diagram showing a hardware configuration example of the devices according to Embodiments 1 to 3.
- FIG. 8 is a diagram illustrating an example of an image according to the first embodiment.
- FIG. 9 is a diagram illustrating an example in which an annotation image is superimposed on the image according to the first embodiment.
- FIG. 10 is a diagram illustrating an example of image feature points according to the second embodiment.
- *** Explanation of configuration *** FIG. 1 shows a functional configuration example of the AR display device 1 according to the present embodiment.
- FIG. 7 shows a hardware configuration example of the AR display device 1 according to the present embodiment.
- the AR display device 1 is an example of a data processing device. The processing performed by the AR display device 1 corresponds to an example of a data processing method and a data processing program. First, an outline of the AR display device 1 according to the present embodiment will be described.
- the AR display device 1 acquires point cloud data and an annotation image.
- the point cloud data is data representing the three-dimensional shape of an object that is a subject.
- the point cloud data is composed of a plurality of points.
- Point cloud data is usually a collection of tens of thousands of points.
- Three-dimensional coordinates (hereinafter also referred to as 3D coordinates) are set for each point of the point cloud data.
- An annotation image is an image that is superimposed on a captured image of an object.
- FIG. 9 shows an AR image obtained by virtually superimposing an underground pipe annotation image on the road image shown in FIG. 8.
- the graphics 50 (figure) indicating the shape of the pipe and the text 51 indicating the attributes of the pipe (in FIG. 9, the dimensions of the pipe) shown in FIG. 9 are annotation images.
- the AR display device 1 acquires, for example, a cylindrical graphic 50 representing a manhole, and displays the cylindrical graphic 50 at the position of the manhole in the image. Moreover, the AR display device 1 acquires the text 51 representing the dimensions of the pipe, and displays the acquired text 51 at an appropriate position in the image. As described above, when displaying the graphics 50 and the text 51 of the annotation image, the AR display device 1 selects one of the plurality of points of the point cloud data and associates the three-dimensional coordinates set for the selected point with the graphics 50 or the text 51 of the annotation image.
- the AR display device 1 includes a CPU (Central Processing Unit) 21, a memory 23, a GPU (Graphics Processing Unit) 25, a frame memory 26, and a RAMDAC (Random Access Memory Digital-to-Analog Converter) 27.
- the CPU 21 executes a program that implements the annotation image editing unit 6, the world coordinate setting unit 7, and the perspective projection unit 8 shown in FIG. 1. That is, the annotation image editing unit 6, the world coordinate setting unit 7, and the perspective projection unit 8 are realized by a program.
- the GPU 25 executes a program that realizes the AR superimposing unit 9. That is, the AR superimposing unit 9 is realized by a program.
- the GPU 25 uses the RAMDAC 27 when executing the program that realizes the AR superimposing unit 9.
- a program for realizing the annotation image editing unit 6, the world coordinate setting unit 7, and the perspective projection unit 8 and a program for realizing the AR superimposing unit 9 are stored in the memory 23.
- the CPU 21 reads a program for realizing the annotation image editing unit 6, the world coordinate setting unit 7, and the perspective projection unit 8 from the memory 23 and executes this program.
- the GPU 25 reads a program for realizing the AR superimposing unit 9 and executes this program.
- the frame memory 26 stores annotation images.
- the AR display device 1 is connected to a 3D sensor 22, a keyboard / mouse 29, and a monitor 28.
- the 3D sensor 22 implements the image input unit 2, the RGB image generation unit 3, and the point cloud data generation unit 4 shown in FIG. 1.
- the keyboard / mouse 29 implements the annotation image input unit 5 shown in FIG. 1.
- the monitor 28 implements the display unit 10.
- the AR display device 1 includes an annotation image editing unit 6, a world coordinate setting unit 7, a perspective projection unit 8, and an AR superimposing unit 9.
- the annotation image editing unit 6 acquires an annotation image such as text or a figure from the annotation image input unit 5 and edits the acquired annotation image.
- the world coordinate setting unit 7 sets the three-dimensional coordinates of the annotation image to an arbitrary point in the point cloud data. More specifically, the world coordinate setting unit 7 acquires point cloud data representing the three-dimensional shape of the subject. In addition, the world coordinate setting unit 7 selects one of the plurality of points in the point cloud data, and associates the three-dimensional coordinates set for the selected point with the annotation image.
- the superimposition position of the annotation image is defined by the points (points of the point cloud data) selected by the world coordinate setting unit 7. For example, the display area of the text 51 in FIG. 9 is defined by specifying the positions, in the RGB image (also referred to as a captured image), of the upper left vertex and the lower right vertex of the rectangle of the text 51.
- in this case, the world coordinate setting unit 7 selects, from the plurality of points of the point cloud data, the point corresponding to the position in the RGB image of the upper left vertex of the rectangle of the text 51 and the point corresponding to the position in the RGB image of the lower right vertex of the rectangle of the text 51.
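- as an illustrative sketch (not the patent's own implementation), the selection of the point corresponding to a pixel position can be written as follows in Python, assuming an organized point cloud in which each RGB pixel has one XYZ entry measured from the same viewpoint; the function name and the NaN convention for unmeasured points are assumptions.

```python
import numpy as np

def point_for_pixel(point_cloud, u, v):
    """Return the 3D coordinates of the point-cloud point corresponding to pixel (u, v).

    Assumes an organized point cloud of shape (height, width, 3) aligned with the
    RGB image, with unmeasured points stored as NaN (both are assumptions).
    """
    xyz = point_cloud[v, u]
    if np.any(np.isnan(xyz)):
        raise ValueError(f"no valid 3D measurement at pixel ({u}, {v})")
    return xyz

# hypothetical usage: the two vertices of the text rectangle become 3D coordinates
# top_left_3d     = point_for_pixel(cloud, u=120, v=80)
# bottom_right_3d = point_for_pixel(cloud, u=260, v=110)
```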
- the world coordinate setting unit 7 is an example of a point cloud data acquisition unit and an association unit.
- the operations performed in the world coordinate setting unit 7 are examples of point cloud data acquisition processing and association processing.
- the perspective projection unit 8 projects an annotation image on 3D coordinates onto two-dimensional coordinates (hereinafter also referred to as 2D coordinates).
- the AR superimposing unit 9 superimposes the annotation image projected on the 2D coordinates by the perspective projection unit 8 on the RGB image.
- the image input unit 2 simultaneously measures the color and distance of the subject.
- the RGB image generation unit 3 generates an RGB image from the color of the subject.
- the point cloud data generation unit 4 generates point cloud data from the distance to the subject. In the RGB image and the point cloud data, the same subject is captured from the same position and the same angle. That is, the 3D sensor 22 generates an RGB image and point cloud data in parallel for the same subject.
- the annotation image input unit 5 inputs an annotation image such as text or graphics using a keyboard, a mouse, or the like.
- the display unit 10 displays the superimposition result of the AR superimposing unit 9. As described above, the image input unit 2, the RGB image generation unit 3, and the point cloud data generation unit 4 are realized by the 3D sensor 22 shown in FIG. 7.
- the annotation image input unit 5 is realized by the keyboard / mouse 29 shown in FIG. 7.
- the display unit 10 is realized by the monitor 28 shown in FIG. 7.
- the image input unit 2 inputs the subject's color and distance measurement results to the RGB image generation unit 3 and the point cloud data generation unit 4.
- the RGB image generation unit 3 generates an RGB image and inputs the generated RGB image to the AR superimposing unit 9.
- the point cloud data generation unit 4 generates 3D coordinate point cloud data of the outline of the subject, and inputs the generated point cloud data to the world coordinate setting unit 7.
- the annotation image input unit 5 generates an annotation image such as text or graphics, and inputs the generated annotation image to the annotation image editing unit 6.
- the annotation image editing unit 6 edits an annotation image such as text or graphics, and inputs the edited annotation image to the world coordinate setting unit 7.
- the world coordinate setting unit 7 acquires an annotation image and point cloud data.
- the world coordinate setting unit 7 selects an arbitrary point from the plurality of points in the point cloud data, and associates the 3D coordinates set for the selected point with the annotation image to obtain an annotation image with 3D coordinates. Further, the world coordinate setting unit 7 inputs the 3D coordinate annotation image to the perspective projection unit 8.
- the perspective projection unit 8 acquires a 3D coordinate annotation image and projects the 3D coordinate annotation image onto the 2D coordinate. Further, the perspective projection unit 8 inputs the annotation image projected on the 2D coordinates to the AR superimposing unit 9.
- the AR superimposing unit 9 acquires the annotation image projected on the 2D coordinates, and superimposes the annotation image projected on the 2D coordinates on the RGB image. Further, the AR superimposing unit 9 inputs the superimposition result to the display unit 10.
- the display unit 10 displays the superimposed result of the AR superimposing unit 9 as an AR display for the subject.
- the image input unit 2 captures a subject. More specifically, in the image input (step S2), the 3D sensor 22 captures the subject.
- in RGB image generation (step S3), the RGB image generation unit 3 generates an RGB image. More specifically, an RGB image having red, green, and blue color information of the subject is generated using a CCD (Charge Coupled Device) image sensor or a CMOS (Complementary Metal Oxide Semiconductor) image sensor in the 3D sensor 22.
- in point cloud data generation (step S4), the point cloud data generation unit 4 generates point cloud data. More specifically, based on the time taken for the infrared rays emitted from the infrared output device in the 3D sensor 22 to be reflected by the subject and return to the infrared receiver, point cloud data is generated as a set of 3D coordinate points of the outer shape of the subject, with the 3D sensor 22 as the origin.
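- purely as an illustration of this step, the sketch below converts per-pixel time-of-flight measurements into point cloud data with the 3D sensor at the origin, using a pinhole model; the intrinsic parameters fx, fy, cx, cy and the absence of lens distortion correction are assumptions, not details from the patent.

```python
import numpy as np

SPEED_OF_LIGHT = 299_792_458.0  # m/s

def depth_to_point_cloud(round_trip_time, fx, fy, cx, cy):
    """Build an organized (H, W, 3) point cloud from per-pixel infrared round-trip times.

    Time-of-flight principle: distance = speed_of_light * round_trip_time / 2.
    Each pixel is then back-projected along its viewing ray with assumed pinhole
    intrinsics fx, fy, cx, cy; distortion correction and filtering are omitted.
    """
    z = SPEED_OF_LIGHT * round_trip_time / 2.0      # depth in meters
    h, w = z.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    x = (u - cx) * z / fx
    y = (v - cy) * z / fy
    return np.stack([x, y, z], axis=-1)             # 3D sensor is at the origin
```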
- the annotation image input unit 5 inputs the annotation image to the annotation image editing unit 6. More specifically, in the annotation image input (step S5), the operator of the AR display device 1 inputs the annotation image to the AR display device 1 by operating the keyboard, mouse, or the like.
- in annotation image editing (step S6), the annotation image editing unit 6 edits text and graphics in the annotation image.
- in step S7, the 3D coordinates of an arbitrary point of the point cloud data of the subject are given to the annotation image. More specifically, the world coordinate setting unit 7 selects any one of the plurality of points in the point cloud data in accordance with an instruction from the operator of the AR display device 1, and associates the 3D coordinates of the selected point with the annotation image.
- in perspective projection (step S8), the perspective projection unit 8 projects the annotation image of 3D coordinates onto 2D coordinates. More specifically, the perspective projection unit 8 converts (X, Y, Z), the three-dimensional coordinates of the annotation image, into the coordinates (u, v) of the projection image, for example by the projective transformation of Equation 1: s [u, v, 1]^T = A [R | t] [X, Y, Z, 1]^T. In Equation 1, [R | t] represents the extrinsic parameters (the rotation R and translation t of the 3D sensor 22 with respect to the world coordinate system), A represents the intrinsic parameter matrix of the camera, and s is a scale factor.
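- a minimal numerical sketch of the projective transformation of Equation 1 (written with NumPy for illustration; the matrices R, t, and A are assumed to be known, and lens distortion is ignored):

```python
import numpy as np

def project_points(points_3d, R, t, A):
    """Apply Equation 1: s*[u, v, 1]^T = A [R|t] [X, Y, Z, 1]^T to Nx3 points.

    R (3x3 rotation), t (length-3 translation), and A (3x3 intrinsic matrix)
    are assumed inputs in this sketch.
    """
    cam = points_3d @ R.T + t           # world coordinates -> camera coordinates
    uvw = cam @ A.T                     # apply the intrinsic matrix
    return uvw[:, :2] / uvw[:, 2:3]     # perspective divide yields (u, v)
```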
- the AR superimposing unit 9 superimposes the projection image of the annotation image on the RGB image.
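- as a rough sketch of this superimposition step (the drawing parameters are illustrative and not specified in the patent), the projected annotation text can be drawn onto the RGB image as follows:

```python
import cv2

def superimpose_text(rgb_image, projected_uv, text):
    """Draw an annotation text at its projected 2D position (illustrative sketch)."""
    u, v = int(projected_uv[0]), int(projected_uv[1])
    out = rgb_image.copy()
    cv2.putText(out, text, (u, v), cv2.FONT_HERSHEY_SIMPLEX,
                fontScale=0.6, color=(0, 255, 0), thickness=2)
    return out

# e.g. ar_image = superimpose_text(rgb, project_points(annotation_3d, R, t, A)[0], "pipe dimensions")
```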
- in display (step S10), the display unit 10 displays the superimposition result of the AR superimposition (step S9).
- as described above, in the present embodiment, the annotation image is mapped to the point cloud data, which gives the 3D coordinates of the subject, and the projected image of the annotation, following an arbitrary position of the 3D sensor, is superimposed on the RGB image, so that AR can be realized.
- *** Explanation of configuration *** FIG. 3 shows a functional configuration example of the AR editing device 15 according to the present embodiment.
- the AR editing device 15 according to the present embodiment is also an example of a data processing device.
- the processing performed by the AR editing device 15 according to the present embodiment also corresponds to an example of a data processing method and a data processing program.
- the hardware configuration example of the AR editing device 15 is as shown in FIG. 7, similarly to the AR display device 1 according to the first embodiment.
- the perspective projection unit 8, the AR superimposing unit 9, and the display unit 10 are deleted from the configuration of the AR display device 1 of FIG. 1.
- an image feature point extraction unit 11, an AR data output unit 12, and AR data 13 are added to the configuration of the AR display device 1 of FIG. 1.
- the image feature point extraction unit 11 and the AR data output unit 12 are realized by a program, which is executed by the CPU 21 in FIG. 7.
- the image feature point extraction unit 11 analyzes the RGB image and extracts image feature points of the RGB image. Image feature points exist mainly at discontinuous points in the RGB image. Each point in FIG. 10 represents an image feature point.
- the image feature point extraction unit 11 extracts image feature points by, for example, the Harris method, the KTK method, the Canny method, the zero crossing method, the relaxation method, the Hough transform, the dynamic contour method, the level set method, and the like.
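- for illustration only, the following is a minimal sketch of image feature point extraction with the Harris method, one of the detectors listed above, using OpenCV; the file name and the threshold value are assumptions.

```python
import cv2
import numpy as np

rgb = cv2.imread("subject.png")                          # hypothetical input RGB image
gray = cv2.cvtColor(rgb, cv2.COLOR_BGR2GRAY).astype(np.float32)

response = cv2.cornerHarris(gray, blockSize=2, ksize=3, k=0.04)
mask = response > 0.01 * response.max()                  # keep strong corner responses
vs, us = np.nonzero(mask)
image_feature_points = np.stack([us, vs], axis=1)        # (u, v) per image feature point
```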
- the AR data 13 is data in which 3D coordinates in the world coordinate system of image feature points are recorded.
- the AR data output unit 12 outputs the AR data 13 to the outside of the AR editing device 15.
- in FIG. 3, the image input unit 2, the RGB image generation unit 3, the point cloud data generation unit 4, the annotation image input unit 5, and the annotation image editing unit 6 are the same as those in the first embodiment, and thus description thereof is omitted.
- in the present embodiment, the world coordinate setting unit 7 selects an arbitrary point from the plurality of points in the point cloud data and associates the three-dimensional coordinates set for the selected point with the annotation image, as in the first embodiment. Further, the world coordinate setting unit 7 extracts a point corresponding to an image feature point from the plurality of points of the point cloud data, and associates the three-dimensional coordinates set for the extracted point with the image feature point.
- the image feature point extraction unit 11 extracts image feature points of the RGB image, and inputs the extracted image feature points to the world coordinate setting unit 7.
- the world coordinate setting unit 7 acquires an annotation image from the annotation image editing unit 6 and acquires point cloud data from the point cloud data generation unit 4, as in the first embodiment. Then, as in the first embodiment, the world coordinate setting unit 7 selects one of the plurality of points in the point cloud data and associates the three-dimensional coordinates set for the selected point with the annotation image.
- the three-dimensional coordinates associated with the annotation image are referred to as first three-dimensional coordinates.
- the world coordinate setting unit 7 acquires image feature points from the image feature point extraction unit 11, extracts the points corresponding to the acquired image feature points from the plurality of points of the point cloud data, and associates the 3D coordinates set for the extracted points with the image feature points.
- the three-dimensional coordinates associated with the image feature points are referred to as second three-dimensional coordinates.
- the world coordinate setting unit 7 inputs the first three-dimensional coordinates and the second three-dimensional coordinates as AR data 13 to the AR data output unit 12.
- the AR data output unit 12 outputs the AR data 13 to the outside of the AR editing device 15.
- the image input (step S2), RGB image generation (step S3), point cloud data generation (step S4), annotation image input (step S5), and annotation image editing (step S6) in FIG. 4 are the same as those shown in FIG. 2, so description thereof is omitted.
- the image feature point extraction unit 11 extracts image feature points from the RGB image.
- the image feature amount is described by the gradient of the brightness of the peripheral pixels of each image feature point.
- the world coordinate setting unit 7 records the 3D coordinates in the world coordinate system of the annotation image and of the image feature points (the first three-dimensional coordinates and the second three-dimensional coordinates), and the AR data 13 is generated.
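- the patent does not specify a file format for the AR data 13; the following sketch merely illustrates one possible in-memory layout that pairs each image feature point's descriptor with its second three-dimensional coordinates, again assuming a point cloud aligned with the RGB image.

```python
import numpy as np

def build_ar_data(feature_points_2d, descriptors, organized_cloud, annotation_points_3d):
    """Assemble a hypothetical AR data record (the layout is an assumption).

    feature_points_2d    : (N, 2) integer (u, v) image feature points
    descriptors          : (N, D) local descriptors of those feature points
    organized_cloud      : (H, W, 3) point cloud aligned with the RGB image
    annotation_points_3d : (M, 3) first three-dimensional coordinates (annotation image)
    """
    us, vs = feature_points_2d[:, 0], feature_points_2d[:, 1]
    second_3d = organized_cloud[vs, us]              # second three-dimensional coordinates
    valid = ~np.isnan(second_3d).any(axis=1)         # drop feature points without a 3D measurement
    return {
        "annotation_3d": annotation_points_3d,       # where the annotation image is anchored
        "feature_3d": second_3d[valid],              # world coordinates of the image feature points
        "descriptors": descriptors[valid],           # used later to match against a new RGB image
    }
```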
- the AR data output unit 12 outputs the AR data to the outside of the AR editing device 15.
- as described above, in the present embodiment, AR data in which the image feature points extracted from the RGB image of the subject are mapped to the point cloud data, that is, to 3D coordinates, can be prepared without accumulating a large number of images in a database in advance, and can be generated at high speed without generating a developed image.
- FIG. 5 shows a functional configuration example of the AR display device 100 according to the present embodiment.
- the AR display device 100 according to the present embodiment is also an example of a data processing device.
- the processing performed by the AR display device 100 according to the present embodiment also corresponds to examples of the data processing method and the data processing program.
- the hardware configuration example of the AR display device 100 according to the present embodiment is as shown in FIG. 7, similarly to the AR display device 1 according to the first embodiment.
- the point cloud data generation unit 4, the annotation image input unit 5, the annotation image editing unit 6, and the world coordinate setting unit 7 are deleted from the configuration of the AR display device 1 of FIG. 1.
- an image feature point extraction unit 11, a position estimation unit 14, and an AR data input unit 16 are added to the configuration of the AR display device 1 of FIG. 1.
- the image feature point extraction unit 11 and the position estimation unit 14 are realized by a program, and this program is executed by the CPU 21 of FIG. 7.
- the AR data input unit 16 is realized by the keyboard / mouse 29 of FIG. 7.
- the image feature point extraction unit 11 is the same as that shown in FIG. 3, analyzes the RGB image, and extracts image feature points of the RGB image.
- the operation performed by the image feature point extraction unit 11 is an example of image feature point extraction processing.
- the AR data input unit 16 acquires the AR data 13.
- the AR data 13 is the same as that described in the second embodiment.
- the position estimation unit 14 estimates the position of the 3D sensor 22 based on the 3D coordinates of the image feature points in the world coordinate system and their 2D coordinates in the RGB image (the 2D coordinates of the image feature points obtained by projective transformation of the 3D coordinates of the image feature points).
- the position estimation unit 14 estimates the position when the 3D sensor 22 captures an RGB image based on the 3D coordinates of the image feature points and the 2D coordinates of the image feature points in the RGB image.
- the operation performed by the position estimation unit 14 is an example of position estimation processing.
- the AR data input unit 16 inputs the AR data 13 to the perspective projection unit 8 and the position estimation unit 14.
- the position estimation unit 14 estimates the position of the 3D sensor 22 from the 3D coordinates of the image feature points in the world coordinate system and the 2D coordinates in the RGB image, and inputs the estimated 3D sensor 22 position to the perspective projection unit 8.
- the image input (step S2), RGB image generation (step S3), perspective projection (step S8), AR superimposition (step S9), and display (step S10) in FIG. 6 are the same as those shown in FIG. 2, so their description is omitted. Further, the processing of image feature point extraction (step S11) is the same as that in FIG. 4.
- the AR data input unit 16 inputs the AR data 13 to the perspective projection unit 8.
- the position estimation unit 14 estimates the position of the 3D sensor 22 from the RGB image. Specifically, the position estimation unit 14 detects the coordinates x on the RGB image that correspond to an image feature point of three-dimensional coordinates (X, Y, Z) by matching the image feature amounts. If x̂ denotes the coordinates obtained by reprojecting the three-dimensional coordinates (X, Y, Z) of the image feature point onto the RGB image by Equation 1, the reprojection error for that feature point is the Euclidean distance d(x, x̂) between x and x̂.
- the total reprojection error E over the i image feature points can be obtained using Equation 2: E = Σ_i d(x_i, x̂_i)^2.
- the position estimation unit 14 estimates the position of the 3D sensor 22 that minimizes the error E over the i image feature points, that is, the extrinsic parameters [R | t] in Equation 1.
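- a sketch of this minimization using OpenCV's PnP solver rather than the patent's own formulation (the intrinsic matrix A and the neglect of lens distortion are assumptions):

```python
import cv2
import numpy as np

def estimate_sensor_pose(feature_3d, feature_2d, A):
    """Estimate [R|t] of the 3D sensor from matched 3D/2D image feature points
    by minimizing the reprojection error of Equation 2, here via RANSAC PnP.

    feature_3d : (N, 3) world coordinates taken from the AR data
    feature_2d : (N, 2) matched (u, v) positions detected in the current RGB image
    A          : 3x3 intrinsic matrix of the camera in the 3D sensor
    """
    ok, rvec, tvec, inliers = cv2.solvePnPRansac(
        feature_3d.astype(np.float32),
        feature_2d.astype(np.float32),
        A.astype(np.float32),
        None,                              # distortion coefficients assumed negligible
    )
    if not ok:
        raise RuntimeError("position estimation failed")
    R, _ = cv2.Rodrigues(rvec)             # rotation vector -> 3x3 rotation matrix
    return R, tvec.reshape(3)
```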
- the position estimation unit 14 inputs the estimated position of the 3D sensor 22 to the perspective projection unit 8.
- as described above, in the present embodiment, the AR data in which the image feature points extracted from the RGB image of the subject are mapped to the point cloud data, that is, 3D coordinate data, is used for estimating the position of the 3D sensor. Since it is not necessary to match a developed image of a 3D model against RGB images captured at each 3D sensor position and stored in a database in advance, neither the developed image nor the stored RGB images are required.
- the CPU 21 and the GPU 25 illustrated in FIG. 7 are ICs (Integrated Circuits) that perform processing.
- the memory 23 and the frame memory 26 illustrated in FIG. 7 are a RAM (Random Access Memory), a flash memory, an HDD (Hard Disk Drive), and the like.
- the memory 23 also stores an OS (Operating System).
- the CPU 21 executes the program that realizes the functions of the annotation image editing unit 6, the world coordinate setting unit 7, the perspective projection unit 8, the image feature point extraction unit 11, the AR data output unit 12, and the position estimation unit 14 while executing at least a part of the OS.
- while the CPU 21 executes the OS, task management, memory management, file management, communication control, and the like are performed.
- information, data, signal values, variable values, and the like indicating the processing results of the annotation image editing unit 6, the world coordinate setting unit 7, the perspective projection unit 8, the image feature point extraction unit 11, the AR data output unit 12, and the position estimation unit 14 are stored in the memory 23, or in a register or cache memory in the CPU 21.
- the programs for realizing the functions of the annotation image editing unit 6, the world coordinate setting unit 7, the perspective projection unit 8, the image feature point extraction unit 11, the AR data output unit 12, the position estimation unit 14, and the AR superimposing unit 9 may be stored in a portable storage medium.
- the AR display device 1, the AR editing device 15, and the AR display device 100 may each be realized by an electronic circuit such as a logic IC (Integrated Circuit), a GA (Gate Array), an ASIC (Application Specific Integrated Circuit), or an FPGA (Field-Programmable Gate Array).
- the processor and the electronic circuit are also collectively referred to as a processing circuit.
- 1 AR display device, 2 image input unit, 3 RGB image generation unit, 4 point cloud data generation unit, 5 annotation image input unit, 6 annotation image editing unit, 7 world coordinate setting unit, 8 perspective projection unit, 9 AR superimposing unit, 10 display unit, 11 image feature point extraction unit, 12 AR data output unit, 13 AR data, 14 position estimation unit, 15 AR editing device, 16 AR data input unit, 21 CPU, 22 3D sensor, 23 memory, 25 GPU, 26 frame memory, 27 RAMDAC, 28 monitor, 29 keyboard / mouse, 50 graphics, 51 text, 100 AR display device.
Landscapes
- Engineering & Computer Science (AREA)
- Computer Graphics (AREA)
- Computer Hardware Design (AREA)
- General Engineering & Computer Science (AREA)
- Software Systems (AREA)
- Physics & Mathematics (AREA)
- General Physics & Mathematics (AREA)
- Theoretical Computer Science (AREA)
- Processing Or Creating Images (AREA)
Abstract
According to the invention, a world coordinate setting unit (7) acquires point cloud data that represents the three-dimensional shape of an object and comprises a plurality of points, three-dimensional coordinates being set for each of these points. In addition, the world coordinate setting unit (7) extracts, from the plurality of points in the point cloud data, the points corresponding to the image feature points included in a captured image of the object, and then associates the three-dimensional coordinates set for the extracted points with the image feature points.
Priority Applications (3)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
PCT/JP2016/059480 WO2017163384A1 (fr) | 2016-03-24 | 2016-03-24 | Dispositif, procédé et programme de traitement de données |
JP2017548475A JP6293386B2 (ja) | 2016-03-24 | 2016-03-24 | データ処理装置、データ処理方法及びデータ処理プログラム |
TW105117710A TW201734954A (zh) | 2016-03-24 | 2016-06-04 | 資料處理裝置、資料處理方法以及資料處理程式產品 |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
PCT/JP2016/059480 WO2017163384A1 (fr) | 2016-03-24 | 2016-03-24 | Dispositif, procédé et programme de traitement de données |
Publications (1)
Publication Number | Publication Date |
---|---|
WO2017163384A1 true WO2017163384A1 (fr) | 2017-09-28 |
Family
ID=59900048
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
PCT/JP2016/059480 WO2017163384A1 (fr) | 2016-03-24 | 2016-03-24 | Dispositif, procédé et programme de traitement de données |
Country Status (3)
Country | Link |
---|---|
JP (1) | JP6293386B2 (fr) |
TW (1) | TW201734954A (fr) |
WO (1) | WO2017163384A1 (fr) |
Families Citing this family (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
TWI642903B (zh) * | 2017-10-13 | 2018-12-01 | 緯創資通股份有限公司 | 用於頭戴式顯示裝置的定位方法、定位器以及定位系統 |
CN110163904B (zh) * | 2018-09-11 | 2022-04-22 | 腾讯大地通途(北京)科技有限公司 | 对象标注方法、移动控制方法、装置、设备及存储介质 |
Family Cites Families (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP5295416B1 (ja) * | 2012-08-01 | 2013-09-18 | ヤフー株式会社 | 画像処理装置、画像処理方法及び画像処理プログラム |
- 2016-03-24 WO PCT/JP2016/059480 patent/WO2017163384A1/fr active Application Filing
- 2016-03-24 JP JP2017548475A patent/JP6293386B2/ja active Active
- 2016-06-04 TW TW105117710A patent/TW201734954A/zh unknown
Patent Citations (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP2013225245A (ja) * | 2012-04-23 | 2013-10-31 | Sony Corp | 画像処理装置、画像処理方法及びプログラム |
WO2014162852A1 (fr) * | 2013-04-04 | 2014-10-09 | ソニー株式会社 | Dispositif de traitement d'image, procédé de traitement d'image et programme |
Non-Patent Citations (1)
Title |
---|
HIROYUKI UCHIYAMA ET AL.: "Method for Displaying Images on Urban Structures by Augmented Reality", IEICE TECHNICAL REPORT, vol. 111, no. 500, 2012, pages 141 - 146, XP055422234 * |
Cited By (12)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP2018169824A (ja) * | 2017-03-30 | 2018-11-01 | 株式会社パスコ | 道路施設管理支援装置及び道路施設管理支援プログラム |
CN107918955A (zh) * | 2017-11-15 | 2018-04-17 | 百度在线网络技术(北京)有限公司 | 增强现实方法和装置 |
WO2019098318A1 (fr) * | 2017-11-20 | 2019-05-23 | パナソニック インテレクチュアル プロパティ コーポレーション オブ アメリカ | Procédé de génération de données de groupe de points tridimensionnels, procédé d'estimation de position, dispositif de génération de données de groupe de points tridimensionnels et dispositif d'estimation de position |
CN111373442A (zh) * | 2017-11-20 | 2020-07-03 | 松下电器(美国)知识产权公司 | 三维点群数据生成方法、位置推断方法、三维点群数据生成装置以及位置推断装置 |
JPWO2019098318A1 (ja) * | 2017-11-20 | 2020-11-19 | パナソニック インテレクチュアル プロパティ コーポレーション オブ アメリカPanasonic Intellectual Property Corporation of America | 三次元点群データ生成方法、位置推定方法、三次元点群データ生成装置、および、位置推定装置 |
JP7325332B2 (ja) | 2017-11-20 | 2023-08-14 | パナソニック インテレクチュアル プロパティ コーポレーション オブ アメリカ | 三次元点群データ生成方法、および、三次元点群データ生成装置 |
JP2019212225A (ja) * | 2018-06-08 | 2019-12-12 | 朝日航洋株式会社 | 端末装置および端末装置の制御方法 |
CN113168904A (zh) * | 2018-11-20 | 2021-07-23 | 阿特瑞斯公司 | 基于云的放射学评述和工作空间共享 |
EP3864663A4 (fr) * | 2018-11-20 | 2022-09-28 | Arterys Inc. | Commentaire de radiologie en nuage et partage d'espace de travail |
US11915821B2 (en) | 2018-11-20 | 2024-02-27 | Arterys Inc. | Cloud-based radiology commenting and workspace sharing |
WO2022044755A1 (fr) * | 2020-08-27 | 2022-03-03 | パシフィックコンサルタンツ株式会社 | Dispositif de gestion d'équipement, appareil de gestion d'équipement, programme de gestion d'équipement et support d'enregistrement |
JP2022038803A (ja) * | 2020-08-27 | 2022-03-10 | パシフィックコンサルタンツ株式会社 | 設備管理プログラム、設備管理方法、および設備管理システム |
Also Published As
Publication number | Publication date |
---|---|
JP6293386B2 (ja) | 2018-03-14 |
TW201734954A (zh) | 2017-10-01 |
JPWO2017163384A1 (ja) | 2018-04-05 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
JP6293386B2 (ja) | データ処理装置、データ処理方法及びデータ処理プログラム | |
CN111783820B (zh) | 图像标注方法和装置 | |
JP5094663B2 (ja) | 位置姿勢推定用モデル生成装置、位置姿勢算出装置、画像処理装置及びそれらの方法 | |
US8467596B2 (en) | Method and apparatus for object pose estimation | |
CN104715479A (zh) | 基于增强虚拟的场景复现检测方法 | |
WO2021258579A1 (fr) | Procédé et appareil d'épissage d'image, dispositif informatique et support de stockage | |
JP5538868B2 (ja) | 画像処理装置、その画像処理方法及びプログラム | |
CN112712487B (zh) | 一种场景视频融合方法、系统、电子设备及存储介质 | |
CN109934873B (zh) | 标注图像获取方法、装置及设备 | |
JP2010287174A (ja) | 家具シミュレーション方法、装置、プログラム、記録媒体 | |
TW202011353A (zh) | 深度資料處理系統的操作方法 | |
WO2021017589A1 (fr) | Procédé de fusion d'images basé sur une mise en correspondance de domaine de gradient | |
CN113379815A (zh) | 基于rgb相机与激光传感器的三维重建方法、装置及服务器 | |
JP2010205095A (ja) | 3次元物体認識装置、並びに3次元物体認識プログラム及びこれが記録されたコンピュータ読み取り可能な記録媒体 | |
JP2006113832A (ja) | ステレオ画像処理装置およびプログラム | |
WO2019080257A1 (fr) | Dispositif électronique, procédé d'affichage d'image panoramique de scène d'accident de véhicule et support d'informations | |
JP6341540B2 (ja) | 情報端末装置、方法及びプログラム | |
KR100466587B1 (ko) | 합성영상 컨텐츠 저작도구를 위한 카메라 정보추출 방법 | |
JP2008299670A (ja) | 画像領域抽出装置およびその制御方法、複合現実感提示システム及びコンピュータプログラム | |
JP2002135807A (ja) | 3次元入力のためのキャリブレーション方法および装置 | |
WO2021176877A1 (fr) | Dispositif de traitement d'images, procédé de traitement d'images, et programme de traitement d'images | |
EL Abbadi et al. | Panoramic Image Stitching Techniques Based on SURF and Singular Value Decomposition | |
JP2019159375A (ja) | 情報処理装置,重畳表示プログラム,重畳表示方法 | |
JP5719277B2 (ja) | 物体座標系変換行列推定成否判定装置および物体座標系変換行列推定成否判定方法ならびにそのプログラム | |
JP7329964B2 (ja) | 画像処理装置および画像処理方法およびプログラム |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
ENP | Entry into the national phase |
Ref document number: 2017548475 Country of ref document: JP Kind code of ref document: A |
|
NENP | Non-entry into the national phase |
Ref country code: DE |
|
121 | Ep: the epo has been informed by wipo that ep was designated in this application |
Ref document number: 16895420 Country of ref document: EP Kind code of ref document: A1 |
|
122 | Ep: pct application non-entry in european phase |
Ref document number: 16895420 Country of ref document: EP Kind code of ref document: A1 |