WO2015040657A1 - Image Processing Device, Image Processing Method, and Image Processing Program - Google Patents
- Publication number
- WO2015040657A1 (PCT/JP2013/005597)
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- pixel region
- marker
- vertex
- image processing
- pixel
- Prior art date
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/10—Segmentation; Edge detection
- G06T7/12—Edge-based segmentation
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T19/00—Manipulating 3D models or images for computer graphics
- G06T19/006—Mixed reality
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/70—Determining position or orientation of objects or cameras
- G06T7/73—Determining position or orientation of objects or cameras using feature-based methods
- G06T7/74—Determining position or orientation of objects or cameras using feature-based methods involving reference images or patches
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/20—Image preprocessing
- G06V10/24—Aligning, centring, orientation detection or correction of the image
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/10—Image acquisition modality
- G06T2207/10024—Color image
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/20—Special algorithmic details
- G06T2207/20112—Image segmentation details
- G06T2207/20164—Salient point detection; Corner detection
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/30—Subject of image; Context of image processing
- G06T2207/30204—Marker
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/20—Image preprocessing
- G06V10/24—Aligning, centring, orientation detection or correction of the image
- G06V10/245—Aligning, centring, orientation detection or correction of the image by locating a pattern; Special marks for positioning
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V20/00—Scenes; Scene-specific elements
- G06V20/20—Scenes; Scene-specific elements in augmented reality scenes
Definitions
- the present invention relates to an image processing apparatus, an image processing method, and an image processing program for displaying display information at an arbitrary position on an image, for example.
- In recent years, augmented reality (AR) technology has been developed.
- An interface based on augmented reality technology captures, with a camera fixed at an arbitrary place or a freely movable camera, an image including a marker (which may be referred to as a two-dimensional code) attached to an object in front of the user's eyes, and then has a function of displaying, on a display, additional information related to the object as display information associated with the marker.
- In the image processing method based on augmented reality, the display information is superimposed on the display image of a camera-equipped PC or portable terminal (the display image may be referred to as a preview screen) in accordance with a reference position determined from the four corners of the captured marker and a physical coordinate system based on that reference position.
- With this image processing method, it is possible to associate the display information with the display position of an object.
- Using augmented reality technology, functions that support on-site work are realized, such as identifying the fault location when a fault occurs in an electronic device and supporting the user's fault repair work.
- For example, a marker having a specific shape feature is attached to an object to be worked on or provided near the object, and display information is superimposed and displayed in real time on the image captured by the camera, based on various information included in the marker.
- For example, in repair work support for a paper-jam failure of a copier, a technology has been proposed that superimposes, on the copier serving as the object to be worked on, an internal image of the copier and an operation procedure associated with the paper-jam occurrence position.
- The following improvement effects are expected from applying image processing technology using augmented reality to the above-mentioned field work.
- (2-1) Simply by photographing an object to be worked on using the camera of an image processing apparatus that stores an electronic manual, the work locations and procedures relating to that object are presented in real time as display information (which may be referred to as object icons) superimposed on the display image (which may be referred to as a camera preview image). As a result, it is possible to shorten the user's work time and reduce work mistakes.
- (2-2) Work results are recorded by switching to a work-result input screen, for example by pressing and selecting the display information displayed in association with the work target object. Since the work results are already digitized at the time they are recorded, the work result data can easily be organized and edited. In this way, image processing technology using augmented reality is expected to make various kinds of work at the work site more efficient.
- Regarding the display operation of display information in current marker-based image processing technology using augmented reality: for example, while recognition of the marker included in the image succeeds, the display information is displayed at a position relative to the reference position, based on the recognized position and orientation of the marker in the image.
- The reference position of the marker is determined, for example, from the four corners of the marker as described above. Since the display position of the display information is defined with respect to the reference position of the marker, improving the detection accuracy of the reference position of the marker is important for ensuring the visibility of the display information to the user.
- An object of the present invention is to provide an image processing apparatus that improves the visibility of display information to a user.
- An image processing apparatus disclosed herein includes an acquisition unit that acquires an image including a marker which, by means of an image pattern including a first pixel region and a second pixel region, defines display information and a reference position for displaying the display information.
- The image processing apparatus further includes an extraction unit that extracts: the intersection of a first line segment and a second line segment, the first line segment being defined based on a first boundary line between the first pixel region and the second pixel region in a first connection region in which the first pixel region and the second pixel region are connected in the vertical direction, and on a second boundary line of the first pixel region and the second pixel region that are point-symmetric with respect to the first connection region, and the second line segment being defined analogously from a second connection region in which the first pixel region and the second pixel region are connected in the horizontal direction;
- or a first vertex indicating a vertex of the four-corner shape of the outer edge of the marker including the first pixel region, or a second vertex indicating a vertex of a plurality of the first pixel regions or of a plurality of the second pixel regions.
- the image processing apparatus further includes a calculation unit that calculates the display position of the display information using the first vertex, the second vertex, or the intersection as a reference position.
- According to the disclosed technique, the visibility of display information to the user can be improved.
- (A) is an example of a marker.
- (B) is a conceptual diagram of a marker in which the shape of the four corners is unclear due to a quantization error.
- (C) is a conceptual diagram of a marker whose four-corner shapes are unclear due to imaging blur.
- It is a functional block diagram of the image processing apparatus 1 according to the first embodiment.
- (A) is a conceptual diagram of a four-corner shape.
- (B) is a checkerboard-shaped conceptual diagram.
- (C) is a conceptual diagram of a polygonal shape.
- (A) is a conceptual diagram of the polygonal shape in an input image.
- (B) is a conceptual diagram of a checkered shape in the input image.
- (A) is a conceptual diagram of the first line segment and the second line segment in the checkered shape.
- (B) is a conceptual diagram of the first line segment and the second line segment in another shape.
- It is a flowchart of image processing by the image processing apparatus 1.
- It is a functional block diagram of the image processing apparatus 1 according to the second embodiment.
- (A) is an example of the 1st marker which showed the 1st vertex, the 2nd vertex, and the intersection.
- (B) is an example of the 2nd marker which showed the 1st vertex, the 2nd vertex, and the intersection.
- (C) is an example of a table including reliability.
- It is a hardware block diagram of a computer that functions as the image processing apparatus 1 according to one embodiment.
- First, the location of the problems in the prior art will be described.
- These problems were newly found as a result of the present inventors' detailed study of the prior art, and were not previously known.
- The position and orientation information used when displaying the display information can be calculated using, for example, the coordinates of the four corner positions (also referred to as vertices) of the marker as reference coordinates.
- Specifically, the position and orientation information can be calculated based on a translation vector (Tx, Ty, Tz) representing the coordinates of the center position of the marker in a three-dimensional coordinate system (x, y, z) referenced to the photographing direction of the camera that photographs the marker, and rotation vectors (Rx, Ry, Rz) representing the rotation angle of the marker about the respective axes of the translation vector.
- The rotation angle of the marker indicates, for example, how much the marker is rotated about the axes of the translation vectors Tx, Ty, and Tz when the center position of the marker is taken as the origin.
- FIG. 1 is a definition diagram of the rotation vectors and translation vectors.
- As shown in FIG. 1, a translation vector Tz is defined perpendicular to the marker from the origin O at an arbitrary position, and translation vectors Tx and Ty are defined correspondingly. Further, rotation vectors Rx, Ry, and Rz are defined with respect to the rotation directions about the translation vectors Tx, Ty, and Tz, respectively.
- In addition, a three-dimensional coordinate system (X, Y, Z) based on the marker shape is defined, in which the marker center is the origin, the directions of the sides are the X axis and the Y axis, and the normal direction of the marker is the Z axis.
- FIG. 2A is an example of a marker.
- FIG. 2B is a conceptual diagram of a marker in which the shapes of the four corners are unclear due to a quantization error.
- FIG. 2C is a conceptual diagram of a marker whose four-corner shapes are obscured by imaging blur.
- The marker shown in FIG. 2A has, for example, a square shape of 8 modules in height and width, and the outer periphery of one module width is fixed as a black region.
- A marker ID uniquely associated with the marker is represented by a pattern of white regions and black regions in the 6 × 6 modules inside the marker.
- Note that markers having various shapes can be applied in addition to the marker shown in FIG. 2A.
- As shown in FIGS. 2B and 2C, the boundary line between the white regions and the black regions constituting the marker becomes unclear due to quantization error or imaging blur, which gives rise to the problem that the detected positions of the four corner vertices of the marker do not coincide with their original positions.
- Here, the boundary line is defined between the white region and the black region.
- If no quantization error or imaging blur occurs, the boundary line is defined at the original boundary position of the marker.
- If quantization error or imaging blur occurs, however, the black region apparently expands, and the boundary line of the marker is defined outside its original position. In this case, the calculation accuracy of the translation vector and the rotation vector falls, so that the display position of the display information becomes unstable.
- In particular, the decrease in rotation vector calculation accuracy is often noticeable: the display information shakes up and down and left and right around the center of the marker, or a rattling phenomenon occurs, degrading the visibility of the display information. In other words, in order to provide an image processing apparatus that improves the visibility of display information to the user, it is necessary to reduce the influence of quantization error and imaging blur.
- The present inventors examined the following methods as comparative examples with respect to the above-described problem of degraded visibility of display information. For example, in order to reduce the detection error of the vertex coordinates of the four-corner shapes due to quantization error or imaging blur, a method is conceivable in which marker detection processing is performed after interpolating between pixels in the vicinity of the four-corner shapes, so that the vertex coordinates of the four-corner shapes are obtained with an accuracy finer than the pixel interval constituting the marker (sub-pixel accuracy). However, even with this method, there is a limit to how much the quantization error or the error due to imaging blur can be reduced, and the effect is not sufficient.
- A method is also conceivable in which translation vectors or rotation vectors are temporarily stored along a time series, and shaking and rattling are reduced using a smoothing filter or a prediction filter process such as a particle filter.
- However, the prediction filter process is merely a prediction process, and a problem arises from the viewpoint of accuracy with respect to the original correct value.
- Moreover, when the camera or the marker moves quickly, the prediction filter process cannot follow the moving speed, so a problem arises in that a large error is generated instead.
- In addition, a method of increasing the resolution of the imaging lens and imaging element that image the marker is conceivable.
- FIG. 3 is a functional block diagram of the image processing apparatus 1 according to the first embodiment.
- the image processing apparatus 1 includes an acquisition unit 2, a recognition unit 3, a storage unit 4, a generation unit 5, an extraction unit 6, a calculation unit 7, and a display unit 8.
- the image processing apparatus 1 may be configured by an integrated circuit such as an ASIC (Application Specific Integrated Circuit) or an FPGA (Field Programmable Gate Array).
- the acquisition unit 2 is, for example, a hardware circuit based on wired logic.
- the acquisition unit 2 may be a functional module realized by a computer program executed by the image processing apparatus 1.
- the acquisition unit 2 includes a communication unit (not shown), and can bidirectionally transmit and receive data to and from various external devices via a communication line.
- The acquisition unit 2 acquires an image (which may be referred to as an input image) including a marker that defines display information (also referred to as object information) to be displayed by the display unit 8 described later and a reference position for displaying the display information.
- the display information is information including work contents associated with an object to be worked, for example.
- The shape of the marker acquired by the acquisition unit 2 may be, for example, the shape of the marker shown in FIG. 2A.
- the acquisition unit 2 acquires the image from, for example, an image sensor (not shown) (also referred to as a camera) that is an example of an external device.
- the imaging element is, for example, an imaging device such as a CCD (Charge Coupled Device) or a CMOS (Complementary Metal Oxide Semiconductor) camera.
- the image processing apparatus 1 can make the imaging element a part of the components of the image processing apparatus 1.
- The acquisition unit 2 outputs the acquired image to the recognition unit 3. Moreover, the acquisition unit 2 outputs the acquired image to the display unit 8 as needed.
- the recognition unit 3 is a hardware circuit based on wired logic, for example. Further, the recognition unit 3 may be a functional module realized by a computer program executed by the image processing apparatus 1.
- The recognition unit 3 receives the input image from the acquisition unit 2 and detects the position of the marker in the input image. For example, the recognition unit 3 detects the two-dimensional coordinates of the four-corner-shape positions of the marker in a two-dimensional coordinate system whose origin is the upper left corner of the input image, with the horizontal direction as the x axis and the vertical direction as the y axis.
- The recognition unit 3 recognizes the marker ID corresponding to the marker with reference to, for example, the two-dimensional coordinates of the four corner vertices of the marker.
- The marker ID may be recognized using an arbitrary algorithm, or may be recognized by template matching against a plurality of markers stored in advance in the storage unit 4 described later.
- Note that the detection accuracy of the coordinates of the four-corner shapes of the marker in the recognition unit 3 may, in principle, be lower than the accuracy used in calculating the translation vector or the rotation vector, and therefore need not be free from the influence of quantization error or imaging blur at this stage.
- the recognition unit 3 outputs the recognized marker ID to the generation unit 5.
- the recognition unit 3 refers to the storage unit 4, for example, recognizes display information corresponding to the marker ID, and outputs the display information to the display unit 8.
- the storage unit 4 is, for example, a semiconductor memory device such as a flash memory, or a storage device such as an HDD (Hard Disk Drive) or an optical disk.
- the storage unit is not limited to the above-mentioned type of storage device, and may be a RAM (Random Access Memory) or a ROM (Read Only Memory).
- the storage unit 4 stores, for example, a marker used by the recognition unit 3 for template matching, a marker ID associated with the marker, display information corresponding to the marker ID, and the like. Further, the display information includes data of a relative positional relationship with respect to the display surface of the display unit 8 that is displayed based on the reference position. Note that the storage unit 4 is not necessarily included in the image processing apparatus 1.
- the storage unit 4 can be provided in an external device other than the image processing device 1 by using a communication unit (not shown) provided in the image processing device 1 via a communication line. Further, the data stored in the storage unit 4 may be stored in a cache or memory (not shown) included in each functional unit of the image processing apparatus 1 as necessary.
- the generation unit 5 is a hardware circuit based on wired logic, for example.
- the generation unit 5 may be a functional module that is realized by a computer program executed by the image processing apparatus 1.
- the generation unit 5 receives the marker ID from the recognition unit 3.
- The generation unit 5 generates a marker pattern corresponding to the marker ID. The generated marker pattern may be, for example, equivalent in shape to the marker shown in FIG. 2A.
- the generation unit 5 may acquire the marker from the storage unit 4 and use the acquired marker as a marker pattern.
- the extraction unit 6 is a hardware circuit based on wired logic, for example. Further, the extraction unit 6 may be a functional module realized by a computer program executed by the image processing apparatus 1.
- the extraction unit 6 receives the marker pattern from the generation unit 5.
- the extraction unit 6 determines the position of a point of a specific shape feature from the marker pattern as a reference position on the marker pattern.
- the specific shape feature corresponds to, for example, a four-corner shape, a checkered shape, or a polygonal shape.
- FIG. 4A is a conceptual diagram of a four-corner shape.
- FIG. 4B is a conceptual diagram of a checkered shape.
- FIG. 4C is a conceptual diagram of a polygonal shape.
- the black region may be referred to as a first pixel region
- the white region may be referred to as a second pixel region.
- the first pixel region may be inverted to the white region and the second pixel region may be inverted to the black region as necessary (for example, according to the background color of the object to which the marker is added).
- the four-corner shape is the shape of the four corners of the outer edge of the marker composed of the first pixel region.
- a first vertex is set for each of the four corner shapes. The first vertex corresponds to one of the reference positions.
- the checkered shape is a shape in which the plurality of first pixel regions and the plurality of second pixel regions face each other. Also, the center point of the checkered shape is set as an intersection. Note that the intersection point corresponds to one of the reference positions.
- As shown in FIG. 4C, in the polygonal shape, a plurality of first pixel regions are connected in an arbitrary number and an arbitrary shape. Note that the polygonal shape shown in FIG. 4C is an example, and polygonal shapes can be composed of various connected shapes. Further, as shown in FIG. 4C, a second vertex is set at each vertex of the polygonal shape. The second vertex corresponds to one of the reference positions.
- The second vertex is a point where two sides of the polygonal shape intersect. Of the two sides forming the second vertex, the shorter line segment may be referred to as the short side, and the longer line segment may be referred to as the long side.
- the relative positional relationship data included in the display information stored in the storage unit 4 may include the first vertex, the second vertex, and the intersection.
- The marker included in the image acquired by the acquisition unit 2 is, for example, a rectangular marker in which unit-size squares (modules) are arranged vertically and horizontally, as shown in FIG. 2A.
- the marker ID is expressed using the difference in the combination pattern between the white area and the black area of the module. For this reason, when the marker ID stored in the marker is different, the combination pattern of the white area and the black area of the module inside the marker is also different.
- the above-mentioned checkered shape or polygonal shape is formed inside the marker.
- the coordinate positions of the first vertex, the second vertex, and the intersection formed by the four-corner shape, polygonal shape, and checkered shape can be uniquely determined if the recognition unit 3 can recognize the marker ID.
- the corresponding reference position can also be detected from the marker.
- the first vertex, the second vertex, and the intersection can be used as a reference position when calculating the translation vector or the rotation vector.
- For calculating the translation vector and the rotation vector from the reference positions, for example, a least squares method can be used.
- That is, the error at each reference position can be expressed by a mathematical expression, and the translation vector and the rotation vector can be calculated so that the sum of the errors is minimized.
- For example, the calculation unit 7 maps the ideal point PM0n of each reference position in the marker reference coordinate system onto the image.
- The translation vector and the rotation vector can then be calculated so that the error E expressed by the following equation ((Equation 2) below) is minimized:
E = Σn ‖PCn − (R(θx, θy, θz)・PM0n + T)‖²
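- A minimal Python sketch of this least-squares calculation, assuming OpenCV is available and using cv2.solvePnP as the solver that minimizes the reprojection error over the rotation and translation parameters; the function and variable names are illustrative, not taken from the patent:

```python
import numpy as np
import cv2

def estimate_pose(ref_points_marker, ref_points_image, camera_matrix):
    """Recover rotation and translation vectors from extracted reference
    positions.  ref_points_marker: Nx3 ideal points PM0n in the marker
    coordinate system (Z = 0 on the marker plane); ref_points_image: Nx2
    detected reference positions in the input image."""
    dist_coeffs = np.zeros(4)  # assume negligible lens distortion
    ok, rvec, tvec = cv2.solvePnP(
        np.asarray(ref_points_marker, dtype=np.float64),
        np.asarray(ref_points_image, dtype=np.float64),
        camera_matrix, dist_coeffs)
    if not ok:
        raise RuntimeError("pose estimation failed")
    return rvec, tvec  # rotation vector (Rx, Ry, Rz), translation vector (Tx, Ty, Tz)
```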
- Next, the extraction unit 6 determines the coordinate positions in the marker included in the input image that correspond to the reference positions on the marker pattern. For example, the extraction unit 6 can determine, based on the coordinates of the four corners of the marker (the first vertices), a mapping transformation (also referred to as a perspective transformation) from a planar coordinate system whose origin is the center of the marker pattern, with x-axis and y-axis coordinate axes, to the coordinate system of the input image, and project the other reference positions onto the coordinate system of the input image using this mapping transformation. Thereby, the rotation and inclination of the marker in the input image can be corrected. Note that only the marker portion of the input image may be cut out as necessary.
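- This mapping step can be sketched as follows in Python with OpenCV, assuming detected_corners (the four first vertices found in the input image) and reference_positions_on_pattern (the remaining reference positions on the ideal marker pattern) as hypothetical inputs:

```python
import numpy as np
import cv2

# Four corners of the ideal 8x8-module pattern, origin at the pattern center.
pattern_corners = np.float32([[-4, -4], [4, -4], [4, 4], [-4, 4]])
image_corners = np.float32(detected_corners)  # corners from the recognition step

# Perspective (mapping) transformation from pattern coordinates to image coordinates.
H = cv2.getPerspectiveTransform(pattern_corners, image_corners)

# Project the other reference positions (checkered intersections, polygon
# vertices) onto the coordinate system of the input image.
pattern_refs = np.float32(reference_positions_on_pattern).reshape(-1, 1, 2)
image_refs = cv2.perspectiveTransform(pattern_refs, H).reshape(-1, 2)
```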
- Next, the extraction unit 6 extracts the first vertices, the second vertices, and the intersections in the marker of the input image that correspond to the first vertices, the second vertices, and the intersections in the marker pattern. To do so, the extraction unit 6 compares the positions and shapes of the four-corner shapes, checkered shapes, and polygonal shapes of the marker pattern with those of the marker in the input image, and identifies the corresponding four-corner shapes, checkered shapes, and polygonal shapes of the marker in the input image.
- the extraction unit 6 extracts the first vertex, the second vertex, and the intersection that are the reference positions in the input image from the four-corner shape, checkered shape, and polygonal shape specified from the markers included in the input image.
- details of the extraction method of the first vertex, the second vertex, and the intersection as the reference position in the input image by the extraction unit 6 will be described.
- The positions of the first vertices, the second vertices, and the intersections can be obtained, for example, as intersections of the contour straight lines of the outer edges constituting the four-corner shapes, the polygonal shapes, and the checkered shapes.
- A contour straight line can be obtained as an approximate straight line passing through contour points at which the pixel gradation value is close to a specific value (generally referred to as a binarization threshold) between the white region and the black region.
- Although the specific value can be determined in advance or appropriately according to the state of the image, the appropriate value may change due to imaging blur or quantization error. Similarly, the correct position of the contour straight line may become unclear due to imaging blur or quantization error.
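- A contour straight line of this kind can be fitted, for example, by a total-least-squares line through the detected contour points; the following sketch assumes contour_points holds the pixel coordinates found near the binarization threshold:

```python
import numpy as np

def fit_contour_line(contour_points):
    """Fit an approximate straight line through contour points whose gray
    level is close to the binarization threshold.  Returns a point on the
    line and a unit direction vector (PCA / total least squares, which also
    handles near-vertical contour lines)."""
    pts = np.asarray(contour_points, dtype=np.float64)
    centroid = pts.mean(axis=0)
    _, _, vt = np.linalg.svd(pts - centroid)  # principal direction of the points
    direction = vt[0] / np.linalg.norm(vt[0])
    return centroid, direction
```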
- FIG. 5A is a conceptual diagram of a polygonal shape in the input image.
- FIG. 5B is a conceptual diagram of a checkered shape in the input image.
- FIG. 5A and FIG. 5B show a state in which the contour of the outer edge of the polygonal shape or the checkered shape is unclear under the influence of quantization error or imaging blur.
- Using FIG. 5A and FIG. 5B, the case where the above-described specific value does not match the appropriate value will be described.
- In FIG. 5A and FIG. 5B, the contour points obtained when the specific value is set higher (brighter) than the appropriate value are indicated by dotted lines.
- In FIG. 5A, the contour points are detected outside the position of the original contour line of the outer edge, so the contour straight line defined based on these contour points is also detected outside the original contour position.
- As a result, the original second vertex defined based on the polygonal shape in the input image is also detected shifted outward.
- This phenomenon occurs similarly at the first vertices defined based on the four-corner shapes.
- In FIG. 5B, on the other hand, the contour points are detected shifted in point-symmetric directions about the center point of the checkered shape. Since the amount of imaging blur and the amount of quantization error are almost the same in a neighboring area inside the marker, the deviation widths are also almost the same. Therefore, in the contour straight line defined based on these contour points, the deviation amounts cancel each other, and the line is defined at its original position. For this reason, the intersection defined based on the checkered shape is detected at its original position. Details of this technical feature will be described below.
- a checkered black area is defined as a first pixel area
- a white area is defined as a second pixel area.
- a region where the first pixel region and the second pixel region are vertically connected is defined as a first connected region (for example, the right half region in FIG. 4B).
- a boundary line between the first pixel region and the second pixel region in the first connection region is set as a first boundary line.
- a boundary line between the first pixel region and the second pixel region (for example, the left half region in FIG. 4B) that is point-symmetric with respect to the first connection region is defined as a second boundary line.
- a line segment defined based on the first boundary line and the second boundary line is defined as a first line segment.
- the first line segment may be defined based on the midpoint of the first boundary line and the second boundary line, for example, parallel to the first boundary line and the second boundary line.
- a region where the first pixel region and the second pixel region are connected in the horizontal direction is defined as a second connection region (for example, the upper half region in FIG. 4B).
- a boundary line between the first pixel region and the second pixel region in the second connection region is defined as a third boundary line.
- a boundary line between the first pixel region and the second pixel region (for example, the lower half region in FIG. 4B) that is point-symmetric with respect to the second connection region is defined as a fourth boundary line.
- a line segment defined based on the third boundary line and the fourth boundary line is set as the second line segment.
- the second line segment may be defined based on the middle point of the third boundary line and the fourth boundary line, for example, parallel to the third boundary line and the fourth boundary line.
- The intersection defined based on the checkered shape is the same as the intersection of the first line segment and the second line segment.
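- Computing that intersection reduces to solving a 2x2 linear system once the first and second line segments are expressed in point/direction form; a sketch, with the midpoints and directions as assumed inputs:

```python
import numpy as np

def line_intersection(p1, d1, p2, d2):
    """Intersection of the first line segment (point p1, direction d1) and
    the second line segment (point p2, direction d2), obtained by solving
    p1 + s*d1 = p2 + t*d2 for s and t."""
    p1, d1, p2, d2 = (np.asarray(v, dtype=np.float64) for v in (p1, d1, p2, d2))
    A = np.column_stack([d1, -d2])
    s, _t = np.linalg.solve(A, p2 - p1)
    return p1 + s * d1

# e.g. the reference position of a checkered shape:
# center = line_intersection(mid_vertical, dir_vertical, mid_horizontal, dir_horizontal)
```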
- FIG. 6A is a conceptual diagram of the first line segment and the second line segment in the checkered shape.
- In FIG. 6A, due to the influence of quantization error and imaging blur, the first boundary line and the second boundary line are each defined outside their original boundary positions. However, because the deviations occur in point-symmetric directions, they offset each other in the first line segment defined based on the first boundary line and the second boundary line. The same technical feature holds for the second line segment.
- Since the intersection of the checkered shape is defined as the intersection of the first line segment and the second line segment, in which the influence of quantization error and imaging blur is offset, the reference position can be defined accurately. For this reason, the display information is displayed at the position where it should originally be displayed, and the visibility of the display information is improved.
- Note that the shape does not have to be a checkered shape, as long as an intersection is defined based on a first line segment, which is defined from the first boundary line in the first connection region and from the second boundary line of the first pixel region and the second pixel region that are point-symmetric with respect to the first connection region, and on a second line segment defined in the corresponding manner.
- FIG. 6B is a conceptual diagram of the first line segment and the second line segment in another shape. FIG. 6B shows a state in which the first connection region and the second connection region are separated from each other. As can be understood from FIG. 6B, however, the first line segment is still defined parallel to the first boundary line and the second boundary line and based on the midpoint between them.
- Therefore, the influence of quantization error and imaging blur on the first boundary line and the second boundary line is canceled.
- The same technical feature holds for the second line segment. Since the intersection of the shape is defined as the intersection of the first line segment and the second line segment, in which the influence of quantization error and imaging blur is canceled, the reference position can be defined accurately. For this reason, the display information is displayed at the position where it should originally be displayed, and the visibility of the display information is improved. As in FIG. 6A, the intersection is defined based on the first boundary line and the second boundary line, so that the influence of imaging blur and quantization error can be greatly reduced.
- The extraction unit 6 can improve the detection accuracy of the reference positions by performing sub-pixel-accuracy interpolation processing on the input image with each reference position of the input image as a base point. Further, the extraction unit 6 may judge that a coordinate whose position shifts greatly after the interpolation processing relative to the reference position has poor image quality near that coordinate position and a large error, and may exclude such a coordinate. Note that the error range for exclusion may be set in advance to an arbitrary range.
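- One possible realization of this sub-pixel refinement and outlier rejection, sketched with OpenCV's cornerSubPix and hypothetical inputs gray (the grayscale input image) and coarse_refs (the coarse reference positions); the rejection threshold is a placeholder:

```python
import numpy as np
import cv2

corners = np.float32(coarse_refs).reshape(-1, 1, 2)
criteria = (cv2.TERM_CRITERIA_EPS + cv2.TERM_CRITERIA_MAX_ITER, 30, 0.01)
refined = cv2.cornerSubPix(gray, corners, (5, 5), (-1, -1), criteria)

# Exclude reference positions whose refined coordinates moved more than an
# allowed error range from the initial estimate (poor local image quality).
shift = np.linalg.norm(refined.reshape(-1, 2) - np.float32(coarse_refs), axis=1)
kept = refined.reshape(-1, 2)[shift < 1.0]  # 1.0 px is a placeholder threshold
```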
- When a plurality of checkered shapes or polygonal shapes exist in the marker, the extraction unit 6 may select the reference positions to be used for calculating the translation vector or the rotation vector, based on priorities determined from predetermined shape features and from the criteria among the shape features described later. The extraction unit 6 outputs the selected reference positions to the calculation unit 7.
- In other words, the extraction unit 6 can assign priorities to a plurality of reference positions when a plurality of checkered shapes or polygonal shapes exist inside the marker.
- The longer a contour straight line is, the more pixel information (contour points) can be used for calculating it. This applies to both the checkered shape and the polygonal shape. For this reason, the extraction unit 6 can use this feature in determining the priority order among the shape features.
- Specifically, the extraction unit 6 compares the lengths of the shortest contour straight lines among the contour straight lines (which may be referred to as edges) forming each checkered shape or polygonal shape, and assigns higher priority in descending order of the length of that straight line (referred to as the reference side); this makes it possible to reduce the calculation error of the translation vector or the rotation vector.
- Note that the four-corner shape can basically be interpreted as the case where the length of the shortest straight line (which may be referred to as the short side) in a polygonal shape is maximal.
- FIG. 7 is an example of a table including shape features and priorities. As shown in table 70 of FIG. 7, priority is given in the order of checkered shape, four-corner shape, and polygonal shape.
- However, since the four-corner shape is located at the outer edge of the marker, its error may be larger than that of shapes inside the marker (polygonal shapes), depending on conditions.
- For this reason, the priority order of the four-corner shape may be set lower than that of the polygonal shape as necessary.
- FIG. 8 is an example of a table including priorities based on the lengths of straight lines in the checkered shape.
- In the case of the checkered shape, the length of a straight line can take five values, from 1 to 5 modules; in the case of the polygonal shape, it can take six values, from 1 to 6 modules, and priorities are given by combinations of these lengths. In the case of a polygonal shape there are at least four straight lines, so the number of combinations becomes very large. For this reason, table 80 illustrates a simplified case in which the length of a straight line is classified into two categories: one module long, or two or more modules long. Further, in table 80 and table 90, priority is given in descending order of the length of the shortest straight line. Furthermore, when the lengths of the shortest straight lines are equal, priority is given in descending order of the sum of the lengths of the straight lines.
- Note that the detection positions of the contour points forming a contour straight line are also affected by other polygonal shapes or checkered shapes located on the same contour straight line (a contour straight line extends to both ends of the marker). For this reason, when other polygonal shapes or checkered shapes exist nearby, the detection accuracy of the contour straight line itself may also deteriorate. For example, a case will be described in which another polygonal shape exists on a polygonal contour composed of the contour straight lines of No.
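- The priority rule described above can be sketched as a sort key over the candidate shape features; the data model (a shape type plus per-edge lengths in modules) is assumed for illustration:

```python
# Table-70 ordering: checkered first, then four-corner, then polygonal.
SHAPE_RANK = {"checkered": 0, "four_corner": 1, "polygon": 2}

def priority_key(feature):
    edges = feature["edge_lengths"]  # contour-edge lengths in modules
    # Prefer the favored shape type, then the longer shortest edge, then
    # the larger total edge length as the tie-breaker.
    return (SHAPE_RANK[feature["type"]], -min(edges), -sum(edges))

features.sort(key=priority_key)  # features[0] is the highest-priority reference
```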
- the calculation unit 7 in FIG. 3 is, for example, a hardware circuit based on wired logic.
- the calculation unit 7 may be a functional module realized by a computer program executed by the image processing apparatus 1.
- the calculation unit 7 receives the reference position from the extraction unit 6.
- the calculation unit 7 calculates the translation vector and the rotation vector using, for example, the above (Equation 1) or (Equation 2).
- The calculation unit 7 outputs the calculated translation vector and rotation vector to the display unit 8. As a modification, the calculation unit 7 may use the center position of the marker, defined from the four corners of the marker, as the reference position.
- In this case, a relative position between the center position and each intersection, first vertex, or second vertex may be calculated, and the translation vector and the rotation vector may be calculated based on the relative positions.
- In this case, the intersection, the first vertex, or the second vertex may be referred to as a substantial reference position.
- Further, the calculation unit 7 can apply different weighting coefficients to the respective reference positions using the above-described priority order based on the detection error, and then use the least squares method. Thereby, the reference coordinates of shape features with small errors are used preferentially, and the calculation error of the translation vector or the rotation vector can be reduced.
- That is, the calculation unit 7 can calculate the translation vector or the rotation vector so that the error E expressed by the following equation ((Equation 3) below) is minimized:
E = Σn wn‖PCn − (R(θx, θy, θz)・PM0n + T)‖²
where wn is the weighting coefficient assigned to the n-th reference position. By using (Equation 3) above, the calculation error of the translation vector and the rotation vector can be reduced in accordance with the priority order.
- FIG. 10 is an example of a table including shape features, priorities, and weighting coefficients.
- Higher weighting coefficients are given in descending order of the priorities given in table 80 of FIG. 8 and in table 90.
- In other words, a high weighting coefficient is given with priority to the checkered shape, which can suppress the reference coordinate error the most, and the next highest weighting coefficient is given to the four-corner shape.
- Note that, in order to reduce the arithmetic processing, the calculation unit 7 may selectively use reference coordinates in descending order of priority instead of using all of them. For example, the calculation unit 7 can calculate the translation vector or the rotation vector using 14 reference coordinates.
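- A sketch of the weighted least-squares calculation of (Equation 3), assuming SciPy and OpenCV are available; the weights are the per-reference-position coefficients from the priority table, and all names are illustrative:

```python
import numpy as np
import cv2
from scipy.optimize import least_squares

def weighted_pose(PM, PC, weights, x0=np.zeros(6)):
    """Minimize E = sum_n w_n * ||PC_n - (R*PM_n + T)||^2 over the pose
    parameters x = (rotation vector, translation vector).  PM: Nx3 ideal
    reference points, PC: Nx3 detected reference points, weights: w_n."""
    PM, PC = np.asarray(PM, float), np.asarray(PC, float)
    w = np.sqrt(np.asarray(weights, float))  # sqrt so squared residuals carry w_n

    def residuals(x):
        R, _ = cv2.Rodrigues(x[:3])       # rotation vector -> 3x3 rotation matrix
        err = PC - (PM @ R.T + x[3:])     # per-point 3D error vectors
        return (err * w[:, None]).ravel()

    sol = least_squares(residuals, x0)
    return sol.x[:3], sol.x[3:]           # rotation vector, translation vector
```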
- the display unit 8 receives display information from the recognition unit 3 and receives a translation vector and a rotation vector from the calculation unit 7.
- the display unit 8 displays display information based on the translation vector and the rotation vector.
- the display unit 8 may receive an image from the acquisition unit 2 as necessary, and display the image by superimposing display information on the image.
- FIG. 11 is a flowchart of image processing by the image processing apparatus 1.
- The acquisition unit 2 acquires an image including a marker that defines display information to be displayed on the display unit 8 and a reference position for displaying the display information (the image may be referred to as an input image) (step S1101).
- The shape of the marker acquired by the acquisition unit 2 may be, for example, the shape of the marker shown in FIG. 2A.
- the acquisition unit 2 acquires the image from, for example, an image sensor (not shown) (also referred to as a camera) that is an example of an external device.
- the acquisition unit 2 outputs the acquired image to the recognition unit 3.
- The acquisition unit 2 outputs the acquired image to the display unit 8 as needed.
- The recognition unit 3 receives the input image from the acquisition unit 2 and detects the position of the marker in the input image. For example, the recognition unit 3 detects the two-dimensional coordinates of the four-corner-shape positions of the marker in a two-dimensional coordinate system whose origin is the upper left corner of the input image, with the horizontal direction as the x axis and the vertical direction as the y axis. The recognition unit 3 then recognizes the marker ID corresponding to the marker with reference to, for example, the two-dimensional coordinates of the four corners of the marker (step S1102). When the recognition unit 3 cannot recognize the marker ID (step S1102-No), the image processing apparatus 1 ends the image processing.
- When the recognition unit 3 can recognize the marker ID (step S1102-Yes), it outputs the recognized marker ID to the generation unit 5.
- the recognition unit 3 refers to the storage unit 4, for example, recognizes display information corresponding to the marker ID, and outputs the display information to the display unit 8.
- the generation unit 5 receives the marker ID from the recognition unit 3.
- The generation unit 5 generates a marker pattern corresponding to the marker ID (step S1103). The generated marker pattern may be, for example, equivalent in shape to the marker shown in FIG. 2A.
- the generation unit 5 outputs the generated marker pattern to the extraction unit 6.
- The extraction unit 6 extracts reference positions (step S1104). Specifically, the extraction unit 6 receives the marker pattern from the generation unit 5 and determines the positions of points having specific shape features on the marker pattern as the reference positions on the marker pattern.
- the specific shape feature corresponds to, for example, a four-corner shape, a checkered shape, or a polygonal shape.
- the extraction unit 6 determines the coordinate position of the marker included in the input image corresponding to the reference position on the marker pattern.
- For example, the extraction unit 6 can determine, based on the coordinates of the four corners of the marker (the first vertices), a mapping transformation (also referred to as a perspective transformation) from a planar coordinate system whose origin is the center of the marker pattern, with x-axis and y-axis coordinate axes, to the coordinate system of the input image, and project the other reference positions onto the coordinate system of the input image using this mapping transformation. Thereby, the rotation and inclination of the marker in the input image can be corrected. Note that only the marker portion of the input image may be cut out as necessary.
- Next, the extraction unit 6 extracts the first vertices, the second vertices, and the intersections in the marker of the input image that correspond to the first vertices, the second vertices, and the intersections in the marker pattern. To do so, the extraction unit 6 compares the positions and shapes of the four-corner shapes, checkered shapes, and polygonal shapes of the marker pattern with those of the marker in the input image, and identifies the corresponding four-corner shapes, checkered shapes, and polygonal shapes of the marker in the input image.
- the extraction unit 6 extracts the first vertex, the second vertex, and the intersection that are the reference positions in the input image from the four-corner shape, checkered shape, and polygonal shape specified from the markers included in the input image.
- the extraction unit 6 outputs the extracted reference position to the calculation unit 7.
- The extraction unit 6 can improve the detection accuracy of the reference positions by performing sub-pixel-accuracy interpolation processing on the input image with each reference position of the input image as a base point. Further, the extraction unit 6 may judge that a coordinate whose position shifts greatly after the interpolation processing relative to the reference position has poor image quality near that coordinate position and a large error, and may exclude such a coordinate. Note that the error range for exclusion may be determined in advance as an arbitrary range.
- In step S1104, when a plurality of checkered shapes or polygonal shapes exist in the marker, the extraction unit 6 may select the reference positions used for calculating the translation vector or the rotation vector, based on priorities determined from predetermined shape features and from the criteria among the shape features described above. In this case, the extraction unit 6 outputs the selected reference positions to the calculation unit 7.
- the calculation unit 7 receives the reference position from the extraction unit 6.
- the calculating unit 7 calculates a translation vector or a rotation vector using, for example, the above (Equation 1), (Equation 2), or (Equation 3) (step S1105).
- the calculation unit 7 outputs the calculated translation vector and rotation vector to the display unit 8.
- the display unit 8 receives display information from the recognition unit 3 and receives a translation vector and a rotation vector from the calculation unit 7.
- the display unit 8 displays display information based on the translation vector and the rotation vector (S1106).
- the display unit 8 may receive an image from the acquisition unit 2 as necessary, and display the image by superimposing display information on the image.
- When the acquisition unit 2 continuously acquires images in step S1101, the image processing apparatus 1 repeatedly executes the processes of steps S1101 to S1106. When the acquisition unit 2 does not acquire an image in step S1101, the image processing shown in the flowchart of FIG. 11 ends.
- the visibility of display information for the user can be improved.
- As described above, the image processing apparatus according to the first embodiment exploits the fact that the geometric shape of the marker is known: in addition to the four-corner shapes of the marker, coordinate positions inside the marker are known and can be detected as reference positions.
- Accordingly, the display information is displayed at the position where it should originally be displayed, so that the visibility of the display information is improved.
- Furthermore, it is possible to reduce the calculation errors of the translation vector and the rotation vector by sub-pixel-accuracy interpolation and by assigning priorities and weighting coefficients to shape features whose geometric characteristics reduce errors.
- FIG. 12 is a functional block diagram of the image processing apparatus 1 according to the second embodiment.
- the image processing apparatus 1 includes an acquisition unit 2, a recognition unit 3, a storage unit 4, a generation unit 5, an extraction unit 6, a calculation unit 7, and a defining unit 9.
- the image processing apparatus 1 may be configured by an integrated circuit such as an ASIC (Application Specific Integrated Circuit) or an FPGA (Field Programmable Gate Array). Since the functions of the functional units of the image processing apparatus 1 other than the defining unit 9 in the second embodiment are the same as those in the first embodiment, detailed description thereof is omitted.
- the defining unit 9 is, for example, a hardware circuit based on wired logic.
- the defining unit 9 may be a functional module realized by a computer program executed by the image processing apparatus 1.
- The defining unit 9 receives, from the calculation unit 7, the numbers of intersections, first vertices, and second vertices used to calculate the display position.
- The defining unit 9 defines the reliability of the translation vector or the rotation vector for displaying the display information, based on the numbers of intersections, first vertices, and second vertices used for calculating the display position.
- FIG. 13A is an example of a first marker showing a first vertex, a second vertex, and an intersection.
- FIG. 13B is an example of the second marker showing the first vertex, the second vertex, and the intersection.
- The numbers of first vertices, second vertices, and intersections differ depending on the marker pattern, so the calculation accuracy of the translation vector and the rotation vector depends on the marker ID. Therefore, by calculating a reliability indicating the calculation accuracy of the translation vector and the rotation vector, it becomes possible, for example, to assign a marker with high calculation accuracy of the translation vector and the rotation vector to important display information.
- FIG. 13C is an example of a table including reliabilities. The reliability shown in table 93 of FIG. 13C can be calculated by an arbitrary method.
- For example, the reliability can be defined based on the number of shape features used for calculating the translation vector and the rotation vector, and on the number of shape features with high priority. In table 93 of FIG. 13C, a numerical value obtained by summing, for each shape feature, the product of the number of occurrences of that shape feature and a predetermined reliability coefficient is exemplified as the reliability.
- For the two markers shown in FIG. 13A and FIG. 13B, the numbers of four-corner shapes and the numbers of polygonal shapes are the same, so the difference in the number of checkered shapes appears as a difference in reliability.
- As described above, according to the image processing apparatus of the second embodiment, the reliability of the translation vector and the rotation vector according to the marker ID (in other words, the reliability of the marker) can be defined. Thereby, a marker with high calculation accuracy of the translation vector and the rotation vector can be assigned to display information important to the user.
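- The table-93 style reliability can be sketched as a weighted count of shape features; the coefficient values below are placeholders, not values taken from the patent:

```python
# Reliability coefficient per shape feature (placeholder values).
RELIABILITY_COEFF = {"checkered": 3.0, "four_corner": 2.0, "polygon": 1.0}

def marker_reliability(counts):
    """counts: number of each shape feature found for a marker ID,
    e.g. {"checkered": 5, "four_corner": 4, "polygon": 7}."""
    return sum(RELIABILITY_COEFF[kind] * n for kind, n in counts.items())

# Markers with a higher score can be assigned to more important display information.
```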
- FIG. 14 is a hardware configuration diagram of a computer that functions as the image processing apparatus 1 according to an embodiment.
- the image processing apparatus 1 includes a computer 100 and an input / output device (peripheral device) connected to the computer 100.
- the entire computer 100 is controlled by the processor 101.
- a RAM (Random Access Memory) 102 and a plurality of peripheral devices are connected to the processor 101 via a bus 109.
- the processor 101 may be a multiprocessor.
- The processor 101 is, for example, a CPU, an MPU (Micro Processing Unit), a DSP (Digital Signal Processor), an ASIC (Application Specific Integrated Circuit), or a PLD (Programmable Logic Device). The processor 101 may also be a combination of two or more of a CPU, an MPU, a DSP, an ASIC, and a PLD.
- the RAM 102 is used as a main storage device of the computer 100.
- the RAM 102 temporarily stores at least a part of an OS (Operating System) program and application programs to be executed by the processor 101.
- the RAM 102 stores various data necessary for processing by the processor 101.
- Peripheral devices connected to the bus 109 include an HDD (Hard Disk Drive) 103, a graphic processing device 104, an input interface 105, an optical drive device 106, a device connection interface 107, and a network interface 108.
- the HDD 103 magnetically writes and reads data to and from the built-in disk.
- the HDD 103 is used as an auxiliary storage device of the computer 100, for example.
- the HDD 103 stores an OS program, application programs, and various data.
- a semiconductor storage device such as a flash memory can be used as the auxiliary storage device.
- a monitor 110 is connected to the graphic processing device 104.
- the graphic processing device 104 displays various images on the screen of the monitor 110 in accordance with instructions from the processor 101.
- Examples of the monitor 110 include a display device using a CRT (Cathode Ray Tube) and a liquid crystal display device.
- a keyboard 111 and a mouse 112 are connected to the input interface 105.
- the input interface 105 transmits signals sent from the keyboard 111 and the mouse 112 to the processor 101.
- the mouse 112 is an example of a pointing device, and other pointing devices can also be used. Examples of other pointing devices include a touch panel, a tablet, a touch pad, and a trackball.
- the optical drive device 106 reads data recorded on the optical disk 113 using a laser beam or the like.
- the optical disk 113 is a portable recording medium on which data is recorded so that it can be read by reflection of light.
- the optical disc 113 includes a DVD (Digital Versatile Disc), a DVD-RAM, a CD-ROM (Compact Disc Read Only Memory), a CD-R (Recordable) / RW (ReWriteable), and the like.
- a program stored in the optical disk 113 serving as a portable recording medium is installed in the image processing apparatus 1 via the optical drive device 106. The installed predetermined program can be executed by the image processing apparatus 1.
- the device connection interface 107 is a communication interface for connecting peripheral devices to the computer 100.
- a memory device 114 or a memory reader / writer 115 can be connected to the device connection interface 107.
- the memory device 114 is a recording medium equipped with a communication function with the device connection interface 107.
- the memory reader / writer 115 is a device that writes data to the memory card 116 or reads data from the memory card 116.
- the memory card 116 is a card type recording medium.
- the network interface 108 is connected to the network 117.
- the network interface 108 transmits and receives data to and from other computers or communication devices via the network 117.
- the computer 100 realizes the above-described image processing function by executing a program recorded on a computer-readable recording medium, for example.
- A program describing the processing contents to be executed by the computer 100 can be recorded in various recording media.
- The program can be composed of one or a plurality of functional modules.
- For example, the program can be composed of functional modules that realize the processing of the acquisition unit 2, the recognition unit 3, the generation unit 5, the extraction unit 6, the calculation unit 7, and the like.
- a program to be executed by the computer 100 can be stored in the HDD 103.
- the processor 101 loads at least a part of the program in the HDD 103 into the RAM 102 and executes the program.
- a program to be executed by the computer 100 can also be recorded on a portable recording medium such as the optical disc 113, the memory device 114, and the memory card 116.
- the program stored in the portable recording medium becomes executable after being installed in the HDD 103 under the control of the processor 101, for example.
- the processor 101 can also read and execute a program directly from a portable recording medium.
- Note that each component of each illustrated apparatus does not necessarily need to be physically configured as illustrated.
- That is, the specific form of distribution and integration of the devices is not limited to that illustrated, and all or a part of them can be functionally or physically distributed or integrated in arbitrary units according to various loads, usage conditions, and the like.
- the various processes described in the above embodiments can be realized by executing a prepared program on a computer such as a personal computer or a workstation.
Landscapes
- Engineering & Computer Science (AREA)
- Physics & Mathematics (AREA)
- General Physics & Mathematics (AREA)
- Theoretical Computer Science (AREA)
- Computer Vision & Pattern Recognition (AREA)
- Multimedia (AREA)
- Computer Graphics (AREA)
- Computer Hardware Design (AREA)
- General Engineering & Computer Science (AREA)
- Software Systems (AREA)
- Image Processing (AREA)
- Image Analysis (AREA)
Abstract
Description
(1-1) A large volume of paper work-procedure manuals describing the target work locations and procedures is carried to the work site, where the worker selects and consults the necessary sections and performs the work while comparing the descriptions in the manual with the conditions at the site. As a result, work errors occur: selecting the necessary section takes time, the wrong section may be selected, and misreadings arise when comparing the manual against the site.
(1-2) Work results are recorded by filling in paper check sheets. Because digitizing the recorded contents is a heavy workload, the results tend to remain on the paper check sheets, which makes organizing and analyzing them difficult.
(2-1) Simply by photographing the target object with the camera of an image processing apparatus that stores a digitized manual, the work locations and procedures for that object are presented in real time as display information (which may be referred to as object icons) superimposed on the image on the display (which may be referred to as a camera preview image). This makes it possible to shorten the user's working time and reduce work errors.
(2-2) Work results are recorded by selecting the display information shown in association with the target object, for example by pressing it with a finger, which switches the screen to a work-result input screen. Since the work results are already digitized when recording is completed, organizing and editing the result data becomes easy. In this way, image processing technology using augmented reality is expected to bring various efficiency gains to work sites.
(Equation 1)
PCn = R(θx, θy, θz) · PMn + T
In Equation 1 above,
T = (Tx, Ty, Tz), and
R(θx, θy, θz) = Rx(θx) · Ry(θy) · Rz(θz),
where Rx(θx), Ry(θy), and Rz(θz) are the rotation matrices about the x-, y-, and z-axes, respectively. The rotation vector (Rx, Ry, Rz) can be calculated from the rotation matrix R(θx, θy, θz) above. Here, R(θx, θy, θz) can be obtained from Equation 1 by solving simultaneous equations. For example, when the four corners of the marker are used, the four corner points are assumed to lie on the same plane (Zn = 0), so the simultaneous equations can be solved once the coordinates of the four points in a single image are known. Here, PMn = (Xn, Yn, Zn) and PCn = (xn, yn, zn) are uniquely determined as coefficients based on the reference position of the marker. In this way, the translation vector and the rotation vector are calculated based on the reference position of the marker.
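As a concrete illustration of Equation 1, the following Python sketch builds R(θx, θy, θz) from per-axis rotation matrices and maps a marker coordinate PMn to a camera coordinate PCn. The explicit forms of Rx, Ry, and Rz are not reproduced in the source text, so the standard right-handed sign conventions below, as well as the helper names, are assumptions for illustration rather than the patented implementation.

```python
import numpy as np

def rot_x(t):
    # Rotation about the x-axis by angle t (radians); sign convention assumed.
    return np.array([[1.0, 0.0, 0.0],
                     [0.0, np.cos(t), -np.sin(t)],
                     [0.0, np.sin(t), np.cos(t)]])

def rot_y(t):
    # Rotation about the y-axis by angle t (radians).
    return np.array([[np.cos(t), 0.0, np.sin(t)],
                     [0.0, 1.0, 0.0],
                     [-np.sin(t), 0.0, np.cos(t)]])

def rot_z(t):
    # Rotation about the z-axis by angle t (radians).
    return np.array([[np.cos(t), -np.sin(t), 0.0],
                     [np.sin(t), np.cos(t), 0.0],
                     [0.0, 0.0, 1.0]])

def marker_to_camera(pm, angles, t_vec):
    # Equation 1: PCn = R(thx, thy, thz) . PMn + T, with R = Rx . Ry . Rz.
    thx, thy, thz = angles
    r = rot_x(thx) @ rot_y(thy) @ rot_z(thz)
    return r @ np.asarray(pm, dtype=float) + np.asarray(t_vec, dtype=float)

# Example: a marker corner PMn = (Xn, Yn, 0); the four corners are assumed
# coplanar (Zn = 0), as stated above.
pc = marker_to_camera((0.05, 0.05, 0.0), (0.0, 0.1, 0.0), (0.0, 0.0, 0.5))
```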
FIG. 3 is a functional block diagram of the image processing apparatus 1 according to the first embodiment. The image processing apparatus 1 includes an acquisition unit 2, a recognition unit 3, a storage unit 4, a generation unit 5, an extraction unit 6, a calculation unit 7, and a display unit 8. The image processing apparatus 1 may be configured, for example, as an integrated circuit such as an ASIC (Application Specific Integrated Circuit) or an FPGA (Field Programmable Gate Array).
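To make the division of roles in FIG. 3 concrete, a minimal structural sketch in Python might look as follows. The class and method names are illustrative assumptions and the bodies are placeholders, since the figure defines only which unit performs which step.

```python
class ImageProcessingApparatus:
    """Structural sketch of the functional blocks in FIG. 3 (illustrative only)."""

    def acquire(self, frame):
        # Acquisition unit 2: obtain an image containing the marker.
        return frame

    def recognize(self, image):
        # Recognition unit 3: recognize the marker ID from the marker in the image.
        raise NotImplementedError

    def generate(self, marker_id):
        # Generation unit 5: generate a marker pattern from the marker ID,
        # drawing on data held in the storage unit 4.
        raise NotImplementedError

    def extract(self, image, pattern):
        # Extraction unit 6: extract intersections and vertices as candidate
        # reference positions by matching the pattern against the image.
        raise NotImplementedError

    def calculate(self, points):
        # Calculation unit 7: compute the display position (translation and
        # rotation vectors) from the extracted reference positions.
        raise NotImplementedError

    def display(self, image, position):
        # Display unit 8: superimpose the display information at the position.
        raise NotImplementedError

    def process(self, frame):
        # End-to-end flow corresponding to FIG. 3.
        image = self.acquire(frame)
        marker_id = self.recognize(image)
        pattern = self.generate(marker_id)
        points = self.extract(image, pattern)
        position = self.calculate(points)
        self.display(image, position)
```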
(Equation 2)
By using Equation 2 above, the calculation error of the translation vector and the rotation vector can be reduced.
(Equation 3)
By using Equation 3 above, the calculation error of the translation vector and the rotation vector according to the priority order can be reduced.
FIG. 12 is a functional block diagram of the image processing apparatus 1 according to the second embodiment. The image processing apparatus 1 includes an acquisition unit 2, a recognition unit 3, a storage unit 4, a generation unit 5, an extraction unit 6, a calculation unit 7, and a definition unit 9. The image processing apparatus 1 may be configured, for example, as an integrated circuit such as an ASIC (Application Specific Integrated Circuit) or an FPGA (Field Programmable Gate Array). Since the functions of the units of the image processing apparatus 1 other than the definition unit 9 in the second embodiment are the same as in the first embodiment, their detailed description is omitted.
FIG. 14 is a hardware configuration diagram of a computer that functions as the image processing apparatus 1 according to one embodiment. As shown in FIG. 14, the image processing apparatus 1 includes a computer 100 and input/output devices (peripheral devices) connected to the computer 100.
2 acquisition unit
3 recognition unit
4 storage unit
5 generation unit
6 extraction unit
7 calculation unit
8 display unit
9 definition unit
Claims (14)
- An image processing apparatus comprising:
an acquisition unit that acquires an image including a marker that defines, by an image pattern including a first pixel region and a second pixel region, display information and a reference position for displaying the display information;
an extraction unit that extracts
an intersection of a first line segment, defined based on a first boundary line between the first pixel region and the second pixel region in a first connected region in which the first pixel region and the second pixel region are connected in the vertical direction and a second boundary line between the first pixel region and the second pixel region that is point-symmetric to the first connected region, and a second line segment, defined based on a third boundary line between the first pixel region and the second pixel region in a second connected region in which the first pixel region and the second pixel region are connected in the horizontal direction and a fourth boundary line between the first pixel region and the second pixel region that is point-symmetric to the second connected region, or
a first vertex indicating a vertex of the four-corner shape of the outer edge of the marker including the first pixel region, or
a second vertex indicating a vertex of a plurality of the first pixel regions or a vertex of a plurality of the second pixel regions; and
a calculation unit that calculates a display position of the display information using the first vertex, the second vertex, or the intersection as the reference position.
- The image processing apparatus according to claim 1, wherein the first line segment is defined so as to be parallel to the first boundary line and the second boundary line and based on the midpoints of the first boundary line and the second boundary line, and the second line segment is defined so as to be parallel to the third boundary line and the fourth boundary line and based on the midpoints of the third boundary line and the fourth boundary line.
- The image processing apparatus according to claim 1 or 2, wherein the second vertex includes a vertex of a polygonal shape in which a plurality of the first pixel regions are connected or a plurality of the second pixel regions are connected.
- The image processing apparatus according to any one of claims 1 to 3, wherein the extraction unit assigns priorities in the order of the intersection, the first vertex, and the second vertex, and the calculation unit calculates the display position based on the priorities.
- The image processing apparatus according to claim 3, wherein the extraction unit extracts a long side and a short side that form the second vertex, and assigns the priorities to the plurality of second vertices in descending order of the length of the short side.
- The image processing apparatus according to claim 5, wherein, when the short sides have the same length, the extraction unit assigns the priorities to the plurality of second vertices in descending order of the length of the long side.
- The image processing apparatus according to any one of claims 1 to 5, wherein the extraction unit extracts, as the intersection, the center point of a checkered shape in which a plurality of the first pixel regions and a plurality of the second pixel regions face each other.
- The image processing apparatus according to claim 3 or 4, wherein the extraction unit extracts the edges of a plurality of the checkered shapes and assigns the priorities to the plurality of intersections in descending order of the length of the edges.
- The image processing apparatus according to claim 1, wherein the priority includes a weighting coefficient, and the calculation unit calculates the reference position by applying the weighting coefficient to the intersection, the first vertex, and the second vertex.
- The image processing apparatus according to claim 7, further comprising: a recognition unit that recognizes a marker ID from the marker acquired by the acquisition unit; and a generation unit that generates a marker pattern from the marker ID, wherein the extraction unit extracts the first vertex, the second vertex, or the intersection by associating the four-corner shape, the checkered shape, or the polygonal shape included in the marker pattern with the four-corner shape, the checkered shape, or the polygonal shape included in the marker.
- The image processing apparatus according to any one of claims 2 to 10, further comprising a definition unit that defines, based on the number of the intersections, the first vertices, or the second vertices used in the calculation of the display position, a reliability of the translation vector or the rotation vector for displaying the display information.
- The image processing apparatus according to any one of claims 1 to 11, wherein the extraction unit performs interpolation processing on the reference position.
- An image processing method comprising:
acquiring an image including a marker that defines, by an image pattern including a first pixel region and a second pixel region, display information and a reference position for displaying the display information;
extracting an intersection of a first line segment, defined based on a first boundary line between the first pixel region and the second pixel region in a first connected region in which the first pixel region and the second pixel region are connected in the vertical direction and a second boundary line between the first pixel region and the second pixel region that is point-symmetric to the first connected region, and a second line segment, defined based on a third boundary line between the first pixel region and the second pixel region in a second connected region in which the first pixel region and the second pixel region are connected in the horizontal direction and a fourth boundary line between the first pixel region and the second pixel region that is point-symmetric to the second connected region, or a first vertex indicating a vertex of the four-corner shape of the outer edge of the marker including the first pixel region, or a second vertex indicating a vertex of a plurality of the first pixel regions or a vertex of a plurality of the second pixel regions; and
calculating a display position of the display information using the first vertex, the second vertex, or the intersection as the reference position.
- An image processing program that causes a computer to execute a process comprising:
acquiring an image including a marker that defines, by an image pattern including a first pixel region and a second pixel region, display information and a reference position for displaying the display information;
extracting an intersection of a first line segment, defined based on a first boundary line between the first pixel region and the second pixel region in a first connected region in which the first pixel region and the second pixel region are connected in the vertical direction and a second boundary line between the first pixel region and the second pixel region that is point-symmetric to the first connected region, and a second line segment, defined based on a third boundary line between the first pixel region and the second pixel region in a second connected region in which the first pixel region and the second pixel region are connected in the horizontal direction and a fourth boundary line between the first pixel region and the second pixel region that is point-symmetric to the second connected region, or a first vertex indicating a vertex of the four-corner shape of the outer edge of the marker including the first pixel region, or a second vertex indicating a vertex of a plurality of the first pixel regions or a vertex of a plurality of the second pixel regions; and
calculating a display position of the display information using the first vertex, the second vertex, or the intersection as the reference position.
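For illustration, claims 1 and 2 locate the reference position at the intersection of two line segments constructed from the midpoints of the boundary lines between the pixel regions. The following Python sketch shows one way such an intersection could be computed in 2-D image coordinates; the helper names and input conventions are assumptions for illustration, not the claimed implementation itself.

```python
import numpy as np

def intersect_midpoint_segments(m1, m2, m3, m4):
    """Intersect the line through midpoints m1, m2 (of the first and second
    boundary lines) with the line through midpoints m3, m4 (of the third and
    fourth boundary lines); the result is a candidate reference position."""
    p = np.asarray(m1, dtype=float)
    d1 = np.asarray(m2, dtype=float) - p          # direction of the first segment
    q = np.asarray(m3, dtype=float)
    d2 = np.asarray(m4, dtype=float) - q          # direction of the second segment
    a = np.column_stack([d1, -d2])                # solve p + s*d1 = q + t*d2
    if abs(np.linalg.det(a)) < 1e-12:
        return None                               # segments are parallel
    s, _ = np.linalg.solve(a, q - p)
    return p + s * d1

# Example: midpoints around a checkered junction; the intersection coincides
# with the center point of the checkered shape (cf. claim 7).
ref = intersect_midpoint_segments((2.0, 0.0), (2.0, 4.0), (0.0, 2.0), (4.0, 2.0))
# ref -> array([2., 2.])
```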
Priority Applications (4)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
EP13893691.9A EP3048555B1 (en) | 2013-09-20 | 2013-09-20 | Image processing device, image processing method, and image processing program |
PCT/JP2013/005597 WO2015040657A1 (ja) | 2013-09-20 | 2013-09-20 | Image processing device, image processing method, and image processing program |
JP2015537434A JP6256475B2 (ja) | 2013-09-20 | 2013-09-20 | Image processing device, image processing method, and image processing program |
US15/057,478 US9704246B2 (en) | 2013-09-20 | 2016-03-01 | Image processing apparatus, image processing method, and storage medium |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
PCT/JP2013/005597 WO2015040657A1 (ja) | 2013-09-20 | 2013-09-20 | Image processing device, image processing method, and image processing program |
Related Child Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
US15/057,478 Continuation US9704246B2 (en) | 2013-09-20 | 2016-03-01 | Image processing apparatus, image processing method, and storage medium |
Publications (1)
Publication Number | Publication Date |
---|---|
WO2015040657A1 true WO2015040657A1 (ja) | 2015-03-26 |
Family
ID=52688346
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
PCT/JP2013/005597 WO2015040657A1 (ja) | 2013-09-20 | 2013-09-20 | Image processing device, image processing method, and image processing program |
Country Status (4)
Country | Link |
---|---|
US (1) | US9704246B2 (ja) |
EP (1) | EP3048555B1 (ja) |
JP (1) | JP6256475B2 (ja) |
WO (1) | WO2015040657A1 (ja) |
Cited By (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP2017092947A (ja) * | 2015-11-09 | 2017-05-25 | ぺんてる株式会社 | Learning support program, learning support apparatus, notebook for generating learning tools, and learning tool generation method |
WO2017144060A1 (en) * | 2016-02-24 | 2017-08-31 | Zünd Skandinavien Aps | Cnc flatbed cutting machine, its method of operation, and a graphics sheet with a fiducial that indicates the orientation of the graphics sheet |
CN107909567A (zh) * | 2017-10-31 | 2018-04-13 | 华南理工大学 | Method for extracting elongated connected regions from a digital image |
CN116957524A (zh) * | 2023-09-21 | 2023-10-27 | 青岛阿斯顿工程技术转移有限公司 | Intelligent talent information management method and system for technology transfer processes |
CN118135261A (zh) * | 2024-05-06 | 2024-06-04 | 浙江大学 | Graphic matching method and system for ultra-large-scale layouts |
Families Citing this family (7)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP2018005091A (ja) * | 2016-07-06 | 2018-01-11 | 富士通株式会社 | Display control program, display control method, and display control apparatus |
CN109344832B (zh) * | 2018-09-03 | 2021-02-02 | 北京市商汤科技开发有限公司 | Image processing method and apparatus, electronic device, and storage medium |
JP7248490B2 (ja) * | 2019-04-24 | 2023-03-29 | 株式会社ソニー・インタラクティブエンタテインメント | Information processing apparatus and method for estimating the position and orientation of a device |
US11151792B2 (en) | 2019-04-26 | 2021-10-19 | Google Llc | System and method for creating persistent mappings in augmented reality |
US11163997B2 (en) | 2019-05-05 | 2021-11-02 | Google Llc | Methods and apparatus for venue based augmented reality |
CN110910409B (zh) * | 2019-10-15 | 2023-10-27 | 平安科技(深圳)有限公司 | Grayscale image processing method, apparatus, and computer-readable storage medium |
CN111508031B (zh) * | 2020-04-10 | 2023-11-21 | 中国科学院自动化研究所 | Self-identifying feature calibration board |
Citations (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP2002032784A (ja) * | 2000-07-19 | 2002-01-31 | Atr Media Integration & Communications Res Lab | Virtual object operation device and virtual object operation method |
JP2010287174A (ja) * | 2009-06-15 | 2010-12-24 | Dainippon Printing Co Ltd | Furniture simulation method, apparatus, program, and recording medium |
Family Cites Families (14)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US4119900A (en) * | 1973-12-21 | 1978-10-10 | Ito Patent-Ag | Method and system for the automatic orientation and control of a robot |
JP2940736B2 (ja) * | 1992-03-26 | 1999-08-25 | 三洋電機株式会社 | Image processing apparatus and distortion correction method in the image processing apparatus |
JP4147719B2 (ja) | 2000-04-07 | 2008-09-10 | マックス株式会社 | Contact arm of a nailing machine |
JP2001195597A (ja) | 2000-12-11 | 2001-07-19 | Mitsubishi Electric Corp | Image processing apparatus |
JP4532982B2 (ja) | 2004-05-14 | 2010-08-25 | キヤノン株式会社 | Arrangement information estimation method and information processing apparatus |
US20070077987A1 (en) * | 2005-05-03 | 2007-04-05 | Tangam Gaming Technology Inc. | Gaming object recognition |
US9526587B2 (en) * | 2008-12-31 | 2016-12-27 | Intuitive Surgical Operations, Inc. | Fiducial marker design and detection for locating surgical instrument in images |
CA2566260C (en) * | 2005-10-31 | 2013-10-01 | National Research Council Of Canada | Marker and method for detecting said marker |
EP2112817A4 (en) * | 2007-08-03 | 2010-03-24 | Univ Keio | COMPOSITION ANALYSIS METHOD, IMAGE COMPUTER WITH COMPOSITION ANALYSIS FUNCTION, COMPOSITION ANALYSIS PROGRAM AND COMPUTER READABLE RECORDING MEDIUM |
JP2010160056A (ja) | 2009-01-08 | 2010-07-22 | Fuji Xerox Co Ltd | Position measurement apparatus and program |
JP5062497B2 (ja) * | 2010-03-31 | 2012-10-31 | アイシン・エィ・ダブリュ株式会社 | Own-vehicle position detection system using scenic image recognition |
WO2012001793A1 (ja) * | 2010-06-30 | 2012-01-05 | 富士通株式会社 | Image processing program and image processing device |
JP5573618B2 (ja) * | 2010-11-12 | 2014-08-20 | 富士通株式会社 | Image processing program and image processing device |
JP2011134343A (ja) * | 2011-02-24 | 2011-07-07 | Nintendo Co Ltd | Image processing program, image processing apparatus, image processing system, and image processing method |
- 2013
- 2013-09-20 WO PCT/JP2013/005597 patent/WO2015040657A1/ja active Application Filing
- 2013-09-20 EP EP13893691.9A patent/EP3048555B1/en active Active
- 2013-09-20 JP JP2015537434A patent/JP6256475B2/ja not_active Expired - Fee Related
- 2016
- 2016-03-01 US US15/057,478 patent/US9704246B2/en active Active
Patent Citations (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP2002032784A (ja) * | 2000-07-19 | 2002-01-31 | Atr Media Integration & Communications Res Lab | Virtual object operation device and virtual object operation method |
JP2010287174A (ja) * | 2009-06-15 | 2010-12-24 | Dainippon Printing Co Ltd | Furniture simulation method, apparatus, program, and recording medium |
Non-Patent Citations (2)
Title |
---|
"Proposal and Evaluation of Decommissioning Support Method of Nuclear Power Plants using Augmented Reality", COLLECTED PAPERS OF THE VIRTUAL REALITY SOCIETY OF JAPAN, vol. 13, no. 2, 2008, pages 289 - 300 |
See also references of EP3048555A4 |
Cited By (7)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP2017092947A (ja) * | 2015-11-09 | 2017-05-25 | ぺんてる株式会社 | Learning support program, learning support apparatus, notebook for generating learning tools, and learning tool generation method |
WO2017144060A1 (en) * | 2016-02-24 | 2017-08-31 | Zünd Skandinavien Aps | Cnc flatbed cutting machine, its method of operation, and a graphics sheet with a fiducial that indicates the orientation of the graphics sheet |
CN107909567A (zh) * | 2017-10-31 | 2018-04-13 | 华南理工大学 | Method for extracting elongated connected regions from a digital image |
CN116957524A (zh) * | 2023-09-21 | 2023-10-27 | 青岛阿斯顿工程技术转移有限公司 | Intelligent talent information management method and system for technology transfer processes |
CN116957524B (zh) * | 2023-09-21 | 2024-01-05 | 青岛阿斯顿工程技术转移有限公司 | Intelligent talent information management method and system for technology transfer processes |
CN118135261A (zh) * | 2024-05-06 | 2024-06-04 | 浙江大学 | Graphic matching method and system for ultra-large-scale layouts |
CN118135261B (zh) * | 2024-05-06 | 2024-08-06 | 浙江大学 | Graphic matching method and system for ultra-large-scale layouts |
Also Published As
Publication number | Publication date |
---|---|
US9704246B2 (en) | 2017-07-11 |
EP3048555A4 (en) | 2017-03-08 |
US20160180536A1 (en) | 2016-06-23 |
JP6256475B2 (ja) | 2018-01-10 |
EP3048555B1 (en) | 2020-07-15 |
JPWO2015040657A1 (ja) | 2017-03-02 |
EP3048555A1 (en) | 2016-07-27 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
JP6256475B2 (ja) | Image processing device, image processing method, and image processing program | |
JP6417702B2 (ja) | Image processing apparatus, image processing method, and image processing program | |
JP5713159B2 (ja) | Three-dimensional position and orientation measurement apparatus, method, and program using stereo images | |
US7161606B2 (en) | Systems and methods for directly generating a view using a layered approach | |
JP6842039B2 (ja) | Camera position and orientation estimation apparatus, method, and program | |
CN108074267B (zh) | Intersection detection apparatus and method, camera correction system and method, and recording medium | |
JP6716996B2 (ja) | Image processing program, image processing apparatus, and image processing method | |
JP6089722B2 (ja) | Image processing apparatus, image processing method, and image processing program | |
JP6645151B2 (ja) | Projection apparatus, projection method, and computer program for projection | |
EP3633606B1 (en) | Information processing device, information processing method, and program | |
TW201616451A (zh) | Point cloud lasso selection system and method | |
JP6880618B2 (ja) | Image processing program, image processing apparatus, and image processing method | |
JP2007128373A (ja) | Image processing method, image processing program and storage medium therefor, and image processing apparatus | |
JP6031819B2 (ja) | Image processing apparatus and image processing method | |
JP4649559B2 (ja) | Three-dimensional object recognition apparatus, three-dimensional object recognition program, and computer-readable recording medium on which the program is recorded | |
JP6579659B2 (ja) | Light source estimation apparatus and program | |
JP2011155412A (ja) | Projection system and distortion correction method in projection system | |
US10146331B2 (en) | Information processing system for transforming coordinates of a position designated by a pointer in a virtual image to world coordinates, information processing apparatus, and method of transforming coordinates | |
JP7020240B2 (ja) | Recognition apparatus, recognition system, program, and position coordinate detection method | |
JP7003617B2 (ja) | Estimation apparatus, estimation method, and estimation program | |
CN112446895B (zh) | Automatic checkerboard corner extraction method, system, device, and medium | |
JP2009146150A (ja) | Feature position detection method and feature position detection apparatus | |
JP7061092B2 (ja) | Image processing apparatus and program | |
JP2016072691A (ja) | Image processing apparatus, control method therefor, and program | |
JP6906177B2 (ja) | Intersection detection apparatus, camera calibration system, intersection detection method, camera calibration method, program, and recording medium |
Legal Events

Date | Code | Title | Description |
---|---|---|---|
 | 121 | Ep: the epo has been informed by wipo that ep was designated in this application | Ref document number: 13893691; Country of ref document: EP; Kind code of ref document: A1 |
 | ENP | Entry into the national phase | Ref document number: 2015537434; Country of ref document: JP; Kind code of ref document: A |
 | REEP | Request for entry into the european phase | Ref document number: 2013893691; Country of ref document: EP |
 | WWE | Wipo information: entry into national phase | Ref document number: 2013893691; Country of ref document: EP |
 | NENP | Non-entry into the national phase | Ref country code: DE |