CN111344740A - Camera image processing method based on marker and augmented reality equipment - Google Patents


Info

Publication number
CN111344740A
CN111344740A (application CN201780096283.6A)
Authority
CN
China
Prior art keywords
image
camera
sequence
marker
parameter matrix
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201780096283.6A
Other languages
Chinese (zh)
Inventor
谢俊
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shenzhen Royole Technologies Co Ltd
Original Assignee
Shenzhen Royole Technologies Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shenzhen Royole Technologies Co Ltd filed Critical Shenzhen Royole Technologies Co Ltd
Publication of CN111344740A

Classifications

    • G — PHYSICS
    • G06 — COMPUTING; CALCULATING OR COUNTING
    • G06T — IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 — Image analysis

Landscapes

  • Engineering & Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Studio Devices (AREA)
  • Image Analysis (AREA)

Abstract

The invention relates to a marker-based camera image processing method and an augmented reality device. The method comprises the following steps: selecting or extracting a marker image and performing perspective transformation on it to obtain a marker image sequence; extracting the sequence feature points of each sequence image in the marker image sequence; acquiring a current camera image, extracting its image feature points, matching them with the sequence feature points of the sequence images, and obtaining successfully matched feature point pairs; and calculating the extrinsic parameter matrix of the camera for the current frame from the successfully matched feature point pairs in combination with the intrinsic parameter matrix of the camera. The method can effectively identify the marker, increases the recognition rate of the marker at large viewing angles, improves the accuracy of feature point matching between the marker image and the camera image, requires little computation, and is therefore suitable for mobile devices.

Description

Camera image processing method based on marker and augmented reality equipment
Technical Field
[0001] The invention relates to the technical field of augmented reality, in particular to a camera image processing method based on a marker and augmented reality equipment.
Background
[0002] Existing Augmented Reality (AR) technology uses computer vision to establish the relative positional relationship between an actually captured scene and a marker symbol, comparing a marker image with an image captured in real time. Specifically, this comprises the following steps: searching the captured image for connected regions corresponding to the marker image and taking each connected region as a candidate object; obtaining the contour line of each connected region, and treating a region as a possible marker if four intersecting straight edges can be extracted from it; and performing deformation correction using the corner features found from the four straight edges, so as to obtain the correspondence between the marker image and the captured image.
[0003] However, the marker image selected by this method is unique. When the camera changes angle and moves during shooting, the captured image differs greatly from the marker image, and the difference varies from moment to moment; feature comparison therefore involves a larger amount of data, the operation is slower, and both the recognition rate and the accuracy of the marker are poor.
[0004] The technical problem to be solved by the present invention is to provide a marker-based camera image processing method and apparatus that are suitable for a mobile device, can effectively identify the marker, and can effectively improve accuracy during feature point matching, as well as a method and an apparatus for implementing augmented reality and a computer-readable storage medium embodying the method.
Technical solution
[0005] The technical scheme adopted by the invention for solving the technical problems is as follows: a camera image processing method based on a marker is constructed, and the method comprises the following steps:
[0006] A, selecting or extracting a marker image, and performing perspective transformation on the marker image to obtain a marker image sequence;
[0007] B, extracting the sequence feature points of each sequence image in the marker image sequence;
[0008] C, acquiring a current camera image;
[0009] D, extracting the image feature points of the current camera image, and pairing them with the sequence feature points of the sequence images to obtain successfully matched feature point pairs;
[0010] E, calculating the extrinsic parameter matrix of the camera for the current frame from the successfully matched feature point pairs in combination with the intrinsic parameter matrix of the camera, wherein the extrinsic parameter matrix of the current frame expresses the coordinate correspondence between the successfully matched feature points of the marker image and the camera image.
[0011] The invention also provides a camera image processing device based on the marker, which comprises:
[0012] the system comprises a marker image sequence acquisition module, a marker image sequence acquisition module and a marker image conversion module, wherein the marker image sequence acquisition module is used for selecting or extracting one marker image and carrying out perspective transformation on the marker image to obtain a marker image sequence;
[0013] the first characteristic point extraction module is used for extracting the sequence characteristic points of each sequence image in the marker image sequence;
[0014] the current camera image acquisition module is used for acquiring a current camera image;
[0015] the characteristic point matching module is used for extracting the image characteristic points of the current camera image, matching the image characteristic points of the current camera image with the sequence characteristic points of the sequence image and acquiring successfully matched characteristic point pairs;
[0016] and the extrinsic parameter matrix calculation module, configured to calculate the extrinsic parameter matrix of the camera for the current frame from the successfully matched feature point pairs in combination with the intrinsic parameter matrix of the camera, wherein the extrinsic parameter matrix of the current frame expresses the coordinate correspondence between the successfully matched feature points of the marker image and the camera image.
[0017] The invention also provides a method for realizing augmented reality, and the external parameter matrix of the camera is obtained by adopting the camera image processing method based on the marker.
[0018] The invention also provides a device for implementing augmented reality, comprising a processor for executing computer program instructions stored in a memory for implementing the steps of the method as described above.
[0019] The present invention also provides a computer-readable storage medium having stored thereon a computer program which, when executed by a processor, carries out the steps of the method as described above.
Advantageous Effects of the Invention
[0020] The method can effectively identify the marker, increases the recognition rate of the marker at large viewing angles, improves the accuracy of feature point matching between the marker image and the camera image, requires little computation, and is suitable for mobile devices.
Brief Description of the Drawings
[0021] The invention will be further described with reference to the accompanying drawings and examples, in which:
[0022] FIG. 1 is a schematic flow chart of an embodiment of a method for processing an image of a camera based on a marker according to the present invention;
[0023] FIG. 2 is a schematic flowchart of a second embodiment of a marker-based camera image processing method according to the present invention;
[0024] FIG. 3-1 is a schematic diagram of a sequence of marker images;
[0025] FIG. 3-2 is a schematic view of an original marker image;
[0026] FIG. 3-3 is a schematic diagram of marker images generated by 2 perspective transformations in each direction;
[0027] FIG. 4 is a schematic diagram of a self-matching result of feature points of a marker image;
[0028] FIG. 5 is a schematic view of a region of interest and a region of non-interest;
[0029] FIG. 6 is a diagram of exemplary image processing;
[0030] FIG. 7 is a schematic diagram of an error analysis of a camera external reference matrix;
[0031] FIG. 8 is a schematic diagram of the functional modules of the marker-based camera image processing device according to the present invention.
Best Mode for Carrying Out the Invention
[0032] For a more clear understanding of the technical features, objects and effects of the present invention, embodiments of the present invention will now be described in detail with reference to the accompanying drawings.
[0033] Referring to FIG. 1, FIG. 1 is a schematic flow chart of an embodiment of the marker-based camera image processing method according to the present invention. The marker-based camera image processing method of this embodiment can be applied to augmented reality technology.
[0034] As shown in FIG. 1, the marker-based camera image processing method of this embodiment includes the following steps:
[0035] Step A, selecting or extracting a marker image, and performing perspective transformation on the marker image to obtain a marker image sequence.
[0036] The marker image sequence can be obtained by performing posture transformation on the selected or extracted marker image by adopting a preset transformation matrix. Wherein the pose transformation performed on the selected or extracted marker images includes translation and rotation.
[0037] The selected or extracted marker image is a marker image pre-stored in a memory, wherein the marker image can be an image directly called from an image library or a live image obtained by field shooting and stored in the memory, and the source of the marker image is not particularly required.
[0038] The preset transformation matrix can be calculated from the preset distance between the marker image and the camera in a typical usage scenario, and the transformation adopted can be a perspective transformation.
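As an illustration of the perspective transformation described above, the following minimal pure-Python sketch maps the four corners of a marker image through a preset 3×3 perspective (homography) matrix; the matrix values, function names, and image size here are illustrative assumptions, not taken from the patent:

```python
def apply_homography(H, pt):
    """Map a 2-D point through a 3x3 perspective transformation matrix."""
    x, y = pt
    w = H[2][0] * x + H[2][1] * y + H[2][2]          # projective scale
    return ((H[0][0] * x + H[0][1] * y + H[0][2]) / w,
            (H[1][0] * x + H[1][1] * y + H[1][2]) / w)

# Illustrative preset matrix: a slight horizontal shrink, shift, and tilt.
H = [[0.9,  0.0, 10.0],
     [0.0,  1.0,  0.0],
     [1e-4, 0.0,  1.0]]

# Corners of a hypothetical 100x100 marker image.
corners = [(0, 0), (100, 0), (100, 100), (0, 100)]
warped = [apply_homography(H, c) for c in corners]   # one pose of the sequence
```

Repeating this with a different preset matrix for each simulated camera pose yields the marker image sequence.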
[0039] Further, before step A, the marker-based camera image processing method further comprises:
[0040] Step A1, acquiring the intrinsic parameter matrix of the camera, wherein the intrinsic parameter matrix contains the parameter information of the camera.
[0041] The parameter information of the camera is various parameters of the camera itself, for example, the number of horizontal pixels and the number of vertical pixels of the camera itself, and the horizontal and vertical normalized focal lengths of the camera, etc. The parameters can be obtained by calibrating the camera in advance, or can be directly calculated by reading parameter information (pixels, focal lengths and the like) of the camera, and the embodiment does not make specific requirements.
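The intrinsic parameter matrix described above can be sketched as follows; assuming the common pinhole model with the principal point at the image centre (a simplification not stated in the patent), and with illustrative focal-length and pixel-count values:

```python
def intrinsic_matrix(fx, fy, width, height):
    """Build a pinhole-camera intrinsic matrix from the normalized focal
    lengths and the horizontal/vertical pixel counts, assuming the
    principal point sits at the image centre."""
    return [[fx,  0.0, width / 2.0],
            [0.0, fy,  height / 2.0],
            [0.0, 0.0, 1.0]]

# Hypothetical camera: 640x480 pixels, 800-pixel normalized focal lengths.
K = intrinsic_matrix(800.0, 800.0, 640, 480)
```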
[0042] Step A2, initializing the system environment and configuring the system parameters. This mainly comprises building the system hardware platform, setting up a drawing environment capable of supporting two-dimensional and three-dimensional graphics, allocating image cache space, identifying the camera, and the like.
[0043] And B, extracting the sequence characteristic points of each sequence image in the marker image sequence.
[0044] Further, this embodiment further includes, before step C:
[0045] Step B11, extracting the feature points of all sequence images in the marker image sequence using a feature point extraction algorithm, for example a SURF, SIFT, or ORB feature point extraction algorithm.
[0046] In this embodiment, the ORB feature point extraction algorithm is preferably used to extract the sequence feature points of each sequence image.
[0047] Compared with SURF and SIFT features, extracting the feature points of the marker images with the ORB algorithm provides rotation invariance and a high extraction speed, so the scheme is suitable for mobile devices.
[0048] And B12, performing self-matching on the sequence feature points of each sequence image extracted in the step B11.
[0049] Self-matching the extracted sequence feature points of each sequence image means matching the sequence feature points of each sequence image against one another. Self-matching yields the feature points with high similarity within each sequence image.
[0050] It can be understood that, in this embodiment, a threshold method may be used for self-matching: the values at corresponding positions of the ORB descriptor sequences of any two sequence feature points are subtracted, the absolute values are taken and accumulated, and the accumulated value is the pairing value of the feature point match. If the accumulated value is greater than the threshold, the match is judged to have failed; if it is less than the threshold, the match is judged successful, i.e. the two sequence feature points match.
[0051] And B13, removing the sequence feature points successfully matched with the self-matching and keeping the sequence feature points failed in self-matching.
[0052] Since the sequence feature points that self-match successfully in each sequence image would confuse subsequent matching, in this step they are removed, reducing their influence on subsequent operations, increasing the processing speed, reducing the matching and computation load, and improving the accuracy of feature point matching. At the same time, the sequence feature points for which self-matching failed are retained for pairing in the subsequent steps.
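Steps B12 and B13 can be sketched in pure Python as follows; the descriptors here are short illustrative integer lists rather than real 64-element ORB descriptors, and the function names and threshold are assumptions:

```python
def sad(d1, d2):
    """Pairing value: sum of absolute differences of two descriptors."""
    return sum(abs(a - b) for a, b in zip(d1, d2))

def remove_self_matches(descriptors, threshold):
    """Drop every descriptor whose pairing value with any OTHER descriptor
    in the same image falls below the threshold (a successful self-match),
    keeping only the descriptors for which self-matching failed."""
    keep = []
    for i, d in enumerate(descriptors):
        similar = any(sad(d, e) < threshold
                      for j, e in enumerate(descriptors) if j != i)
        if not similar:
            keep.append(d)
    return keep

# Toy descriptors: the first two are near-duplicates and get removed.
descs = [[0, 0, 0], [0, 1, 0], [9, 9, 9]]
survivors = remove_self_matches(descs, threshold=3)   # [[9, 9, 9]]
```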
[0053] And C, acquiring the current camera image.
[0054] It can be understood that the current camera image is a frame in the real environment captured by the camera, which is an image acquired in real time.
[0055] And D, extracting the image characteristic points of the obtained current camera image, and pairing the image characteristic points of the current camera image with the sequence characteristic points of the sequence image to obtain successfully matched characteristic point pairs.
[0056] Further, step D specifically includes:
[0057] Step D11, identifying the region of interest in the current camera image based on the preset extrinsic parameter matrix, and removing the non-interest region.
[0058] In step C, the current camera image, i.e. the frame currently captured by the camera, is acquired. Based on the preset extrinsic parameter matrix, a search is performed in the acquired current camera image, and the region where the marker image matched in the previous frame appears in the camera image is identified; this region is the region of interest in the current camera image, and the region outside it is removed. In the specific operation, the region outside the region of interest is replaced by a color block (for example, the non-interest region is filled entirely with black); once it is filled with black, no feature points can be extracted outside the region of interest, which speeds up the computation.
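The black-fill masking described above can be sketched as follows on a toy greyscale image represented as nested lists (the rectangular-ROI representation and names are illustrative assumptions; a real implementation would operate on an image buffer):

```python
def mask_outside_roi(image, roi):
    """Fill every pixel outside the rectangular region of interest with
    black (0) so that no feature points are extracted there."""
    x0, y0, x1, y1 = roi      # ROI as (left, top, right, bottom), half-open
    return [[px if x0 <= x < x1 and y0 <= y < y1 else 0
             for x, px in enumerate(row)]
            for y, row in enumerate(image)]

img = [[5] * 4 for _ in range(4)]          # uniform 4x4 grey image
masked = mask_outside_roi(img, (1, 1, 3, 3))
```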
[0059] It should be noted that, when the current frame is not processed on the basis of the first camera frame, the preset extrinsic parameter matrix is the extrinsic parameter matrix of the camera obtained by applying the marker-based camera image processing method of the present invention to the previous frame. After the extrinsic parameter matrix of the camera is obtained for a frame, it is stored in memory and used as the preset extrinsic parameter matrix for processing the next frame. If the preset extrinsic parameter matrix is to be obtained from the first camera frame, the extrinsic parameter matrix of the first frame can be obtained using an existing image processing scheme and stored in memory as the preset extrinsic parameter matrix for processing the second frame.
[0060] It can be understood that if the extrinsic parameter matrix of the camera was not obtained for the previous frame, this step is not performed; that is, step D11 need not be executed, and the region of interest of the current camera image need not be identified.
[0061] And D12, extracting the interested characteristic points of the interested area in the current camera image.
[0062] The feature points of interest of the region of interest in the current camera image can be extracted by using a feature point extraction algorithm, and the ORB feature point extraction algorithm is preferably used for extraction in this embodiment.
[0063] Step D13, pairing the feature points of interest of the region of interest with the sequence feature points retained in step B13.
[0064] Further, step D13 specifically includes:
[0065] and D131, acquiring a typical image from the marker image sequence according to the preset external reference matrix, and acquiring typical characteristic points of the typical image.
[0066] The typical image is the marker image closest to the coordinate correspondence of the matched feature points described by the extrinsic parameter matrix; that is, among all sequence images of the marker image sequence, the typical image is the one closest in angle, position, and state to the current camera image captured by the camera. It can be understood that when the extrinsic parameter matrix of the camera was not acquired for the previous frame, no typical image needs to be selected.
[0067] Specifically, the acquisition of a typical image can be achieved by:
[0068] D1311, obtaining the sequence vertex coordinates corresponding to each sequence image in the marker image sequence;
[0069] D1312, calculating the length of each side of each sequence image based on the sequence vertex coordinates, and storing them in order to obtain a first side-length sequence for each sequence image;
[0070] D1313, normalizing the first side-length sequence of each obtained sequence image;
[0071] D1314, obtaining the vertex coordinates of the region of interest according to the preset extrinsic parameter matrix;
[0072] D1315, calculating a second side-length sequence of the region of interest based on the obtained vertex coordinates of the region of interest, and normalizing the calculated second side-length sequence;
[0073] D1316, respectively calculating the Euclidean distance or the Manhattan distance between the first side-length sequences of all sequence images normalized in step D1313 and the second side-length sequence of the region of interest normalized in step D1315.
[0074] D1317, judging according to the obtained Euclidean distance or Manhattan distance, and obtaining a typical image.
[0075] The judgment is made by calculating the Euclidean or Manhattan distance between the second side-length sequence of the current region of interest and the first side-length sequence of each marker image and selecting the minimum distance; the sequence image with the minimum Euclidean or Manhattan distance is the typical image.
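Steps D1311 through D1317 can be sketched as follows; the quadrilateral coordinates are illustrative, and Euclidean distance is used (the Manhattan variant would sum absolute differences instead):

```python
import math

def edge_lengths(quad):
    """Normalized side-length sequence from four vertex coordinates."""
    out = []
    for i in range(4):
        (x0, y0), (x1, y1) = quad[i], quad[(i + 1) % 4]
        out.append(math.hypot(x1 - x0, y1 - y0))
    total = sum(out)
    return [e / total for e in out]        # normalization step

def typical_image_index(sequence_quads, roi_quad):
    """Index of the sequence image whose normalized side-length sequence
    is closest (Euclidean distance) to that of the region of interest."""
    roi = edge_lengths(roi_quad)
    dists = [math.dist(edge_lengths(q), roi) for q in sequence_quads]
    return dists.index(min(dists))

sequence_quads = [[(0, 0), (4, 0), (4, 1), (0, 1)],   # elongated pose
                  [(0, 0), (1, 0), (1, 1), (0, 1)]]   # square pose
roi_quad = [(0, 0), (5, 0), (5, 5), (0, 5)]            # ROI is square-shaped
best = typical_image_index(sequence_quads, roi_quad)   # -> 1 (square pose)
```

Because the side-length sequences are normalized, the comparison depends on the shape of the projected quadrilateral rather than its absolute size.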
[0076] It is to be understood that, after the sequence feature points of each sequence image in the marker image sequence are obtained in step B, the sequence feature points of each sequence image are correspondingly stored. Therefore, after the typical images included in the sequence images are obtained in step D1317, the corresponding sequence feature points can be obtained from the obtained typical images, and these sequence feature points are used as typical feature points of the typical images.
[0077] And D132, matching the interested characteristic points of the interested area with the typical characteristic points of the typical image.
[0078] By selecting the typical image from the marker image sequence and matching the extracted interesting characteristic points of the interesting region with the typical characteristic points of the typical image, the matching quantity can be greatly reduced, the matching time can be shortened, the operation speed can be accelerated, and the matching accuracy is higher.
[0079] Further, the step D132 specifically includes the steps of:
[0080] Step D1321, pairing the typical feature points of the typical image with the feature points of interest of the region of interest one by one using the threshold method.
[0081] D1322, judging whether the pairing value of a typical feature point of the typical image with a feature point of interest of the region of interest is greater than the threshold; if so, extracting the typical feature points whose pairing value is greater than the threshold.
[0082] For example, when a typical feature point of the typical image can be matched with multiple feature points of interest in the region of interest, it cannot be determined which feature point of interest matches it, which easily causes confusion.
[0083] D1323, removing from the matching result the typical feature points whose pairing value is greater than the threshold, to obtain the successfully matched feature point pairs.
[0084] In step D1322, after the typical feature points whose pairing value is greater than the threshold are extracted, they are removed from the matching result, and these typical feature points are stored as difference feature points in the difference feature point table obtained in the previous frame.
[0085] In other words, in step D132, based on the typical feature points (i.e. the typical feature point set) of the typical image and the feature points of interest of the region of interest, the pairing value of each typical feature point with each feature point of interest is calculated and compared with the threshold: if the pairing value is greater than the threshold, the pairing fails; if it is less than the threshold, the pairing succeeds, i.e. the two feature points whose pairing value is less than the threshold form a successfully matched feature point pair. For example, let a typical feature point of the typical image be X, a feature point of interest of the region of interest be Y, and their pairing value be m. If m is greater than the threshold, the pairing of X and Y fails, and X is saved as a difference feature point into the difference feature point table; if m is less than the threshold, X and Y are successfully paired and form a successfully matched feature point pair.
[0086] It is understood that if the current frame is the first frame, the difference feature point table is empty.
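One reading of steps D1321–D1323 can be sketched as follows: a typical feature point pairs successfully when exactly one feature point of interest falls under the pairing-value threshold, while an ambiguous typical point (multiple matches) goes into the difference feature point table. The descriptors, threshold, and one-vs-many interpretation here are illustrative assumptions:

```python
def pair_features(typical, interest, threshold):
    """Pair typical feature points with feature points of interest;
    an ambiguous typical point (more than one match under the threshold)
    is moved to the difference feature point table instead."""
    pairs, difference_table = [], []
    for t in typical:
        hits = [i for i in interest
                if sum(abs(a - b) for a, b in zip(t, i)) < threshold]
        if len(hits) == 1:
            pairs.append((t, hits[0]))     # unambiguous match
        elif len(hits) > 1:
            difference_table.append(t)     # ambiguous: remove from result
    return pairs, difference_table

typical = [[0, 0], [5, 5]]
interest = [[0, 1], [0, 0], [5, 6]]
pairs, difference_table = pair_features(typical, interest, threshold=3)
```

Here `[0, 0]` matches two interest points and is shelved as a difference feature point, while `[5, 5]` pairs uniquely with `[5, 6]`.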
[0087] Step E, calculating the extrinsic parameter matrix of the camera for the current frame from the successfully matched feature point pairs in combination with the intrinsic parameter matrix of the camera, wherein the extrinsic parameter matrix of the current frame expresses the coordinate correspondence between the successfully matched feature points of the marker image and the camera image.
[0088] Specifically, an RPP (Robust Planar Pose) algorithm is adopted to calculate the extrinsic parameter matrix of the camera for the current frame from the successfully matched feature point pairs obtained in step D, in combination with the intrinsic parameter matrix of the camera.
[0089] The extrinsic parameter matrix of the camera describes how the camera that shoots the marker is translated and rotated in space into the state in which the currently acquired marker image was captured. In other words, it is the correspondence between the point coordinates of the marker image and those of the camera image captured by the camera, expressed as a function described by a matrix: the extrinsic parameter matrix.
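The roles of the intrinsic matrix K and the extrinsic [R|t] described above can be shown with the standard pinhole projection x = K(RX + t); the matrices and point values below are illustrative, not derived from the patent:

```python
def project(K, R, t, point3d):
    """Project a marker-space 3-D point into the camera image via the
    intrinsic matrix K and the extrinsic rotation R / translation t."""
    X, Y, Z = point3d
    # marker -> camera coordinates: Xc = R*X + t
    cam = [R[r][0] * X + R[r][1] * Y + R[r][2] * Z + t[r] for r in range(3)]
    # camera -> homogeneous pixel coordinates: p = K*Xc
    pix = [sum(K[r][k] * cam[k] for k in range(3)) for r in range(3)]
    return pix[0] / pix[2], pix[1] / pix[2]    # divide by depth

K = [[800.0, 0.0, 320.0], [0.0, 800.0, 240.0], [0.0, 0.0, 1.0]]
R = [[1.0, 0.0, 0.0], [0.0, 1.0, 0.0], [0.0, 0.0, 1.0]]   # no rotation
t = [0.0, 0.0, 2.0]                                        # 2 units in front
center = project(K, R, t, (0.0, 0.0, 0.0))   # -> image centre (320, 240)
```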
[0090] Further, the marker-based camera image processing method of this embodiment further includes, after step E:
[0091] Step F, performing error calculation on the extrinsic parameter matrix of the current frame acquired in step E, to obtain an error result.
[0092] The step F specifically comprises the following steps:
[0093] F1, based on the extrinsic parameter matrix of the current camera acquired in step E and the coordinates of all feature points successfully matched between the sequence images and the current camera image, calculating the computed coordinates of the sequence feature points in the camera coordinate system;
[0094] F2, calculating the error distance between the computed coordinates obtained in step F1 and the coordinates of the successfully matched feature points in the current camera image;
[0095] F3, calculating the average error distance from all the obtained error distances.
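Steps F2 and F3 reduce to a mean reprojection error; a minimal sketch, with illustrative coordinates:

```python
import math

def average_reprojection_error(calculated, matched):
    """Mean Euclidean distance between the coordinates re-computed through
    the extrinsic matrix (step F1) and the matched camera-image coordinates."""
    dists = [math.dist(c, m) for c, m in zip(calculated, matched)]
    return sum(dists) / len(dists)

calc = [(10.0, 0.0), (0.0, 0.0)]
obs  = [(13.0, 4.0), (0.0, 0.0)]     # first point is 5 px off (3-4-5 triangle)
err = average_reprojection_error(calc, obs)   # (5 + 0) / 2 = 2.5
```

Comparing `err` with the average error distance threshold then decides whether the extrinsic matrix of the current frame is accepted (step G).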
[0096] Step G: verifying, according to the error result, whether the extrinsic parameter matrix of the camera for the current frame is correct.
[0097] G1: judging whether the average error distance is greater than the average error distance threshold;
[0098] G2: if the average error distance is greater than the average error distance threshold, the extrinsic parameter matrix of the camera obtained in step E is wrong; otherwise, the extrinsic parameter matrix of the current frame obtained in step E is determined to be correct.
[0099] Preferably, if it is determined in step G2 that the extrinsic parameter matrix of the current frame camera obtained in step E is correct, the following steps are further performed:
[0100] updating the region of interest and the difference characteristic point table stored in the previous frame;
[0101] and storing the external parameter matrix of the current frame camera. The saved external parameter matrix of the current frame camera can be used as a preset external parameter matrix of the camera image of the next frame.
[0102] It can be understood that, if step G2 verifies that the extrinsic parameter matrix of the current frame obtained in step E is incorrect, the image processing has failed: the obtained extrinsic parameter matrix is not saved, and the saved region of interest and difference feature point table are emptied at the same time.
[0103] Referring to FIG. 2, FIG. 2 is a schematic flowchart of a second embodiment of the marker-based camera image processing method according to the present invention. The method can be used to implement augmented reality technology.
[0104] As shown in FIG. 2, the marker-based camera image processing method of this embodiment comprises steps 201 to 209. Specifically, the method comprises the following steps:
[0105] Step 201, acquiring the intrinsic parameter matrix of the camera, wherein the intrinsic parameter matrix contains the parameter information of the camera.
[0106] The parameter information of the camera is various parameters of the camera itself, for example, the number of horizontal pixels and the number of vertical pixels of the camera itself, and the horizontal and vertical normalized focal lengths of the camera, etc. The parameters can be obtained by calibrating the camera in advance, or can be directly calculated by reading parameter information (pixels, focal lengths and the like) of the camera, and the embodiment does not make specific requirements.
[0107] Step 202, selecting or extracting a marker image, and carrying out perspective transformation on the marker image to obtain a marker image sequence.
[0108] It will be appreciated that the essence of step 202 is to simulate the marker images that would be captured when the camera is not perpendicular to the selected or extracted marker image (the original marker image).
[0109] Specifically, the simulation can be achieved by perspective transformation, and the preset transformation matrix used can be obtained from the preset distance between the marker image and the camera in a typical usage scenario.
[0110] FIG. 3-1 shows the marker image sequence obtained by applying the perspective transformation matrix to FIG. 3-2; the perspective transformation matrix can be calculated by simulating the pose changes of the camera to construct a virtual extrinsic parameter matrix and combining it with the intrinsic parameter matrix of the camera.
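The construction described above — a virtual extrinsic matrix combined with the intrinsic matrix to yield a perspective transformation — follows from the standard identity that, for a planar marker at Z = 0, the homography is H = K·[r1 r2 t]. A minimal sketch with illustrative matrices:

```python
def plane_homography(K, R, t):
    """Homography mapping the marker plane (Z = 0) into the image:
    H = K * [r1 r2 t], where r1, r2 are the first two columns of R."""
    M = [[R[r][0], R[r][1], t[r]] for r in range(3)]   # [r1 r2 t]
    return [[sum(K[r][k] * M[k][c] for k in range(3)) for c in range(3)]
            for r in range(3)]

K = [[800.0, 0.0, 320.0], [0.0, 800.0, 240.0], [0.0, 0.0, 1.0]]
R = [[1.0, 0.0, 0.0], [0.0, 1.0, 0.0], [0.0, 0.0, 1.0]]  # simulated pose
t = [0.0, 0.0, 1.0]
H = plane_homography(K, R, t)
```

Substituting a rotated R (one per simulated direction and inclination) yields one preset transformation matrix per sequence image.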
[0111] It can be understood that in this embodiment fig. 3-1 generates only 4 directions (one direction every 90 degrees), each with 1 inclination angle; if a better effect is required, the number of directions and the number of angles per direction can be increased (as shown in fig. 3-3), where 2 perspective transformations are performed for each direction to generate 2 images.
[0112] The transformation matrices are preset, and when a new marker image is adopted, the same preset transformation matrices can be used to generate its marker image sequence.
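The geometry of this step can be sketched as applying a preset 3x3 perspective (homography) matrix to marker coordinates; `warp_point`, `warp_quad` and the example matrices below are illustrative assumptions, not the patent's implementation:

```python
def warp_point(H, x, y):
    """Apply a 3x3 perspective transformation matrix H to point (x, y)."""
    w = H[2][0] * x + H[2][1] * y + H[2][2]
    u = (H[0][0] * x + H[0][1] * y + H[0][2]) / w
    v = (H[1][0] * x + H[1][1] * y + H[1][2]) / w
    return u, v

def warp_quad(H, corners):
    # Map the four corners of the marker image through H; this gives
    # the outline of one simulated (non-perpendicular) view.
    return [warp_point(H, x, y) for x, y in corners]

# With the identity matrix the marker is unchanged (camera perpendicular).
IDENTITY = [[1.0, 0.0, 0.0], [0.0, 1.0, 0.0], [0.0, 0.0, 1.0]]
corners = [(0.0, 0.0), (100.0, 0.0), (100.0, 100.0), (0.0, 100.0)]
```

Each preset transformation matrix in the sequence would be applied the same way to produce one simulated view.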
[0113] And 203, extracting the sequence characteristic points of each sequence image in the marker image sequence, performing self-matching on the sequence characteristic points of each sequence image, and removing the sequence characteristic points successfully subjected to self-matching.
[0114] In this embodiment, the sequence feature points of each sequence image in the marker image sequence can be extracted by the ORB algorithm. The ORB features extracted by the ORB algorithm have rotation invariance and a high extraction speed, making the method suitable for running on a mobile device while it is moving.
[0115] The ORB feature is an integer sequence of length 64. The matching process subtracts the values at corresponding positions of the ORB feature sequences of 2 feature points, takes the absolute values, and accumulates them; the accumulated value is the pairing value of the feature points. A threshold method is used for judgment: if the accumulated value is larger than the threshold, the matching is judged to fail; if it is smaller than the threshold, the matching is judged to succeed.
[0116] In this embodiment, the purpose of self-matching all the sequence feature points of each sequence image is to remove feature points with high similarity within the current marker image. The points a, b and c shown in fig. 4 are feature points we want to remove: their similarity is high, which may cause mismatching.
[0117] Further, for the ORB feature points calculated from a transformed marker image, the coordinates of the feature points are described by their coordinate positions before the transformation, while the values of the ORB feature sequence are left unchanged. For example, if a point with coordinates (1, 1) in the original marker image has coordinates (10, 10) after perspective transformation, and this point is detected as an ORB feature point in the transformed image, the ORB feature sequence is calculated on the transformed image (because the point is detected after transformation), but the coordinate position of the point is recorded using the original image coordinates (1, 1). In this way, when the external parameter matrix of the camera is solved from the correspondence between marker image coordinates and camera image coordinates (in fact, the correspondence of a series of feature point coordinates), a correct result can only be calculated using the coordinates of the original marker image.
[0118] And step 204, acquiring the current camera image.
[0119] And step 205, identifying the interested region in the current camera image based on the preset external reference matrix, and removing the non-interested region.
[0120] It should be noted that, except when processing the first camera frame, the preset external parameter matrix is the external parameter matrix of the camera obtained by applying the marker-based camera image processing method of the present invention to the previous frame; the external parameter matrix obtained for each frame is stored in a memory and used as the preset external parameter matrix for processing the next camera frame. For the first camera frame, the external parameter matrix can be obtained with an existing image processing scheme and stored in the memory as the preset external parameter matrix for processing the second camera frame.
[0121] The region of interest is the result of the processing of the previous frame, which may be a polygon described by a plurality of vertices (e.g., a quadrilateral described by 4 vertices). And obtaining a corresponding region of interest in the camera image of the current frame according to the preset extrinsic parameter matrix.
[0122] As shown in fig. 5, the white region is the region of interest and the regions outside it are the non-interested regions. All image content outside the quadrilateral (the non-interested regions) is replaced by black, so that no feature points are extracted from the black regions, which greatly increases the calculation speed.
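Blacking out the non-interested region can be sketched with a ray-casting point-in-polygon test; this is a generic technique, and the function names and the toy image representation are assumptions:

```python
def point_in_quad(pt, quad):
    # Ray-casting test: is pt inside the polygon given by vertices quad?
    x, y = pt
    inside = False
    n = len(quad)
    for i in range(n):
        x1, y1 = quad[i]
        x2, y2 = quad[(i + 1) % n]
        if (y1 > y) != (y2 > y):
            if x < (x2 - x1) * (y - y1) / (y2 - y1) + x1:
                inside = not inside
    return inside

def mask_outside(image, quad):
    # image: list of rows of (r, g, b); blacken pixels outside quad so
    # no feature points are extracted there.
    for y, row in enumerate(image):
        for x, _ in enumerate(row):
            if not point_in_quad((x, y), quad):
                row[x] = (0, 0, 0)
    return image
```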
[0123] And step 206, extracting interesting characteristic points of the interesting area in the current camera image.
[0124] The feature points of interest of the region of interest in the current camera image are extracted by using a feature point extraction algorithm, and the feature points of interest are preferably extracted by using an ORB feature point extraction algorithm in this embodiment.
[0125] And step 207, acquiring a typical image from the marker image sequence according to the preset external reference matrix, and acquiring typical characteristic points of the typical image.
[0126] Step 207 essentially selects from the marker image sequence the marker image closest to the current camera state, i.e. the aforementioned typical image, which is used for matching with the current camera image.
[0127] It can be understood that if the calculation of the external reference matrix for the previous frame was not successful (i.e. the external reference matrix of the camera obtained in the previous frame is wrong), the feature points of the original marker image, i.e. the feature points of fig. 3-2, are used. Otherwise, a typical image is selected from the marker image sequence, and its typical feature points are taken for matching with the feature points of the camera image acquired in the current frame. The specific selection mode is as follows:
[0128] The lengths of the four sides of each marker image in the marker image sequence are calculated and stored in order; as shown in fig. 6, the lengths of the 4 sides are stored in the order 1, 2, 3, 4, giving one length sequence per sequence image (5 length sequences in total). For each sequence, each side length is divided by side length No. 1 of that sequence (side No. 1 included) for normalization. Then, using the camera external parameter matrix saved from the previous frame, the coordinate positions of the points A, B, C, D on the previous camera image are calculated, the lengths of the line segments AC, AB, BD and DC on that image are computed to form a length sequence, and the same normalization is applied. This sequence is then matched against all the length sequences of the marker image sequence (the 5 length sequences described above).
[0129] The specific matching mode is to calculate the Euclidean distance or Manhattan distance between the length sequence formed from the camera image and each length sequence formed from the marker images, and to select the minimum among the calculated distances; the sequence image with the minimum Euclidean or Manhattan distance is the typical image.
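The side-length normalization and nearest-sequence selection can be sketched as below; the function names are assumptions, and Manhattan distance is used, though Euclidean distance works equally per the text:

```python
def normalize_lengths(lengths):
    # Divide every side length by side length No. 1 (itself included).
    return [l / lengths[0] for l in lengths]

def manhattan(a, b):
    return sum(abs(x - y) for x, y in zip(a, b))

def pick_typical_image(sequence_lengths, camera_lengths):
    """Return the index of the marker-sequence image whose normalized
    side-length sequence is nearest (Manhattan distance) to the
    normalized side lengths of the quadrilateral projected with the
    previous frame's external parameter matrix."""
    cam = normalize_lengths(camera_lengths)
    dists = [manhattan(normalize_lengths(s), cam) for s in sequence_lengths]
    return min(range(len(dists)), key=dists.__getitem__)
```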
[0130] And 208, pairing the typical characteristic points of the typical image acquired in the step 207 with the interested characteristic points of the interested region to acquire difference characteristic points, and removing the difference characteristic points from the matching result to acquire successfully matched characteristic point pairs.
[0131] The typical feature points of the typical image are paired with the feature points of interest of the region of interest by a threshold method: it is judged whether the pairing value of a typical feature point of the typical image and a feature point of interest of the region of interest is larger than the threshold, and if so, the typical feature points whose pairing values are larger than the threshold are extracted.
[0132] For example, when a feature point of a sequence image can be paired with a plurality of feature points in the region of interest, it cannot be determined which of those feature points is its correct pairing, which easily causes confusion. To avoid confusion and interference with correct pairing, the typical feature points whose pairing values are greater than the threshold need to be extracted, and these typical feature points are taken as the difference feature points obtained in the current frame.
[0133] Further, the typical feature points with the pair values larger than the threshold value are saved in the difference feature point table obtained in the last frame. It is understood that if the current frame is the first frame, the difference feature point table is empty.
[0134] And 209, calculating the external parameter matrix of the current frame camera by combining the internal parameter matrix of the camera according to the successfully matched characteristic point pairs obtained in the step 208, and verifying whether the calculated external parameter matrix of the current frame camera is correct.
[0135] In this step, a Robust Planar Pose (RPP) algorithm is used to calculate the external reference matrix of the camera. It can be understood that calculating the external reference matrix of the camera with the RPP algorithm generally requires the internal reference matrix of the camera and the correspondence between the feature point coordinates of at least four pairs of marker image and camera image feature points; that is, at least four matched feature point pairs are required.
[0136] In order to improve the matching accuracy, a threshold can be set, say N (N > 4); before the external parameter matrix is calculated, it is judged whether the number of matched feature point pairs is greater than N. If so, the subsequent operation is performed; if not, the current calculation fails.
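A trivial guard implementing this check might look like the following; the value N = 8 is an arbitrary illustrative choice:

```python
MIN_PAIRS = 8  # the threshold N in the text; any value with N > 4 works

def enough_pairs(matched_pairs):
    """The RPP solve needs at least four correspondences; requiring
    more than N (> 4) pairs before solving improves robustness."""
    return len(matched_pairs) > MIN_PAIRS
```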
[0137] Further, in this embodiment, whether the calculated external parameter matrix of the camera of the current frame is correct is verified by an error analysis method.
[0138] As shown in the following mathematical model (a camera perspective projection model), M1 represents the internal reference matrix of the camera and M2 the external reference matrix of the camera calculated for the current frame. Let OXYZ be the world coordinate system and uv the image coordinate system in pixels. If the coordinates of an object point P in the world coordinate system are (X, Y, Z), the coordinates of the corresponding image point p in the image coordinate system are (u, v).
[0139] The method directly takes the pixel coordinates of the original marker as the X-axis and Y-axis coordinates of the world coordinate system, with Z = 0 (the plane of the original marker image is taken as the Z-axis zero point), and takes the pixel coordinates captured by the current camera as the pixel coordinate system. The obtained internal and external reference matrices of the camera thus represent a functional relationship mapping the image coordinates of any point of the original marker image to camera image coordinates. Using this functional relationship, an image coordinate is calculated from the coordinates of each successfully matched marker image feature point, and the distance between this calculated coordinate and the successfully matched camera image feature point is computed; this distance is defined as the error distance of that marker image feature point.
[0140] For the calculation result of each frame, the error distances of all (successfully matched) feature points of the currently calculated marker image are added and divided by the total number of successful matches to obtain the average error distance. A threshold is set: if the average error distance is larger than the threshold, the calculation fails, i.e. the external parameter matrix of the camera calculated for the current frame is wrong; if the average error distance is smaller than the threshold, the calculation succeeds, i.e. the external parameter matrix calculated for the current frame is correct.
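The verification of paragraphs [0138] to [0140] can be sketched as a reprojection-error check; a zero-skew internal matrix and the function names below are assumptions:

```python
import math

def project(K, Rt, X, Y, Z=0.0):
    """Project a world point through the perspective model p = K [R|t] P
    (zero-skew internal reference matrix K assumed)."""
    P = (X, Y, Z, 1.0)
    xc = [sum(Rt[i][j] * P[j] for j in range(4)) for i in range(3)]
    u = (K[0][0] * xc[0] + K[0][2] * xc[2]) / xc[2]
    v = (K[1][1] * xc[1] + K[1][2] * xc[2]) / xc[2]
    return u, v

def mean_error_distance(K, Rt, matches):
    """matches: list of ((X, Y) marker coords, (u, v) camera coords)
    for the successfully matched feature point pairs."""
    total = 0.0
    for (X, Y), (u, v) in matches:
        pu, pv = project(K, Rt, X, Y)
        total += math.hypot(pu - u, pv - v)
    return total / len(matches)

def extrinsics_correct(K, Rt, matches, threshold):
    # Average error distance below the threshold: matrix judged correct.
    return mean_error_distance(K, Rt, matches) < threshold
```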
[0141] Step 210, if the external parameter matrix of the camera calculated for the current frame is determined to be correct in step 209, updating the region of interest and the difference feature point table stored in the previous frame, and storing the external parameter matrix calculated for the current frame. The saved external parameter matrix of the current frame camera is used as the preset external parameter matrix for the next camera frame.
[0142] The method specifically comprises the following steps: for each marker image feature point, if its error distance is greater than the error distance threshold, the feature point is a difference feature point; its serial number is recorded and added to the difference feature point table stored in the previous frame. Alternatively, the sum of the error distances of a marker image feature point over the most recent consecutive frames may be accumulated and an error-distance-sum threshold set; if the sum exceeds this threshold, the feature point is determined to be a difference feature point and added to the difference feature point table stored in the previous frame.
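Maintenance of the difference feature point table described here might be sketched as follows; the function name and the per-frame error dictionary are assumptions:

```python
def update_difference_table(diff_table, feature_errors, err_threshold):
    """diff_table: set of marker feature serial numbers to skip later.
    feature_errors: {serial_number: error distance in this frame}.
    Feature points whose error distance exceeds the threshold are
    recorded as difference feature points."""
    for serial, err in feature_errors.items():
        if err > err_threshold:
            diff_table.add(serial)
    return diff_table

table = update_difference_table(set(), {1: 0.5, 2: 3.0, 4: 7.1}, 1.0)
```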
[0143] As shown in fig. 7, the dotted lines represent feature point pairs for which matching between marker image feature points and camera image feature points succeeded, and the solid lines represent the coordinate point correspondence derived from the external reference matrix of the camera calculated by the RPP algorithm. The left figure marks the positions of the feature points in the marker image as 1, 2, 3, 4; in the right figure, 1', 2', 3', 4' are the feature points in the camera image successfully paired with 1, 2, 3, 4, and 1'', 2'', 3'', 4'' are the positions of 1, 2, 3, 4 in the camera image obtained by reverse verification calculation using the external reference matrix produced by the RPP algorithm. Whether the result is correct is judged from this verification: if, as in fig. 7, the error distance between feature points 4' and 4'' is large, feature point 4 can be determined to be a difference feature point and added to the difference feature point table.
[0144] Further, if the external reference matrix of the camera calculated for the current frame is correct, i.e. the calculation succeeded, the coordinates of the four points A, B, C, D of fig. 6 in the current camera image are calculated from the internal and external reference matrices; the quadrangle whose vertices are these four coordinates is the region of interest, and the four coordinates are saved so that the next frame can exclude the non-interested region. In addition, the sequence image closest to the transformation described by the camera external reference matrix obtained for the current frame (i.e. the aforementioned typical image) and the typical feature points of that typical image are selected from the marker image sequence.
[0145] It can be understood that if the external parameter matrix obtained for the current frame or any subsequent frame is incorrect, the region of interest and the difference feature point table are cleared.
[0146] The invention also provides a method for realizing augmented reality, which adopts the camera image processing method based on the marker to obtain the external parameter matrix of the camera.
[0147] Further, the method for realizing augmented reality further comprises the following steps:
[0148] drawing a virtual graph under the current position of the camera in a preset model according to the internal reference matrix and the external reference matrix of the camera;
[0149] and synthesizing the obtained virtual graph with the current camera image to obtain a synthesized image. It is understood that the synthesized image is an AR image.
[0150] The invention also provides a device for implementing augmented reality, comprising a processor for executing computer program instructions stored in a memory for implementing the steps of the method as described above.
[0151] The present invention also provides a computer readable storage medium having stored thereon a computer program for execution by a processor for implementing the steps of the method as described above.
[0152] The invention also provides a camera image processing device based on the marker, which comprises:
[0153] a marker image sequence obtaining module 801, configured to select or extract one marker image, perform perspective transformation on the marker image, and obtain a marker image sequence;
[0154] a first feature point extraction module 802, configured to extract a sequence feature point of each sequence image in a marker image sequence;
[0155] a current camera image obtaining module 803, configured to obtain a current camera image;
[0156] a feature point pairing module 804, configured to extract image feature points of the current camera image, pair the image feature points of the current camera image with sequence feature points of a sequence image, and obtain feature point pairs successfully matched;
[0157] and an extrinsic parameter matrix calculating module 805, configured to calculate an extrinsic parameter matrix of the current frame camera by combining the internal parameter matrix of the camera according to the successfully matched feature point pairs, where the current frame camera extrinsic parameter matrix is a coordinate corresponding relationship between the marker image and the successfully matched feature point of the camera image.
[0158] Further, the marker-based camera image processing apparatus of this embodiment further comprises:
[0159] an error verification module 806, configured to perform error calculation on the obtained external parameter matrix of the current frame camera to obtain an error result, and to verify according to the error result whether the external parameter matrix of the current frame camera is correct.
The above description is only an embodiment of the present invention and is not intended to limit its scope; all equivalent structural or process modifications made using the present specification and drawings, whether applied directly or indirectly in other related technical fields, fall within the scope of the present invention.

Claims (1)

1. A camera image processing method based on a marker, characterized by comprising the following steps:
    a, selecting or extracting a marker image, and carrying out perspective transformation on the marker image to obtain a marker image sequence;
    b, extracting sequence characteristic points of each sequence image in the marker image sequence;
    c, acquiring a current camera image;
    d, extracting image characteristic points of the current camera image, and pairing the image characteristic points of the current camera image with the sequence characteristic points of the sequence image to obtain successfully matched characteristic point pairs;
    and E, calculating an external parameter matrix of the current frame camera by combining the internal parameter matrix of the camera according to the successfully matched feature point pairs, wherein the external parameter matrix of the current frame camera is the coordinate corresponding relation between the marker image and the feature point successfully matched with the camera image.
    2. The method for processing the image of the camera based on the marker according to claim 1, wherein step A is preceded by the steps of:
    a1, acquiring an internal reference matrix of the camera, wherein the internal reference matrix of the camera comprises parameter information of the camera;
    a2, initializing the system environment and configuring the system parameters.
    3. The method for processing the image of the camera based on the marker according to claim 1, wherein step A specifically comprises:
    and performing posture transformation on the selected or extracted marker image by adopting a preset transformation matrix to generate the marker image sequence.
    4. The method of claim 1, further comprising, before step C:
    and B11, extracting the characteristic points of all sequence images in the marker image sequence by using a characteristic point extraction algorithm.
    5. The method for processing the image of the camera based on the marker according to claim 4, wherein step B11 comprises: performing feature point extraction on all sequence images in the marker image sequence by using an ORB feature point extraction algorithm.
    6. The marker-based camera image processing method according to claim 4, wherein step C is preceded by the steps of:
    b12, carrying out self-matching on the sequence feature points of each sequence image extracted in the step B11;
    b13, removing the sequence feature points for which self-matching succeeded and keeping the sequence feature points for which self-matching failed.
    7. The method for processing the image of the camera based on the marker according to claim 6, wherein step D specifically comprises the steps of:
    d11, identifying an interested area in the current camera image based on the preset external parameter matrix, and removing a non-interested area;
    d12, extracting the interesting characteristic points of the interesting area in the current camera image;
    d13, pairing the feature points of interest of the region of interest with the sequence feature points retained in step B13.
    8. The method for processing the image of the camera based on the marker according to claim 7, wherein step D13 specifically comprises the steps of:
    d131, acquiring a typical image from the marker image sequence according to the preset external reference matrix, and acquiring typical characteristic points of the typical image;
    and D132, matching the interested characteristic points of the interested area with the typical characteristic points of the typical image.
    9. The method for processing the image of the camera based on the marker according to claim 8, wherein step D131 specifically comprises the steps of:
    d1311, acquiring sequence vertex coordinates corresponding to each sequence image in the marker image sequence;
    d1312, calculating the length of each edge of each sequence image based on the sequence vertex coordinates, and sequentially storing to obtain a first edge length sequence of each sequence image;
    d1313, performing normalization processing on the first edge length sequence of each obtained sequence image;
    d1314, obtaining the interesting vertex coordinates of the interesting region according to the preset external parameter matrix;
    d1315, calculating a second side length sequence of the interested region based on the obtained interested vertex coordinates of the interested region, and performing normalization processing on the calculated second side length sequence of the interested region;
    d1316, respectively calculating the Euclidean distance or Manhattan distance between the first side length sequence of each sequence image normalized in step D1313 and the second side length sequence of the region of interest normalized in step D1315;
    d1317, judging according to the obtained Euclidean distances or Manhattan distances, and obtaining the typical image.
    10. The method for processing the image of the camera based on the marker according to claim 9, wherein step D132 specifically comprises:
    d1321, pairing the typical characteristic points of the typical image with the interesting characteristic points of the interesting area by using a threshold method;
    d1322, judging whether a matching value of the typical characteristic point of the typical image and the interesting characteristic point of the interesting area is larger than a threshold value, if so, extracting the typical characteristic point of the typical image of which the matching value is larger than the threshold value;
    and D1323, removing the typical characteristic points of the typical image with the matching value larger than the threshold value in the matching result to obtain the characteristic point pairs which are successfully matched.
    11. The method for processing the image of the camera based on the marker according to claim 1, wherein step E specifically comprises:
    and D, calculating an external parameter matrix of the current frame camera by adopting an RPP algorithm according to the successfully matched feature point pairs obtained in the step D and combining the internal parameter matrix of the camera.
    12. The marker-based camera image processing method according to claim 1, further comprising, after step E:
    f, carrying out error calculation on the extrinsic parameter matrix of the current frame camera acquired in step E to acquire an error result;
    g, verifying whether the external parameter matrix of the current frame camera is correct according to the error result.
    13. The marker-based camera image processing method according to claim 12, wherein step F specifically comprises the steps of:
    f1, calculating the calculation coordinates of the sequence feature point coordinates of the sequence images in the camera coordinate system by combining the coordinates of all feature points which are successfully matched with the sequence images and the current camera images based on the external reference matrix of the current camera acquired in the step E;
    f2, calculating the error distance between the calculated coordinates obtained in the step F1 and the matching coordinates of the feature points successfully matched in the current camera image;
    f3, calculating the average error distance according to all the obtained error distances;
    the step G specifically comprises the following steps:
    G1, judging whether the average error distance is larger than an average error distance threshold;
    G2, if the average error distance is larger than the average error distance threshold, the external parameter matrix of the current frame camera obtained in step E is wrong; otherwise, the external parameter matrix of the current frame camera obtained in step E is determined to be correct.
    14. The marker-based camera image processing method according to claim 13, characterized in that the method further comprises:
    if the extrinsic parameter matrix of the current frame camera obtained in the step E is correct, further executing the following steps:
    updating the region of interest and the difference characteristic point table stored in the previous frame;
    and storing the extrinsic parameter matrix of the current frame camera as a preset extrinsic parameter matrix of the next frame camera image.
    15. A marker-based camera image processing apparatus, characterized in that the apparatus comprises:
    a marker image sequence acquisition module, configured to select or extract a marker image and perform perspective transformation on the marker image to obtain a marker image sequence;
    a first characteristic point extraction module, configured to extract the sequence characteristic points of each sequence image in the marker image sequence;
    a current camera image acquisition module, configured to acquire a current camera image;
    the characteristic point matching module is used for extracting image characteristic points of the current camera image, matching the image characteristic points of the current camera image with the sequence characteristic points of the sequence image, and acquiring successfully matched characteristic point pairs;
    an extrinsic parameter matrix calculation module, configured to calculate the extrinsic parameter matrix of the current frame camera according to the successfully matched feature point pairs in combination with the intrinsic parameter matrix of the camera, wherein the extrinsic parameter matrix of the current frame camera is the coordinate correspondence between the successfully matched feature points of the marker image and the camera image.
    16. The apparatus according to claim 15, further comprising:
    an error verification module, configured to perform error calculation on the obtained external parameter matrix of the current frame camera to obtain an error result, and to verify according to the error result whether the external parameter matrix of the current frame camera is correct.
    17. A method for realizing augmented reality, characterized in that the method adopts the marker-based camera image processing method of any one of claims 1 to 14 to obtain the external parameter matrix of the camera.
    18. The method for realizing augmented reality according to claim 17, further comprising:
    drawing a virtual graph at the current position of the camera in a preset model according to the internal reference matrix and the external reference matrix of the camera; and
    synthesizing the obtained virtual graph with the current camera image to obtain a synthesized image.
    19. An apparatus for implementing augmented reality, characterized in that the apparatus comprises a processor for executing a computer program stored in a memory to implement the steps of the method according to any one of claims 1 to 14.
    20. A computer-readable storage medium, having stored thereon a computer program for execution by a processor to implement the steps of the method according to any one of claims 1 to 14.
CN201780096283.6A 2017-10-30 2017-10-30 Camera image processing method based on marker and augmented reality equipment Pending CN111344740A (en)

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
PCT/CN2017/108404 WO2019084726A1 (en) 2017-10-30 2017-10-30 Marker-based camera image processing method, and augmented reality device

Publications (1)

Publication Number Publication Date
CN111344740A true CN111344740A (en) 2020-06-26

Family

ID=66332453

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201780096283.6A Pending CN111344740A (en) 2017-10-30 2017-10-30 Camera image processing method based on marker and augmented reality equipment

Country Status (2)

Country Link
CN (1) CN111344740A (en)
WO (1) WO2019084726A1 (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US11663736B2 (en) * 2019-12-27 2023-05-30 Snap Inc. Marker-based shared augmented reality session creation

Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101520849A (en) * 2009-03-24 2009-09-02 上海水晶石信息技术有限公司 Reality augmenting method and reality augmenting system based on image characteristic point extraction and random tree classification
CN101661617A (en) * 2008-08-30 2010-03-03 深圳华为通信技术有限公司 Method and device for camera calibration
US20100302366A1 (en) * 2009-05-29 2010-12-02 Zhao Bingyan Calibration method and calibration device
CN103411553A (en) * 2013-08-13 2013-11-27 天津大学 Fast calibration method of multiple line structured light visual sensor
CN105701827A (en) * 2016-01-15 2016-06-22 中林信达(北京)科技信息有限责任公司 Method and device for jointly calibrating parameters of visible light camera and infrared camera
CN106127737A (en) * 2016-06-15 2016-11-16 王向东 A kind of flat board calibration system in sports tournament is measured
CN106874865A (en) * 2017-02-10 2017-06-20 深圳前海大造科技有限公司 A kind of augmented reality implementation method based on image recognition

Family Cites Families (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
GB2508830B (en) * 2012-12-11 2017-06-21 Holition Ltd Augmented reality system and method
CN103955931A (en) * 2014-04-29 2014-07-30 江苏物联网研究发展中心 Image matching method and device
CN104050475A (en) * 2014-06-19 2014-09-17 樊晓东 Reality augmenting system and method based on image feature matching
CN104299215B (en) * 2014-10-11 2017-06-13 中国兵器工业第二O二研究所 The image split-joint method that a kind of characteristic point is demarcated and matched
CN107248169B (en) * 2016-03-29 2021-01-22 中兴通讯股份有限公司 Image positioning method and device
CN107038758B (en) * 2016-10-14 2020-07-17 北京联合大学 Augmented reality three-dimensional registration method based on ORB operator


Also Published As

Publication number Publication date
WO2019084726A1 (en) 2019-05-09

Similar Documents

Publication Publication Date Title
CN111145238B (en) Three-dimensional reconstruction method and device for monocular endoscopic image and terminal equipment
US10510159B2 (en) Information processing apparatus, control method for information processing apparatus, and non-transitory computer-readable storage medium
CN107292949B (en) Three-dimensional reconstruction method and device of scene and terminal equipment
JP6464934B2 (en) Camera posture estimation apparatus, camera posture estimation method, and camera posture estimation program
CN107633526B (en) Image tracking point acquisition method and device and storage medium
CN102834845B (en) The method and apparatus calibrated for many camera heads
US11037325B2 (en) Information processing apparatus and method of controlling the same
CN110648397B (en) Scene map generation method and device, storage medium and electronic equipment
CN103218799B (en) The method and apparatus tracked for camera
CN108345821B (en) Face tracking method and device
CN109410316B (en) Method for three-dimensional reconstruction of object, tracking method, related device and storage medium
CN110926330B (en) Image processing apparatus, image processing method, and program
KR20120048370A (en) Object pose recognition apparatus and method using the same
US11922658B2 (en) Pose tracking method, pose tracking device and electronic device
JP5468824B2 (en) Method and apparatus for determining shape match in three dimensions
CN112819892B (en) Image processing method and device
JP2018113021A (en) Information processing apparatus and method for controlling the same, and program
WO2015113608A1 (en) Method for recognizing objects
CN109902675B (en) Object pose acquisition method and scene reconstruction method and device
JP2017123087A (en) Program, device and method for calculating normal vector of planar object reflected in continuous photographic images
JP2018055367A (en) Image processing device, image processing method, and program
US11189053B2 (en) Information processing apparatus, method of controlling information processing apparatus, and non-transitory computer-readable storage medium
JP2017097578A (en) Information processing apparatus and method
CN113298871B (en) Map generation method, positioning method, system thereof, and computer-readable storage medium
CN111344740A (en) Camera image processing method based on marker and augmented reality equipment

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
WD01 Invention patent application deemed withdrawn after publication

Application publication date: 20200626