CN111210506A - Three-dimensional reconstruction method, system, terminal device and storage medium - Google Patents

Three-dimensional reconstruction method, system, terminal device and storage medium

Info

Publication number
CN111210506A
CN111210506A (application CN201911398126.8A)
Authority
CN
China
Prior art keywords
image
scanned
target
matching
features
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201911398126.8A
Other languages
Chinese (zh)
Inventor
谈飞
黎文猎
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Tapuyihai Shanghai Intelligent Technology Co ltd
Original Assignee
Tapuyihai Shanghai Intelligent Technology Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Tapuyihai Shanghai Intelligent Technology Co., Ltd.
Priority to CN201911398126.8A
Publication of CN111210506A
Legal status: Pending

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T17/00 Three-dimensional [3D] modelling, e.g. data description of 3D objects

Abstract

The invention provides a three-dimensional reconstruction method, system, terminal device and storage medium. The method comprises the following steps: after a projection device projects a preset pattern onto a target to be scanned, acquiring second images from at least four image acquisition devices, where the view-angle regions of adjacent devices among the at least four overlap pairwise and each second image contains the target to be scanned after the preset pattern is projected; performing feature extraction on the second images and matching the extracted image features to obtain image pairs, each image pair comprising at least two second images with overlapping scenes; performing feature point matching on each image pair to obtain matching groups; calculating from the matching groups the space point on the target to be scanned corresponding to each image feature; and generating the three-dimensional model corresponding to the target to be scanned from the space points. The method increases the number of image features, improves the accuracy of three-dimensional reconstruction, and reduces its cost.

Description

Three-dimensional reconstruction method, system, terminal device and storage medium
Technical Field
The present invention relates to the field of image processing technologies, and in particular to a three-dimensional reconstruction method, a three-dimensional reconstruction system, a terminal device, and a storage medium.
Background
Three-dimensional object models are used in a wide range of fields, such as design simulation, virtual reality, and 3D film. Building such a model requires photographing the object panoramically with cameras, recognizing features in the captured images, and then reconstructing the object in three dimensions.
However, feature extraction in current image processing methods can only pick up fairly distinct corner or edge features; regions of an object without distinct features yield few image features. High-quality cameras are therefore needed for image capture, which makes three-dimensional reconstruction expensive, and the scarcity of image features prevents the object's three-dimensional model from being generated effectively and accurately.
Disclosure of Invention
The invention aims to provide a three-dimensional reconstruction method, system, terminal device and storage medium that increase the number of image features, improve the accuracy of three-dimensional reconstruction, and reduce its cost.
The technical solution provided by the invention is as follows:
the invention provides a three-dimensional reduction method, which comprises the following steps:
after the projection equipment projects a preset pattern to a target to be scanned, acquiring second images from at least four image acquisition equipment; the view angle areas of adjacent image acquisition equipment in the at least four image acquisition equipment are overlapped in pairs; the second image comprises a target to be scanned after a preset pattern is projected;
extracting the features of the second image, and matching according to the extracted image features to obtain an image pair; the image pair comprises at least two second images of overlapping scenes;
respectively carrying out feature point matching on the image pairs to obtain matching groups;
calculating according to the matching group to obtain a space point on the target to be scanned corresponding to each image feature;
and generating and obtaining a three-dimensional model corresponding to the target to be scanned according to the space points.
Further, performing feature extraction on the second images and matching the extracted image features to obtain the image pairs includes:
performing feature extraction on each second image with a first feature extraction algorithm and a second feature extraction algorithm, respectively, to obtain a first feature point set and a second feature point set corresponding to each second image;
and matching the first feature point sets and the second feature point sets, respectively, to obtain the image pairs.
Further, matching the first feature point sets and the second feature point sets, respectively, to obtain the image pairs includes:
matching the first feature point set corresponding to the current second image with the first feature point sets corresponding to the remaining second images to obtain a first matching result;
matching the second feature point set corresponding to the current second image with the second feature point sets corresponding to the remaining second images to obtain a second matching result;
and comparing the first matching result with the second matching result, taking the result with the greater proportion of identical image features as the final matching result, and obtaining the image pairs from the final matching result.
Further, generating the three-dimensional model corresponding to the target to be scanned from the space points includes:
connecting all the space points to generate a corresponding triangular mesh, and generating the three-dimensional model corresponding to the target to be scanned from the triangular mesh; or, alternatively,
searching, for all the space points, for adjacent space points within a preset region, connecting each space point with its adjacent space points to generate a corresponding triangular mesh, and generating the three-dimensional model corresponding to the target to be scanned from the triangular mesh.
Further, after the connecting generates the corresponding triangular mesh and before the three-dimensional model corresponding to the target to be scanned is generated from it, the method includes the following step:
searching for and deleting intersecting or overlapping triangles in the triangular mesh to obtain a new triangular mesh.
Further, after the three-dimensional model corresponding to the target to be scanned is generated from all the space point connections, the method includes:
performing feature extraction and feature point matching again on the acquired second images over the region corresponding to the three-dimensional model, and calculating new space points from the new matching groups;
and updating the connections according to the new space points to obtain the three-dimensional model corresponding to the target to be scanned.
Further, before the projection device projects the preset pattern onto the target to be scanned and the second images are acquired from the at least four image acquisition devices, the method includes:
acquiring first images from the at least four image acquisition devices, each first image containing the target to be scanned;
and after the three-dimensional model corresponding to the target to be scanned is obtained, the method includes:
performing feature extraction on the first images to obtain the color features corresponding to the image features;
and performing texture mapping on the three-dimensional model according to the color features of the image features, so as to restore the color and texture of the target to be scanned.
The present invention also provides a three-dimensional reconstruction system comprising a projection device, at least four image acquisition devices, a terminal device and a main frame;
the at least four image acquisition devices are arranged on the main frame, with the view-angle regions of adjacent devices overlapping pairwise; the projection device is arranged on the main frame or close to its inner side;
the projection device and the at least four image acquisition devices are each connected to the terminal device;
after the projection device projects a preset pattern onto the target to be scanned, the terminal device acquires second images from the at least four image acquisition devices, each second image containing the target to be scanned after the preset pattern is projected;
the terminal device performs feature extraction on the second images and matches the extracted image features to obtain image pairs, each image pair comprising at least two second images with overlapping scenes;
the terminal device performs feature point matching on each image pair to obtain matching groups, and calculates from the matching groups the space points on the target to be scanned corresponding to the image features;
and the terminal device generates the three-dimensional model corresponding to the target to be scanned from the space points.
The invention also provides a terminal device comprising a processor, a memory and a computer program stored in the memory and executable on the processor, the processor being configured to execute the computer program stored in the memory to implement the operations performed by the three-dimensional reconstruction method.
The present invention also provides a storage medium storing at least one instruction that is loaded and executed by a processor to implement the operations performed by the three-dimensional reconstruction method.
With the three-dimensional reconstruction method, system, terminal device and storage medium, the number of image features can be increased, reconstruction accuracy improved, and the cost of three-dimensional reconstruction reduced.
Drawings
The above features, technical features and advantages of the three-dimensional reconstruction method, system, terminal device and storage medium, and their implementations, are further described below in a clearly understandable manner through the detailed description of preferred embodiments with reference to the accompanying drawings.
FIG. 1 is a schematic diagram of a three-dimensional reconstruction system according to an embodiment of the present invention;
FIG. 2 is a flow chart of one embodiment of the three-dimensional reconstruction method of the present invention;
FIG. 3 is a flow chart of another embodiment of the three-dimensional reconstruction method of the present invention;
FIG. 4 is a flow chart of another embodiment of the three-dimensional reconstruction method of the present invention;
FIG. 5 is a schematic diagram of another embodiment of the three-dimensional reconstruction system of the present invention;
FIG. 6 is a schematic diagram of another embodiment of the three-dimensional reconstruction system of the present invention;
FIG. 7 is a schematic view of a pattern of the present invention in which regular shapes and definite patterns are combined;
FIG. 8 is a schematic view of a scene in which the combined pattern of FIG. 7 is projected onto the target to be scanned;
FIG. 9 is a schematic diagram of an icon solid pattern of the present invention;
FIG. 10 is a schematic diagram of a scene in which icon solid patterns are projected onto the target to be scanned;
FIG. 11 is a schematic diagram of a computer device according to an embodiment of the present invention.
Detailed Description
In the following description, for purposes of explanation and not limitation, specific details are set forth, such as particular system structures, techniques, etc. in order to provide a thorough understanding of the embodiments of the present application. However, it will be apparent to one skilled in the art that the present application may be practiced in other embodiments that depart from these specific details. In other instances, detailed descriptions of well-known systems, devices, circuits, and methods are omitted so as not to obscure the description of the present application with unnecessary detail.
It will be understood that the terms "comprises" and/or "comprising," when used in this specification and the appended claims, specify the presence of stated features, integers, steps, operations, elements, and/or components, but do not preclude the presence or addition of one or more other features, integers, steps, operations, elements, components, and/or groups thereof.
For the sake of simplicity, the drawings only schematically show the parts relevant to the present invention; they do not represent the actual structure of a product. In addition, to keep the drawings concise and understandable, components with the same structure or function are, in some drawings, only schematically illustrated or labeled once. In this document, "one" means not only "only one" but can also mean "more than one".
It should be further understood that the term "and/or" as used in this specification and the appended claims refers to and includes any and all possible combinations of one or more of the associated listed items.
In particular implementations, the terminal devices described in the embodiments of the present application include, but are not limited to, portable devices such as mobile phones, laptop computers, or tablet computers having touch-sensitive surfaces (e.g., touch-screen displays and/or touch pads).
The terminal device supports various applications, such as one or more of the following: a drawing application, a presentation application, a network creation application, a word processing application, a disc burning application, a spreadsheet application, a gaming application, a telephone application, a video conferencing application, an email application, an instant messaging application, an exercise support application, a photo management application, a digital camera application, a digital video camera application, a Web browsing application, a digital music player application, and/or a digital video player application.
In addition, in the description of the present application, the terms "first", "second", and the like are used only for distinguishing the description, and are not intended to indicate or imply relative importance.
In order to more clearly illustrate the embodiments of the present invention or the technical solutions in the prior art, the following description will be made with reference to the accompanying drawings. It is obvious that the drawings in the following description are only some examples of the invention, and that for a person skilled in the art, other drawings and embodiments can be derived from them without inventive effort.
First embodiment
A three-dimensional reconstruction system, as shown in fig. 1, comprises: a projection device, at least four image acquisition devices, a terminal device and a main frame;
the at least four image acquisition devices are arranged on the main frame, with the view-angle regions of adjacent devices overlapping pairwise; the projection device is arranged on the main frame or close to its inner side;
specifically, the image capturing device is a camera, and the camera includes, but is not limited to, a depth camera, a TOF camera, and a wide-angle camera. At least four image acquisition devices are arranged on the main frame, and the view angle areas of adjacent image acquisition devices in the at least four image acquisition devices are overlapped in pairs, so that images acquired by the at least four image acquisition devices at the same time have positions where scenes are overlapped, and illustratively, as shown in fig. 5, 32 cameras are arranged on the main frame. As shown in fig. 6, the main frame is provided with 8 cameras. The shape and size of the main frame are not limited, the types of the at least four cameras arranged on the main frame are not limited, the at least four cameras are arranged on the main frame, and the installation positions properly enable the visual angle areas of the adjacent image acquisition equipment to be overlapped in a pairwise mode, so that the panoramic view of the target to be scanned can be included after the images shot by the at least four cameras are spliced.
Indeed, even when there are only two groups of image acquisition devices, one group on the top frame of the main frame and the other on the bottom frame, the panorama of the target to be scanned can still be captured as long as the viewing angle of each group is large enough.
The projection device may be a projector capable of projecting a preset pattern, an infrared projection lamp for projecting an infrared light pattern, or a laser projection lamp for projecting a laser light pattern.
The projection device and the at least four image acquisition devices are each connected to the terminal device;
specifically, the projection device and the at least four image acquisition devices may be connected to the terminal device in a wired connection manner. The projection device, the image acquisition device and the terminal device can also be provided with a wireless transmission module, so that the projection device and the at least four image acquisition devices can be connected with the terminal device in a wireless connection mode. The wireless connection mode includes but is not limited to RFID wireless connection, WIFI wireless connection and 2G network wireless connection.
In addition, the number and installation positions of the projection devices are not limited, and the number of projection devices may or may not equal the number of image acquisition devices. For example, when the numbers are equal, all projection devices are arranged on the main frame, each close to an image acquisition device; when they differ, moving parts are arranged on the main frame and the projection devices are detachably mounted on them. Alternatively, as shown in fig. 5, the projection device is placed close to the inner side of the main frame.
Based on the above three-dimensional reconstruction system, and as shown in fig. 2, the following steps are performed:
S110, after the projection device projects a preset pattern onto the target to be scanned, the terminal device acquires second images from the at least four image acquisition devices; each second image contains the target to be scanned after the preset pattern is projected;
specifically, the preset patterns include, but are not limited to, random color dots projected by the projection device, combination patterns (including a combination of regular shapes and definite patterns), icon solid patterns.
One combination of regular shapes and definite patterns is shown in fig. 7. Precisely defined regular patterns are often used in camera applications: regular shapes with well-defined contours, such as the corners of a checkerboard pattern, are good targets for feature extraction algorithms. Taking a solid-color cube as the target to be scanned, the projection effect is shown in fig. 8: a clear combined pattern can be generated by slightly elaborating some regular shapes and projected onto the relatively flat surfaces of the target. Because few features are needed for the calculation while many geometric feature details are added, three-dimensional reconstruction performance improves, and the completeness and accuracy of the reconstruction increase greatly.
One icon solid pattern is shown in fig. 9; its characteristic is that its color spectrum is sufficiently wide and its contours carry high semantic content. Taking a solid-color cube as the target to be scanned, the projection effect is shown in fig. 10: the icon solid pattern can be projected onto the relatively flat surfaces of the target. Because few features are needed for the calculation while many geometric feature details are added, three-dimensional reconstruction performance improves, and the completeness and accuracy of the reconstruction increase greatly.
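As an illustration of how such a preset pattern might be produced in software, the following is a minimal sketch, assuming NumPy and OpenCV are available; the resolution, square size, dot count and colors are illustrative choices, not values from this disclosure.

```python
# Sketch: build a "regular shapes + added detail" image for the projection
# device. A checkerboard supplies well-defined corners; random color dots
# add geometric feature detail on otherwise featureless flat surfaces.
import numpy as np
import cv2

def make_combined_pattern(width=1920, height=1080, square=120, dots=2000):
    ys, xs = np.mgrid[0:height, 0:width]
    board = (((ys // square) + (xs // square)) % 2 * 255).astype(np.uint8)
    pattern = cv2.cvtColor(board, cv2.COLOR_GRAY2BGR)
    rng = np.random.default_rng(0)
    for _ in range(dots):
        x, y = rng.integers(0, width), rng.integers(0, height)
        color = tuple(int(c) for c in rng.integers(0, 256, 3))
        cv2.circle(pattern, (int(x), int(y)), 3, color, -1)
    return pattern

cv2.imwrite("preset_pattern.png", make_combined_pattern())  # handed to projector
```

An icon solid pattern or a pure random-dot pattern would follow the same pattern-buffer approach, only with different content drawn into the image.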
S120, the terminal device performs feature extraction on the second images and matches the extracted image features to obtain image pairs; each image pair comprises at least two second images with overlapping scenes;
S130, the terminal device performs feature point matching on each image pair to obtain matching groups, and calculates from the matching groups the space points on the target to be scanned corresponding to the image features;
S140, the terminal device generates the three-dimensional model corresponding to the target to be scanned from the space points.
Through this embodiment, the terminal device extracts image features from the several sets of second image data by image recognition, matches images by those features, and pairs similar second images to obtain multiple image pairs; fine matching within each pair then yields the matching relationships between image features. From the feature point matching the terminal device obtains multiple matching groups and calculates from them the space points corresponding to the same image features (the same-name points) in the matched second images of each pair; it then processes all the space points, outputs a set of surfaces, and generates the three-dimensional model from those surfaces, completing the three-dimensional reconstruction of the target to be scanned. Projecting the preset pattern onto the target with the projection device increases the number of image features, which improves reconstruction accuracy and lowers the cost of reconstructing the object.
Second embodiment
A three-dimensional reconstruction method, as shown in fig. 3, performs the following steps:
s210, acquiring first images from at least four image acquisition devices; the first image comprises a target to be scanned;
specifically, before the projection device projects a preset pattern to the target to be scanned, a first image including the target to be scanned is acquired from at least four image acquisition devices.
S220, after the projection equipment projects a preset pattern to a target to be scanned, acquiring second images from at least four image acquisition equipment; the view angle areas of adjacent image acquisition equipment in the at least four image acquisition equipment are overlapped in pairs; the second image comprises a target to be scanned after the preset pattern is projected;
s230, respectively carrying out feature extraction on each second image through a first feature extraction algorithm and a second feature extraction algorithm to obtain a first feature point set and a second feature point set corresponding to each second image;
specifically, the feature extraction algorithm comprises a HARRISS feature extraction algorithm, a SUSAN feature extraction algorithm, a SIFT feature extraction algorithm, a SURF feature extraction algorithm, a KAZE feature extraction algorithm and an AKAZE feature extraction algorithm.
Illustratively, the first feature extraction algorithm is AKAZE and the second is SIFT. Each second image is processed with AKAZE to obtain its first feature point set, and with SIFT to obtain its second feature point set. Owing to the characteristics of the two algorithms, AKAZE typically suits cases where extraction must be efficient even though fewer image features are extracted, while SIFT typically suits cases where extraction is slower but more image features are obtained.
The SIFT algorithm is scale- and rotation-invariant and highly robust; it is suitable for extracting image feature information across scale changes and rotation angles with high accuracy, and it is advantageous in offline pipelines where time cost need not be considered.
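As a concrete illustration of step S230 under the example pairing above (AKAZE as the first algorithm, SIFT as the second), here is a minimal sketch assuming OpenCV 4.4 or later, where SIFT ships in the main module; the file name is a placeholder.

```python
import cv2

def extract_two_feature_sets(image_path: str):
    img = cv2.imread(image_path, cv2.IMREAD_GRAYSCALE)
    akaze = cv2.AKAZE_create()   # efficient, fewer features
    sift = cv2.SIFT_create()     # slower, more features, scale/rotation invariant
    kp1, desc1 = akaze.detectAndCompute(img, None)  # first feature point set
    kp2, desc2 = sift.detectAndCompute(img, None)   # second feature point set
    return (kp1, desc1), (kp2, desc2)

# One pair of feature point sets per captured second image, e.g.:
# sets = [extract_two_feature_sets(f"second_image_{i}.png") for i in range(4)]
```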
S240, matching the first feature point sets and the second feature point sets, respectively, to obtain image pairs;
specifically, the feature extraction algorithm is used for extractingTo a certain second picture P1Corresponding first feature point set 1 and second feature point set 1, and extracting a certain second image P by the feature extraction algorithm2A corresponding first set of feature points 2 and a first set of feature points 2.
S250, respectively carrying out feature point matching on the image pair to obtain matching groups;
specifically, image registration is performed after a first feature point set and a second feature point set are obtained. And after image registration, feature point matching is carried out, and because the neighbor search and the establishment of the KD tree for image registration can reduce the search range and improve the efficiency, but the neighbor search and the establishment of the KD tree are possibly not optimal, the neighborhood value is key, and the larger the neighborhood value is, the more accurate the neighborhood value is, the larger the calculation amount is. When the distance is smaller than a certain threshold value, the matching is considered to be successful, but the mismatching is more, and various measures are required to be taken to remove: if the ratio of the nearest distance to the next nearest distance is greater than a threshold, it should be rejected. After the feature point matching relationship is established, a matching list needs to be generated, for example, the 5 th image feature point of the second image 1, the 27 th image feature point of the second image 2, and the 169 th image feature point of the second image 3 are homologous points, then (1, 5), (2, 27), and (3, 115) belong to a matching group, so that a matching list can be generated, and meanwhile, when a matching group is generated, unnecessary matching needs to be eliminated: if a plurality of image features in a certain second image in a matching group all match the same image feature, the matching relation is definitely wrong, and the matching group is removed. If only two second images have the same image characteristic point, the matching group is also removed to avoid errors generated by candidate three-dimensional reconstruction.
Once the Euclidean distance between image feature 1 and image feature 2 falls within the preset distance range, the two are determined to be the same image feature, imaged at the same moment by different image acquisition devices at different positions for the same space point on the target to be scanned; that is, image features 1 and 2 belong to one matching group.
Alternatively, feature point matching can be performed on the image pairs with an SfM (structure-from-motion) pipeline to obtain the matching groups.
Illustratively, second image 1 has feature point set {11, 21, 31, 41, 51}, second image 2 has {12, 32, 52, 62, 72}, and second image 3 has {13, 43, 53, 163}. If the pairwise Euclidean distances among feature points 11, 72 and 163 all fall within the preset distance range, the three feature points are determined to be one matching group.
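The pairwise-distance rule in this example can be written compactly; a small sketch, assuming the feature descriptors are NumPy vectors and the preset distance value is supplied by the caller.

```python
import numpy as np

def is_matching_group(descriptors, preset_distance: float) -> bool:
    """descriptors: one descriptor vector per image for the candidate
    same-name feature points. True when every pairwise distance is small."""
    for i in range(len(descriptors)):
        for j in range(i + 1, len(descriptors)):
            if np.linalg.norm(descriptors[i] - descriptors[j]) > preset_distance:
                return False
    return True
```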
S260, calculating according to the matching group to obtain a space point on the target to be scanned corresponding to each image feature;
specifically, the spatial points corresponding to the image features are calculated by a three-point positioning method. For example, the left camera overlaps with the viewing angle regions of the front and rear cameras, respectively, and the right camera overlaps with the viewing angle regions of the front and rear cameras, respectively. The left camera, the right camera, the front camera and the rear camera respectively shoot a certain angular point P on a target to be scanned, and respectively acquire a second image 1, a second image 2, a second image 3 and a second image 4 which comprise the angular point P, so that the processor determines that the second image 1, the second image 2, the second image 3 and the second image 4 are image pairs, the second image 1, the second image 2, the second image 3 and the second image 4 have the same image feature P ', the image feature P' is a homonymous point, and then a spatial point of the image feature P 'corresponding to the angular point P can be calculated by a three-point positioning method according to the image feature P' and a conversion relation between an image coordinate system and a world coordinate system.
S270, generating and obtaining a three-dimensional model corresponding to the target to be scanned according to the space points;
s280, extracting the features of the first image to obtain color features corresponding to the image features;
s290, texture mapping is carried out on the three-dimensional model according to the color features of the image features so as to restore the color and the texture of the target to be scanned.
Through this embodiment, the terminal device extracts image features from the several sets of second image data by image recognition, matches images by those features, and pairs similar second images to obtain multiple image pairs; fine matching within each pair then yields the matching relationships between image features. From the feature point matching the terminal device obtains multiple matching groups and calculates from them the space points corresponding to the same image features (the same-name points) in the matched second images of each pair; it then processes all the space points, outputs a set of surfaces, and generates the three-dimensional model from those surfaces, completing the three-dimensional reconstruction of the target to be scanned. Texture mapping based on the color features of the corresponding image features extracted from the first images then brings the model closer to the actual appearance of the target. Projecting the preset pattern onto the target with the projection device increases the number of image features, which improves reconstruction accuracy and lowers the cost of reconstructing the object.
Third embodiment
A three-dimensional reconstruction method, as shown in fig. 4, performs the following steps:
s310, acquiring first images from at least four image acquisition devices; the first image comprises a target to be scanned;
s320, after the projection equipment projects a preset pattern to a target to be scanned, acquiring second images from at least four image acquisition equipment; the view angle areas of adjacent image acquisition equipment in the at least four image acquisition equipment are overlapped in pairs; the second image comprises a target to be scanned after the preset pattern is projected;
s330, respectively carrying out feature extraction on each second image through a first feature extraction algorithm and a second feature extraction algorithm to obtain a first feature point set and a second feature point set corresponding to each second image;
s340, respectively matching the first characteristic point set and the second characteristic point set to obtain an image pair;
s350, respectively carrying out feature point matching on the image pair to obtain a matching group;
s360, calculating according to the matching group to obtain a space point on the target to be scanned corresponding to each image feature;
s370, generating and obtaining a three-dimensional model corresponding to the target to be scanned according to the space points;
s380, according to the corresponding area of the three-dimensional model, feature extraction and feature point matching are carried out again on the obtained second image, and new space points are obtained through calculation according to a new matching group;
s385, updating the connecting line according to the new space point to obtain a three-dimensional model corresponding to the target to be scanned;
s390, extracting the features of the first image to obtain the color features of the corresponding image features;
s395, texture mapping is carried out on the three-dimensional model according to the color characteristics of the image characteristics so as to restore the color and the texture of the target to be scanned.
Through this embodiment, the terminal device extracts image features from the several sets of second image data by image recognition, matches images by those features, and pairs similar second images to obtain multiple image pairs; fine matching within each pair then yields the matching relationships between image features. From the feature point matching the terminal device obtains multiple matching groups and calculates from them the space points corresponding to the same image features (the same-name points) in the matched second images of each pair; it then processes all the space points, outputs a set of surfaces, and generates the three-dimensional model from those surfaces, completing the three-dimensional reconstruction of the target to be scanned. Projecting the preset pattern onto the target with the projection device increases the number of image features, which improves reconstruction accuracy and lowers the cost of reconstructing the object.
Fourth embodiment
A three-dimensional reconstruction method performs the following steps:
s405, acquiring first images from at least four image acquisition devices; the first image comprises a target to be scanned;
s410, after the projection equipment projects a preset pattern to a target to be scanned, acquiring second images from at least four image acquisition equipment; the view angle areas of adjacent image acquisition equipment in the at least four image acquisition equipment are overlapped in pairs; the second image comprises a target to be scanned after the preset pattern is projected;
s415, respectively carrying out feature extraction on each second image through a first feature extraction algorithm and a second feature extraction algorithm to obtain a first feature point set and a second feature point set corresponding to each second image;
s420, matching the first feature point set corresponding to the current second image with the first feature point sets corresponding to the rest second images to obtain a first matching result;
s425, matching a second feature point set corresponding to the current second image with second feature point sets corresponding to the rest second images to obtain a second matching result;
s430, comparing the first matching result with the second matching result, determining the matching result with the same image characteristic ratio as a final matching result, and obtaining an image pair according to the final matching result;
specifically, a certain second image P is extracted through the above feature extraction algorithm1Corresponding first feature point set 1 and second feature point set 1, and extracting a certain second image P by the feature extraction algorithm2A corresponding first set of feature points 2 and a first set of feature points 2. The first number of identical image features may be obtained by comparing the image features included in the first set of feature points 1 with the first set of feature points 2. Similarly, by comparing the image features included in the second feature point set 1 and the second feature point set 2, a second number of the same image features is obtained.
If the second image P is judged according to the first feature point set 1 and the first feature point set 21And a second picture P2The image pair with the matching number of the feature points reaching the preset requirement is obtained, and the second image P is judged according to the second feature point set 1 and the second feature point set 21And a second picture P2Not an image pair. The first number is further compared to the second number. Determining the second image P if the first number is greater than the second number1And a second picture P2Is an image pair; determining the second image P if the first number is smaller than the second number1And a second picture P2Not an image pair; if the first number is equal to the second number, for the second image P1And a second picture P2And performing similarity detection, and setting a weight coefficient 1 corresponding to the similarity and a weight coefficient 2 corresponding to the total number of the same image features, wherein the sum of the weight coefficient 1 and the weight coefficient 2 is 100%. Calculating to obtain a weight value according to the similarity, the total number of the same image features, the weight coefficient 1 and the weight coefficient, and if the weight value is greater than a preset value, determining that the second image P is1And a second picture P2Is the image pair, otherwise determines the second image P1And a second picture P2Is the image pair or not.
Of course, if the second image P is determined according to the first feature point set 1 and the first feature point set 21And a second picture P2The second image P is determined not from the image pair but from the second feature point set 1 and the second feature point set 21And a second picture P2The image pairs with the matching number of the feature points reaching the preset requirement are obtained. The determination of the second picture P is made with reference to the above-described manner1And a second picture P2Whether or not the image pair is a pair will not be described in detail.
If the second image P is judged according to the first feature point set 1 and the first feature point set 21And a second picture P2The image pairs with the matching number of the feature points reaching the preset requirement are used, and the second image P is judged according to the second feature point set 1 and the second feature point set 21And a second picture P2Is a pair of images, a second image P is determined1And a second picture P2Are an image pair.
If the second image P is judged according to the first feature point set 1 and the first feature point set 21And a second picture P2Judging whether the image pair with the matching number of the feature points reaching the preset requirement is the second image P according to the second feature point set 1 and the second feature point set 21And a second picture P2Not image pair, the second image P is determined1And a second picture P2Not an image pair.
The above is merely exemplary of the second figureLike P1And a second picture P2The second image P can be matched and judged1And a second picture P3All the image acquisition devices except the second image P1And a second picture P2And performing matching judgment on the rest second images to obtain corresponding image pairs. In the same way, the second image PnWith all image-capturing devices capturing except the second image PnAnd performing matching judgment on all the rest second images except the rest second images to obtain corresponding image pairs.
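The decision procedure above can be condensed into one function. A sketch under stated assumptions: the similarity measure and the normalization of the feature counts are not specified in the text, so both are illustrative placeholders.

```python
def decide_image_pair(n1_same, n2_same, ok1, ok2,
                      similarity, w_sim=0.5, preset_value=0.6):
    """n1_same / n2_same: numbers of identical image features under the
    first and second feature-set comparisons; ok1 / ok2: whether each
    comparison reached the preset matching requirement; similarity: result
    of similarity detection in [0, 1] (measure assumed, not specified)."""
    if ok1 and ok2:
        return True          # both comparisons agree: image pair
    if not ok1 and not ok2:
        return False         # both comparisons agree: not an image pair
    # The comparisons disagree: the side with more identical features wins.
    if n1_same != n2_same:
        return (n1_same > n2_same) == ok1
    # Equal counts: weighted blend of similarity and total identical
    # features; the two weight coefficients sum to 100%.
    w_count = 1.0 - w_sim
    normalized_count = min(1.0, (n1_same + n2_same) / 200.0)  # assumed scale
    weight_value = w_sim * similarity + w_count * normalized_count
    return weight_value > preset_value
```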
S440, respectively carrying out feature point matching on the image pairs to obtain matching groups;
s450, calculating according to the matching groups to obtain space points on the target to be scanned corresponding to the image characteristics;
s460, connecting all the space points to generate a corresponding triangular mesh, and generating and obtaining a three-dimensional model corresponding to the target to be scanned according to the triangular mesh;
s470, according to the corresponding area of the three-dimensional model, performing feature extraction and feature point matching again from the acquired second image, and calculating according to a new matching group to obtain a new space point;
specifically, the beneficial effects of the expansion and improvement scheme are as follows: dense space points can be accurately obtained from a small number of space points, so that the generated three-dimensional space is closer to a real object of a target to be scanned. And amplifying the number of the space points by using an ANN-L2 method, namely acquiring all candidate feature points which correspond to the image feature points corresponding to the current space points and take R as a radius, wherein R is set as an average distance between all isolation distances of the detected feature points. The separation distance is the shortest distance from the image feature point corresponding to the previous spatial point to any other image feature point.
For each image feature point p(1, k) of the second image P1, the image feature point p(n, k') on every other second image Pn that is its same-name point is found, thereby obtaining a matching group {(n, k)}, where n is the index of the nth second image and k indexes the kth image feature point on the second image with index n. This matching group is only an example; other scenarios are not detailed here.
Each matching group then yields its corresponding space point by triangulation. For each space point, the image feature points from which it was imaged are calculated; the corresponding target second images are found from those image feature points, new image feature points are obtained by diffusing around the target, and new space points are calculated from the new image feature points.
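A sketch of the radius computation and candidate collection for one second image, assuming SciPy's cKDTree is available; which candidates actually become new matches (and space points) would still go through descriptor matching and triangulation as above.

```python
import numpy as np
from scipy.spatial import cKDTree

def diffusion_candidates(feature_xy, matched_idx):
    """feature_xy: (N, 2) pixel positions of all detected feature points in
    one second image; matched_idx: indices already backed by space points."""
    pts = np.asarray(feature_xy, dtype=np.float64)
    tree = cKDTree(pts)
    # isolation distance: distance to the nearest *other* feature point
    dists, _ = tree.query(pts, k=2)
    radius = dists[:, 1].mean()          # R = mean isolation distance
    matched = set(matched_idx)
    candidates = set()
    for i in matched:
        for j in tree.query_ball_point(pts[i], radius):
            if j not in matched:
                candidates.add(j)
    return radius, sorted(candidates)
```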
S480, updating the connections according to the new space points to obtain the three-dimensional model corresponding to the target to be scanned;
s485, extracting the features of the first image to obtain the color features of the corresponding image features;
s490, texture mapping is performed on the three-dimensional model according to the color features of the image features to restore the color and the texture of the target to be scanned.
Preferably, after the connecting generates the corresponding triangular mesh and before the three-dimensional model corresponding to the target to be scanned is generated from it, the method includes the following step:
searching for and deleting intersecting or overlapping triangles in the triangular mesh to obtain a new triangular mesh.
Through this embodiment, the terminal device extracts image features from the several sets of second image data by image recognition, matches images by those features, and pairs similar second images to obtain multiple image pairs; fine matching within each pair then yields the matching relationships between image features. From the feature point matching the terminal device obtains multiple matching groups and calculates from them the space points corresponding to the same image features (the same-name points) in the matched second images of each pair; it then processes all the space points, outputs a set of surfaces, and generates the three-dimensional model from those surfaces, completing the three-dimensional reconstruction of the target to be scanned. Projecting the preset pattern onto the target with the projection device increases the number of image features, which improves reconstruction accuracy and lowers the cost of reconstructing the object.
Fifth embodiment
A three-dimensional reconstruction method performs the following steps:
s505, acquiring first images from at least four image acquisition devices; the first image comprises a target to be scanned;
s510, after the projection equipment projects a preset pattern to a target to be scanned, acquiring second images from at least four image acquisition equipment; the view angle areas of adjacent image acquisition equipment in the at least four image acquisition equipment are overlapped in pairs; the second image comprises a target to be scanned after the preset pattern is projected;
s515, respectively performing feature extraction on each second image through a first feature extraction algorithm and a second feature extraction algorithm to obtain a first feature point set and a second feature point set corresponding to each second image;
s520, matching the first feature point set corresponding to the current second image with the first feature point sets corresponding to the rest second images to obtain a first matching result;
s525, matching a second feature point set corresponding to the current second image with second feature point sets corresponding to the rest second images to obtain a second matching result;
s530, comparing the first matching result with the second matching result, determining the matching result with the same image characteristic ratio as a final matching result, and obtaining an image pair according to the final matching result;
s540, respectively carrying out feature point matching on the image pair to obtain a matching group;
s550, calculating according to the matching groups to obtain space points on the target to be scanned corresponding to the image features;
s560, searching adjacent space points in a preset area according to all the space points, connecting all the space points with the adjacent space points to generate a corresponding triangular mesh, and generating and obtaining a three-dimensional model corresponding to the target to be scanned according to the triangular mesh;
s570, according to the corresponding area of the three-dimensional model, performing feature extraction and feature point matching again on the acquired second image, and calculating according to a new matching group to obtain a new space point;
s580, updating the connecting line according to the new space point to obtain a three-dimensional model corresponding to the target to be scanned;
s585, extracting the characteristics of the first image to obtain color characteristics corresponding to the image characteristics;
s590 carries out texture mapping on the three-dimensional model according to the color characteristics of the image characteristics so as to restore the color and the texture of the target to be scanned.
Preferably, after the connecting generates the corresponding triangular mesh and before the three-dimensional model corresponding to the target to be scanned is generated from it, the method includes the following step:
searching for and deleting intersecting or overlapping triangles in the triangular mesh to obtain a new triangular mesh.
Through this embodiment, the terminal device extracts image features from the several sets of second image data by image recognition, matches images by those features, and pairs similar second images to obtain multiple image pairs; fine matching within each pair then yields the matching relationships between image features. From the feature point matching the terminal device obtains multiple matching groups and calculates from them the space points corresponding to the same image features (the same-name points) in the matched second images of each pair; it then processes all the space points, outputs a set of surfaces, and generates the three-dimensional model from those surfaces, completing the three-dimensional reconstruction of the target to be scanned. Projecting the preset pattern onto the target with the projection device increases the number of image features, which improves reconstruction accuracy and lowers the cost of reconstructing the object.
It will be apparent to those skilled in the art that, for convenience and brevity of description, only the above-described division of program modules is illustrated, and in practical applications, the above-described distribution of functions may be performed by different program modules, that is, the internal structure of the apparatus may be divided into different program units or modules to perform all or part of the above-described functions. Each program module in the embodiments may be integrated in one processing unit, or each unit may exist alone physically, or two or more units are integrated in one processing unit, and the integrated unit may be implemented in a form of hardware, or may be implemented in a form of software program unit. In addition, the specific names of the program modules are only used for distinguishing the program modules from one another, and are not used for limiting the protection scope of the application.
In one embodiment of the present invention, as shown in fig. 11, a terminal device 100 includes a processor 110 and a memory 120, the memory 120 storing a computer program 121; the processor 110 is configured to execute the computer program 121 stored in the memory 120 to implement the operations performed in the three-dimensional reconstruction method embodiments corresponding to figs. 1 to 11.
The terminal device 100 may be a desktop computer, a notebook, a palmtop computer, a tablet computer, a mobile phone, a human-computer interaction screen or the like, and may include, but is not limited to, the processor 110 and the memory 120. Those skilled in the art will appreciate that fig. 11 is merely an example of the terminal device 100 and does not constitute a limitation of it; the device may include more or fewer components than shown, combine certain components, or use different components. For example, the terminal device 100 may also include input/output interfaces, display devices, network access devices, a communication bus and a communication interface, with the processor 110, the memory 120, the input/output interface and the communication interface communicating with one another through the communication bus.
The Processor 110 may be a Central Processing Unit (CPU), other general purpose Processor, a Digital Signal Processor (DSP), an Application Specific Integrated Circuit (ASIC), a Field Programmable Gate Array (FPGA) or other programmable logic device, discrete gate or transistor logic device, discrete hardware component, or the like. A general purpose processor may be a microprocessor or the processor may be any conventional processor or the like.
The memory 120 may be an internal storage unit of the terminal device 100, such as a hard disk or memory of the terminal device. It may also be an external storage device of the terminal device, such as a plug-in hard disk, a smart media card (SMC), a secure digital (SD) card or a flash card. Further, the memory 120 may include both an internal storage unit and an external storage device of the terminal device 100. The memory 120 is used to store the computer program 121 and the other programs and data required by the terminal device 100, and may also be used to temporarily store data that has been output or is to be output.
A communication bus is a circuit that connects the described elements and enables transmission between them. For example, the processor 110 receives commands from other elements through the communication bus, decrypts them, and performs calculation or data processing according to the decrypted commands. The memory 120 may include program modules such as a kernel, middleware, an application programming interface (API) and applications; a program module may consist of software, firmware or hardware, or of at least two of these. The input/output interface forwards commands or data entered by a user via an input/output device (e.g., sensor, keyboard, touch screen). The communication interface connects the terminal device 100 with other network devices, user equipment and networks; it may connect to a network by wire or wirelessly to reach external network devices or user devices. Wireless communication may include at least one of wireless fidelity (Wi-Fi), Bluetooth (BT), near-field communication (NFC), the global positioning system (GPS) and cellular communication. Wired communication may include at least one of universal serial bus (USB), high-definition multimedia interface (HDMI) and an RS-232 serial interface. The network may be a telecommunications or communications network, such as a computer network, the internet of things or a telephone network. The terminal device 100 may connect to the network through the communication interface, and the protocol by which it communicates with other network devices may be supported by at least one of an application, an application programming interface (API), middleware, a kernel and the communication interface.
In an embodiment of the present invention, a storage medium stores at least one instruction, and the instruction is loaded and executed by a processor to implement the operations performed in the embodiment of the three-dimensional restoration method corresponding to fig. 1 to 11. For example, the storage medium may be a read-only memory (ROM), a Random Access Memory (RAM), a compact disc read-only memory (CD-ROM), a magnetic tape, a floppy disk, an optical data storage device, and the like.
The modules or steps described above may be implemented in program code executable by a computing device, so that they can be executed by such a device either together or separately; alternatively, they may be implemented as individual integrated circuit modules, or multiple modules or steps among them may be implemented as a single integrated circuit module. Thus, the present invention is not limited to any specific combination of hardware and software.
In the above embodiments, the descriptions of the respective embodiments have respective emphasis, and reference may be made to the related descriptions of other embodiments for parts that are not described or recited in detail in a certain embodiment.
Those of ordinary skill in the art will appreciate that the various illustrative elements and algorithm steps described in connection with the embodiments disclosed herein may be implemented as electronic hardware or combinations of computer software and electronic hardware. Whether such functionality is implemented as hardware or software depends upon the particular application and design constraints imposed on the implementation. Skilled artisans may implement the described functionality in varying ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the present application.
In the embodiments provided in the present application, it should be understood that the disclosed apparatus/terminal device and method may be implemented in other ways. For example, the above-described embodiments of the apparatus/terminal device are merely illustrative, and for example, the division of the modules or units is only one logical division, and there may be other divisions when actually implemented, for example, a plurality of units or components may be combined or integrated into another system, or some features may be omitted, or not executed. In addition, the shown or discussed mutual coupling or direct coupling or communication connection may be an indirect coupling or communication connection of some interfaces, devices or units, and may be in an electrical, mechanical or other form.
The units described as separate parts may or may not be physically separate, and parts displayed as units may or may not be physical units, may be located in one place, or may be distributed on a plurality of network units. Some or all of the units can be selected according to actual needs to achieve the purpose of the solution of the embodiment.
In addition, functional units in the embodiments of the present application may be integrated into one processing unit, or each unit may exist alone physically, or two or more units are integrated into one unit. The integrated unit can be realized in a form of hardware, and can also be realized in a form of a software functional unit.
If the integrated modules/units are implemented in the form of software functional units and sold or used as separate products, they may be stored in a storage medium. Based on such understanding, all or part of the flow in the methods of the above embodiments may also be implemented by the computer program 121 instructing relevant hardware; the computer program 121 may be stored in a storage medium, and when executed by a processor, it implements the steps of the above-described method embodiments. The computer program 121 may be in source code form, object code form, an executable file, some intermediate form, or the like. The storage medium may include: any entity or device capable of carrying the computer program 121, a recording medium, a USB flash disk, a removable hard disk, a magnetic disk, an optical disk, a computer memory, a read-only memory (ROM), a random access memory (RAM), an electrical carrier signal, a telecommunication signal, a software distribution medium, and the like. It should be noted that the content of the storage medium may be increased or decreased as appropriate according to the requirements of legislation and patent practice in the jurisdiction; for example, in certain jurisdictions, in accordance with legislation and patent practice, computer-readable storage media do not include electrical carrier signals and telecommunication signals.
It should be noted that the above embodiments can be freely combined as needed. The foregoing is only a preferred embodiment of the present invention; for those skilled in the art, various modifications and improvements can be made without departing from the principle of the present invention, and such modifications and improvements shall also fall within the protection scope of the present invention.

Claims (10)

1. A three-dimensional reduction method, comprising the steps of:
after the projection equipment projects a preset pattern to a target to be scanned, acquiring second images from at least four image acquisition equipment; the view angle areas of adjacent image acquisition equipment in the at least four image acquisition equipment are overlapped in pairs; the second image comprises a target to be scanned after a preset pattern is projected;
extracting the features of the second image, and matching according to the extracted image features to obtain an image pair; the image pair comprises at least two second images of overlapping scenes;
respectively carrying out feature point matching on the image pairs to obtain matching groups;
calculating according to the matching group to obtain a space point on the target to be scanned corresponding to each image feature;
and generating and obtaining a three-dimensional model corresponding to the target to be scanned according to the space points.
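By way of illustration, the following is a minimal two-view sketch of the pipeline in claim 1, written in Python with OpenCV. The camera intrinsics, file names, and the choice of SIFT as the feature extractor are all assumptions made for the example (the claim fixes none of them), and the projected pattern and the remaining views are omitted:

```python
import cv2
import numpy as np

# Hypothetical intrinsics shared by both views; a real rig would be calibrated.
K = np.array([[1000.0, 0.0, 640.0],
              [0.0, 1000.0, 360.0],
              [0.0, 0.0, 1.0]])

img1 = cv2.imread("view0.png", cv2.IMREAD_GRAYSCALE)  # placeholder file names
img2 = cv2.imread("view1.png", cv2.IMREAD_GRAYSCALE)

# Feature extraction (SIFT standing in for the unspecified extractor).
sift = cv2.SIFT_create()
kp1, des1 = sift.detectAndCompute(img1, None)
kp2, des2 = sift.detectAndCompute(img2, None)

# Feature point matching with Lowe's ratio test -> the "matching group".
matcher = cv2.BFMatcher()
good = [pair[0] for pair in matcher.knnMatch(des1, des2, k=2)
        if len(pair) == 2 and pair[0].distance < 0.75 * pair[1].distance]

pts1 = np.float32([kp1[m.queryIdx].pt for m in good])
pts2 = np.float32([kp2[m.trainIdx].pt for m in good])

# Relative pose from the essential matrix, then triangulate spatial points.
E, mask = cv2.findEssentialMat(pts1, pts2, K, method=cv2.RANSAC,
                               prob=0.999, threshold=1.0)
_, R, t, mask = cv2.recoverPose(E, pts1, pts2, K, mask=mask)

P1 = K @ np.hstack([np.eye(3), np.zeros((3, 1))])
P2 = K @ np.hstack([R, t])
pts4d = cv2.triangulatePoints(P1, P2, pts1.T, pts2.T)
pts3d = (pts4d[:3] / pts4d[3]).T  # N x 3 space points on the target
```

Extending the sketch to at least four devices amounts to repeating the pairing and triangulation over every adjacent, overlapping pair and merging the triangulated space points into a common coordinate frame.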
2. The three-dimensional reduction method according to claim 1, wherein the extracting of the features of the second images and the matching according to the extracted image features to obtain an image pair comprise:
respectively carrying out feature extraction on each second image through a first feature extraction algorithm and a second feature extraction algorithm to obtain a first feature point set and a second feature point set corresponding to each second image;
and respectively matching the first characteristic point set and the second characteristic point set to obtain the image pair.
3. The three-dimensional reduction method according to claim 2, wherein the obtaining of the image pair by respectively matching according to the first feature point set and the second feature point set comprises:
matching a first feature point set corresponding to the current second image with first feature point sets corresponding to the rest second images to obtain a first matching result;
matching a second feature point set corresponding to the current second image with second feature point sets corresponding to the rest second images to obtain a second matching result;
and comparing the first matching result with the second matching result, determining, as the final matching result, the matching result with the higher proportion of identical image features, and obtaining the image pair according to the final matching result.
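Claims 2 and 3 do not name the two feature extraction algorithms. The sketch below uses SIFT and ORB as hypothetical first and second algorithms and, under one reading of the comparison step in claim 3, keeps whichever matching result is supported by more identical features:

```python
import cv2

def feature_sets(img):
    # First and second feature extraction algorithms of claim 2
    # (SIFT and ORB are chosen here purely for illustration).
    sift, orb = cv2.SIFT_create(), cv2.ORB_create()
    return sift.detectAndCompute(img, None), orb.detectAndCompute(img, None)

def match_count(des_q, des_t, norm):
    # Count ratio-test survivors between two descriptor sets.
    pairs = cv2.BFMatcher(norm).knnMatch(des_q, des_t, k=2)
    return sum(1 for p in pairs
               if len(p) == 2 and p[0].distance < 0.75 * p[1].distance)

def is_image_pair(img_q, img_t, threshold=50):
    (_, dq1), (_, dq2) = feature_sets(img_q)
    (_, dt1), (_, dt2) = feature_sets(img_t)
    n1 = match_count(dq1, dt1, cv2.NORM_L2)       # SIFT: float descriptors
    n2 = match_count(dq2, dt2, cv2.NORM_HAMMING)  # ORB: binary descriptors
    # Keep the matching result supported by more identical features.
    return max(n1, n2) >= threshold
```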
4. The three-dimensional reduction method according to claim 1, wherein the generating and obtaining the three-dimensional model corresponding to the target to be scanned according to the spatial points comprises:
connecting all the space points to generate a corresponding triangular mesh, and generating the three-dimensional model corresponding to the target to be scanned according to the triangular mesh; or,
searching, for each space point, for adjacent space points within a preset area, connecting each space point with its adjacent space points to generate a corresponding triangular mesh, and generating the three-dimensional model corresponding to the target to be scanned according to the triangular mesh.
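One plausible realization of the neighbourhood-based variant of claim 4 is ball pivoting, which connects each space point to neighbours reachable within preset radii. The sketch below uses the open-source Open3D library, which the patent does not prescribe; the file name, normal-search parameters, and pivoting radii are placeholders:

```python
import numpy as np
import open3d as o3d

# pts3d: the N x 3 space points computed earlier (placeholder file here).
pts3d = np.load("spatial_points.npy")

pcd = o3d.geometry.PointCloud()
pcd.points = o3d.utility.Vector3dVector(pts3d)
pcd.estimate_normals(
    search_param=o3d.geometry.KDTreeSearchParamHybrid(radius=0.05, max_nn=30))

# Ball pivoting joins each point to neighbours reachable within the given
# radii -- a rough analogue of the "preset area" search in claim 4.
radii = o3d.utility.DoubleVector([0.02, 0.04, 0.08])
mesh = o3d.geometry.TriangleMesh.create_from_point_cloud_ball_pivoting(pcd, radii)
```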
5. The three-dimensional reduction method according to claim 4, wherein after the corresponding triangular mesh is generated by the connecting and before the three-dimensional model corresponding to the target to be scanned is obtained according to the triangular mesh, the method comprises the following step:
and searching and deleting the crossed triangular plates or the overlapped triangular plates in the triangular mesh to obtain a new triangular mesh.
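Continuing the Open3D sketch above, one reading of claim 5 treats overlapped triangles as duplicated or degenerate faces and crossed triangles as self-intersecting pairs, both of which Open3D can detect and remove:

```python
import numpy as np

# Overlapped triangles: duplicated or degenerate faces.
mesh.remove_duplicated_triangles()
mesh.remove_degenerate_triangles()

# Crossed triangles: pairs reported as self-intersecting.
pairs = np.asarray(mesh.get_self_intersecting_triangles())
if pairs.size:
    mesh.remove_triangles_by_index(np.unique(pairs).tolist())
    mesh.remove_unreferenced_vertices()
```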
6. The three-dimensional reduction method according to claim 1, wherein after the three-dimensional model corresponding to the target to be scanned is generated by connecting all the space points, the method comprises the following steps:
performing feature extraction and feature point matching again on the obtained second images within the region corresponding to the three-dimensional model, and calculating new space points according to the new matching groups;
and updating the connections according to the new space points to obtain the three-dimensional model corresponding to the target to be scanned.
7. The three-dimensional reduction method according to any one of claims 1 to 6, wherein, after the projection device projects the preset pattern onto the target to be scanned and before the second images are acquired from the at least four image acquisition devices, the method comprises the following step:
acquiring first images from the at least four image acquisition devices; the first image comprises a target to be scanned;
and after the three-dimensional model corresponding to the target to be scanned is obtained, the method further comprises the following steps:
performing feature extraction on the first image to obtain color features corresponding to the image features;
and performing texture mapping on the three-dimensional model according to the color characteristics of the image characteristics so as to restore the color and the texture of the target to be scanned.
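A minimal sketch of the colour restoration in claim 7, again with hypothetical file and variable names: sample the first image at each feature's pixel location and attach the colour to the corresponding space point. Colouring the cloud before meshing lets the generated mesh inherit per-vertex colours:

```python
import cv2
import numpy as np
import open3d as o3d

# first_img: the colour image captured before the pattern is projected;
# pts2d: pixel coordinates of the image features in that view. Both file
# names are placeholders, and pcd is the point cloud from the mesh sketch.
first_img = cv2.imread("first_view0.png")           # BGR colour image
pts2d = np.load("feature_pixels.npy").astype(int)   # N x 2 as (x, y)

# Sample each feature's colour and attach it to its space point.
colors_bgr = first_img[pts2d[:, 1], pts2d[:, 0]]
colors_rgb = colors_bgr[:, ::-1].astype(np.float64) / 255.0
pcd.colors = o3d.utility.Vector3dVector(colors_rgb)
```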
8. A three-dimensional reduction system, comprising: the system comprises a projection device, at least four image acquisition devices, a terminal device and a main frame;
the at least four image acquisition devices are arranged on the main frame, and the view angle areas of adjacent image acquisition devices in the at least four image acquisition devices are overlapped in pairs; the projection equipment is arranged on the main frame or close to the inner side of the main frame;
the projection equipment and the at least four image acquisition equipment are respectively connected with the terminal equipment;
after the projection equipment projects a preset pattern to a target to be scanned, the terminal equipment acquires a second image from the at least four image acquisition equipment; the second image comprises a target to be scanned after a preset pattern is projected;
the terminal equipment extracts the features of the second image and matches the extracted image features to obtain an image pair; the image pair comprises at least two second images of overlapping scenes;
the terminal equipment performs feature point matching on the image pairs respectively to obtain matching groups, and calculates, according to the matching groups, the space points on the target to be scanned corresponding to each image feature;
and the terminal equipment generates and obtains a three-dimensional model corresponding to the target to be scanned according to the space points.
9. A terminal device, comprising a processor, a memory, and a computer program stored in the memory and executable on the processor, wherein the processor is configured to execute the computer program stored in the memory to implement the operations performed by the three-dimensional restoration method according to any one of claims 1 to 7.
10. A storage medium having stored therein at least one instruction, which is loaded and executed by a processor to perform operations performed by a three-dimensional reduction method according to any one of claims 1 to 7.

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201911398126.8A CN111210506A (en) 2019-12-30 2019-12-30 Three-dimensional reduction method, system, terminal equipment and storage medium


Publications (1)

Publication Number Publication Date
CN111210506A true CN111210506A (en) 2020-05-29

Family

ID=70786500


Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106296797A (en) * 2015-06-10 2017-01-04 西安蒜泥电子科技有限责任公司 A kind of spatial digitizer characteristic point modeling data processing method
CN106934376A (en) * 2017-03-15 2017-07-07 成都创想空间文化传播有限公司 A kind of image-recognizing method, device and mobile terminal
CN109146935A (en) * 2018-07-13 2019-01-04 中国科学院深圳先进技术研究院 A kind of point cloud registration method, device, electronic equipment and readable storage medium storing program for executing
CN110246163A (en) * 2019-05-17 2019-09-17 联想(上海)信息技术有限公司 Image processing method and its device, equipment, computer storage medium


Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111860225A (en) * 2020-06-30 2020-10-30 北京百度网讯科技有限公司 Image processing method and device, electronic equipment and storage medium
CN111860225B (en) * 2020-06-30 2023-12-12 阿波罗智能技术(北京)有限公司 Image processing method and device, electronic equipment and storage medium
CN112734652A (en) * 2020-12-22 2021-04-30 同济大学 Near-infrared blood vessel image projection correction method based on binocular vision
CN113449420A (en) * 2021-06-28 2021-09-28 浙江图盛输变电工程有限公司温州科技分公司 Three-dimensional measurement data analysis method for image live-action management and control platform

Similar Documents

Publication Publication Date Title
US11189037B2 (en) Repositioning method and apparatus in camera pose tracking process, device, and storage medium
US11605214B2 (en) Method, device and storage medium for determining camera posture information
WO2020207191A1 (en) Method and apparatus for determining occluded area of virtual object, and terminal device
CN109961406B (en) Image processing method and device and terminal equipment
CN110246163B (en) Image processing method, image processing device, image processing apparatus, and computer storage medium
CN109285136B (en) Multi-scale fusion method and device for images, storage medium and terminal
CN109151442B (en) Image shooting method and terminal
CN110517214B (en) Method and apparatus for generating image
CN111210506A (en) Three-dimensional reduction method, system, terminal equipment and storage medium
CN110059652B (en) Face image processing method, device and storage medium
KR20160074500A (en) Mobile video search
US20180060682A1 (en) Parallax minimization stitching method and apparatus using control points in overlapping region
TW201918772A (en) Apparatus and method of five dimensional (5D) video stabilization with camera and gyroscope fusion
US10726612B2 (en) Method and apparatus for reconstructing three-dimensional model of object
US10839599B2 (en) Method and device for three-dimensional modeling
CN110909209B (en) Live video searching method and device, equipment, server and storage medium
CN108961267B (en) Picture processing method, picture processing device and terminal equipment
WO2021136386A1 (en) Data processing method, terminal, and server
JP2022550948A (en) 3D face model generation method, device, computer device and computer program
KR102649993B1 (en) Image processing method, image processing device, and electronic devices applying the same
CN108776822B (en) Target area detection method, device, terminal and storage medium
US20230316529A1 (en) Image processing method and apparatus, device and storage medium
CN110463177A (en) The bearing calibration of file and picture and device
CN112802081A (en) Depth detection method and device, electronic equipment and storage medium
WO2021142843A1 (en) Image scanning method and device, apparatus, and storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination