CN113554035A - Feature matching method based on optical flow, intelligent terminal and computer storage medium - Google Patents
- Publication number: CN113554035A
- Application number: CN202110623317.0A
- Authority: CN (China)
- Prior art keywords: matching, feature, point, image, optical flow
- Prior art date: 2021-06-04
- Legal status: Pending (the legal status is an assumption and is not a legal conclusion; Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed)
Abstract
The invention provides an optical flow-based feature matching method, an intelligent terminal and a computer storage medium. The feature matching method comprises the following steps. S101: acquiring a first feature point set of a first image, and performing sparse optical flow tracking to acquire the matching relation of shi-tomasi feature points. S102: extracting a second feature point set of a second image, acquiring first matching point pairs through the first feature point set and the second feature point set, and acquiring a homography transformation matrix corresponding to the shi-tomasi feature points. S103: acquiring third feature points through the homography transformation matrix, and generating second matching point pairs according to the pixel distance between the third feature points and the second feature points. S104: performing geometric consistency verification on the second matching point pairs, and acquiring a third matching point pair set according to the verification result. Because the optical flow information between the images assists the feature matching, the method reduces the amount of computation needed for feature point matching, is fast and stable, suppresses interference from similar features in the images, and lowers the mismatching rate while keeping accuracy high.
Description
Technical Field
The present invention relates to the field of computer vision, and in particular, to a feature matching method based on optical flow, an intelligent terminal, and a computer storage medium.
Background
Feature matching, namely finding a reliable correspondence between two feature point sets, is a basic and important task of computer vision, and stable and reliable feature matching technologies are needed as the basis for image stitching, three-dimensional reconstruction and image retrieval.
Current feature matching techniques fall mainly into two categories. The first is the nearest neighbour method: for each feature descriptor of the current frame, the matching distance to every descriptor in the reference frame is computed, and the reference-frame descriptor with the smallest matching distance is compared against a preset threshold to decide whether it is the best match. The second is the nearest neighbour ratio method, an improvement on the first: again the matching distance is computed between each descriptor of the current frame and all descriptors in the reference frame, but now the ratio of the smallest matching distance to the second-smallest matching distance in the reference frame is compared with a preset threshold to decide whether the closest descriptor is the best match. In both methods, every feature descriptor of the current frame must be compared one by one against all feature descriptors in the reference frame, so both are time-consuming, and mismatches between similar features in the image mean that matching accuracy cannot be guaranteed.
Disclosure of Invention
To overcome the defects of the prior art, the invention provides an optical flow-based feature matching method, an intelligent terminal and a computer storage medium. When the first matching point pairs are obtained, the matching relation of shi-tomasi feature points in the first image and the second image is acquired by sparse optical flow tracking, and a homography transformation matrix is computed from these shi-tomasi feature points. Second matching point pairs with a higher degree of matching are then obtained through the homography transformation matrix, and abnormal data in the second matching point pairs are removed by geometric consistency verification to obtain the final matching point pairs. Because the optical flow information between the images assists the feature matching, the amount of computation needed for feature point matching is reduced, the method is fast and stable, interference from similar features in the images is suppressed, the mismatching rate is lowered, and the accuracy is high.
To solve the above problems, the present invention adopts the following technical solution: an optical flow-based feature matching method, comprising: S101: acquiring a first image and a second image with continuous video stream information, acquiring a first feature point set of the first image, and performing sparse optical flow tracking on the first image and the second image to acquire the matching relation of shi-tomasi feature points; S102: extracting a second feature point set of the second image, performing feature matching on the first feature point set and the second feature point set to form first matching point pairs, and acquiring a homography transformation matrix corresponding to the shi-tomasi feature points, wherein the first matching point pairs comprise matched first feature points and matched second feature points; S103: performing homographic transformation on the first feature points through the homography transformation matrix to generate third feature points of the first feature points in the second image pixel coordinate system, and acquiring second matching point pairs according to the pixel distance between the third feature points and the second feature points; S104: performing homography-based geometric consistency verification on the second matching point pairs, and acquiring a third matching point pair set according to the verification result, wherein the third matching point pair set comprises the matching information of the feature points in the first image and the second image.
Further, the step of acquiring the first feature point set of the first image specifically includes: and carrying out SIFT or SURF feature extraction on the first image to obtain a first feature point set.
Further, the step of performing sparse optical flow tracking on the first image and the second image to acquire the matching relationship between the shi-tomasi feature points specifically includes: extracting the shi-tomasi features of the first image, obtaining the shi-tomasi feature points in the first image, performing LK sparse optical flow tracking through the shi-tomasi feature points, and obtaining the matching relation of the first image and the shi-tomasi feature points in the second image.
Further, the step of extracting the second feature point set of the second image specifically includes: and carrying out SIFT or SURF feature extraction on the second image to obtain the second feature point set.
Further, the step of obtaining the homography transformation matrix corresponding to the shi-tomasi feature point specifically includes: and calculating a homography transformation matrix corresponding to the shi-tomasi characteristic points by a least square method or RANSAC.
Further, the step of performing homographic transformation on the first feature point through the homography transformation matrix to generate a third feature point of the first feature point in a second image pixel coordinate system specifically includes: obtaining the third feature point by the formula p′ = H_{N+1,N} · p₁, wherein p₁ is the pixel coordinate of the first feature point, p′ is the pixel coordinate of the third feature point, and H_{N+1,N} is the homography transformation matrix.
Further, the step of obtaining a second matching point pair according to the pixel distance between the third feature point and the second feature point specifically includes: and judging whether the pixel distance between the third characteristic point and the second characteristic point meets a preset condition, and rejecting abnormal data in the first matching point pair according to a judgment result to obtain a second matching point pair, wherein the preset condition is that the pixel distance is smaller than a preset value.
Further, the step of performing homography-based geometric consistency verification on the second matching point pairs and obtaining a third matching point pair set according to a verification result specifically includes: and verifying the second matching point pair by a random sampling consistency algorithm, and removing abnormal data in the second matching point pair according to a verification result to obtain a third matching point pair set.
Based on the same inventive concept, the invention also provides an intelligent terminal, which comprises: a processor and a memory communicatively connected to the processor, the memory storing a computer program by which the processor executes the optical flow-based feature matching method described above.
Based on the same inventive concept, the present invention also proposes a computer storage medium storing program data used to perform the optical flow-based feature matching method as described above.
Compared with the prior art, the invention has the following beneficial effects: when the first matching point pairs are obtained, the matching relation of shi-tomasi feature points in the first image and the second image is acquired by sparse optical flow tracking; a homography transformation matrix is computed from the shi-tomasi feature points; second matching point pairs with a higher degree of matching are obtained through the homography transformation matrix; and abnormal data in the second matching point pairs are removed by geometric consistency verification to obtain the final matching point pairs. With the optical flow information between the images assisting the feature matching, the amount of computation needed for feature point matching is reduced, the method is fast and stable, interference from similar features in the images is suppressed, the mismatching rate is lowered, and the accuracy is high.
Drawings
FIG. 1 is a flow chart of an embodiment of a method for optical flow-based feature matching according to the present invention;
FIG. 2 is a flow chart of another embodiment of the optical flow-based feature matching method of the present invention;
FIG. 3 is a block diagram of an embodiment of an intelligent terminal according to the present invention;
FIG. 4 is a block diagram of an embodiment of a computer storage medium.
Detailed Description
The present invention will be further described with reference to the accompanying drawings and the detailed description, and it should be noted that any combination of the embodiments or technical features described below can be used to form a new embodiment without conflict.
Referring to fig. 1-2, fig. 1 is a flow chart of an embodiment of a feature matching method based on optical flow according to the present invention; FIG. 2 is a flowchart illustrating another embodiment of the optical flow-based feature matching method according to the present invention. The optical flow-based feature matching method of the present invention is described in detail with reference to fig. 1-2.
In the embodiment, the device executing the optical flow-based feature matching method may be a computer, a mobile phone, a tablet computer, or other intelligent devices capable of performing feature matching and sparse optical flow tracking.
In this embodiment, the optical flow-based feature matching method includes:
s101: the method comprises the steps of obtaining a first image and a second image with continuous video stream information, obtaining a first feature point set of the first image, and carrying out sparse optical flow tracking on the first image and the second image to obtain the matching relation of shi-tomasi feature points.
In this embodiment, the first image and the second image are two images captured by the same camera in time sequence, and the two images may be two adjacent images or two non-adjacent images.
In a specific embodiment, the first image and the second image are images of the nth frame and the N +1 th frame which are shot by the same camera according to time.
In this embodiment, the step of acquiring the first feature point set of the first image specifically includes: and performing Scale-invariant feature transform (SIFT) or Speeded Up Robust Features (SURF) feature extraction on the first image to obtain a first feature point set.
In this embodiment, the step of performing sparse optical flow tracking on the first image and the second image to obtain the matching relationship between the shi-tomasi feature points specifically includes: extracting the shi-tomasi characteristics of the first image to obtain the shi-tomasi characteristic points in the first image, carrying out LK sparse optical flow tracking through the shi-tomasi characteristic points, and obtaining the matching relation of the shi-tomasi characteristic points in the first image and the second image.
In this embodiment, the shi-tomasi feature points exist in the form of a set of points.
S102: and extracting a second characteristic point set of the second image, performing characteristic matching on the first characteristic point set and the second characteristic point set to form a first matching point pair, and acquiring a homography transformation matrix corresponding to the shi-tomasi characteristic point, wherein the first matching point pair comprises the matched first characteristic point and the matched second characteristic point.
In this embodiment, the step of extracting the second feature point set of the second image specifically includes: and carrying out SIFT or SURF feature extraction on the second image to obtain a second feature point set.
In this embodiment, SNN (second nearest neighbor) matching is used to perform coarse feature matching on the first feature point set and the second feature point set to obtain the first matching point pairs.
In this embodiment, the step of obtaining the homography transformation matrix corresponding to the shi-tomasi feature point specifically includes: the homography transformation matrix corresponding to the shi-tomasi feature points is calculated by the least square method or RANSAC (Random Sample Consensus).
In a specific embodiment, RANSAC is used so as to reduce the influence of abnormal data in the coarse matching.
S103: and acquiring a third characteristic point of the first characteristic point under the pixel coordinate of the second image through the homography transformation matrix, and selecting a second matching point pair according to the pixel distance between the third characteristic point and the second characteristic point.
The step of performing homographic transformation on the first feature point through the homography transformation matrix to generate a third feature point of the first feature point in the second image pixel coordinate system specifically includes: obtaining the third feature point by the formula p′ = H_{N+1,N} · p₁, wherein p₁ is the pixel coordinate of the first feature point, p′ is the pixel coordinate of the third feature point, and H_{N+1,N} is the homography transformation matrix.
The step of obtaining a second matching point pair according to the pixel distance between the third feature point and the second feature point specifically includes: judging whether the pixel distance between the third feature point and the second feature point meets a preset condition, and rejecting abnormal data in the first matching point pairs according to the judgment result to obtain the second matching point pairs, wherein the preset condition is that the pixel distance is smaller than a preset value.
In a specific embodiment, the homography transformation matrix H_{N+1,N} is calculated from the shi-tomasi feature points continuously tracked between the first image and the second image. The point set P1 formed by the feature points extracted from the first image is then homographically transformed into the pixel coordinate system of the second image to obtain the point set P1′, using the transformation formula p′1 = H_{N+1,N} · p₁ (here p′1 is the pixel coordinate in the P1′ point set). For each pair of matching points between P1′ and P2′ (the point set formed by the feature points extracted from the second image), the pixel distance ||p′1 − p′2|| is calculated, where p′2 is the pixel coordinate in the P2′ point set. When ||p′1 − p′2|| < σ, the preset condition is met; if not, the preset condition is not met, the corresponding pair in the first matching point pairs is determined to be abnormal data, and the abnormal data is rejected. Here ||p′1 − p′2|| represents the pixel distance, and σ is 10.
S104: and performing geometric consistency verification based on homography through the second matching point pairs, and acquiring a third matching point pair set according to a verification result, wherein the third matching point pair set comprises matching information of the characteristic points in the first image and the second image.
In this embodiment, the step of performing homography-based geometric consistency verification on the second matching point pairs and obtaining a third matching point pair set according to the verification result specifically includes: verifying the second matching point pairs by a random sample consensus algorithm, and removing abnormal data in the second matching point pairs according to the verification result to obtain the third matching point pair set.
The step of verifying the second matching point pairs includes: given a set P of N data points from the second matching point pairs, assume that most of the points in the set can be generated by one model and that at least n points (n < N) are required to fit the model parameters. The parameters of the model can then be fitted iteratively by performing the following operations k times:
(1) randomly selecting n data points from P;
(2) fitting a model M by using the n data points;
(3) for each of the remaining data points in P, calculating its distance to the model M; if the distance exceeds the threshold, the point is determined to be an outer point, otherwise an inner point, and the number m of inner points corresponding to the model M is recorded.
After k iterations, the model M with the largest m is selected as the fitting result. The outer points whose distance exceeds the threshold are eliminated, the inner points corresponding to the selected model M are determined to be the feature points of the third matching point pair set, and the first image and the second image are processed according to the third matching point pair set.
In this embodiment, the RANSAC mathematical model for performing homography geometric consistency verification is a homography matrix, and n is 4.
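The RANSAC procedure of steps (1)-(3) can be sketched generically. For brevity, the model fitted below is a 2D line rather than the homography with n = 4 used by the patent, but the sample/fit/count-inner-points loop is the same; all data and thresholds are invented for illustration.

```python
import numpy as np

def ransac(points, fit, dist, n, k=100, threshold=1.0, seed=0):
    """Steps (1)-(3): k times, sample n points, fit a model, count the points
    within `threshold` as inner points; keep the model with the largest count."""
    rng = np.random.default_rng(seed)
    best_model, best_inliers = None, np.zeros(len(points), dtype=bool)
    for _ in range(k):
        idx = rng.choice(len(points), size=n, replace=False)
        model = fit(points[idx])
        inliers = dist(model, points) < threshold
        if inliers.sum() > best_inliers.sum():
            best_model, best_inliers = model, inliers
    return best_model, best_inliers

# Toy model: a 2D line y = a*x + b fitted from n = 2 sampled points.
fit_line = lambda p: np.polyfit(p[:, 0], p[:, 1], 1)
dist_line = lambda m, p: np.abs(np.polyval(m, p[:, 0]) - p[:, 1])

rng = np.random.default_rng(5)
x = rng.uniform(0, 10, 100)
pts = np.column_stack([x, 2 * x + 1])      # 100 points on y = 2x + 1
pts[:20, 1] += rng.uniform(5, 15, 20)      # 20 outer points (abnormal data)
model, inliers = ransac(pts, fit_line, dist_line, n=2, k=200)
print(int(inliers.sum()), np.round(model, 2))
```

Swapping in a homography fit with n = 4 correspondences and a reprojection-distance function recovers exactly the verification described in this embodiment.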
Has the advantages that: the characteristic matching method based on the optical flow obtains a first matching point pair, obtains the matching relation of shi-tomasi characteristic points in a first image and a second image in a sparse optical flow tracking mode, obtains a homography matrix according to the shi-tomasi characteristic points, obtains a second matching point pair with higher matching degree through the homography matrix, eliminates abnormal data in the second matching point pair in a geometric consistency checking mode to obtain a final matching point pair, assists characteristic matching through the optical flow information between the images, reduces the calculation amount of characteristic point matching, is short in time consumption and good in stability, reduces the interference of similar characteristics in the images, reduces the mismatching rate, and is high in accuracy.
Based on the same inventive concept, the present invention further provides an intelligent terminal, please refer to fig. 3, fig. 3 is a structural diagram of an embodiment of the intelligent terminal of the present invention, and the intelligent terminal of the present invention is described with reference to fig. 3.
In this embodiment, the intelligent terminal includes: a processor and a memory, wherein the processor is communicatively connected with the memory, the memory stores a computer program, and the processor executes the optical flow-based feature matching method of the above embodiment through the computer program.
Based on the same inventive concept, the present invention further provides a computer storage medium, please refer to fig. 4, where fig. 4 is a structural diagram of an embodiment of the computer storage medium according to the present invention.
In the present embodiment, a computer storage medium stores program data used to execute the optical flow-based feature matching method as described in the above embodiments.
In the embodiments provided in the present invention, it should be understood that the disclosed apparatus/terminal device and method may be implemented in other ways. For example, the above-described embodiments of the apparatus/terminal device are merely illustrative, and for example, the division of the modules or units is only one logical division, and there may be other divisions when actually implemented, for example, a plurality of units or components may be combined or integrated into another system, or some features may be omitted, or not executed. In addition, the shown or discussed mutual coupling or direct coupling or communication connection may be an indirect coupling or communication connection through some interfaces, devices or units, and may be in an electrical, mechanical or other form.
The units described as separate parts may or may not be physically separate, and parts displayed as units may or may not be physical units, may be located in one place, or may be distributed on a plurality of network units. Some or all of the units can be selected according to actual needs to achieve the purpose of the solution of the embodiment.
The integrated module, if implemented in the form of a software functional unit and sold or used as a stand-alone product, may be stored in a computer-readable storage medium. Based on this understanding, all or part of the flow of the methods of the above embodiments may be implemented by a computer program, which may be stored in a computer-readable storage medium and, when executed, implements the steps of the above embodiments. The program data comprises computer program code, which may be in the form of source code, object code, an executable file, some intermediate form, etc. The computer-readable medium may include: any entity or device capable of carrying the computer program code, a recording medium, a USB flash disk, a removable hard disk, a magnetic disk, an optical disk, a computer memory, a read-only memory (ROM), a random access memory (RAM), an electrical carrier signal, a telecommunications signal, a software distribution medium, etc. It should be noted that the content of the computer-readable medium may be suitably increased or decreased as required by legislation and patent practice in a given jurisdiction; for example, in some jurisdictions, computer-readable media may not include electrical carrier signals and telecommunications signals.
The embodiments in the present description are described in a progressive manner, each embodiment focuses on differences from other embodiments, and the same and similar parts among the embodiments are referred to each other.
The previous description of the disclosed embodiments is provided to enable any person skilled in the art to make or use the present invention. Various modifications to these embodiments will be readily apparent to those skilled in the art, and the generic principles defined herein may be applied to other embodiments without departing from the spirit or scope of the invention. Thus, the present invention is not intended to be limited to the embodiments shown herein but is to be accorded the widest scope consistent with the principles and novel features disclosed herein.
Claims (10)
1. An optical flow-based feature matching method, characterized in that the optical flow-based feature matching method comprises:
s101: acquiring a first image and a second image with continuous video stream information, acquiring a first feature point set of the first image, and performing sparse optical flow tracking on the first image and the second image to acquire a matching relation of shi-tomasi feature points;
s102: extracting a second feature point set of the second image, performing feature matching on the first feature point set and the second feature point set to form a first matching point pair, and acquiring a homography transformation matrix corresponding to the shi-tomasi feature point, wherein the first matching point pair comprises a matched first feature point and a matched second feature point;
s103: performing homographic transformation on the first feature point through the homography transformation matrix to generate a third feature point of the first feature point in a second image pixel coordinate system, and acquiring a second matching point pair according to the pixel distance between the third feature point and the second feature point;
s104: performing homography-based geometric consistency verification on the second matching point pairs, and acquiring a third matching point pair set according to the verification result, wherein the third matching point pair set comprises the matching information of the feature points in the first image and the second image.
2. The optical flow-based feature matching method of claim 1, wherein the step of obtaining the first set of feature points of the first image specifically comprises:
and carrying out SIFT or SURF feature extraction on the first image to obtain a first feature point set.
3. The optical flow-based feature matching method according to claim 1, wherein the step of performing sparse optical flow tracking on the first image and the second image to obtain the matching relationship between shi-tomasi feature points specifically comprises:
extracting the shi-tomasi features of the first image, obtaining the shi-tomasi feature points in the first image, performing LK sparse optical flow tracking through the shi-tomasi feature points, and obtaining the matching relation of the first image and the shi-tomasi feature points in the second image.
4. The optical flow-based feature matching method of claim 1, wherein the step of extracting the second feature point set of the second image specifically comprises:
and carrying out SIFT or SURF feature extraction on the second image to obtain the second feature point set.
5. The optical flow-based feature matching method according to claim 1, wherein the step of obtaining the homographic transformation matrix corresponding to the shi-tomasi feature points specifically comprises:
and calculating a homography transformation matrix corresponding to the shi-tomasi characteristic points by a least square method or RANSAC.
6. The optical flow-based feature matching method according to claim 1, wherein the step of performing homographic transformation on the first feature point through the homography transformation matrix to generate a third feature point of the first feature point in a second image pixel coordinate system specifically comprises: obtaining the third feature point by the formula p′ = H_{N+1,N} · p₁, wherein p₁ is the pixel coordinate of the first feature point, p′ is the pixel coordinate of the third feature point, and H_{N+1,N} is the homography transformation matrix.
7. The optical flow-based feature matching method according to claim 1, wherein the step of obtaining a second matching point pair according to the pixel distance between the third feature point and the second feature point specifically comprises:
and judging whether the pixel distance between the third characteristic point and the second characteristic point meets a preset condition, and rejecting abnormal data in the first matching point pair according to a judgment result to obtain a second matching point pair, wherein the preset condition is that the pixel distance is smaller than a preset value.
8. The optical flow-based feature matching method according to claim 1, wherein the step of performing homography-based geometric consistency verification through the second matching point pairs and obtaining a third matching point pair set according to the verification result specifically includes:
and verifying the second matching point pair by a random sampling consistency algorithm, and removing abnormal data in the second matching point pair according to a verification result to obtain a third matching point pair set.
9. An intelligent terminal, characterized in that the intelligent terminal comprises: a processor and a memory communicatively connected to the processor, the memory storing a computer program by which the processor performs the optical flow-based feature matching method of any of claims 1-8.
10. A computer storage medium storing program data for performing the optical flow-based feature matching method according to any one of claims 1 to 8.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202110623317.0A CN113554035A (en) | 2021-06-04 | 2021-06-04 | Feature matching method based on optical flow, intelligent terminal and computer storage medium |
Publications (1)
Publication Number | Publication Date |
---|---|
CN113554035A true CN113554035A (en) | 2021-10-26 |
Family
ID=78101966
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202110623317.0A Pending CN113554035A (en) | 2021-06-04 | 2021-06-04 | Feature matching method based on optical flow, intelligent terminal and computer storage medium |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN113554035A (en) |
Cited By (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN114022518A (en) * | 2022-01-05 | 2022-02-08 | 深圳思谋信息科技有限公司 | Method, device, equipment and medium for acquiring optical flow information of image |
CN114972629A (en) * | 2022-04-14 | 2022-08-30 | 广州极飞科技股份有限公司 | Feature point matching method, device, equipment and storage medium |
History: 2021-06-04 — application CN202110623317.0A filed in China; published as CN113554035A (en); legal status: active, Pending
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN110660066B (en) | Training method of network, image processing method, network, terminal equipment and medium | |
EP3709266A1 (en) | Human-tracking methods, apparatuses, systems, and storage media | |
CN108337551B (en) | Screen recording method, storage medium and terminal equipment | |
CN109376256B (en) | Image searching method and device | |
WO2020024744A1 (en) | Image feature point detecting method, terminal device, and storage medium | |
CN114612987B (en) | Expression recognition method and device | |
CN111340109A (en) | Image matching method, device, equipment and storage medium | |
CN113554035A (en) | Feature matching method based on optical flow, intelligent terminal and computer storage medium | |
Kastryulin et al. | Pytorch image quality: Metrics for image quality assessment | |
CN111131688B (en) | Image processing method and device and mobile terminal | |
CN113158773B (en) | Training method and training device for living body detection model | |
CN115511752A (en) | BP neural network-based point coordinate distortion removal method and storage medium | |
CN106709915B (en) | Image resampling operation detection method | |
CN111161348B (en) | Object pose estimation method, device and equipment based on monocular camera | |
CN113869330A (en) | Underwater fish target detection method and device and storage medium | |
CN117934888A (en) | Data aggregation method, system, device and storage medium | |
CN111222446B (en) | Face recognition method, face recognition device and mobile terminal | |
CN111402177A (en) | Definition detection method, system, device and medium | |
CN116012418A (en) | Multi-target tracking method and device | |
Zhou et al. | Exploring weakly-supervised image manipulation localization with tampering Edge-based class activation map | |
CN112487943B (en) | Key frame de-duplication method and device and electronic equipment | |
Karimi et al. | Quality assessment for retargeted images: A review | |
CN113239738B (en) | Image blurring detection method and blurring detection device | |
CN103077396B (en) | The vector space Feature Points Extraction of a kind of coloured image and device | |
CN114842210A (en) | Feature point matching method and device, terminal equipment and computer readable storage medium |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||