CN109740674A - A kind of image processing method, device, equipment and storage medium - Google Patents
A kind of image processing method, device, equipment and storage medium
- Publication number
- CN109740674A (application CN201910011494.6A)
- Authority
- CN
- China
- Prior art keywords
- image
- frame image
- current frame
- visual signature
- training
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Granted
Landscapes
- Image Analysis (AREA)
Abstract
This application discloses an image processing method, apparatus, device, and storage medium. The method comprises: obtaining a current frame image acquired by a camera, and extracting visual features of the current frame image; generating a feature vector of the current frame image according to its visual features; dividing the feature vector of the current frame image into multiple subvectors, and quantizing the multiple subvectors to generate a feature index of the visual features of the current frame image; matching the feature index of the visual features of the current frame image against the feature index of the visual features of each training image, and determining matching feature pairs of the current frame image and each training image, where the feature index of the visual features of each training image is obtained based on sub-codebooks; and determining a training image whose number of matching feature pairs is greater than a first preset threshold to be a similar image of the current frame image. This technical solution enables fast image recognition.
Description
Technical field
The present disclosure relates generally to the field of computer technology, in particular to the technical field of image processing, and more particularly to an image processing method, apparatus, device, and storage medium.
Background technique
In recent years, with the rapid development of semiconductor technology and the rise of artificial intelligence, fast image recognition and tracking algorithms have become research hotspots in fields such as augmented reality and robot localization.
At present, image recognition is implemented mainly on the basis of tree-structured BoW (Bag-of-Words) models. To achieve good recognition performance, this approach requires building a fairly large tree-structured visual dictionary, which makes the image recognition process time-consuming; moreover, the tree-structured visual dictionary has high memory usage, which limits its use on memory-constrained platforms such as embedded devices.
Summary of the invention
In view of the above drawbacks or deficiencies of the prior art, it is desirable to provide a scheme that can quickly recognize images.
In a first aspect, an embodiment of the present application provides an image processing method, comprising:
obtaining a current frame image acquired by a camera, and extracting visual features of the current frame image;
generating a feature vector of the current frame image according to its visual features;
dividing the feature vector of the current frame image into multiple subvectors, and quantizing the multiple subvectors to generate a feature index of the visual features of the current frame image;
matching the feature index of the visual features of the current frame image against the feature index of the visual features of each training image in a pre-trained training image set, and determining matching feature pairs of the current frame image and each training image; wherein the feature index of the visual features of each training image is obtained based on sub-codebooks, a sub-codebook being a codebook trained in one of the multiple subspaces into which the space of the visual features of the training images is partitioned;
determining a training image whose number of matching feature pairs is greater than a first preset threshold to be a similar image of the current frame image.
Optionally, the feature index of the visual features of each training image is determined as follows:
obtaining a training image set, and extracting the visual features of each training image in the training image set;
dividing the visual features into M subspaces, and performing cluster analysis in each subspace, obtaining M sub-codebooks each composed of k codewords;
generating the feature index of the visual features of the training image according to at least one of the sub-codebooks.
Optionally, after the feature vector of the current frame image is generated, the method further includes:
calculating the similarity between the feature vector of the current frame image and the feature vector of each pre-trained training image, and determining the similarity of the current frame image and each training image;
determining a training image whose similarity is greater than a second preset threshold to be a quasi-similar image; then
matching the feature index of the visual features of the current frame image against the feature index of the visual features of each training image in the pre-trained training image set, and determining the matching feature pairs of the current frame image and each training image comprises:
matching the feature index of the visual features of the current frame image against the feature index of the visual features of the quasi-similar images, and determining the matching feature pairs of the current frame image and the quasi-similar images.
Optionally, after a training image whose number of matching feature pairs is greater than the first preset threshold is determined to be a similar image of the current frame image, the method further includes:
determining, according to the matching feature pairs of the current frame image and the similar image, a first camera pose from the similar image to the current frame image;
continuing to obtain a next frame image following the current frame image;
determining, according to the first camera pose, the position of the similar image in the next frame image.
Optionally, determining the first camera pose from the similar image to the current frame image according to the matching feature pairs of the current frame image and the similar image comprises:
determining matching feature point pairs of the current frame image and the similar image according to the matching feature pairs of the current frame image and the similar image;
determining the first camera pose according to the 3D coordinates of the matching feature points in the similar image and the 2D coordinates of the matching feature points in the current frame image.
Optionally, determining the position of the similar image in the next frame image according to the first camera pose comprises:
projecting the 3D coordinates of the matching feature points in the similar image into the current frame image according to the first camera pose, and determining the 2D and 3D coordinates of the matching feature points in the current frame image;
determining a second camera pose from the current frame image to the next frame image according to the 2D and 3D coordinates of the matching feature points in the current frame image, using least-squares minimization of the photometric error;
projecting the 3D coordinates of the matching feature points in the current frame image according to the second camera pose, obtaining the projected 2D coordinates of the matching feature points in the current frame image;
determining the position of the similar image in the next frame image according to the projected 2D coordinates of the matching feature points in the current frame image.
Optionally, after the projected 2D coordinates of the matching feature points in the current frame image are obtained, the method further includes:
judging, in turn, whether the projected 2D coordinate of each matching feature point in the current frame image lies within the image coordinate range of the next frame image;
determining, according to the judgment results, the number of matching feature points whose projected 2D coordinates lie within the image coordinate range of the next frame image; then
determining the position of the similar image in the next frame image according to the projected 2D coordinates of the matching feature points in the current frame image comprises:
when the number of matching feature points whose projected 2D coordinates lie within the image coordinate range of the next frame image is greater than a third preset threshold, determining the position of the similar image in the next frame image according to those projected 2D coordinates.
In a second aspect, an embodiment of the present application further provides an image recognition apparatus, comprising:
a feature extraction unit, configured to obtain a current frame image acquired by a camera and extract the visual features of the current frame image;
a feature vector generation unit, configured to generate a feature vector of the current frame image according to its visual features;
a feature index generation unit, configured to divide the feature vector of the current frame image into multiple subvectors and quantize the multiple subvectors, generating a feature index of the visual features of the current frame image;
a matching unit, configured to match the feature index of the visual features of the current frame image against the feature index of the visual features of each training image in a pre-trained training image set, and determine the matching feature pairs of the current frame image and each training image; wherein the feature index of the visual features of each training image is obtained based on sub-codebooks, a sub-codebook being a codebook trained in one of the multiple subspaces into which the space of the visual features of the training images is partitioned;
an image recognition unit, configured to determine a training image whose number of matching feature pairs is greater than a first preset threshold to be a similar image of the current frame image.
In a third aspect, an embodiment of the present application further provides a device, comprising: at least one processor, at least one memory, and computer program instructions stored in the memory, wherein the above image processing method is implemented when the computer program instructions are executed by the processor.
In a fourth aspect, an embodiment of the present application further provides a computer-readable storage medium having computer program instructions stored thereon, wherein the above image processing method is implemented when the computer program instructions are executed by a processor.
The image processing scheme provided by the embodiments of the present application provides a new visual-feature matching process: the feature index of the visual features of the current frame image is matched against the feature index of the visual features of each training image in a pre-trained training image set, where the feature index of the visual features of the current frame image is obtained by dividing the feature vector of the current frame image into multiple subvectors and quantizing the multiple subvectors, and the feature index of the visual features of each training image is obtained based on sub-codebooks, a sub-codebook being a codebook trained in one of the multiple subspaces into which the space of the visual features of the training images is partitioned. The feature index obtained in this way greatly reduces the storage scale and thus improves the matching speed, enabling fast image recognition.
Description of the drawings
Other features, objects, and advantages of the present application will become more apparent upon reading the following detailed description of non-limiting embodiments with reference to the accompanying drawings:
Fig. 1 shows an exemplary flowchart of an image processing method provided by an embodiment of the present application;
Fig. 2 shows a schematic diagram of training the feature index of the visual features of each training image;
Fig. 3 shows a schematic diagram of training sub-codebooks using all the visual features of each training image;
Fig. 4 shows a schematic diagram of the feature index of the visual features of a training image generated from M sub-codebooks;
Fig. 5 shows a schematic diagram of obtaining quasi-similar images of the current frame image;
Fig. 6 shows a schematic diagram of image tracking;
Fig. 7 shows an exemplary structural block diagram of an image processing apparatus provided by an embodiment of the present application;
Fig. 8 shows a structural schematic diagram of a computer system suitable for implementing the server of the embodiments of the present application.
Specific embodiment
The application is described in further detail below with reference to the accompanying drawings and embodiments. It should be understood that the specific embodiments described here are only used to explain the related invention, and are not limitations of the invention. It should also be noted that, for convenience of description, only the parts relevant to the invention are shown in the drawings.
It should be noted that, in the absence of conflict, the embodiments of the present application and the features in the embodiments may be combined with each other.
As mentioned in the background, image recognition is currently implemented mainly on the basis of tree-structured BoW models. To achieve good recognition performance, this approach requires building a fairly large tree-structured visual dictionary, which makes the image recognition process time-consuming; moreover, the tree-structured visual dictionary has high memory usage, which limits its use on memory-constrained platforms such as embedded devices.
In view of the above drawbacks of the prior art, an embodiment of the present application provides an image processing scheme. The technical solution provides a new visual-feature matching process: the feature index of the visual features of the current frame image is matched against the feature index of the visual features of each training image in a pre-trained training image set, where the feature index of the visual features of the current frame image is obtained by dividing the feature vector of the current frame image into multiple subvectors and quantizing the multiple subvectors, and the feature index of the visual features of each training image is obtained based on sub-codebooks, a sub-codebook being a codebook trained in one of the multiple subspaces into which the space of the visual features of the training images is partitioned. The feature index obtained in this way greatly reduces the storage scale and thus improves the matching speed, enabling fast image recognition.
The application is described in detail below with reference to the accompanying drawings and in conjunction with the embodiments.
Referring to Fig. 1, it illustrates an exemplary flowchart of an image processing method provided by an embodiment of the present application. The method comprises:
Step 110, obtaining the current frame image acquired by the camera, and extracting the visual features of the current frame image.
In the embodiment of the present application, the visual features of the current frame image may be extracted based on the SIFT (Scale-Invariant Feature Transform) algorithm, the SURF (Speeded Up Robust Features) algorithm, or the ORB (Oriented FAST and Rotated BRIEF) algorithm, but the visual feature extraction method of the present invention is not limited thereto; for example, texture features, histogram of oriented gradients features, color histogram features, and the like of the current frame image may also be extracted.
Step 120, dividing the visual features of the current frame image into multiple subvectors, and quantizing the multiple subvectors to generate the feature index of the visual features of the current frame image.
Specifically, each visual feature in the current frame image may be split into M subspaces by vector dimension. Assuming the visual features of the current frame image are SIFT features, whose dimension is 128, the 128-dimensional SIFT feature is first cut into M subvectors, each of dimension 128/M; each subvector is then quantized in turn, and finally the feature index is generated from the quantization results of the subvectors.
Step 130, matching the feature index of the visual features of the current frame image against the feature index of the visual features of each training image in the pre-trained training image set, and determining the matching feature pairs of the current frame image and each training image.
Here, a matching feature pair refers to two visual features whose feature indexes match. For example, if the feature index of some visual feature in the current frame image is 001 and the feature index of some visual feature of a training image is also 001, these two visual features form one matching feature pair.
It should be noted that if the feature index of some visual feature in the current frame image is 001 and the training image has multiple visual features whose feature index is also 001, then one visual feature is chosen from these multiple visual features to form a matching feature pair with the visual feature in the current frame image.
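The index-based matching described above can be sketched as a bucket lookup: training features are grouped by their index, and a query feature matches any bucket with the same index. The tuple indexes and feature ids below are hypothetical toy values.

```python
from collections import defaultdict

def build_index_table(training_feature_indexes):
    """Map feature index -> list of training-feature ids."""
    table = defaultdict(list)
    for feat_id, idx in training_feature_indexes:
        table[idx].append(feat_id)
    return table

def match(query_indexes, table):
    """One matching pair per query feature whose index has a bucket; when
    several training features share the index, one of them is chosen."""
    pairs = []
    for q_id, idx in query_indexes:
        bucket = table.get(idx)
        if bucket:
            pairs.append((q_id, bucket[0]))  # pick one, as in the text
    return pairs

table = build_index_table([(0, (0, 0, 1)), (1, (0, 0, 1)), (2, (3, 2, 1))])
print(match([(10, (0, 0, 1)), (11, (7, 7, 7))], table))  # → [(10, 0)]
```

Because matching is a hash lookup on compact indexes rather than a nearest-neighbor search over full descriptors, this is where the claimed speed-up comes from.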
Here, the feature index of the visual features of each training image is obtained based on sub-codebooks.
A sub-codebook is a codebook trained in one of the multiple subspaces into which the space of the visual features of the training images is partitioned.
A codebook refers to the k cluster centers obtained by clustering visual features with a clustering algorithm; each cluster center is called a codeword, and the set of k cluster centers is called a codebook.
Specifically, the feature index of the visual features of each training image may be determined as shown in Fig. 2:
Step 210, obtaining a training image set, and extracting the visual features of each training image in the training image set.
The method of extracting the visual features of each training image is the same as the method of extracting the visual features of the current frame image described above, and is not repeated here.
Step 220, dividing the visual features of each training image into M subspaces, and performing cluster analysis in each subspace to obtain M sub-codebooks each composed of k codewords.
Here, a subspace refers to the space where the subvectors of the corresponding dimensions of all the visual features of the training images lie.
Fig. 3 is a schematic diagram of training sub-codebooks using all the visual features of each training image. Taking M = 3 as an example, all the visual features of each training image are divided into 3 subspaces, and cluster analysis is carried out in each subspace, obtaining 3 sub-codebooks each composed of k codewords.
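The sub-codebook training of step 220 can be sketched as follows, with a tiny Lloyd's k-means standing in for whatever clustering the method actually uses; the data and sizes are toy values.

```python
import numpy as np

def kmeans(points, k, iters=20, seed=0):
    """Minimal Lloyd's k-means returning k cluster centers (codewords)."""
    rng = np.random.default_rng(seed)
    centers = points[rng.choice(len(points), k, replace=False)]
    for _ in range(iters):
        dists = np.linalg.norm(points[:, None, :] - centers[None, :, :], axis=2)
        labels = np.argmin(dists, axis=1)
        for j in range(k):
            if np.any(labels == j):
                centers[j] = points[labels == j].mean(axis=0)
    return centers

def train_sub_codebooks(features, M, k):
    """Split every feature into M subvectors; cluster each subspace into
    k codewords, giving one sub-codebook per subspace."""
    sub_spaces = np.split(features, M, axis=1)  # M arrays of shape (N, D/M)
    return [kmeans(sp, k) for sp in sub_spaces]

rng = np.random.default_rng(1)
features = rng.normal(size=(200, 12))  # toy stand-in for visual features
books = train_sub_codebooks(features, M=3, k=4)
print(len(books), books[0].shape)  # → 3 (4, 4)
```

Note the storage argument: M codebooks of k codewords can address k^M distinct cells while storing only M·k codewords, instead of the k^M entries a flat dictionary of the same resolution would need.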
Step 230, generating the feature index of the visual features of the training image according to at least one sub-codebook.
In the embodiment of the present application, the feature index of the visual features of a training image can be generated from the M sub-codebooks. Fig. 4 is a schematic diagram of the feature index of the visual features of a training image generated from the M sub-codebooks.
Specifically, the subvectors of each visual feature of the training image are quantized respectively in each subspace, and the feature index is generated from the quantization results of the M subvectors of each visual feature, as in formula (1):

index = q1 · k^(M-1) + q2 · k^(M-2) + … + qM;  (1)

where qi is the quantization result of the i-th subvector, k is the number of codewords per sub-codebook, and index is the feature index of the visual features of the training image generated from the M sub-codebooks.
Optionally, in order to further reduce the scale of the feature index and improve the matching speed, M-1 or M-2 sub-codebooks may also be used to generate the feature index.
Step 140, determining a training image whose number of matching feature pairs is greater than the first preset threshold to be a similar image of the current frame image.
After the matching feature pairs of the current frame image and each training image have been determined, the numbers of pairs are counted, and a training image whose number of matching feature pairs is greater than the first preset threshold is determined to be a similar image of the current frame.
The embodiment of the present application provides an image processing scheme. The scheme provides a new visual-feature matching process: the feature index of the visual features of the current frame image is matched against the feature index of the visual features of each training image in the pre-trained training image set, where the feature index of the visual features of the current frame image is obtained by dividing the feature vector of the current frame image into multiple subvectors and quantizing them, and the feature index of the visual features of each training image is obtained based on sub-codebooks, a sub-codebook being a codebook trained in one of the multiple subspaces into which the space of the visual features of the training images is partitioned. The feature index obtained in this way greatly reduces the storage scale and thus improves the matching speed, enabling fast image recognition.
Optionally, after step 110 extracts the visual features of the current frame image, the training images may first be pre-screened to obtain quasi-similar images of the current frame image. Specifically, this may include the following steps, as shown in Fig. 5:
Step 510, generating the bag-of-words vector of the current frame image according to its visual features.
Specifically, the visual features of the current frame image are first extracted and feature descriptors of the visual features are constructed; the feature descriptors are then clustered by a clustering algorithm (such as the k-means algorithm) to generate a codebook. The visual features are then quantized by the KNN (K-Nearest Neighbor) algorithm, and finally an image histogram vector weighted by TF-IDF (term frequency-inverse document frequency) is obtained, i.e. the BoW vector.
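Step 510 can be sketched as follows. The codebook and inverse-document-frequency weights are toy placeholders, and nearest-codeword assignment stands in for the KNN quantization.

```python
import numpy as np

def bow_vector(descriptors, codebook, idf):
    """Quantize each descriptor to its nearest codeword, histogram the
    words, and apply TF-IDF weighting to get the BoW vector."""
    dists = np.linalg.norm(descriptors[:, None, :] - codebook[None, :, :], axis=2)
    words = np.argmin(dists, axis=1)
    tf = np.bincount(words, minlength=len(codebook)).astype(float)
    tf /= max(tf.sum(), 1.0)  # term frequency
    return tf * idf           # TF-IDF weighted histogram

rng = np.random.default_rng(2)
codebook = rng.normal(size=(8, 16))  # toy codebook: k = 8 words, 16-dim descriptors
idf = np.log(100.0 / (1.0 + rng.integers(1, 100, size=8)))  # toy idf weights
vec = bow_vector(rng.normal(size=(50, 16)), codebook, idf)
print(vec.shape)  # → (8,)
```

In practice the idf weights come from counting, over the training set, how many images contain each visual word.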
Step 520, calculating the similarity between the bag-of-words vector of the current frame image and the bag-of-words vector of each pre-trained training image, and determining the similarity of the current frame image and each training image.
Here, the bag-of-words vector of each training image is obtained in the same way as the bag-of-words vector of the current frame image described above, and is not repeated here.
Furthermore, the Euclidean distance or cosine distance between the two BoW vectors, among others, may be used as the measure of the similarity between the vector of the current frame image and the vector of each training image.
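As one concrete choice among the measures mentioned above, the cosine similarity between two BoW vectors can be computed as follows (the vectors and image names are illustrative):

```python
import numpy as np

def cosine_similarity(a, b):
    """Cosine of the angle between two BoW vectors; 1.0 means identical
    direction, 0.0 means orthogonal (no shared visual words)."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

query = np.array([1.0, 0.0, 2.0])
training = {"img_a": np.array([1.0, 0.0, 2.0]),
            "img_b": np.array([0.0, 3.0, 0.0])}
sims = {name: cosine_similarity(query, v) for name, v in training.items()}
print(sims["img_a"], sims["img_b"])  # → 1.0 0.0
```

Training images whose similarity exceeds the second preset threshold would then be kept as quasi-similar candidates.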
Step 530, determining a training image whose similarity is greater than the second preset threshold to be a quasi-similar image.
A subset of quasi-similar images can thus first be screened out of the large number of training images, further shortening the time taken by the subsequent visual-feature matching process.
Based on the above steps 510 to 530, step 130 may specifically include:
matching the feature index of the visual features of the current frame image against the feature index of the visual features of the quasi-similar images, and determining the matching feature pairs of the current frame image and the quasi-similar images.
The above image processing method can be applied in the technical fields of image recognition and tracking.
Optionally, after a training image whose number of matching feature pairs is greater than the first preset threshold is determined to be a similar image of the current frame image, the embodiment of the present application may further include the image tracking steps shown in Fig. 6:
Step 610, determining the first camera pose from the similar image to the current frame image according to the matching feature pairs of the current frame image and the similar image.
In the embodiment of the present application, the matching feature point pairs of the current frame image and the similar image can be determined from their matching feature pairs; that is, each visual feature corresponds to a feature point, so each matching feature pair corresponds to one matching feature point pair.
After the matching feature points of the current frame image and the similar image have been determined, the first camera pose can be determined according to the 3D coordinates of the matching feature points in the similar image and the 2D coordinates of the matching feature points in the current frame image.
Specifically, the plane where the matching feature points of the similar image lie is assumed to be the z = 0 plane, so that a 2D pixel coordinate (u, v) becomes the 3D coordinate (u, v, 0); the first camera pose is then computed from the corresponding 3D-2D matching feature points using the PnP algorithm, i.e. T = [R | t], where T is the first camera pose, R is the rotation matrix, and t is the translation vector.
Step 620, continuing to obtain the next frame image following the current frame image.
Step 630, determining the position of the similar image in the next frame image according to the first camera pose.
Step 630 can be realized as follows:
First, the 3D coordinates of the matching feature points in the similar image are projected into the current frame image according to the first camera pose, and the 2D and 3D coordinates of the matching feature points in the current frame image are determined.
This can be determined according to the following formulas (2) and (3):

P' = RP + t;  (2)

z (u, v, 1)^T = K P';  (3)

where P is the 3D coordinate of a matching feature point in the similar image, P' is the 3D coordinate of the matching feature point in the current frame image, (u, v) is the 2D coordinate of the matching feature point in the current frame image, z is the depth of the projection, and K is the camera intrinsic matrix.
Second, the second camera pose from the current frame image to the next frame image is determined from the 2D and 3D coordinates of the matching feature points in the current frame image, using least-squares minimization of the photometric error. This can be determined according to formula (4):

T* = argmin_{R,t} Σ_{i=1..n} || I1(pi) - I2((1/zi) K (R Pi + t)) ||^2;  (4)

where T* is the second camera pose, Pi is the 3D coordinate of the i-th matching feature point in the current frame image, pi is the 2D coordinate of that matching feature point in the current frame image, n is the number of matching feature point pairs, K is the camera intrinsic matrix, R and t are the values to be estimated, zi is the (known) depth value in the projection, and I1(·) and I2(·) are the gray values of the corresponding points in the current frame image and the next frame image, respectively.
Solving the above formula with the Gauss-Newton method or the Levenberg-Marquardt method yields the second camera pose from the current frame image to the next frame image.
Third, the 3D coordinates of the matching feature points in the current frame image are projected according to the second camera pose, obtaining the projected 2D coordinates of the matching feature points in the current frame image.
Fourth, the position of the similar image in the next frame image is determined from the projected 2D coordinates of the matching feature points in the current frame image.
Specifically, after the projected 2D coordinates of the matching feature points in the current frame image are obtained, it can first be judged in turn whether the projected 2D coordinate of each matching feature point lies within the image coordinate range of the next frame image; the number of matching feature points whose projected 2D coordinates lie within the image coordinate range of the next frame image is then determined from the judgment results.
If too few of the projected 2D coordinates lie within the image coordinate range of the next frame image, the similar image is no longer present in the next frame image, and the tracking process ends. At this point the process can return to continue acquiring subsequent frame images for tracking.
If the number of matching feature points whose projected 2D coordinates lie within the image coordinate range of the next frame image is greater than the third preset threshold, the similar image is still present in the next frame image, and its position in the next frame image is determined from the projected 2D coordinates of the matching feature points in the current frame image.
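The projection and in-bounds check of the third and fourth steps can be sketched as follows, with formulas (2) and (3) applied in code; the intrinsics, pose, points, and threshold are toy values.

```python
import numpy as np

def project(points_3d, R, t, K):
    """Pinhole projection: P' = R P + t, then z (u, v, 1)^T = K P'."""
    cam = points_3d @ R.T + t        # formula (2)
    uv_h = cam @ K.T                 # formula (3), before dividing by depth
    return uv_h[:, :2] / uv_h[:, 2:3]

def track_survives(points_3d, R, t, K, width, height, threshold):
    """Count projections landing inside the next frame; keep the track
    only if the count exceeds the (third) preset threshold."""
    uv = project(points_3d, R, t, K)
    in_bounds = ((uv[:, 0] >= 0) & (uv[:, 0] < width) &
                 (uv[:, 1] >= 0) & (uv[:, 1] < height))
    return int(in_bounds.sum()) > threshold

K = np.array([[500.0, 0.0, 160.0], [0.0, 500.0, 120.0], [0.0, 0.0, 1.0]])
R, t = np.eye(3), np.array([0.0, 0.0, 5.0])
pts = np.array([[0.0, 0.0, 1.0], [0.5, 0.2, 2.0], [100.0, 0.0, 1.0]])
print(track_survives(pts, R, t, K, width=320, height=240, threshold=1))  # → True
```

With these toy values two of the three points project inside the 320×240 frame, so the track survives; the third lands far outside and is discarded by the bounds check.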
In the embodiment of the present application, the tracking and localization of the image is thus achieved by the least-squares method.
It should be noted that although the operations of the method of the present invention are described in the drawings in a particular order, this does not require or imply that these operations must be executed in that particular order, or that all of the operations shown must be executed, to achieve the desired result. On the contrary, the steps depicted in the flowcharts may be executed in a different order. Additionally or alternatively, certain steps may be omitted, multiple steps may be merged into one step for execution, and/or one step may be decomposed into multiple steps for execution.
With further reference to Fig. 7, it illustrates an exemplary structural block diagram of an image processing apparatus provided by an embodiment of the present application. The apparatus comprises:
a feature extraction unit 71, configured to obtain the current frame image acquired by the camera and extract the visual features of the current frame image;
a feature vector generation unit 72, configured to generate the feature vector of the current frame image according to its visual features;
a feature index generation unit 73, configured to divide the feature vector of the current frame image into multiple subvectors and quantize the multiple subvectors, generating the feature index of the visual features of the current frame image;
a matching unit 74, configured to match the feature index of the visual features of the current frame image against the feature index of the visual features of each training image in the pre-trained training image set, and determine the matching feature pairs of the current frame image and each training image; wherein the feature index of the visual features of each training image is obtained based on sub-codebooks, a sub-codebook being a codebook trained in one of the multiple subspaces into which the space of the visual features of the training images is partitioned;
an image recognition unit 75, configured to determine a training image whose number of matching feature pairs is greater than the first preset threshold to be a similar image of the current frame image.
Optionally, the apparatus may further include a training unit configured to:
obtain an image training set, and extract the visual features of each training image in the image training set;
divide the visual features of each training image into M subspaces, and perform cluster analysis in each subspace to obtain M sub-codebooks each composed of k codewords; and
generate the feature index of the visual features of the training images according to at least one of the sub-codebooks.
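The training unit's steps — splitting each descriptor's space into M subspaces, clustering each subspace into k codewords, and indexing a descriptor by its nearest codeword per subspace — amount to product quantization. A minimal NumPy sketch, assuming plain k-means as the cluster analysis (all function names here are illustrative, not from the patent):

```python
import numpy as np

def kmeans(X, k, iters=20, seed=0):
    """Plain k-means over one subspace; returns a (k, d) sub-codebook of codewords."""
    rng = np.random.default_rng(seed)
    centers = X[rng.choice(len(X), size=k, replace=False)].copy()
    for _ in range(iters):
        # assign every point to its nearest codeword, then recompute codewords
        labels = ((X[:, None, :] - centers[None]) ** 2).sum(-1).argmin(1)
        for j in range(k):
            if (labels == j).any():
                centers[j] = X[labels == j].mean(0)
    return centers

def train_subcodebooks(features, M, k):
    """Cut the D-dim feature space into M subspaces and train one sub-codebook in each."""
    d = features.shape[1] // M
    return [kmeans(features[:, m * d:(m + 1) * d], k) for m in range(M)]

def feature_index(desc, books):
    """Quantize one descriptor: nearest codeword id per subspace -> compact feature index."""
    d = len(desc) // len(books)
    return [int(((b - desc[m * d:(m + 1) * d]) ** 2).sum(1).argmin())
            for m, b in enumerate(books)]
```

Each descriptor is thus represented by M small integers, which makes index matching against the training set cheap.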
Optionally, the apparatus may further include a quasi-similar image determination unit configured to:
generate a bag-of-words vector of the current frame image according to the visual features of the current frame image;
calculate the similarity between the bag-of-words vector of the current frame image and a pre-trained bag-of-words vector of each training image, thereby determining the similarity between the current frame image and each training image; and
determine the training images whose similarity is greater than a second preset threshold as quasi-similar images.
The matching unit 74 is then specifically configured to:
match the feature index of the visual features of the current frame image against the visual features of the quasi-similar images, and determine matching feature pairs between the current frame image and the quasi-similar images.
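The quasi-similar image filter compares bag-of-words vectors before the finer index matching. A sketch under the assumption that the similarity measure is cosine similarity over L2-normalized visual-word histograms (the embodiment does not fix the measure):

```python
import numpy as np

def bow_vector(word_ids, vocab_size):
    """Histogram of visual-word ids -> L2-normalized bag-of-words vector."""
    v = np.bincount(word_ids, minlength=vocab_size).astype(float)
    n = np.linalg.norm(v)
    return v / n if n else v

def similarity(a, b):
    """Cosine similarity between two normalized bag-of-words vectors."""
    return float(a @ b)
```

Training images whose score exceeds the second preset threshold would be kept as quasi-similar candidates.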
Optionally, the apparatus may further include:
a first camera pose determination unit, configured to determine a first camera pose from the similar image to the current frame image according to the matching feature pairs between the current frame image and the similar image;
an acquiring unit, configured to continue obtaining a next frame image of the current frame image; and
a positioning unit, configured to determine the position of the similar image in the next frame image according to the first camera pose.
Optionally, the first camera pose determination unit is specifically configured to:
determine matching feature point pairs between the current frame image and the similar image according to the matching feature pairs between the current frame image and the similar image; and
determine the first camera pose according to the 3D coordinates of the matching feature points in the similar image and the 2D coordinates of the matching feature points in the current frame image.
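The embodiment derives a camera pose from 3D coordinates in one image and 2D coordinates in the other, but does not fix a solver. As an illustrative sketch only, such 3D-2D correspondences can be resolved into a 3x4 projection matrix by the classic direct linear transform (DLT), a homogeneous linear least-squares formulation (function names are hypothetical):

```python
import numpy as np

def dlt_pose(points_3d, points_2d):
    """Estimate a 3x4 projection matrix from 3D-2D matches by linear least squares (DLT)."""
    A = []
    for (X, Y, Z), (u, v) in zip(points_3d, points_2d):
        Xh = np.array([X, Y, Z, 1.0])
        # each correspondence contributes two homogeneous linear constraints on P
        A.append(np.concatenate([Xh, np.zeros(4), -u * Xh]))
        A.append(np.concatenate([np.zeros(4), Xh, -v * Xh]))
    # P is the right singular vector of A with the smallest singular value
    _, _, Vt = np.linalg.svd(np.asarray(A))
    return Vt[-1].reshape(3, 4)

def project(P, points_3d):
    """Apply a 3x4 projection matrix to 3D points, returning 2D pixel coordinates."""
    Xh = np.hstack([np.asarray(points_3d), np.ones((len(points_3d), 1))])
    x = (P @ Xh.T).T
    return x[:, :2] / x[:, 2:3]
```

At least six non-degenerate correspondences are needed; in practice a nonlinear refinement would follow this linear estimate.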
Optionally, the positioning unit is specifically configured to:
project the 3D coordinates of the matching feature points in the similar image into the current frame image according to the first camera pose, and determine the 2D coordinates and 3D coordinates of the matching feature points in the current frame image;
determine a second camera pose from the current frame image to the next frame image according to the 2D coordinates and 3D coordinates of the matching feature points in the current frame image, using a least squares method based on photometric error;
project the 3D coordinates of the matching feature points in the current frame image according to the second camera pose to obtain projected 2D coordinates of the matching feature points in the current frame image; and
determine the position of the similar image in the next frame image according to the projected 2D coordinates of the matching feature points in the current frame image.
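The projection steps performed by the positioning unit — mapping 3D feature coordinates through a camera pose into pixel coordinates — can be sketched with a pinhole camera model. The intrinsic matrix K and the (R, t) pose parameterization are assumptions for illustration; the embodiment does not specify a camera model:

```python
import numpy as np

def project_points(K, R, t, pts3d):
    """Project 3D points through pose (R, t) and intrinsics K into 2D pixel coordinates."""
    cam = (R @ np.asarray(pts3d, dtype=float).T).T + t  # world frame -> camera frame
    uv = (K @ cam.T).T                                  # camera frame -> homogeneous pixels
    return uv[:, :2] / uv[:, 2:3]                       # perspective division
```

Applying this with the second camera pose yields the projected 2D coordinates from which the similar image is located in the next frame.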
Optionally, the apparatus may further include a judging unit configured to:
judge in turn whether the projected 2D coordinate of each matching feature point in the current frame image falls within the image coordinate range of the next frame image; and
determine, according to the judgment results, the number of matching feature points whose projected 2D coordinates in the current frame image fall within the image coordinate range of the next frame image.
The positioning unit is then specifically configured to:
when the number of matching feature points whose projected 2D coordinates in the current frame image fall within the image coordinate range of the next frame image is greater than a third preset threshold, determine the position of the similar image in the next frame image according to the projected 2D coordinates of the matching feature points in the current frame image.
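The in-bounds check carried out by the judging unit and the threshold decision of the positioning unit can be sketched as follows; the image size and the third preset threshold are parameters the embodiment leaves open, and the function names are illustrative:

```python
import numpy as np

def count_inliers(projected_2d, width, height):
    """Count projected points whose 2D coordinates land inside the image bounds."""
    u, v = projected_2d[:, 0], projected_2d[:, 1]
    inside = (u >= 0) & (u < width) & (v >= 0) & (v < height)
    return int(inside.sum())

def still_visible(projected_2d, width, height, third_threshold):
    """Tracking decision: enough in-bounds matches means the similar image is still present."""
    return count_inliers(projected_2d, width, height) > third_threshold
```

When `still_visible` is false, tracking ends and the method returns to acquiring the next frame image.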
It should be appreciated that the units recorded in the apparatus 700 correspond to the respective steps of the methods described with reference to Figs. 1-6. Accordingly, the operations and features described above for the methods are equally applicable to the apparatus 700 and the units included therein, and are not repeated here.
Referring now to Fig. 8, which shows a structural schematic diagram of a computer system 800 of a server suitable for implementing the embodiments of the present application.
As shown in Fig. 8, the computer system 800 includes a central processing unit (CPU) 801, which can execute various appropriate actions and processes according to a program stored in a read-only memory (ROM) 802 or a program loaded from a storage section 808 into a random access memory (RAM) 803. The RAM 803 also stores various programs and data required for the operation of the system 800. The CPU 801, the ROM 802, and the RAM 803 are connected to each other through a bus 804. An input/output (I/O) interface 805 is also connected to the bus 804.
The following components are connected to the I/O interface 805: an input section 806 including a keyboard, a mouse, and the like; an output section 807 including a cathode ray tube (CRT), a liquid crystal display (LCD), and the like, as well as a speaker; a storage section 808 including a hard disk and the like; and a communication section 809 including a network interface card such as a LAN card or a modem. The communication section 809 performs communication processing via a network such as the Internet. A driver 810 is also connected to the I/O interface 805 as needed. A removable medium 811, such as a magnetic disk, an optical disc, a magneto-optical disk, or a semiconductor memory, is mounted on the driver 810 as needed, so that a computer program read from it can be installed into the storage section 808 as needed.
In particular, according to an embodiment of the present disclosure, the processes described above with reference to Figs. 1-6 may be implemented as a computer software program. For example, an embodiment of the present disclosure includes a computer program product comprising a computer program tangibly embodied on a machine-readable medium, the computer program containing program code for executing the methods of Figs. 1-6. In such an embodiment, the computer program can be downloaded and installed from a network through the communication section 809, and/or installed from the removable medium 811.
The flowcharts and block diagrams in the accompanying drawings illustrate the possible architectures, functions, and operations of systems, methods, and computer program products according to various embodiments of the present invention. In this regard, each box in a flowchart or block diagram may represent a module, a program segment, or a portion of code, which contains one or more executable instructions for implementing the specified logical function. It should also be noted that, in some alternative implementations, the functions marked in the boxes may occur in an order different from that indicated in the drawings. For example, two boxes shown in succession may in fact be executed substantially in parallel, and they may sometimes be executed in the reverse order, depending on the functions involved. It should further be noted that each box in the block diagrams and/or flowcharts, and combinations of boxes in the block diagrams and/or flowcharts, can be implemented by a dedicated hardware-based system that executes the specified functions or operations, or by a combination of dedicated hardware and computer instructions.
The units or modules involved in the embodiments of the present application may be implemented in software or in hardware. The described units or modules may also be provided in a processor, and in some cases the names of these units or modules do not constitute a limitation on the units or modules themselves.
As another aspect, the present application also provides a computer-readable storage medium, which may be the computer-readable storage medium included in the apparatus of the above embodiments, or a computer-readable storage medium that exists separately and is not assembled into a device. The computer-readable storage medium stores one or more programs, and the programs are used by one or more processors to execute the image processing method described in the present application.
The above description is only a preferred embodiment of the present application and an explanation of the applied technical principles. Those skilled in the art should appreciate that the scope of the invention involved in the present application is not limited to technical solutions formed by the specific combination of the above technical features, and should also cover, without departing from the inventive concept, other technical solutions formed by any combination of the above technical features or their equivalent features, for example, technical solutions formed by replacing the above features with (but not limited to) technical features with similar functions disclosed in the present application.
Claims (10)
1. An image processing method, characterized in that the method comprises:
obtaining a current frame image acquired by a camera, and extracting visual features of the current frame image;
dividing the visual features of the current frame image into multiple sub-vectors, and quantizing the multiple sub-vectors to generate a feature index of the visual features of the current frame image;
matching the feature index of the visual features of the current frame image against a feature index of the visual features of each training image in a pre-trained image training set, and determining matching feature pairs between the current frame image and each training image; wherein the feature index of the visual features of each training image is obtained based on sub-codebooks, and the sub-codebooks are codebooks obtained by partitioning the space in which the visual features of each training image reside into multiple subspaces and training within each subspace; and
determining a training image for which the number of the matching feature pairs is greater than a first preset threshold as a similar image of the current frame image.
2. The method according to claim 1, characterized in that the feature index of the visual features of each training image is determined as follows:
obtaining an image training set, and extracting the visual features of each training image in the image training set;
dividing the visual features of each training image into M subspaces, and performing cluster analysis in each subspace to obtain M sub-codebooks each composed of k codewords; and
generating the feature index of the visual features of the training image according to at least one of the sub-codebooks.
3. The method according to claim 1, characterized in that, after extracting the visual features of the current frame image, the method further comprises:
generating a bag-of-words vector of the current frame image according to the visual features of the current frame image;
calculating the similarity between the bag-of-words vector of the current frame image and a pre-trained bag-of-words vector of each training image, thereby determining the similarity between the current frame image and each training image; and
determining the training images whose similarity is greater than a second preset threshold as quasi-similar images;
wherein matching the feature index of the visual features of the current frame image against the feature index of the visual features of each training image in the pre-trained image training set and determining the matching feature pairs between the current frame image and each training image comprises:
matching the feature index of the visual features of the current frame image against the visual features of the quasi-similar images, and determining matching feature pairs between the current frame image and the quasi-similar images.
4. The method according to claim 1, characterized in that, after the training image for which the number of the matching feature pairs is greater than the first preset threshold is determined as the similar image of the current frame image, the method further comprises:
determining a first camera pose from the similar image to the current frame image according to the matching feature pairs between the current frame image and the similar image;
continuing to obtain a next frame image of the current frame image; and
determining the position of the similar image in the next frame image according to the first camera pose.
5. The method according to claim 4, characterized in that determining the first camera pose from the similar image to the current frame image according to the matching feature pairs between the current frame image and the similar image comprises:
determining matching feature point pairs between the current frame image and the similar image according to the matching feature pairs between the current frame image and the similar image; and
determining the first camera pose according to the 3D coordinates of the matching feature points in the similar image and the 2D coordinates of the matching feature points in the current frame image.
6. The method according to claim 4, characterized in that determining the position of the similar image in the next frame image according to the first camera pose comprises:
projecting the 3D coordinates of the matching feature points in the similar image into the current frame image according to the first camera pose, and determining the 2D coordinates and 3D coordinates of the matching feature points in the current frame image;
determining a second camera pose from the current frame image to the next frame image according to the 2D coordinates and 3D coordinates of the matching feature points in the current frame image, using a least squares method based on photometric error;
projecting the 3D coordinates of the matching feature points in the current frame image according to the second camera pose to obtain projected 2D coordinates of the matching feature points in the current frame image; and
determining the position of the similar image in the next frame image according to the projected 2D coordinates of the matching feature points in the current frame image.
7. The method according to claim 6, characterized in that, after obtaining the projected 2D coordinates of the matching feature points in the current frame image, the method further comprises:
judging in turn whether the projected 2D coordinate of each matching feature point in the current frame image falls within the image coordinate range of the next frame image; and
determining, according to the judgment results, the number of matching feature points whose projected 2D coordinates in the current frame image fall within the image coordinate range of the next frame image;
wherein determining the position of the similar image in the next frame image according to the projected 2D coordinates of the matching feature points in the current frame image comprises:
when the number of matching feature points whose projected 2D coordinates in the current frame image fall within the image coordinate range of the next frame image is greater than a third preset threshold, determining the position of the similar image in the next frame image according to the projected 2D coordinates of the matching feature points in the current frame image.
8. An image processing apparatus, characterized in that the apparatus comprises:
a feature extraction unit, configured to obtain a current frame image acquired by a camera and extract visual features of the current frame image;
a feature vector generation unit, configured to generate a feature vector of the current frame image according to the visual features of the current frame image;
a feature index generation unit, configured to divide the feature vector of the current frame image into multiple sub-vectors and quantize the multiple sub-vectors to generate a feature index of the visual features of the current frame image;
a matching unit, configured to match the feature index of the visual features of the current frame image against a feature index of the visual features of each training image in a pre-trained image training set, and determine matching feature pairs between the current frame image and each training image, wherein the feature index of the visual features of each training image is obtained based on sub-codebooks, and the sub-codebooks are codebooks obtained by partitioning the space in which the visual features of each training image reside into multiple subspaces and training within each subspace; and
an image identification unit, configured to determine a training image for which the number of the matching feature pairs is greater than a first preset threshold as a similar image of the current frame image.
9. A device, characterized by comprising: at least one processor, at least one memory, and computer program instructions stored in the memory, wherein the computer program instructions, when executed by the processor, implement the method according to any one of claims 1-7.
10. A computer-readable storage medium having computer program instructions stored thereon, characterized in that the computer program instructions, when executed by a processor, implement the method according to any one of claims 1-7.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201910011494.6A CN109740674B (en) | 2019-01-07 | 2019-01-07 | Image processing method, device, equipment and storage medium |
Publications (2)
Publication Number | Publication Date |
---|---|
CN109740674A true CN109740674A (en) | 2019-05-10 |
CN109740674B CN109740674B (en) | 2021-01-22 |
Family
ID=66363613
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201910011494.6A Active CN109740674B (en) | 2019-01-07 | 2019-01-07 | Image processing method, device, equipment and storage medium |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN109740674B (en) |
Cited By (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN111242230A (en) * | 2020-01-17 | 2020-06-05 | 腾讯科技(深圳)有限公司 | Image processing method and image classification model training method based on artificial intelligence |
CN111311758A (en) * | 2020-02-24 | 2020-06-19 | Oppo广东移动通信有限公司 | Augmented reality processing method and device, storage medium and electronic equipment |
CN111703656A (en) * | 2020-05-19 | 2020-09-25 | 河南中烟工业有限责任公司 | Method for correcting orientation of circulating smoke box skin |
CN112668632A (en) * | 2020-12-25 | 2021-04-16 | 浙江大华技术股份有限公司 | Data processing method and device, computer equipment and storage medium |
CN112668632B (en) * | 2020-12-25 | 2022-04-08 | 浙江大华技术股份有限公司 | Data processing method and device, computer equipment and storage medium |
Citations (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN103440348A (en) * | 2013-09-16 | 2013-12-11 | 重庆邮电大学 | Vector-quantization-based overall and local color image searching method |
CN104199923A (en) * | 2014-09-01 | 2014-12-10 | 中国科学院自动化研究所 | Massive image library retrieving method based on optimal K mean value Hash algorithm |
CN105426533A (en) * | 2015-12-17 | 2016-03-23 | 电子科技大学 | Image retrieving method integrating spatial constraint information |
CN108984642A (en) * | 2018-06-22 | 2018-12-11 | 西安工程大学 | A kind of PRINTED FABRIC image search method based on Hash coding |
Non-Patent Citations (1)
Title |
---|
Zhu Yubin: "Research on Image Retrieval Technology Based on SIFT", China Master's Theses Full-text Database * |
Also Published As
Publication number | Publication date |
---|---|
CN109740674B (en) | 2021-01-22 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
US10198623B2 (en) | Three-dimensional facial recognition method and system | |
Dvornik et al. | On the importance of visual context for data augmentation in scene understanding | |
Lee et al. | Simultaneous traffic sign detection and boundary estimation using convolutional neural network | |
CN111241989B (en) | Image recognition method and device and electronic equipment | |
CN109740674A (en) | A kind of image processing method, device, equipment and storage medium | |
CN105493078B (en) | Colored sketches picture search | |
CN110659582A (en) | Image conversion model training method, heterogeneous face recognition method, device and equipment | |
CN109960742B (en) | Local information searching method and device | |
Hernández-Vela et al. | BoVDW: Bag-of-Visual-and-Depth-Words for gesture recognition | |
CN103988232A (en) | Image matching by using motion manifolds | |
CN111680678A (en) | Target area identification method, device, equipment and readable storage medium | |
CN114758362B (en) | Clothing changing pedestrian re-identification method based on semantic perception attention and visual shielding | |
CN112633084A (en) | Face frame determination method and device, terminal equipment and storage medium | |
KR102434574B1 (en) | Method and apparatus for recognizing a subject existed in an image based on temporal movement or spatial movement of a feature point of the image | |
CN109726621B (en) | Pedestrian detection method, device and equipment | |
Chiu et al. | See the difference: Direct pre-image reconstruction and pose estimation by differentiating hog | |
Pierce et al. | Reducing annotation times: Semantic segmentation of coral reef survey images | |
CN103295026A (en) | Spatial local clustering description vector based image classification method | |
Schels et al. | Synthetically trained multi-view object class and viewpoint detection for advanced image retrieval | |
CN111753618A (en) | Image recognition method and device, computer equipment and computer readable storage medium | |
CN108229498B (en) | Zipper piece identification method, device and equipment | |
CN110135363A (en) | Based on differentiation dictionary insertion pedestrian image search method, system, equipment and medium | |
CN115439733A (en) | Image processing method, image processing device, terminal equipment and computer readable storage medium | |
CN115661444A (en) | Image processing method, device, equipment, storage medium and product | |
CN114882372A (en) | Target detection method and device |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||