CN112270372B - Method, device, computer equipment and medium for determining target object - Google Patents

Method, device, computer equipment and medium for determining target object

Info

Publication number
CN112270372B
Authority
CN
China
Prior art keywords
feature vector
vector
image
detected
determining
Prior art date
Legal status
Active
Application number
CN202011227367.9A
Other languages
Chinese (zh)
Other versions
CN112270372A (en)
Inventor
刘铁
王净
邵珠宏
尚媛园
丁辉
Current Assignee
Shenzhen Wanzhida Technology Co ltd
Original Assignee
Capital Normal University
Priority date
Filing date
Publication date
Application filed by Capital Normal University
Priority to CN202011227367.9A
Publication of CN112270372A
Application granted
Publication of CN112270372B
Status: Active

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 Pattern recognition
    • G06F18/20 Analysing
    • G06F18/22 Matching criteria, e.g. proximity measures
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 Arrangements for image or video recognition or understanding
    • G06V10/40 Extraction of image or video features
    • G06V10/46 Descriptors for shape, contour or point-related descriptors, e.g. scale invariant feature transform [SIFT] or bags of words [BoW]; Salient regional features
    • G06V10/462 Salient features, e.g. scale invariant feature transforms [SIFT]

Abstract

The application provides a method, a device, computer equipment and a medium for determining a target object. The method comprises the following steps: acquiring an image to be detected and a reference image; determining a first feature vector in the reference image and a second feature vector in the image to be detected by feature extraction; for each first feature vector in the reference image, determining a first candidate matching vector of the first feature vector in the image to be detected by calculating a first similarity between the first feature vector and each second feature vector in the image to be detected; for each first feature vector in the reference image, determining a target matching vector of the first feature vector in the image to be detected by using a second similarity and a correlation between the first feature vector and each of its first candidate matching vectors; and determining the target object in the image to be detected based on the position corresponding to the target matching vector of each first feature vector in the image to be detected.

Description

Method, device, computer equipment and medium for determining target object
Technical Field
The present application relates to the field of image matching, and in particular, to a method, apparatus, computer device, and storage medium for determining a target object.
Background
With the development of science and technology, image matching has gradually been applied in various fields, such as target recognition, image stitching, visual positioning, intelligent robots, medical treatment and the like. Image matching applies a computer and the corresponding mathematical theory to process a given image for a specific purpose.
A common image matching technique determines the target object by the Euclidean distance. A target object determined in this way, however, is easily affected by factors such as noise, illumination and scale, so the determined target object is inaccurate.
Disclosure of Invention
In view of the foregoing, the present application provides a method, apparatus, computer device and medium for determining a target object, which can improve on the accuracy of target object determination in the prior art.
In a first aspect, an embodiment of the present application provides a method for determining a target object, where the method includes:
acquiring an image to be detected and a reference image;
determining a first feature vector in the reference image and a second feature vector in the image to be detected by utilizing feature extraction; wherein the first feature vector is a feature vector corresponding to the target object;
For each first feature vector in the reference image, determining a first candidate matching vector of the first feature vector in the image to be detected by calculating a first similarity between the first feature vector and each second feature vector in the image to be detected;
for each first feature vector in the reference image, determining a target matching vector of the first feature vector in the image to be detected by utilizing a second similarity and a correlation between the first feature vector and each first candidate matching vector of the first feature vector in the image to be detected;
and determining the target object in the image to be detected based on the position corresponding to the target matching vector of each first feature vector in the image to be detected.
Optionally, the first feature vector is calculated by the following steps:
determining a first position of a first sampling point in the reference image by using the position and the scale of each pixel point in the reference image;
determining a first direction of the first sampling point based on the gradient histogram corresponding to the first sampling point;
and calculating the first feature vector corresponding to the first sampling point according to the first position and the first direction corresponding to the first sampling point.
Optionally, the second feature vector is calculated by the following steps:
determining a second position of a second sampling point in the image to be detected by using the position and the scale of each pixel point in the image to be detected;
determining a second direction of the second sampling point based on the gradient histogram corresponding to the second sampling point;
and calculating the second feature vector corresponding to the second sampling point according to the second position and the second direction corresponding to the second sampling point.
Optionally, the determining, for each first feature vector in the reference image, a target matching vector of the first feature vector in the image to be detected by using a second similarity and a correlation between the first feature vector and each first candidate matching vector of the first feature vector in the image to be detected, includes:
determining the second similarity between the first feature vector and each first candidate matching vector in the image to be detected based on the calculated Euclidean distance between the first feature vector and each first candidate matching vector;
determining the correlation between the first feature vector and each first candidate matching vector in the image to be detected based on the calculated correlation parameter between the first feature vector and each first candidate matching vector; and
determining, from the first candidate matching vectors, a target matching vector that best matches the first feature vector according to the second similarity and the correlation parameter corresponding to each first candidate matching vector in the image to be detected.
Optionally, the determining, for each first feature vector in the reference image, a first candidate matching vector of the first feature vector in the image to be detected by calculating a first similarity between the first feature vector and each second feature vector in the image to be detected includes:
for each first feature vector in the reference image, determining a second candidate matching vector of the first feature vector in the image to be detected by calculating a first similarity between the first feature vector and each second feature vector in the image to be detected;
for each second candidate matching vector, determining a third candidate matching vector of the second candidate matching vector in the reference image by calculating a third similarity between the second candidate matching vector and each first feature vector in the reference image;
and determining a second candidate feature vector corresponding to the third candidate matching vector containing the first feature vector as the first candidate matching vector for each first feature vector in the reference image.
In a second aspect, an embodiment of the present application provides an apparatus for determining a target object, where the apparatus includes:
an acquisition module: for acquiring an image to be detected and a reference image;
an extraction module: for determining a first feature vector in the reference image and a second feature vector in the image to be detected by feature extraction, where the first feature vector is a feature vector corresponding to the target object;
a matching module: for determining, for each first feature vector in the reference image, a first candidate matching vector of the first feature vector in the image to be detected by calculating a first similarity between the first feature vector and each second feature vector in the image to be detected;
a target matching vector determining module: for determining, for each first feature vector in the reference image, a target matching vector of the first feature vector in the image to be detected by using a second similarity and a correlation between the first feature vector and each first candidate matching vector of the first feature vector in the image to be detected; and
a target object determining module: for determining the target object in the image to be detected based on the position corresponding to the target matching vector of each first feature vector in the image to be detected.
Optionally, the target matching vector determining module includes:
a first calculation unit: for determining the second similarity between the first feature vector and each first candidate matching vector in the image to be detected based on the calculated Euclidean distance between the first feature vector and each first candidate matching vector;
a second calculation unit: for determining the correlation between the first feature vector and each first candidate matching vector in the image to be detected based on the calculated correlation parameter between the first feature vector and each first candidate matching vector; and
a first determination unit: for determining, from the first candidate matching vectors, the target matching vector that best matches the first feature vector according to the second similarity and the correlation parameter corresponding to each first candidate matching vector in the image to be detected.
Optionally, the matching module includes:
a second determination unit: for determining, for each first feature vector in the reference image, a second candidate matching vector of the first feature vector in the image to be detected by calculating a first similarity between the first feature vector and each second feature vector in the image to be detected;
a third determination unit: for determining, for each second candidate matching vector, a third candidate matching vector of the second candidate matching vector in the reference image by calculating a third similarity between the second candidate matching vector and each first feature vector in the reference image; and
a fourth determination unit: for determining, for each first feature vector in the reference image, the second candidate matching vector whose third candidate matching vectors contain the first feature vector as the first candidate matching vector.
In a third aspect, an embodiment of the present application provides an electronic device, including a processor and a storage medium storing machine-readable instructions executable by the processor; when the electronic device runs, the processor communicates with the storage medium via a bus, and the processor executes the machine-readable instructions to perform the steps of the method described above.
In a fourth aspect, embodiments of the present application provide a computer-readable storage medium having stored thereon a computer program which, when executed by a processor, performs the steps of the method described above.
In the method for determining the target object provided by the embodiment of the application, when determining the target matching vector of each first feature vector of the reference image in the image to be detected, the target matching vectors are screened by a method combining the Euclidean distance with a correlation estimate. The similarity relation of each pair of feature vectors is thereby analysed more deeply, so the determined target object is more accurate, and the inaccuracy caused by factors such as noise, illumination and scale in the process of determining the target object is effectively prevented.
In order to make the above objects, features and advantages of the present application more comprehensible, preferred embodiments accompanied with figures are described in detail below.
Drawings
In order to more clearly illustrate the technical solutions of the embodiments of the present application, the drawings needed in the embodiments are briefly described below. It should be understood that the following drawings illustrate only some embodiments of the present application and therefore should not be considered as limiting the scope; other related drawings may be obtained from these drawings by a person skilled in the art without inventive effort.
FIG. 1 is a flowchart of a method for determining a target object according to an embodiment of the present application;
FIG. 2 is a flowchart illustrating a method for determining a first feature vector according to an embodiment of the present application;
fig. 3 is a schematic structural diagram of an apparatus for determining a target object according to an embodiment of the present application;
fig. 4 is a schematic structural diagram of a computer device 400 according to an embodiment of the present application;
fig. 5 is a gradient histogram of a first sampling point according to an embodiment of the present application.
Detailed Description
For the purpose of making the objects, technical solutions and advantages of the embodiments of the present application more apparent, the technical solutions of the embodiments of the present application will be clearly and completely described below with reference to the accompanying drawings in the embodiments of the present application, and it is apparent that the described embodiments are only some embodiments of the present application, not all embodiments. The components of the embodiments of the present application generally described and illustrated in the figures herein may be arranged and designed in a wide variety of different configurations. Thus, the following detailed description of the embodiments of the application, as presented in the figures, is not intended to limit the scope of the application, as claimed, but is merely representative of selected embodiments of the application. All other embodiments, which can be made by a person skilled in the art without making any inventive effort, are intended to be within the scope of the present application.
It will be apparent to those having ordinary skill in the art that the general principles defined herein may be applied to other embodiments and applications without departing from the spirit and scope of the application. Although the application is described primarily in the context of determining the position of a tracked object, it should be understood that this is only one exemplary embodiment.
It should be noted that the term "comprising" will be used in embodiments of the application to indicate the presence of the features stated hereafter, but not to exclude the addition of other features.
In the prior art, when determining a target object, the similarity of two feature vectors is often measured only by computing their Euclidean distance, and features at a similar distance are taken to be the same feature. However, under the influence of illumination changes, viewing-angle changes, changes in the target's form, image noise and the like, the appearance of the image to be detected may change to different degrees, which affects the result of the determination. Therefore, measuring feature similarity by the Euclidean distance alone makes the determination of the target object inaccurate.
Thus, in order to solve the problem of inaccuracy in determining a target object caused by measuring similarity of feature vectors based on only euclidean distance, an embodiment of the present application provides a method for determining a target object, as shown in fig. 1, including the following steps:
S101, acquiring an image to be detected and a reference image.
In the above step, the image to be detected may be a single image or a sequence of consecutive images, where a sequence of consecutive images may be acquired from a video. The video can be obtained by means of a vehicle-mounted camera mounted on a vehicle, a camera mounted on an intelligent robot, and the like. The reference image refers to an image containing the target object.
In a specific implementation, when a user has a requirement for determining a target object, that is, the server receives a request for determining the target object submitted by the user, the server acquires an image to be detected and a reference image, wherein the reference image and the image to be detected can be acquired from the same image sequence or from different image sequences.
S102, determining a first feature vector in the reference image and a second feature vector in the image to be detected by utilizing feature extraction; the first feature vector is a feature vector corresponding to the target object.
In the above step, the first feature vector is a feature vector corresponding to the target object, that is, the information corresponding to the feature points that the target object exhibits in the reference image; there is at least one first feature vector. The second feature vector is the information corresponding to the feature points of the objects contained in the image to be detected; there is at least one second feature vector. The target object refers to an object contained in the reference image.
In specific implementation, a first feature vector of a feature point corresponding to a target object is determined in a reference image through a feature extraction algorithm, and a second feature vector of a feature point corresponding to an object existing in an image to be detected is determined in the image to be detected through a feature extraction algorithm. The feature extraction algorithm may be SIFT (Scale Invariant Feature Transform, scale-invariant feature transform algorithm), among others.
S103, for each first feature vector in the reference image, determining a first candidate matching vector of the first feature vector in the image to be detected by calculating a first similarity between the first feature vector and each second feature vector in the image to be detected.
In the above step, the first similarity refers to a similarity between the first feature vector and each second feature vector in the image to be detected, and the similarity measurement may be implemented by using a euclidean distance measurement method. The first candidate matching vector is a second feature vector having a higher similarity with the first feature vector among the second feature vectors. The first candidate matching vector may comprise at least one second feature vector.
In a specific implementation, the reference image includes a plurality of first feature vectors, for each first feature vector, a first similarity between the first feature vector and each second feature vector in the image to be detected is calculated, then all second feature vectors are screened by using a preset similarity threshold value, second feature vectors with the first similarity exceeding the preset similarity threshold value are screened, and the screened second feature vectors can be determined as first candidate matching vectors corresponding to the first feature vectors.
For example: the image to be detected comprises 4 second feature vectors: a second feature vector A, a second feature vector B, a second feature vector C and a second feature vector D, and the reference image comprises a first feature vector E. The first similarity between A and E is 0.1, between B and E is 0.2, between C and E is 0.7, and between D and E is 0.8. With a preset similarity threshold of 0.5, the second feature vectors whose first similarity exceeds the threshold are the second feature vector C and the second feature vector D, so these two are determined to be the first candidate matching vectors.
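The screening in step S103 can be sketched as follows in Python. This is an illustrative sketch only: the embodiment names the Euclidean distance as the metric but gives no concrete distance-to-similarity mapping, so the `1 / (1 + d)` mapping and the threshold value here are assumptions:

```python
import numpy as np

def candidate_matches(first_vec, second_vecs, threshold=0.5):
    """Screen the second feature vectors of the image to be detected:
    keep the indices of those whose first similarity to first_vec
    exceeds the preset similarity threshold. The similarity is derived
    here from the Euclidean distance as 1 / (1 + d), an assumed mapping."""
    dists = np.linalg.norm(second_vecs - first_vec, axis=1)
    sims = 1.0 / (1.0 + dists)
    return [i for i, s in enumerate(sims) if s > threshold]
```

Applied to the example above, only the second feature vectors whose similarity exceeds 0.5 (C and D) would survive as first candidate matching vectors.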
S104, for each first feature vector in the reference image, determining a target matching vector of the first feature vector in the image to be detected by using the second similarity and the correlation between the first feature vector and each first candidate matching vector of the first feature vector in the image to be detected.
In the above step, the second similarity refers to the similarity between the first feature vector and each first candidate matching vector in the image to be detected, and may be measured with the Euclidean distance. The correlation refers to the degree of relation between the first feature vector and a first candidate matching vector. The target matching vector is determined according to the second similarity and the correlation corresponding to each first candidate matching vector.
In a specific implementation, the reference image includes a plurality of first feature vectors, for each first feature vector, a second similarity and a correlation between the first feature vector and each first candidate matching vector corresponding to the first feature vector are calculated, a similarity measurement value corresponding to the first candidate matching vector is calculated by using the second similarity and the correlation corresponding to the first candidate matching vector, and then a first candidate matching vector with the largest similarity measurement value in the first candidate matching vector is determined as a target matching vector.
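The selection in step S104 could look as follows. This excerpt does not fix the correlation parameter or the fusion into a similarity measurement value, so the Pearson correlation and the simple product used below are assumptions:

```python
import numpy as np

def best_match(first_vec, candidates):
    """For one first feature vector, combine the second similarity
    (from the Euclidean distance) with a correlation term and return
    the index of the first candidate matching vector whose combined
    similarity measurement value is the largest."""
    best_i, best_score = -1, -np.inf
    for i, cand in enumerate(candidates):
        sim = 1.0 / (1.0 + np.linalg.norm(first_vec - cand))
        corr = np.corrcoef(first_vec, cand)[0, 1]  # assumed correlation parameter
        score = sim * corr                         # assumed fusion rule
        if score > best_score:
            best_i, best_score = i, score
    return best_i
```

A candidate at a small Euclidean distance but with a negative correlation (e.g. a reversed gradient pattern) is thereby penalised, which is the point of the secondary screening.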
S105, determining the target object in the image to be detected based on the position corresponding to the target matching vector of each first feature vector in the image to be detected.
In the above step, the target matching vector is determined by each first feature vector, so that positions corresponding to a plurality of target matching vectors can be obtained in the image to be detected, the feature region corresponding to the object contained in the reference image can be found in the image to be detected by the positions corresponding to the plurality of target matching vectors, and the target object is determined in the image to be detected by the feature region.
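Step S105 can be sketched as follows; bounding the feature region by the axis-aligned box around the matched positions is an assumption, since the embodiment only states that the region is found from the positions corresponding to the target matching vectors:

```python
import numpy as np

def target_region(matched_positions):
    """Locate the feature region of the target object in the image to
    be detected from the (x, y) positions corresponding to the target
    matching vectors, as the axis-aligned box enclosing them."""
    pts = np.asarray(matched_positions)
    (x0, y0), (x1, y1) = pts.min(axis=0), pts.max(axis=0)
    return int(x0), int(y0), int(x1), int(y1)
```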
Through the above five steps, in the process of determining the target object, the first candidate matching vectors are determined in the image to be detected through the first similarity; the target matching vector is then further selected from the first candidate matching vectors by using the second similarity and the correlation between the first feature vector and each first candidate matching vector; and the target object is determined in the image to be detected through the target matching vectors. The secondary screening by the second similarity and the correlation reduces the error rate of determining the target matching vector, and using the correlation reduces the influence of factors such as illumination and environment in the image, improving the accuracy of determining the target object.
The feature points corresponding to the target object in the reference image can be represented by the first feature vector, and the more accurate the first feature vector is calculated, the more accurate the first feature vector can represent the target object in the reference image, so the application provides a method for calculating the first feature vector, as shown in fig. 2, the first feature vector is calculated by the following steps:
s201, determining a first position of a first sampling point in the reference image by using the position and the scale of each pixel point in the reference image.
In the above step, the first position is the position of the first sampling point in the reference image. The first sampling point is a pixel whose gray value changes drastically after the reference image is smoothed. The extraction of the first sampling point can be divided into two steps: the first step is to convolve the reference image with a variable-scale Gaussian function to construct a scale space; the second step is to detect stable feature points in the scale space as the first sampling points.
In a specific implementation, the first sampling point can be determined by a feature extraction algorithm, which can be SIFT (Scale Invariant Feature Transform). For example, the steps of determining the first sampling point using the SIFT algorithm may be as follows:
Step 2011, constructing a scale space by convolving the reference image with a variable-scale gaussian function:
L(x,y,σ)=G(x,y,σ)*I(x,y);
where L(x, y, σ) is the scale-space value of the (x, y) pixel in the reference image at the σ scale, G(x, y, σ) is the Gaussian function at the σ scale, I represents the reference image, (x, y) represents a pixel on the reference image I, * represents convolution, and σ is the spatial scale factor.
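Step 2011 can be sketched in Python, with `scipy.ndimage.gaussian_filter` standing in for the explicit convolution with G(x, y, σ):

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def scale_space(image, sigmas):
    """L(x, y, sigma) = G(x, y, sigma) * I(x, y): convolve the image
    with Gaussians of increasing scale, one blurred layer per sigma."""
    img = np.asarray(image, dtype=float)
    return [gaussian_filter(img, s) for s in sigmas]
```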
In step 2012, stable feature points are detected within the scale space.
In the above step, in order to effectively detect stable feature points in the scale space, a Difference-of-Gaussian (DOG) function may be used to detect the feature points. The DOG function is an approximation of the normalized Laplacian of Gaussian (LOG) and is defined as the difference of 2 adjacent Gaussian kernel functions of different scales:
D(x,y,σ)=(G(x,y,kσ)-G(x,y,σ))*I(x,y)=L(x,y,kσ)-L(x,y,σ);
where I represents the reference image, (x, y) represents a pixel point on the reference image I, * represents convolution, and σ is the spatial scale factor; D(x, y, σ) is the value at the (x, y) pixel point, at the σ scale, of the differential image constructed from the reference image with the DOG function; G(x, y, kσ) and G(x, y, σ) are the Gaussian functions at the kσ and σ scales; L(x, y, kσ) and L(x, y, σ) are the scale-space values of the (x, y) pixel point in the reference image at the kσ and σ scales; and k is the ratio between adjacent scales. A feature point at a given scale is compared with its 26 corresponding points, namely the 8 adjacent points at the same scale and the 9 × 2 points at the two adjacent scales above and below, to ensure that extremum points are detected in the scale space. If the feature point is a maximum or minimum point, it is taken as a first sampling point at that scale.
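Step 2012 can be sketched as follows; the base scale σ = 1.6, the ratio k = √2, the number of layers and the response threshold are assumptions not fixed by this excerpt:

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def dog_extrema(image, sigma=1.6, k=2 ** 0.5, n=4, thresh=0.01):
    """Build D(x, y, sigma) = L(x, y, k*sigma) - L(x, y, sigma) and keep
    points that are extrema over their 26 neighbours: 8 in the same
    layer plus 9 in each of the two adjacent layers."""
    img = np.asarray(image, dtype=float)
    L = [gaussian_filter(img, sigma * k ** i) for i in range(n)]
    D = [L[i + 1] - L[i] for i in range(n - 1)]
    points = []
    for i in range(1, len(D) - 1):            # need a layer above and below
        for y in range(1, img.shape[0] - 1):
            for x in range(1, img.shape[1] - 1):
                cube = np.stack([d[y - 1:y + 2, x - 1:x + 2] for d in D[i - 1:i + 2]])
                v = D[i][y, x]
                if abs(v) > thresh and (v >= cube.max() or v <= cube.min()):
                    points.append((x, y, i))  # sampling point at layer i
    return points
```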
S202, determining a first direction of the first sampling point based on the gradient histogram corresponding to the first sampling point.
In this step, the first direction is obtained by using the gradient histogram to count the gradient directions of the pixels in the neighborhood window of the first sampling point; the direction with the highest amplitude is determined as the first direction. The neighborhood window of the first sampling point is obtained by taking the first sampling point as the coordinate center and scanning a selected area range; for example, a 16 x 16 window range may be selected for scanning to obtain the neighborhood window.
In a specific implementation, the direction invariance of the first feature vector can be ensured based on the first direction of the first sampling point, and the construction process of the first direction of the first sampling point can be divided into two steps, for example:
step 2021: calculating gradient directions and magnitudes of pixel points on a reference image, wherein:
θ(x,y)=arctan[(L(x,y+1,σ)-L(x,y-1,σ))/(L(x+1,y,σ)-L(x-1,y,σ))];
M(x,y)=[(L(x+1,y,σ)-L(x-1,y,σ))^2+(L(x,y+1,σ)-L(x,y-1,σ))^2]^(1/2);
wherein (x, y) is the coordinates of the pixel point on the reference image, sigma is the spatial scale factor, θ (x, y) is the gradient direction of the pixel point on the reference image, and M (x, y) is the amplitude of the pixel point on the reference image; l (x+1, y, sigma) is the scale space of the pixel point with the position of (x+1, y) in the reference image under the sigma scale; l (x-1, y, sigma) is the scale space of the pixel point with the position (x-1, y) in the reference image under the sigma scale; l (x, y+1, σ) is a scale space of the pixel point with the position (x, y+1) in the reference image at the σ scale; l (x, y-1, sigma) is the scale space of the pixel point with the position (x, y-1) in the reference image under the sigma scale.
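The gradient formulas above, together with the histogram accumulation described in the following step, can be sketched as below. The 36-bin histogram resolution and the neighborhood radius are assumptions; the embodiment only requires taking the direction with the highest accumulated amplitude:

```python
import numpy as np

def dominant_orientation(L, y, x, radius=8, bins=36):
    """Accumulate gradient magnitudes M into an orientation histogram
    over the neighborhood window of the sampling point (x, y) in the
    smoothed image L, and return the direction (in degrees) of the bin
    with the highest accumulated amplitude."""
    h, w = L.shape
    hist = np.zeros(bins)
    for yy in range(max(1, y - radius), min(h - 1, y + radius + 1)):
        for xx in range(max(1, x - radius), min(w - 1, x + radius + 1)):
            dx = L[yy, xx + 1] - L[yy, xx - 1]  # finite differences, as in
            dy = L[yy + 1, xx] - L[yy - 1, xx]  # the formulas above
            M = np.hypot(dx, dy)
            theta = np.degrees(np.arctan2(dy, dx)) % 360
            hist[int(theta // (360 / bins)) % bins] += M
    return np.argmax(hist) * (360.0 / bins)
```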
Step 2022: based on the gradient direction and the amplitude of each pixel point in the reference image determined in the steps, determining the first direction of the first sampling point by using the gradient histogram, and taking the gradient direction corresponding to the highest amplitude accumulated value in the gradient histogram as the first direction of the first sampling point as shown in fig. 5.
S203, calculating the first feature vector corresponding to the first sampling point according to the first position and the first direction corresponding to the first sampling point.
In the above step, the first feature vector represents the feature information corresponding to the first sampling point. Its construction can be divided into four steps, for example: first, rotate the coordinate axes into the first direction of the first sampling point, ensuring direction invariance; second, scan a window area of a certain range to obtain a target area; third, divide the target area and count the gradient directions in each divided sub-area; fourth, collect the gradient-direction statistics of all sub-areas into a group of data, and take the obtained data as the first feature vector. For example, the first feature vector may be determined as follows:
step 2031: the coordinate axis is first rotated to a first direction of the first sampling point.
Step 2032: then, taking the first sampling point as the center, selecting a window area of 16×16 to scan the first sampling point to obtain a target area.
Step 2033: the target area is divided into 16 sub-areas in a 4 x 4 manner, and a gradient cumulative histogram of these 8 directions is calculated on each sub-area. Wherein the 8 directions are gradient histograms simplifying the direction range of 0-360 degrees into 8 gradient directions.
Step 2034: Calculating the 8 gradient directions for each of the 16 sub-regions yields 16 × 8 = 128 data, and the obtained 128 data are used as the first feature vector.
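Steps 2032–2034 can be sketched as below. This is an illustrative reading, not the patented code: the rotation of the coordinate axes to the first direction (step 2031) is assumed to have been applied to `theta` already, and `mag`/`theta` are per-pixel gradient magnitude and direction arrays.

```python
import numpy as np

def sift_like_descriptor(mag, theta, cx, cy):
    """Take a 16x16 window around sampling point (cx, cy), split it 4x4 into
    16 sub-regions, build an 8-bin cumulative gradient histogram per
    sub-region, and concatenate into a 128-dimensional feature vector."""
    win_m = mag[cy - 8:cy + 8, cx - 8:cx + 8]
    win_t = theta[cy - 8:cy + 8, cx - 8:cx + 8]
    desc = []
    for by in range(4):
        for bx in range(4):
            sm = win_m[by * 4:(by + 1) * 4, bx * 4:(bx + 1) * 4]
            st = win_t[by * 4:(by + 1) * 4, bx * 4:(bx + 1) * 4]
            # 0-360 degree range simplified into 8 gradient direction bins
            bins = ((st + np.pi) / (2 * np.pi) * 8).astype(int) % 8
            desc.append(np.bincount(bins.ravel(), weights=sm.ravel(),
                                    minlength=8))
    return np.concatenate(desc)   # 16 sub-regions x 8 bins = 128 values
```

Because every window pixel's magnitude falls into exactly one bin, the descriptor entries sum to the total magnitude of the 16×16 window.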
Optionally, the second feature vector may be calculated by:
and determining a second position of a second sampling point in the image to be detected by using the position and the scale of each pixel point in the image to be detected.
In the above step, the second position is the position of the second sampling point in the image to be detected. The second sampling point is a pixel point whose gray value changes drastically after the image to be detected is smoothed. Its extraction can be divided into two steps: the first step is to convolve the image to be detected with a variable-scale Gaussian function to construct a scale space (for the detailed calculation, refer to step 2011 above); the second step is to detect stable feature points in the scale space (refer to step 2012 above) and take them as second sampling points.
And determining a second direction of the second sampling point based on the gradient histogram corresponding to the second sampling point.
In this step, the second direction is determined by using a gradient histogram to count the gradient directions of the pixels in the neighborhood window of the second sampling point, and taking the gradient direction corresponding to the highest accumulated magnitude as the second direction. The neighborhood window of the second sampling point is the pixel area obtained by taking the second sampling point as the coordinate center and scanning a selected range; for example, a 16×16 range may be scanned to obtain the neighborhood window. The determination of the second direction can be divided into two steps: the first step calculates the gradient direction and magnitude at the second sampling point (for the detailed calculation, refer to step 2021 above); the second step, based on these gradient directions and magnitudes, counts the gradient directions of the pixels in the neighborhood window into a gradient histogram and takes the gradient direction with the highest accumulated magnitude as the second direction of the second sampling point (refer to step 2022 above). The gradient histogram takes the gradient direction as its horizontal axis and the accumulated magnitude as its vertical axis.
And calculating the second feature vector corresponding to the second sampling point according to the second position and the second direction corresponding to the second sampling point.
In the above step, the second feature vector is a representation of the feature information corresponding to the second sampling point. Its construction can be divided into four steps, for example: first, rotate the coordinate axes into the second direction of the second sampling point, ensuring direction invariance; second, scan a window area of a certain range to obtain a target area; third, divide the target area and count the gradient directions in each divided sub-area; fourth, collect the gradient-direction statistics of all sub-areas to obtain the second feature vector.
In order to determine the target object, similarity matching must be performed between each first feature vector in the reference image and the second feature vectors of the image to be detected, so that the target matching vector which best matches the first feature vector is screened out, and the target object is then determined from the position of the target matching vector in the image to be detected. Accordingly, the present application provides a method for determining a target matching vector, comprising the following steps:
And determining the second similarity between the first feature vector and each first candidate matching vector in the image to be detected based on the calculated first feature vector and the Euclidean distance between the first feature vector and each first candidate matching vector in the image to be detected.
In the above step, the second similarity refers to the similarity between the first feature vector and each first candidate matching vector in the image to be detected, and may be measured using a Euclidean distance measurement method.
In a specific implementation, the reference image includes a plurality of first feature vectors, and for each first feature vector, a second similarity between the first feature vector and each first candidate matching vector corresponding to the first feature vector is calculated to obtain a second similarity measurement value.
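The Euclidean-distance second similarity described above can be sketched as follows; the function name is illustrative only:

```python
import numpy as np

def second_similarities(A, candidates):
    """Euclidean distance rho(A, B_s) between a first feature vector A and
    every first candidate matching vector B_s in the image to be detected.
    A smaller distance indicates a higher second similarity."""
    A = np.asarray(A, float)
    return [float(np.linalg.norm(A - np.asarray(B, float)))
            for B in candidates]
```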
And determining the correlation degree between the first feature vector and each first candidate matching vector in the image to be detected based on the calculated first feature vector and the correlation parameter between the first feature vector and each first candidate matching vector in the image to be detected.
In the above step, the degree of correlation refers to the strength of the relationship between the first feature vector and each first candidate matching vector. The degree of correlation is determined by obtaining a correlation parameter between the first feature vector and each first candidate matching vector in the image to be detected, wherein the correlation parameter is:
where F(A, B) is the correlation between the two feature vectors; A and B represent a feature vector in the reference image and in the image to be detected, respectively, with A = [a(1), a(2), ..., a(N)]^T and B = [b(1), b(2), ..., b(N)]^T; N represents the maximum dimension of the feature vectors; λ represents the risk-sensitivity parameter, set here to λ = 1; and κ_σ(·) represents a shift-invariant Gaussian kernel with bandwidth σ, κ_σ(x) = exp(−x² / (2σ²)).
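A minimal sketch of a correlation measure built from these ingredients might look as follows. Note this is only one plausible, correntropy-style reading of the definitions above (shift-invariant Gaussian kernel applied component-wise, risk sensitivity λ = 1); the exact combination used by the patent is an assumption here.

```python
import numpy as np

def gaussian_kernel(e, sigma=1.0):
    """Shift-invariant Gaussian kernel kappa_sigma(e) = exp(-e^2 / (2 sigma^2))."""
    return np.exp(-np.square(e) / (2.0 * sigma ** 2))

def correlation_parameter(A, B, sigma=1.0, lam=1.0):
    """Hypothetical correlation F(A, B) between two N-dimensional feature
    vectors: a risk-sensitive average of the component-wise kernel
    similarities.  Maximal when A == B; lam = 1 matches the stated setting."""
    A, B = np.asarray(A, float), np.asarray(B, float)
    return float(np.mean(np.exp(lam * gaussian_kernel(A - B, sigma))))
```

Under this reading, identical vectors achieve the highest correlation, and the value decays smoothly as components drift apart.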
and determining a target matching vector which is most matched with the first feature vector from the first candidate matching vectors according to the second similarity and the correlation parameter corresponding to each first candidate matching vector in the image to be detected.
In the above steps, the reference image includes a plurality of first feature vectors. For each first feature vector, the second similarity and the correlation between the first feature vector and each of its first candidate matching vectors are calculated, a similarity metric value for each first candidate matching vector is calculated from that second similarity and correlation, and the first candidate matching vector with the largest similarity metric value among the first candidate matching vectors is determined as the target matching vector.
In a specific implementation, for each first candidate matching vector of the first feature vector, the second similarity and the correlation are calculated and combined into a similarity metric value; by comparing the similarity metric values of the first candidate matching vectors, the one with the largest value is retained as the target matching vector. The determination of the target matching vector can be divided into three steps, for example:
the first step is to calculate a second similarity between each first candidate matching vector and the first characteristic vector in the image to be detected: ρ (A, B) s ). Wherein A represents a first feature vector, B s Representing a set of each first candidate matching vector. The number of first candidate matching vectors in the set may be denoted S e {1, 2.
The second step is to calculate the degree of correlation between the first feature vector and each first candidate matching vector in the image to be detected: F(A, B_s), where A represents the first feature vector and B_s the s-th first candidate matching vector, s ∈ {1, 2, ..., S}, with S the number of first candidate matching vectors.
The third step is to combine the calculated second similarity and correlation to obtain a similarity metric value for each first candidate matching vector: Q(A, B_s) = 1/ρ(A, B_s) + F(A, B_s), where Q(A, B_s) is the similarity metric value between the s-th first candidate matching vector and the first feature vector; ρ(A, B_s) is the Euclidean distance between them; F(A, B_s) is the correlation between them; A represents the first feature vector and B_s the s-th first candidate matching vector, s ∈ {1, 2, ..., S}, with S the number of first candidate matching vectors. The similarity metric value of each first candidate matching vector in the image to be detected is calculated, and the first candidate matching vector with the largest similarity metric value is retained as the target matching vector. For example, suppose the image to be detected contains two first candidate matching vectors, X and Y, and the reference image contains a first feature vector Z. If the similarity metric value between X and Z is 0.8 and the similarity metric value between Y and Z is 0.9, then by comparing the two values, Y is finally determined to be the target matching vector.
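The three-step combination above can be sketched as follows. The correlation measure F is passed in as a callable, since its parameters (σ, λ) are configurable; the zero-distance guard is an assumption for the degenerate case of identical vectors.

```python
import numpy as np

def target_matching_vector(A, candidates, F):
    """Select the target matching vector:
    Q(A, B_s) = 1 / rho(A, B_s) + F(A, B_s), with rho the Euclidean
    distance; the candidate with the largest Q is retained."""
    A = np.asarray(A, float)
    best_s, best_q = -1, -np.inf
    for s, B in enumerate(candidates):
        B = np.asarray(B, float)
        rho = np.linalg.norm(A - B)
        q = (1.0 / rho if rho > 0 else np.inf) + F(A, B)
        if q > best_q:
            best_s, best_q = s, q
    return best_s, best_q
```

With a constant F, the candidate closest to A in Euclidean distance wins, matching the intuition that 1/ρ rewards small distances.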
In the process of determining the target object, it is important to determine a target matching vector matched with the first feature vector in the image to be detected, wherein the target matching vector is screened from the first candidate matching vectors, and the accuracy of determining the first candidate matching vector directly influences the accuracy of the target matching vector. Accordingly, the present application provides a method of determining a first candidate matching vector, comprising the steps of:
and for each first feature vector in the reference image, determining a second candidate matching vector of the first feature vector in the image to be detected by calculating a first similarity between the first feature vector and each second feature vector in the image to be detected.
In the above step, the second candidate matching vector is a second feature vector having a higher first similarity with the first feature vector, and the similarity may be measured using a Euclidean distance measurement method.
In a specific implementation, the reference image includes a plurality of first feature vectors. For each first feature vector, the first similarity between it and each second feature vector in the image to be detected is calculated; all second feature vectors are then screened with a preset similarity threshold, and the second feature vectors whose first similarity exceeds the threshold are determined as the second candidate matching vectors corresponding to the first feature vector.
And for each second candidate matching vector, determining a third candidate matching vector of the second candidate matching vector in the reference image by calculating a third similarity between the first feature vector and each second candidate matching vector in the image to be detected.
In the above step, the third similarity refers to a similarity between the first feature vector and the second candidate matching vector. The similarity measurement method can adopt Euclidean distance measurement method. The third candidate matching vector is a second candidate matching vector having a higher third similarity to the first feature vector.
In a specific implementation, based on the second candidate matching vectors screened out above, the third similarity between each second candidate matching vector and the first feature vector is calculated; the second candidate matching vectors are then screened with a preset similarity threshold, and those whose third similarity exceeds the threshold are determined as third candidate matching vectors.
And determining a second candidate feature vector corresponding to the third candidate matching vector containing the first feature vector as the first candidate matching vector for each first feature vector in the reference image.
In this step, the first candidate matching vector is obtained through two rounds of screening. The first round starts from the first feature vector and determines second candidate matching vectors in the image to be detected by filtering with the first similarity; the second round starts from those second candidate matching vectors and determines third candidate matching vectors in the reference image by filtering with the third similarity. Finally, the second candidate matching vectors corresponding to the third candidate matching vectors are taken as the first candidate matching vectors. Screening the first candidate matching vector twice effectively improves its degree of matching with the first feature vector, so that the target matching vector can be determined more accurately.
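The two rounds of screening above might be sketched as follows. This is a hedged illustration: the concrete similarity function (1 / (1 + Euclidean distance)) and the best-match-back rule in the second round are assumptions, since the text specifies only Euclidean-distance screening against a preset threshold.

```python
import numpy as np

def first_candidate_matches(firsts, seconds, threshold=0.2):
    """Round 1: for each first feature vector, keep second feature vectors
    whose similarity exceeds `threshold` (second candidate matching vectors).
    Round 2: keep a candidate only if the first feature vector is also its
    best match back in the reference image (third candidate matching
    vectors).  Survivors become the first candidate matching vectors."""
    F = [np.asarray(v, float) for v in firsts]
    S = [np.asarray(v, float) for v in seconds]
    sim = lambda a, b: 1.0 / (1.0 + np.linalg.norm(a - b))
    result = {}
    for i, A in enumerate(F):
        cands = [j for j, B in enumerate(S) if sim(A, B) > threshold]  # round 1
        kept = []
        for j in cands:                                                # round 2
            best_i = max(range(len(F)), key=lambda k: sim(F[k], S[j]))
            if best_i == i:
                kept.append(j)
        result[i] = kept
    return result
```

The cross-check in round 2 is what discards one-sided matches: a second feature vector that is near two first feature vectors survives only for the one it matches back to.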
The embodiment of the application provides a method for determining a target object in which, when measuring the similarity of two feature vectors, a measurement method combining Euclidean distance and correlation is adopted. This reduces the error rate in determining the target matching vector and effectively solves the problem of inaccurate target object determination caused by measuring feature-vector similarity with the Euclidean distance alone.
The embodiment of the application provides a device for determining a target object, as shown in fig. 3, which comprises:
The acquisition module 301: for acquiring an image to be detected and a reference image.
Extraction module 302: for determining a first feature vector in the reference image and a second feature vector in the image to be detected using feature extraction; the first feature vector is a feature vector corresponding to the target object.
Matching module 303: for each first feature vector in the reference image, determining a first candidate matching vector of the first feature vector in the image to be detected by calculating a first similarity between the first feature vector and each second feature vector in the image to be detected.
Target matching vector determination module 304: and the target matching vector of the first feature vector in the image to be detected is determined by utilizing the second similarity and the correlation between the first feature vector and each first candidate matching vector of the first feature vector in the image to be detected aiming at each first feature vector in the reference image.
The target object determination module 305: and the target object is determined in the image to be detected based on the position corresponding to the target matching vector of each first feature vector in the image to be detected.
Optionally, the extracting module 302 includes:
a first extraction unit: and determining a first position of a first sampling point in the reference image by using the position and the scale of each pixel point in the reference image.
A second extraction unit: and determining a first direction of the first sampling point based on the gradient histogram corresponding to the first sampling point.
A third extraction unit: and the first feature vector is used for calculating the first feature vector corresponding to the first sampling point according to the first position and the first direction corresponding to the first sampling point.
Optionally, the extracting module 302 further includes:
a fourth extraction unit: and determining a second position of a second sampling point in the image to be detected by using the position and the scale of each pixel point in the image to be detected.
A fifth extraction unit: and determining a second direction of the second sampling point based on the gradient histogram corresponding to the second sampling point.
A sixth extraction unit: and the second feature vector is used for calculating the second feature vector corresponding to the second sampling point according to the second position and the second direction corresponding to the second sampling point.
Optionally, the target matching vector determination module includes:
A first calculation unit: and the second similarity between the first feature vector and each first candidate matching vector in the image to be detected is determined based on the calculated first feature vector and Euclidean distance between the first feature vector and each first candidate matching vector in the image to be detected.
A second calculation unit: and the correlation degree between the first feature vector and each first candidate matching vector in the image to be detected is determined based on the calculated first feature vector and the correlation parameter between the first feature vector and each first candidate matching vector in the image to be detected.
A first determination unit: and determining a target matching vector which is most matched with the first feature vector from the first candidate matching vectors according to the second similarity and the correlation parameter corresponding to each first candidate matching vector in the image to be detected.
Optionally, the matching module includes:
a second determination unit: for each first feature vector in the reference image, determining a second candidate matching vector of the first feature vector in the image to be detected by calculating a first similarity between the first feature vector and each second feature vector in the image to be detected.
A third determination unit: for each second candidate matching vector, determining a third candidate matching vector of the second candidate matching vector in the reference image by calculating a third similarity between the first feature vector and each second candidate matching vector in the image to be detected.
A fourth determination unit: and the second candidate feature vector corresponding to the third candidate matching vector containing the first feature vector is determined as the first candidate matching vector for each first feature vector in the reference image.
Corresponding to a method for determining a target object in fig. 1, an embodiment of the present application further provides a computer device 400, as shown in fig. 4, where the device includes a memory 401, a processor 402, and a computer program stored in the memory 401 and capable of running on the processor 402, where the processor 402 implements the method for determining a target object when executing the computer program.
Specifically, the memory 401 and the processor 402 may be general-purpose memories and processors, which are not limited herein, and when the processor 402 runs a computer program stored in the memory 401, the method for determining a target object may be executed, so as to solve the problem in the prior art that the determining process for the target object is susceptible to inaccuracy caused by factors such as noise, illumination, scale, and the like.
Corresponding to a method of determining a target object in fig. 1, an embodiment of the present application further provides a computer readable storage medium having a computer program stored thereon, which when executed by a processor performs the steps of a method of determining a target object as described above.
Specifically, the storage medium can be a general storage medium, such as a removable disk, a hard disk and the like; when the computer program on the storage medium is run, the above method for determining the target object can be executed, thereby solving the problem of inaccuracy in determining the target object in the prior art.
In the embodiments provided in the present application, it should be understood that the disclosed method and apparatus may be implemented in other manners. The above-described apparatus embodiments are merely illustrative, for example, the division of the units is merely a logical function division, and there may be other manners of division in actual implementation, and for example, multiple units or components may be combined or integrated into another system, or some features may be omitted, or not performed. Alternatively, the coupling or direct coupling or communication connection shown or discussed with each other may be through some communication interface, device or unit indirect coupling or communication connection, which may be in electrical, mechanical or other form.
The units described as separate units may or may not be physically separate, and units shown as units may or may not be physical units, may be located in one place, or may be distributed on a plurality of network units. Some or all of the units may be selected according to actual needs to achieve the purpose of the solution of this embodiment.
In addition, each functional unit in the embodiments provided in the present application may be integrated in one processing unit, or each unit may exist alone physically, or two or more units may be integrated in one unit.
The functions, if implemented in the form of software functional units and sold or used as a stand-alone product, may be stored in a computer-readable storage medium. Based on this understanding, the technical solution of the present application may be embodied, essentially or in a part contributing to the prior art, in the form of a software product stored in a storage medium, comprising several instructions for causing a computer device (which may be a personal computer, a server, a network device, etc.) to perform all or part of the steps of the method according to the embodiments of the present application. The aforementioned storage medium includes: a USB flash disk, a removable hard disk, a Read-Only Memory (ROM), a Random Access Memory (RAM), a magnetic disk, an optical disk, or other various media capable of storing program codes.
It should be noted that: like reference numerals and letters in the following figures denote like items, and thus once an item is defined in one figure, no further definition or explanation of it is required in the following figures, and furthermore, the terms "first," "second," "third," etc. are used merely to distinguish one description from another and are not to be construed as indicating or implying relative importance.
Finally, it should be noted that: the above examples are only specific embodiments of the present application, and are not intended to limit the scope of the present application, but it should be understood by those skilled in the art that the present application is not limited thereto, and that the present application is described in detail with reference to the foregoing examples: any person skilled in the art may modify or easily conceive of the technical solution described in the foregoing embodiments, or perform equivalent substitution of some of the technical features, while remaining within the technical scope of the present disclosure; such modifications, changes or substitutions do not depart from the spirit and scope of the corresponding technical solutions. Are intended to be encompassed within the scope of the present application. Therefore, the protection scope of the present application shall be subject to the protection scope of the claims.

Claims (8)

1. A method of determining a target object, comprising:
acquiring an image to be detected and a reference image;
determining a first feature vector in the reference image and a second feature vector in the image to be detected by utilizing feature extraction; wherein the first feature vector is a feature vector corresponding to the target object;
for each first feature vector in the reference image, determining a first candidate matching vector of the first feature vector in the image to be detected by calculating a first similarity between the first feature vector and each second feature vector in the image to be detected;
determining a target matching vector of the first feature vector in the image to be detected by utilizing second similarity and correlation between the first feature vector and each first candidate matching vector of the first feature vector in the image to be detected aiming at each first feature vector in the reference image;
determining the target object in the image to be detected based on the position corresponding to the target matching vector of each first feature vector in the image to be detected;
The first feature vector is calculated by the following steps:
determining a first position of a first sampling point in the reference image by using the position and the scale of each pixel point in the reference image;
determining a first direction of the first sampling point based on the gradient histogram corresponding to the first sampling point;
calculating the first feature vector corresponding to the first sampling point according to the first position and the first direction corresponding to the first sampling point;
the second feature vector is calculated by the following steps:
determining a second position of a second sampling point in the image to be detected by using the position and the scale of each pixel point in the image to be detected;
determining a second direction of the second sampling point based on the gradient histogram corresponding to the second sampling point;
and calculating the second feature vector corresponding to the second sampling point according to the second position and the second direction corresponding to the second sampling point.
2. The method of claim 1, wherein the determining, for each first feature vector in the reference image, a target match vector for the first feature vector in the image to be detected using a second similarity and correlation between the first feature vector and each candidate match vector for the first feature vector in the image to be detected, comprises:
Determining the second similarity between the first feature vector and each first candidate matching vector in the image to be detected based on the calculated first feature vector and the Euclidean distance between the first feature vector and each first candidate matching vector in the image to be detected;
determining the correlation degree between the first feature vector and each first candidate matching vector in the image to be detected based on the calculated first feature vector and the correlation parameter between the first feature vector and each first candidate matching vector in the image to be detected;
and determining a target matching vector which is most matched with the first feature vector from the first candidate matching vectors according to the second similarity and the correlation parameter corresponding to each first candidate matching vector in the image to be detected.
3. The method of claim 1, wherein the determining, for each first feature vector in the reference image, a first candidate matching vector for the first feature vector in the image to be detected by calculating a first similarity between the first feature vector and each second feature vector in the image to be detected, comprises:
For each first feature vector in the reference image, determining a second candidate matching vector of the first feature vector in the image to be detected by calculating a first similarity between the first feature vector and each second feature vector in the image to be detected;
for each second candidate matching vector, determining a third candidate matching vector of the second candidate matching vector in the reference image by calculating a third similarity between the first feature vector and each second candidate matching vector in the image to be detected;
and determining a second candidate feature vector corresponding to the third candidate matching vector containing the first feature vector as the first candidate matching vector for each first feature vector in the reference image.
4. An apparatus for determining a target object, comprising:
an acquisition module, configured to acquire an image to be detected and a reference image;
an extraction module, configured to determine a first feature vector in the reference image and a second feature vector in the image to be detected by feature extraction, wherein the first feature vector is a feature vector corresponding to the target object;
a matching module, configured to, for each first feature vector in the reference image, determine a first candidate matching vector of the first feature vector in the image to be detected by calculating a first similarity between the first feature vector and each second feature vector in the image to be detected;
a target matching vector determining module, configured to, for each first feature vector in the reference image, determine a target matching vector of the first feature vector in the image to be detected by using a second similarity and a correlation between the first feature vector and each first candidate matching vector in the image to be detected;
and a target object determining module, configured to determine the target object in the image to be detected based on the position corresponding to the target matching vector of each first feature vector in the image to be detected;
wherein the extraction module comprises:
a first extraction unit, configured to determine a first position of a first sampling point in the reference image by using the position and the scale of each pixel point in the reference image;
a second extraction unit, configured to determine a first direction of the first sampling point based on a gradient histogram corresponding to the first sampling point;
a third extraction unit, configured to calculate the first feature vector corresponding to the first sampling point according to the first position and the first direction corresponding to the first sampling point;
and wherein the extraction module further comprises:
a fourth extraction unit, configured to determine a second position of a second sampling point in the image to be detected by using the position and the scale of each pixel point in the image to be detected;
a fifth extraction unit, configured to determine a second direction of the second sampling point based on a gradient histogram corresponding to the second sampling point;
a sixth extraction unit, configured to calculate the second feature vector corresponding to the second sampling point according to the second position and the second direction corresponding to the second sampling point.
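The extraction units describe a SIFT-style pipeline: locate sampling points over position and scale, assign each point a direction from a gradient histogram, then compute the descriptor relative to that direction. A minimal sketch of the direction-assignment step is shown below; the 36-bin histogram over 360° is a standard SIFT convention assumed here, since the claims do not fix a bin count:

```python
import numpy as np

def dominant_orientation(patch, num_bins=36):
    """Assign a direction to a sampling point from the gradient
    histogram of its surrounding patch.

    Each pixel votes into an orientation bin, weighted by its
    gradient magnitude; the peak bin gives the point's direction.
    Returns the left edge of the peak bin in degrees.
    """
    gy, gx = np.gradient(patch.astype(float))
    magnitude = np.hypot(gx, gy)
    angle = np.degrees(np.arctan2(gy, gx)) % 360.0
    hist, _ = np.histogram(angle, bins=num_bins, range=(0.0, 360.0),
                           weights=magnitude)
    return hist.argmax() * (360.0 / num_bins)
```

Computing the feature vector relative to this direction is what makes the descriptor rotation-invariant, so the same point can still be matched after the target object rotates between the two images.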
5. The apparatus of claim 4, wherein the target matching vector determining module comprises:
a first calculation unit, configured to determine the second similarity between the first feature vector and each first candidate matching vector in the image to be detected based on the calculated Euclidean distance between the first feature vector and each first candidate matching vector in the image to be detected;
a second calculation unit, configured to determine the correlation between the first feature vector and each first candidate matching vector in the image to be detected based on the calculated correlation parameter between the first feature vector and each first candidate matching vector in the image to be detected;
and a first determination unit, configured to determine, from the first candidate matching vectors, the target matching vector that best matches the first feature vector according to the second similarity and the correlation parameter corresponding to each first candidate matching vector in the image to be detected.
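Claim 5 combines a Euclidean-distance-based similarity with a correlation parameter to pick the best candidate. The sketch below is one possible reading, with all unspecified details flagged as assumptions: the distance-to-similarity mapping 1/(1+d), the use of the Pearson correlation coefficient, and the equally weighted fusion rule are illustrative choices only — the claim requires merely that both quantities enter the decision:

```python
import numpy as np

def best_match_index(first_vec, candidates, alpha=0.5):
    """Pick the target matching vector among the first candidate
    matching vectors.

    second similarity : 1 / (1 + Euclidean distance)    (assumed mapping)
    correlation       : Pearson correlation coefficient (assumed form)
    The two scores are fused with weight alpha          (assumed rule).
    """
    v = np.asarray(first_vec, dtype=float)
    cands = np.asarray(candidates, dtype=float)
    second_sim = 1.0 / (1.0 + np.linalg.norm(cands - v, axis=1))
    vc = v - v.mean()
    cc = cands - cands.mean(axis=1, keepdims=True)
    denom = np.linalg.norm(cc, axis=1) * np.linalg.norm(vc) + 1e-12
    corr = (cc @ vc) / denom
    score = alpha * second_sim + (1.0 - alpha) * corr
    return int(score.argmax())
```

Using correlation alongside raw distance helps reject candidates that happen to be close in Euclidean terms but whose component-wise pattern disagrees with the first feature vector.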
6. The apparatus of claim 4, wherein the matching module comprises:
a second determination unit, configured to, for each first feature vector in the reference image, determine a second candidate matching vector of the first feature vector in the image to be detected by calculating the first similarity between the first feature vector and each second feature vector in the image to be detected;
a third determination unit, configured to, for each second candidate matching vector, determine a third candidate matching vector of the second candidate matching vector in the reference image by calculating a third similarity between the second candidate matching vector and each first feature vector in the reference image;
and a fourth determination unit, configured to, for each first feature vector in the reference image, determine, as the first candidate matching vector, the second candidate matching vector whose third candidate matching vector contains that first feature vector.
7. A computer device comprising a memory, a processor, and a computer program stored on the memory and executable on the processor, characterized in that the processor, when executing the computer program, implements the steps of the method of any one of claims 1 to 3.
8. A computer-readable storage medium on which a computer program is stored, characterized in that the computer program, when executed by a processor, performs the steps of the method of any one of claims 1 to 3.
CN202011227367.9A 2020-11-06 2020-11-06 Method, device, computer equipment and medium for determining target object Active CN112270372B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202011227367.9A CN112270372B (en) 2020-11-06 2020-11-06 Method, device, computer equipment and medium for determining target object

Publications (2)

Publication Number Publication Date
CN112270372A CN112270372A (en) 2021-01-26
CN112270372B true CN112270372B (en) 2023-09-29

Family

ID=74345999

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202011227367.9A Active CN112270372B (en) 2020-11-06 2020-11-06 Method, device, computer equipment and medium for determining target object

Country Status (1)

Country Link
CN (1) CN112270372B (en)

Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107967482A (en) * 2017-10-24 2018-04-27 广东中科南海岸车联网技术有限公司 Icon-based programming method and device
CN109101867A (en) * 2018-06-11 2018-12-28 平安科技(深圳)有限公司 A kind of image matching method, device, computer equipment and storage medium

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105518709B (en) * 2015-03-26 2019-08-09 北京旷视科技有限公司 The method, system and computer program product of face for identification

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
Image matching algorithm based on SIFT features; Liu Bojiang; Jiang Mingxin; Information Systems Engineering (No. 05); full text *

Similar Documents

Publication Publication Date Title
Zhang et al. Corner detection based on gradient correlation matrices of planar curves
US10636168B2 (en) Image processing apparatus, method, and program
US8718321B2 (en) Method of image processing
EP2079054B1 (en) Detection of blobs in images
US9633281B2 (en) Point cloud matching method
Patel et al. Image registration of satellite images with varying illumination level using HOG descriptor based SURF
US10540750B2 (en) Electronic device with an upscaling processor and associated method
KR20170113122A (en) Information processing apparatus and method of controlling the same
CN103679720A (en) Fast image registration method based on wavelet decomposition and Harris corner detection
EP2927635B1 (en) Feature set optimization in vision-based positioning
Kim et al. Robust corner detection based on image structure
CN112270372B (en) Method, device, computer equipment and medium for determining target object
CN111652277A (en) False positive filtering method, electronic device and computer readable storage medium
Bastanlar et al. Corner validation based on extracted corner properties
CN116030280A (en) Template matching method, device, storage medium and equipment
CN116091998A (en) Image processing method, device, computer equipment and storage medium
Wu et al. A novel method of corner detector for SAR images based on Bilateral Filter
Chen et al. Rapid multi-modality preregistration based on SIFT descriptor
CN115205558A (en) Multi-mode image matching method and device with rotation and scale invariance
Wu et al. An accurate feature point matching algorithm for automatic remote sensing image registration
CN109815791B (en) Blood vessel-based identity recognition method and device
Li et al. Unmanned aerial vehicle image matching based on improved RANSAC algorithm and SURF algorithm
CN113223033A (en) Poultry body temperature detection method, device and medium based on image fusion
CN113689397A (en) Workpiece circular hole feature detection method and workpiece circular hole feature detection device
CN108629788B (en) Image edge detection method, device and equipment and readable storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant
TR01 Transfer of patent right

Effective date of registration: 20240222

Address after: 518000 1002, Building A, Zhiyun Industrial Park, No. 13, Huaxing Road, Henglang Community, Longhua District, Shenzhen, Guangdong Province

Patentee after: Shenzhen Wanzhida Technology Co.,Ltd.

Country or region after: China

Address before: 105 West Third Ring Road North, Haidian District, Beijing

Patentee before: Capital Normal University

Country or region before: China