CN111915532B - Image tracking method and device, electronic equipment and computer readable medium - Google Patents

Image tracking method and device, electronic equipment and computer readable medium

Info

Publication number
CN111915532B
Authority
CN
China
Prior art keywords
frame image
points
preset
image
region
Prior art date
Legal status
Active
Application number
CN202010790275.5A
Other languages
Chinese (zh)
Other versions
CN111915532A (en)
Inventor
孙曦 (Sun Xi)
Current Assignee
Douyin Vision Co Ltd
Douyin Vision Beijing Co Ltd
Original Assignee
Beijing ByteDance Network Technology Co Ltd
Application filed by Beijing ByteDance Network Technology Co Ltd
Priority to CN202010790275.5A
Publication of CN111915532A
Application granted
Publication of CN111915532B
Legal status: Active

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 5/00 Image enhancement or restoration
    • G06T 5/40 Image enhancement or restoration by the use of histogram techniques
    • G06T 7/00 Image analysis
    • G06T 7/20 Analysis of motion
    • G06T 7/246 Analysis of motion using feature-based methods, e.g. the tracking of corners or segments
    • G06T 2207/00 Indexing scheme for image analysis or image enhancement
    • G06T 2207/10 Image acquisition modality
    • G06T 2207/10016 Video; Image sequence
    • G06T 2207/20 Special algorithmic details
    • G06T 2207/20004 Adaptive image processing
    • G06T 2207/20172 Image enhancement details

Abstract

The disclosure provides an image tracking method, an image tracking apparatus, an electronic device, and a computer readable medium, and relates to the technical field of image processing. The method includes: performing image enhancement on a preset first region of a first frame image, where the first frame image is the frame immediately following a second frame image, the second frame image includes a preset second region that has undergone image enhancement, and the first region corresponds to the second region; acquiring, in the first frame image according to a preset feature matching algorithm, a plurality of first feature points corresponding to a plurality of second feature points preset in the second frame image, where at least some of the second feature points are located in the second region and at least some of the first feature points are located in the first region; deleting some of the first feature points and second feature points according to a preset algorithm; and determining, from the first feature points and second feature points remaining after deletion, a mapping rule of points on the first frame image relative to points on the second frame image, for use in tracking a third frame image, the frame immediately following the first frame image.

Description

Image tracking method and device, electronic equipment and computer readable medium
Technical Field
The embodiments of the present disclosure relate to the technical field of image processing, and in particular to an image tracking method, an image tracking apparatus, an electronic device, and a computer readable medium.
Background
Locating and tracking target images is an active research topic. When tracking an image that contains large weak-texture regions, it is difficult to find feature points in the image to track.
In the prior art, depth information can be used when tracking a weak-texture image: the depth map corresponding to the current image is fused and fed into the algorithm, compensating for the shortage of feature points so that a good tracking effect is maintained even in weak-texture regions. However, this relies on additional depth camera hardware, such as structured-light or ToF (time-of-flight) lenses, and some devices, such as low-end handsets, have no depth camera.
Disclosure of Invention
This summary is provided to introduce a selection of concepts in a simplified form that are further described below in the detailed description. This summary is not intended to identify key features or essential features of the claimed subject matter, nor is it intended to be used to limit the scope of the claimed subject matter.
In a first aspect, an image tracking method is provided, which includes:
performing image enhancement on a preset first region of a first frame image, where the first frame image is the frame immediately following a second frame image, the second frame image includes a preset second region that has undergone image enhancement, and the first region corresponds to the second region;
acquiring, in the first frame image according to a preset feature matching algorithm, a plurality of first feature points corresponding to a plurality of second feature points preset in the second frame image, where at least some of the second feature points are located in the second region and at least some of the first feature points are located in the first region;
deleting some of the first feature points and second feature points according to a preset algorithm; and
determining, from the first feature points and second feature points remaining after deletion, a mapping rule of points on the first frame image relative to points on the second frame image, for use in tracking a third frame image, the frame immediately following the first frame image.
In a second aspect, there is also provided an image tracking apparatus, comprising:
an image enhancement module, configured to perform image enhancement on a preset first region of a first frame image, where the first frame image is the frame immediately following a second frame image, the second frame image includes a preset second region that has undergone image enhancement, and the first region corresponds to the second region;
a tracking module, configured to acquire, in the first frame image according to a preset feature matching algorithm, a plurality of first feature points corresponding to a plurality of second feature points preset in the second frame image, where at least some of the second feature points are located in the second region and at least some of the first feature points are located in the first region;
a deleting module, configured to delete some of the first feature points and second feature points according to a preset algorithm; and
a model generation module, configured to determine, from the first feature points and second feature points remaining after deletion, a mapping rule of points on the first frame image relative to points on the second frame image, for use in tracking the frame immediately following the first frame image.
In a third aspect, an electronic device is also provided, which includes:
one or more processors;
a memory;
one or more application programs, wherein the one or more application programs are stored in the memory and configured to be executed by the one or more processors to perform the image tracking method of the first aspect of the present disclosure.
In a fourth aspect, there is also provided a computer readable medium, on which a computer program is stored, which program, when executed by a processor, implements the image tracking method shown in the first aspect of the present disclosure.
Compared with the prior art, the technical solution provided by the embodiments of the present disclosure performs image enhancement on the preset first region corresponding to the second region, so that the first region has higher contrast and its image texture is displayed more clearly. When preset second feature points exist in the image-enhanced second region, the first feature points corresponding to those second feature points can be tracked more easily, so the stability of tracking weak-texture images can be improved.
Drawings
The above and other features, advantages and aspects of various embodiments of the present disclosure will become more apparent by referring to the following detailed description when taken in conjunction with the accompanying drawings. Throughout the drawings, the same or similar reference numbers refer to the same or similar elements. It should be understood that the drawings are schematic and that elements and features are not necessarily drawn to scale.
Fig. 1 is a schematic flowchart of an image tracking method according to an embodiment of the present disclosure;
fig. 2 is a schematic diagram of image enhancement performed on a first region of a first frame image according to an embodiment of the present disclosure;
fig. 3 is a schematic diagram of obtaining a first feature point according to the embodiment of the present disclosure;
fig. 4 is a schematic flowchart illustrating further steps of an image tracking method according to an embodiment of the present disclosure;
fig. 5 is a schematic structural diagram of an image tracking apparatus according to an embodiment of the present disclosure;
fig. 6 is a schematic structural diagram of an image tracking electronic device according to an embodiment of the present disclosure.
Detailed Description
Embodiments of the present disclosure will be described in more detail below with reference to the accompanying drawings. While certain embodiments of the present disclosure are shown in the drawings, it is to be understood that the present disclosure may be embodied in various forms and should not be construed as limited to the embodiments set forth herein; rather, these embodiments are provided for a more thorough and complete understanding of the present disclosure. It should be understood that the drawings and embodiments of the disclosure are for illustration purposes only and are not intended to limit the scope of the disclosure.
It should be understood that the various steps recited in the method embodiments of the present disclosure may be performed in a different order, and/or performed in parallel. Moreover, method embodiments may include additional steps and/or omit performing the illustrated steps. The scope of the present disclosure is not limited in this respect.
The term "include" and variations thereof as used herein are open-ended, i.e., "including but not limited to". The term "based on" is "based, at least in part, on". The term "one embodiment" means "at least one embodiment"; the term "another embodiment" means "at least one additional embodiment"; the term "some embodiments" means "at least some embodiments". Relevant definitions for other terms will be given in the following description.
It should be noted that the terms "first", "second", and the like in the present disclosure are used only to distinguish devices, modules or units; they are not intended to imply that the devices, modules or units are necessarily different, nor to limit the order of, or interdependence between, the functions they perform.
It is noted that references to "a", "an", and "the" modifications in this disclosure are intended to be illustrative rather than limiting, and that those skilled in the art will recognize that "one or more" may be used unless the context clearly dictates otherwise.
The names of messages or information exchanged between devices in the embodiments of the present disclosure are for illustrative purposes only, and are not intended to limit the scope of the messages or information.
The present disclosure provides an image tracking method, an image tracking apparatus, an electronic device, and a medium, which are intended to solve the above technical problems in the prior art.
The following describes the technical solutions of the present disclosure and how to solve the above technical problems in specific embodiments. The following several specific embodiments may be combined with each other, and details of the same or similar concepts or processes may not be repeated in some embodiments. Embodiments of the present disclosure will be described below with reference to the accompanying drawings.
Those skilled in the art will appreciate that the "terminal" used in the embodiments of the present disclosure may be a mobile phone, a tablet computer, a PDA (Personal Digital Assistant), an MID (Mobile Internet Device), or the like.
Referring to fig. 1, an embodiment of the present disclosure provides an image tracking method, which can be applied to a terminal, the method including:
step S101: and performing image enhancement on a preset first region of a first frame image, wherein the first frame image is a post-adjacent frame of a second frame image, the second frame image comprises a preset second region subjected to image enhancement, and the first region corresponds to the second region.
The images tracked by the image tracking method may be different frames of a video, or a plurality of images shot in succession by an image capture device such as a mobile phone or a camera. It will be appreciated that consecutive shots of such a sequence may be separated by an interval, for example 0.5 s. In the embodiment of the present disclosure, the second frame image, the first frame image, and the third frame image are consecutive frame images: the first frame image is the frame immediately following the second frame image, and the third frame image is the frame immediately following the first frame image. It is to be understood that further images may precede the second frame image and follow the third frame image, which is not limited in the embodiment of the present disclosure.
Referring to FIG. 2, the first region is indicated by point A in the figure. The first region is a local region of the first frame image, and the second region is a local region of the second frame image. The first region is obtained from the second region according to a preset rule, so that the first region corresponds to the second region. Image enhancement is performed on the first region, that is, on a local image of the first frame image. Image enhancement enlarges the contrast of the enhanced region so that more detail is displayed; for example, when the first region is a weak-texture region of the first frame image, that weak-texture region can be displayed more clearly, facilitating subsequent processing. The enhancement can be performed with the Contrast Limited Adaptive Histogram Equalization (CLAHE) algorithm, whose enhancement effect is better than that of the Adaptive Histogram Equalization (AHE) algorithm; and because the enhancement is local, it takes less time, improving the enhancement speed. The second region has already been image-enhanced before the first region is enhanced.
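As a rough illustration of region-limited enhancement, the sketch below applies plain histogram equalization to one rectangular region of a synthetic grayscale image using only NumPy. It is a simplified stand-in for CLAHE, which additionally clips the histogram and blends per-tile mappings; the image, region bounds, and function name are invented for the example, and in practice a library routine such as OpenCV's `createCLAHE` would be used.

```python
import numpy as np

def equalize_region(img, y0, y1, x0, x1):
    """Histogram-equalize img[y0:y1, x0:x1], leaving the rest untouched.

    Simplified stand-in for CLAHE: plain histogram equalization limited
    to one rectangular region of a uint8 grayscale image.
    """
    out = img.copy()
    roi = out[y0:y1, x0:x1]
    hist = np.bincount(roi.ravel(), minlength=256)
    cdf = hist.cumsum()
    cdf_min = cdf[cdf > 0][0]               # CDF at the darkest level present
    # Stretch the region's CDF to the full 0..255 range.  Gray levels not
    # present in the region get meaningless LUT entries, but are never used.
    lut = np.round((cdf - cdf_min) * 255.0 / max(roi.size - cdf_min, 1))
    out[y0:y1, x0:x1] = lut.clip(0, 255).astype(np.uint8)[roi]
    return out

# Low-contrast synthetic frame: gray values squeezed into roughly 100..120.
frame = np.linspace(100, 120, 64 * 64).reshape(64, 64).astype(np.uint8)
enhanced = equalize_region(frame, 16, 48, 16, 48)
# The enhanced region now spans the full 0..255 range; pixels outside
# the region are unchanged.
```

Because only the region's own histogram drives the lookup table, the contrast stretch concentrates entirely on the weak-texture region, which is the effect the method relies on.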
Step S102: acquire, in the first frame image according to a preset feature matching algorithm, a plurality of first feature points corresponding to a plurality of second feature points preset in the second frame image, where at least some of the second feature points are located in the second region, at least some of the first feature points are located in the first region, and the first region corresponds to the second region.
Referring to fig. 3, B is the second frame image and C is the first frame image. A plurality of second feature points are preset in the second frame image. The second feature points may all be located within the second region, or some may be located within the second region and some outside it. When a first feature point corresponding to a second feature point is obtained in the first frame image according to the feature matching algorithm, if the second feature point is located in the second region, the first feature point is very likely to be located in the first region, so at least some of the first feature points are located in the first region.
The feature matching algorithm may be selected as needed; in the embodiment of the present disclosure it is the Kanade-Lucas-Tomasi (KLT) algorithm. Under the assumption that the gray value at the same point in space is fixed across different images, the KLT algorithm finds the moving speed and direction of each feature point by measuring how the pixel intensities inside an observation window of a set size change over time, thereby tracking the feature point. The KLT algorithm is prior art and is not described in detail in the embodiments of the present disclosure. It will be appreciated that, owing to the limitations of the algorithm, mismatched points may be produced when the second feature points are tracked to obtain the first feature points.
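The core of each KLT iteration can be illustrated as a least-squares solve over an observation window: every pixel contributes one brightness-constancy equation, and the motion that best cancels the temporal change is the solution. The toy NumPy sketch below (all names and the synthetic patch are invented for illustration) recovers a known sub-pixel shift of a quadratic patch in a single step; a real KLT tracker adds iteration, interpolation, and image pyramids, as in OpenCV's `calcOpticalFlowPyrLK`.

```python
import numpy as np

def lk_step(prev, curr):
    """One Lucas-Kanade least-squares step over a whole observation window.

    Under the brightness-constancy assumption each pixel contributes the
    equation Ix*u + Iy*v + It = 0; the motion (u, v) is the least-squares
    solution over the window.
    """
    Iy, Ix = np.gradient(prev)                 # spatial gradients (rows, cols)
    It = curr - prev                           # temporal derivative
    A = np.stack([Ix.ravel(), Iy.ravel()], axis=1)
    (u, v), *_ = np.linalg.lstsq(A, -It.ravel(), rcond=None)
    return u, v

# Synthetic quadratic patch centred on the window, shifted by (0.3, -0.2).
xs = np.arange(-7, 8, dtype=float)
x, y = np.meshgrid(xs, xs)
dx, dy = 0.3, -0.2
prev = x**2 + y**2
curr = (x - dx)**2 + (y - dy)**2               # the content moved by (dx, dy)
u, v = lk_step(prev, curr)                     # recovers approximately (0.3, -0.2)
```

The quadratic patch is chosen so that the window's structure tensor is well conditioned; on a window with gradients in only one direction the 2x2 system becomes singular, which is exactly the weak-texture failure mode the patent addresses with enhancement.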
Step S103: delete some of the first feature points and second feature points according to a preset algorithm.
The preset algorithm is not limited; it may delete feature point pairs that are mismatched or that are poorly consistent with the other feature points. The preset algorithm may be the Random Sample Consensus (RANSAC) algorithm, which computes the parameters of a mathematical model from a sample data set containing abnormal data in order to obtain the valid samples; it is prior art and is not described in detail in the embodiments of the present disclosure. Deleting some of the first feature points and second feature points according to the preset algorithm means deleting feature point pairs: when a second feature point is deleted, the first feature point corresponding to it is deleted as well. Deleting the pairs that are mismatched or poorly consistent with the other feature points improves the accuracy of the subsequently calculated mapping rule, and thus the accuracy and stability of image tracking. In the embodiment of the present disclosure, at least 4 pairs remain after deletion, that is, at least 4 first feature points and at least 4 second feature points remain.
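The pair-deletion logic of RANSAC can be sketched as follows. For brevity this illustration fits a pure 2-D translation between the matched point sets rather than the homography ultimately used by the method, but the sample/score/keep-the-largest-consensus loop is the same; all data and names are made up.

```python
import numpy as np

rng = np.random.default_rng(0)

def ransac_translation(pts2, pts1, n_iters=200, thresh=2.0):
    """RANSAC deletion of mismatched feature point pairs.

    pts2: points in the earlier frame; pts1: their matches in the later
    frame (both N x 2).  The model fitted here is a pure 2-D translation
    (minimal sample: one pair); the method in the text fits a homography,
    but the consensus loop is identical.  Returns a boolean mask marking
    the pairs to keep.
    """
    best_mask = np.zeros(len(pts2), dtype=bool)
    for _ in range(n_iters):
        i = rng.integers(len(pts2))              # minimal random sample
        t = pts1[i] - pts2[i]                    # candidate translation
        err = np.linalg.norm(pts1 - (pts2 + t), axis=1)
        mask = err < thresh                      # pairs consistent with t
        if mask.sum() > best_mask.sum():
            best_mask = mask                     # largest consensus so far
    return best_mask

# 40 correct matches under translation (5, -3) plus 10 gross mismatches.
pts2 = rng.uniform(0, 100, size=(50, 2))
pts1 = pts2 + np.array([5.0, -3.0])
pts1[40:] += rng.uniform(20, 60, size=(10, 2))   # corrupt the last 10 pairs
keep = ransac_translation(pts2, pts1)
pts1_kept, pts2_kept = pts1[keep], pts2[keep]    # pairs surviving deletion
```

Deleting a pair deletes both of its points at once, mirroring the step above in which removing a second feature point also removes its corresponding first feature point.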
Step S104: determine, from the first feature points and second feature points remaining after deletion, a mapping rule of points on the first frame image relative to points on the second frame image, for use in tracking a third frame image, the frame immediately following the first frame image.
Determining the mapping rule of points on the first frame image relative to points on the second frame image from the remaining first and second feature points means determining the homography matrix of the homography transformation of the first frame image relative to the second frame image; the matrix used in a homography transformation is a homography matrix.
It is understood that a homography exists for each picture pair, such as between the first frame image and the second frame image. How to calculate the homography matrix is prior art, so the embodiments of the present disclosure describe it only briefly.
For each picture pair, such as the second frame image and the first frame image, the homography constraint corresponding to the pair is given by formula (1).
$$s\begin{bmatrix}x_1\\y_1\\1\end{bmatrix}=H\begin{bmatrix}x_2\\y_2\\1\end{bmatrix}\tag{1}$$
In formula (1), x2 and y2 are the coordinates of a feature point in the second frame image, x1 and y1 are the coordinates of the corresponding feature point in the first frame image, H is the homography matrix, and s is the scale factor of the homogeneous coordinates.
Where H can be written as:
$$H=\begin{bmatrix}h_{11}&h_{12}&h_{13}\\h_{21}&h_{22}&h_{23}\\h_{31}&h_{32}&h_{33}\end{bmatrix}$$
from which the following can be derived:
$$x_1=\frac{h_{11}x_2+h_{12}y_2+h_{13}}{h_{31}x_2+h_{32}y_2+h_{33}}$$
$$y_1=\frac{h_{21}x_2+h_{22}y_2+h_{23}}{h_{31}x_2+h_{32}y_2+h_{33}}$$
With enough feature point pairs in the first frame image and the second frame image, the homography matrix H can be calculated. Since the first feature points and second feature points remaining after deletion include at least 4 pairs, H can be solved, that is, the mapping rule of points on the first frame image relative to points on the second frame image is determined. The same procedure can then be used to track the third frame image, the frame immediately following the first frame image, to obtain the mapping rule of points on the third frame image relative to points on the first frame image.
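Formula (1) and the two derived equations lead directly to the standard DLT (direct linear transform) solve: each remaining pair contributes two linear equations in the nine entries of H, and with at least 4 pairs H is recovered up to scale from the null space of the stacked system. A minimal NumPy sketch with synthetic correspondences, and without the coordinate normalization or degeneracy checks a production solver would add:

```python
import numpy as np

def homography_dlt(pts2, pts1):
    """Solve H (up to scale) such that [x1, y1, 1]^T ~ H [x2, y2, 1]^T.

    Each pair yields the two equations derived from formula (1); with at
    least 4 pairs the stacked system A h = 0 is solved by SVD.
    """
    rows = []
    for (x2, y2), (x1, y1) in zip(pts2, pts1):
        rows.append([x2, y2, 1, 0, 0, 0, -x1 * x2, -x1 * y2, -x1])
        rows.append([0, 0, 0, x2, y2, 1, -y1 * x2, -y1 * y2, -y1])
    A = np.asarray(rows, dtype=float)
    _, _, vt = np.linalg.svd(A)
    H = vt[-1].reshape(3, 3)        # null-space vector = entries of H
    return H / H[2, 2]              # fix the free scale so h33 = 1

def apply_h(H, pts):
    """Map points through H, including the homogeneous division."""
    pts = np.hstack([pts, np.ones((len(pts), 1))])
    q = pts @ H.T
    return q[:, :2] / q[:, 2:3]

# Ground-truth homography and synthetic correspondences (4+ pairs).
H_true = np.array([[1.1, 0.02, 5.0],
                   [-0.01, 0.95, -3.0],
                   [1e-4, 2e-4, 1.0]])
pts2 = np.array([[0.0, 0.0], [100.0, 0.0], [0.0, 80.0],
                 [100.0, 80.0], [50.0, 40.0]])
pts1 = apply_h(H_true, pts2)
H_est = homography_dlt(pts2, pts1)  # matches H_true up to numerical error
```

In practice the point coordinates would be normalized before the solve (Hartley normalization) to improve the conditioning of A; `apply_h` is also how the resulting mapping rule is used to carry points and regions into the next frame.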
In the image tracking method provided by the embodiment of the present disclosure, image enhancement is performed on the preset first region corresponding to the second region, so that the first region has higher contrast and its image texture is displayed more clearly. When preset second feature points exist in the image-enhanced second region, the first feature points corresponding to those second feature points can be tracked more easily, so the stability of tracking weak-texture images can be improved.
Optionally, deleting a part of feature points in the first feature points and the second feature points according to a preset algorithm, including:
and deleting part of the feature points in the first feature points and the second feature points by adopting a preset first threshold value according to a RANSAC algorithm.
When deleting some of the first feature points and second feature points according to the RANSAC algorithm, the deletion is performed according to a threshold preset for RANSAC. The larger the threshold, the higher the screening precision and the more feature points are deleted; the smaller the threshold, the lower the screening precision and the fewer feature points are deleted. The size of the first threshold is not limited and may be set according to the experience of the user.
Optionally, after deleting some feature points in the first feature points and the second feature points by using a preset first threshold according to the RANSAC algorithm, the image tracking method further includes:
and when the total number of the first characteristic points and the second characteristic points which are remained after deletion is smaller than a first preset value, deleting part of the characteristic points in the first characteristic points and the second characteristic points by adopting a preset second threshold value according to a RANSAC algorithm so that the total number of the first characteristic points and the second characteristic points which are remained after deletion is larger than the first preset value, and the second threshold value is smaller than the first threshold value.
It can be understood that the first feature points and second feature points to which the preset second threshold is applied are the feature points as they were before deletion with the first threshold.
The first preset value may be 8, that is, 4 pairs of first and second feature points remain, each first feature point corresponding to one second feature point. When the total number of first and second feature points remaining after deletion is smaller than the first preset value, too few feature points remain and the mapping rule of points on the first frame image relative to points on the second frame image cannot be determined. To increase the total number of remaining feature points, the RANSAC threshold must be lowered so as to lower the screening precision. The second threshold is smaller than the first threshold, and when its value is appropriate, the total number of first and second feature points remaining after deletion will exceed the first preset value. The magnitude of the second threshold may be set based on user experience. It can be understood that, when the second threshold still does not meet the condition and the total number of remaining feature points is smaller than the first preset value, the second threshold is reduced further, the reduced value is taken as the new second threshold, and the deletion is repeated until the total number of first and second feature points remaining after deletion is larger than the first preset value.
Screening first with the larger first threshold, and falling back to the smaller second threshold only when the first threshold proves unsuitable, preserves the screening precision as much as possible, so that the mapping rule obtained later is more accurate.
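The two-threshold scheme above amounts to a simple retry loop: filter with the strict first threshold, and if fewer than the first preset value of points survive, shrink the threshold and re-filter the original pairs until enough remain. In the schematic sketch below the residuals, the constants, and the mapping from the patent's threshold to an inlier tolerance (tolerance = 1/threshold, so that a larger threshold deletes more points, as in the text) are all invented for illustration.

```python
import numpy as np

def filter_with_relaxation(residual, first_threshold, min_total=8, shrink=0.5):
    """Delete pairs by threshold, relaxing the threshold until enough remain.

    `residual` holds one reprojection error per feature point pair.  A pair
    is kept when its residual is below 1/threshold, so a larger threshold
    screens more strictly (matching the text).  `min_total` counts points,
    not pairs: 8 points = the minimum 4 pairs.  A real implementation would
    also bound the number of retries.
    """
    threshold = first_threshold
    while True:
        keep = residual < 1.0 / threshold       # stricter for larger threshold
        if 2 * keep.sum() >= min_total:         # each pair contributes 2 points
            return keep, threshold
        threshold *= shrink                     # relax and re-filter all pairs

# Invented residuals: 3 excellent pairs, 4 decent ones, 3 gross mismatches.
residual = np.array([0.05, 0.05, 0.05, 0.3, 0.3, 0.3, 0.3, 2.0, 2.0, 2.0])
keep, final_threshold = filter_with_relaxation(residual, first_threshold=10.0)
# The strict threshold keeps only 3 pairs (6 points < 8), so it is relaxed
# twice; at threshold 2.5 seven pairs survive and the loop stops.
```

Starting strict and relaxing only on failure keeps the best-quality pairs whenever they are numerous enough, which is exactly the rationale given above.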
Optionally, before the image enhancement is performed on the preset first region of the first frame image, the image tracking method further includes:
and acquiring a first area corresponding to a second area of the second frame image in the first frame image according to a preset corresponding rule.
The preset correspondence rule is not limited; it serves to make the position of the first region in the first frame image substantially consistent with the position of the second region in the second frame image. The preset correspondence rule may be the mapping rule by which preset points of the frame preceding the second frame image are mapped to the corresponding point positions on the second frame image, that is, the transformation rule of those points onto the second frame image. This keeps the position of the first region appropriate and facilitates the subsequent tracking and matching of the first feature points.
It is to be understood that the second region may be determined by tracking points on the second frame image, or may be actively selected by the user. If the terminal executing the method includes an input device, such as a touch screen, the user selects the second region by operating the input device.
Referring to fig. 4, before obtaining a plurality of first feature points corresponding to a plurality of second feature points preset in a second frame image in the first frame image according to a preset feature matching algorithm, the method further includes:
s401: and when the number of the second characteristic points in the second area of the second frame image is less than a second preset value, acquiring corner points in the second area.
When the second region is obtained by tracking from the frame preceding the second frame image, the second region includes a certain number of feature points; when the second frame image is the first image from which tracking starts, or the second region is a newly set region, the second region includes no feature points. In the former case, because second feature points are deleted when the preset rule is formed, their number decreases and may fall below the second preset value; in the latter case, the number of second feature points is 0, which is also below the second preset value. The value of the second preset value is not limited and can be set according to experience or need, for example 30, 40, 45, or 66. If the second preset value is 40 and the second region of the second frame image contains 16 second feature points before corner acquisition, then, since 16 is smaller than 40, corner points in the second region need to be acquired. A corner is a type of image feature with rich texture information that can describe a salient part of an image and emphasizes the presence of an object. Corners may be obtained by a corner detection algorithm, such as the FAST, Kitchen-Rosenfeld, Harris, KLT, or SUSAN corner detection algorithms, which this embodiment does not specifically limit. To increase the acquisition speed, the FAST corner detection algorithm, currently the fastest to compute, may optionally be used.
S402: when the number of corner points is greater than a third preset value, acquire the response value of each corner point, where the third preset value is the difference between the second preset value and the number of second feature points present before the corner points were acquired.
If the second preset value is 40 and the second region of the second frame image contained 16 second feature points before corner acquisition, the third preset value is 24. When the number of corner points is greater than the third preset value, the sum of the number of acquired corner points and the number of second feature points already in the second region is greater than the second preset value. How to obtain the response value of a corner point is prior art and is not described in detail in the embodiments of the present disclosure.
S403: and reserving corner points corresponding to the response values with the size of the third preset value.
The number of the corner points is equal to the number of the third preset values. When the number of the corner points is larger than a third preset value, the corner points corresponding to response values with the size being equal to the number of the previous third preset value are reserved, the response values of the reserved corner points are larger, and the corner points meet the requirements.
S404: and taking the reserved corner points and second feature points in the second area as second feature points.
And the reserved angular points and the original second characteristic points in the second area are used as second characteristic points for obtaining first characteristic points in the first frame image subsequently. According to the scheme of the embodiment of the disclosure, the situation that the number of second feature points in the first frame image does not meet the tracking requirement can be prevented, and a proper number of corner points can be added to serve as the second feature points.
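The replenishment in steps S402 to S404 can be sketched in a few lines; the function name and the tuple layout `((x, y), response)` are assumptions for illustration, not the patent's API:

```python
def replenish_feature_points(existing_points, corners, second_preset=40):
    """Top up the second feature points with the strongest detected corners.

    `second_preset` is the second preset value; the number of corners kept
    (the 'third preset value') is whatever is still missing.
    """
    needed = second_preset - len(existing_points)    # third preset value
    if needed <= 0:
        return list(existing_points)
    # corners: iterable of ((x, y), response); keep the `needed` strongest
    strongest = sorted(corners, key=lambda c: c[1], reverse=True)[:needed]
    return list(existing_points) + [pt for pt, _ in strongest]

# 16 existing points, 30 detected corners -> keep the 24 strongest corners
existing = [(i, i) for i in range(16)]
corners = [((i, 0), float(i)) for i in range(30)]    # response value = i
pts = replenish_feature_points(existing, corners)
```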
Optionally, the image tracking method further comprises:
when the number of corner points in the second region is less than the third preset value, acquiring corner points in the second frame image so that the number of corner points becomes greater than the third preset value.

When fewer corner points than the third preset value are found in the second region, the sum of the corner points and the second feature points present before corner acquisition is smaller than the second preset value; corner points are therefore also acquired in the second frame image outside the second region until the number of corner points exceeds the third preset value. The response values of the corner points are then acquired as before, the third preset value being the difference between the second preset value and the number of second feature points before corner acquisition, and the corner points corresponding to the largest response values are retained, up to the third preset value in number. This scheme of the embodiment of the disclosure handles the case in which, even after image enhancement of the second region, a sufficient number of corner points is still hard to find: by searching the whole second frame image, the number of corner points can meet the requirement.
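The fallback described above, first searching the second region and widening to the whole frame only when too few corners are found, can be sketched as follows (the `detect` callback and the string stand-ins for images are assumptions for illustration):

```python
def gather_corners(detect, region, frame, third_preset):
    """Detect corners in `region`; if fewer than `third_preset` are found,
    also detect in the whole `frame` and add the corners outside the region."""
    corners = detect(region)
    if len(corners) < third_preset:
        corners = corners + [c for c in detect(frame) if c not in corners]
    return corners

# toy detector returning fixed corner lists (stand-ins for a real detector)
region_corners = [(1, 1), (2, 2)]
frame_corners = [(1, 1), (2, 2), (5, 5), (9, 9), (12, 3)]
out = gather_corners(lambda img: region_corners if img == "region" else frame_corners,
                     "region", "frame", third_preset=4)
```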
Optionally, after determining a mapping rule of a point on the first frame image with respect to a point on the second frame image according to the first feature point and the second feature point remaining after the deletion, the method further includes:
updating a first region corresponding to a second region of the second frame image in the first frame image according to the mapping rule;
and taking the mapping rule as a corresponding rule of the first area position of the first frame image and the third area position of the third frame image.
In formula (3), h11 and h12 represent the rotation component between the two frame images, h21 and h22 represent the scaling component, and h13 and h23 represent the translation component. The first region corresponding to the second region of the second frame image can be updated in the first frame image according to the mapping rule, that is, according to the homography matrix, making the position of the first region more accurate and further improving the accuracy and stability of subsequent image tracking.

The mapping rule is then used as the correspondence rule between the position of the first region in the first frame image and the position of the third region in the third frame image, where the first region position is the position after the update. The third region of the third frame image can thus be obtained subsequently, the following images continue to be tracked according to the scheme of this embodiment, and the stability and accuracy of the overall image tracking are improved.
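The mapping rule is the 3x3 homography whose entries h11 to h23 appear in formula (3); updating the first region amounts to pushing the second region's corner points through that matrix. A minimal NumPy sketch, using an assumed pure-translation homography as the example:

```python
import numpy as np

def map_points(H, pts):
    """Map 2-D points through a 3x3 homography: (x', y', w')^T = H (x, y, 1)^T."""
    pts = np.asarray(pts, dtype=float)
    homog = np.hstack([pts, np.ones((len(pts), 1))])   # append w = 1
    mapped = homog @ H.T                               # apply the homography
    return mapped[:, :2] / mapped[:, 2:3]              # divide out w'

# pure translation: h13 = 5, h23 = -3; h11 = h22 = 1, h12 = h21 = 0
H = np.array([[1.0, 0.0, 5.0],
              [0.0, 1.0, -3.0],
              [0.0, 0.0, 1.0]])
region = [(0, 0), (10, 0), (10, 10), (0, 10)]   # second-region corner points
updated = map_points(H, region)                 # updated first-region corners
```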
Referring to fig. 5, an embodiment of the present disclosure provides an image tracking apparatus 50, which can implement the image tracking method of the above embodiment, and the image tracking apparatus 50 may include: an enhancement module 501, a tracking module 502, a deletion module 503, and a model generation module 504, wherein,
an enhancement module 501, configured to perform image enhancement on a preset first region of a first frame image, where the first frame image is a next adjacent frame of a second frame image, the second frame image includes a preset second region after the image enhancement, and the first region corresponds to the second region;
a tracking module 502, configured to obtain, in a first frame image according to a preset feature matching algorithm, a plurality of first feature points corresponding to a plurality of second feature points preset in a second frame image, where at least a part of the second feature points are located in a second region and at least a part of the first feature points are located in the first region;
a deleting module 503, configured to delete a part of the feature points in the first feature points and the second feature points according to a preset algorithm;
and a model generating module 504, configured to determine, according to the first feature point and the second feature point remaining after the deletion, a mapping rule of a point on the first frame image with respect to a point on the second frame image, so as to track a frame image adjacent to the first frame image.
The image tracking apparatus provided by the embodiment of the disclosure performs image enhancement on the preset first region corresponding to the second region, so that the first region has higher contrast and clearer image texture. When preset second feature points exist in the image-enhanced second region, tracking those second feature points makes it easier to find the corresponding first feature points, thereby improving the stability of texture image tracking.
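The patent leaves the enhancement method open; as one assumed example, min-max contrast stretching raises the contrast of a weak-texture region so that its texture, and hence its feature points, become easier to track:

```python
import numpy as np

def stretch_contrast(region):
    """Min-max contrast stretching of a grayscale region to the range [0, 255]."""
    region = region.astype(float)
    lo, hi = region.min(), region.max()
    if hi == lo:                # completely flat region: nothing to stretch
        return region
    return (region - lo) / (hi - lo) * 255.0

# a low-contrast patch (gray values 100..120) is stretched to full range
patch = np.linspace(100, 120, 25).reshape(5, 5)
enhanced = stretch_contrast(patch)
```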
Optionally, the deleting module 503 is specifically configured to delete a part of the feature points in the first feature point and the second feature point by using a preset first threshold according to a RANSAC algorithm.
Optionally, the image tracking device 50 may further include:
and the second deleting module is used for deleting part of the feature points in the first feature points and the second feature points by adopting a preset second threshold according to a RANSAC algorithm when the total number of the first feature points and the second feature points left after deletion is smaller than a first preset value, so that the total number of the first feature points and the second feature points left after deletion is larger than the first preset value, and the second threshold is smaller than the first threshold.
Optionally, the image tracking device 50 may further include:
and the area acquisition module is used for acquiring, in the first frame image according to a preset corresponding rule, a first area corresponding to the second area of the second frame image.
Optionally, the image tracking device 50 may further include:
the updating module is used for updating a first area corresponding to a second area of the second frame image in the first frame image according to the mapping rule;
and the corresponding module is used for taking the mapping rule as a corresponding rule of the first area position of the first frame image and the third area position of the third frame image.
Optionally, the image tracking device 50 may further include:
the first corner point acquisition module is used for acquiring corner points in a second area of the second frame image when the number of second feature points in the second area is less than a second preset value;
the response value acquisition module is used for acquiring the response value of the corner points when the number of the corner points is greater than a third preset value, wherein the third preset value is the difference between the second preset value and the number of second feature points before the corner points are acquired;
the reservation module is used for reserving the corner points corresponding to the largest response values, the number of reserved corner points being the third preset value;
and the characteristic module is used for taking the reserved corner points and second characteristic points in the second area as second characteristic points.
Optionally, the image tracking device 50 may further include:
and the second corner acquisition module is used for acquiring corner points in the second frame image when the number of the corner points in the second area is less than a third preset value so as to enable the number of the corner points to be greater than the third preset value.
Referring to fig. 6, a schematic diagram of an electronic device 600 suitable for implementing embodiments of the present disclosure is shown. The terminal device in the embodiments of the present disclosure may include, but is not limited to, mobile terminals such as a mobile phone, a notebook computer, a digital broadcast receiver, a PDA (personal digital assistant), a PAD (tablet computer), a PMP (portable multimedia player), and a vehicle terminal (e.g., a car navigation terminal), and stationary terminals such as a digital TV and a desktop computer. The electronic device shown in the figure is only an example and should not impose any limitation on the functions and scope of use of the embodiments of the present disclosure.
The electronic device includes: a memory and a processor, wherein the processor may be referred to as the processing device 601 hereinafter, and the memory may include at least one of a Read Only Memory (ROM)602, a Random Access Memory (RAM)603 and a storage device 608 hereinafter, which are specifically shown as follows:
as shown, the electronic device 600 may include a processing means (e.g., central processing unit, graphics processor, etc.) 601 that may perform various appropriate actions and processes in accordance with a program stored in a Read Only Memory (ROM)602 or a program loaded from a storage means 608 into a Random Access Memory (RAM) 603. In the RAM 603, various programs and data necessary for the operation of the electronic apparatus 600 are also stored. The processing device 601, the ROM 602, and the RAM 603 are connected to each other via a bus 604. An input/output (I/O) interface 605 is also connected to bus 604.
Generally, the following devices may be connected to the I/O interface 605: input devices 606 including, for example, a touch screen, touch pad, keyboard, mouse, camera, microphone, accelerometer, gyroscope, etc.; output devices 607 including, for example, a Liquid Crystal Display (LCD), a speaker, a vibrator, and the like; storage 608 including, for example, tape, hard disk, etc.; and a communication device 609. The communication means 609 may allow the electronic device 600 to communicate with other devices wirelessly or by wire to exchange data. While the figure illustrates an electronic device 600 having various means, it is to be understood that not all illustrated means are required to be implemented or provided. More or fewer devices may alternatively be implemented or provided.
In particular, according to an embodiment of the present disclosure, the processes described above with reference to the flowcharts may be implemented as computer software programs. For example, embodiments of the present disclosure include a computer program product comprising a computer program carried on a non-transitory computer readable medium, the computer program containing program code for performing the method illustrated by the flow chart. In such an embodiment, the computer program may be downloaded and installed from a network via the communication means 609, or may be installed from the storage means 608, or may be installed from the ROM 602. The computer program, when executed by the processing device 601, performs the above-described functions defined in the methods of the embodiments of the present disclosure.
It should be noted that the computer readable medium in the present disclosure can be a computer readable signal medium or a computer readable storage medium or any combination of the two. A computer readable storage medium may be, for example, but not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any combination of the foregoing. More specific examples of the computer readable storage medium may include, but are not limited to: an electrical connection having one or more wires, a portable computer diskette, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing. In the present disclosure, a computer readable storage medium may be any tangible medium that can contain, or store a program for use by or in connection with an instruction execution system, apparatus, or device. In contrast, in the present disclosure, a computer readable signal medium may comprise a propagated data signal with computer readable program code embodied therein, either in baseband or as part of a carrier wave. Such a propagated data signal may take many forms, including, but not limited to, electro-magnetic, optical, or any suitable combination thereof. A computer readable signal medium may also be any computer readable medium that is not a computer readable storage medium and that can communicate, propagate, or transport a program for use by or in connection with an instruction execution system, apparatus, or device. Program code embodied on a computer readable medium may be transmitted using any appropriate medium, including but not limited to: electrical wires, optical cables, RF (radio frequency), etc., or any suitable combination of the foregoing.
In some embodiments, the clients, servers may communicate using any currently known or future developed network Protocol, such as HTTP (HyperText Transfer Protocol), and may interconnect with any form or medium of digital data communication (e.g., a communications network). Examples of communication networks include a local area network ("LAN"), a wide area network ("WAN"), the Internet (e.g., the Internet), and peer-to-peer networks (e.g., ad hoc peer-to-peer networks), as well as any currently known or future developed network.
The computer readable medium may be embodied in the electronic device; or may exist separately without being assembled into the electronic device.
The computer readable medium carries one or more programs which, when executed by the electronic device, cause the electronic device to: perform image enhancement on a preset first region of a first frame image, the first frame image being the next adjacent frame of a second frame image, the second frame image including a preset, image-enhanced second region, and the first region corresponding to the second region; acquire, in the first frame image according to a preset feature matching algorithm, a plurality of first feature points corresponding to a plurality of second feature points preset in the second frame image, at least part of the second feature points being located in the second region and at least part of the first feature points in the first region; delete part of the feature points among the first feature points and the second feature points according to a preset algorithm; and determine, from the first feature points and second feature points remaining after deletion, a mapping rule of points on the first frame image relative to points on the second frame image, for use in tracking a third frame image, the next adjacent frame of the first frame image.
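The four operations above can be sketched as one tracking step; every callback below is a hypothetical stand-in for the patent's unspecified algorithms (matching, RANSAC filtering, homography fitting), and the toy stand-ins used in the example merely shift points and fit a mean translation:

```python
def track_step(first_frame, first_region, second_points,
               enhance, match_points, ransac_filter, fit_mapping):
    """One step of the described method; all callbacks are assumptions."""
    first_frame = enhance(first_frame, first_region)          # step 1: enhance
    first_points = match_points(first_frame, second_points)   # step 2: match
    kept_first, kept_second = ransac_filter(first_points, second_points)  # step 3
    mapping = fit_mapping(kept_second, kept_first)            # step 4: mapping rule
    return first_points, mapping

# toy stand-ins: matching shifts every point by (1, 0); the "mapping rule"
# fitted here is just the mean translation between the two point sets
enhance = lambda frame, region: frame
match_points = lambda frame, pts: [(px + 1, py) for px, py in pts]
ransac_filter = lambda a, b: (a, b)
def fit_mapping(src, dst):
    n = len(src)
    return (sum(d[0] - s[0] for s, d in zip(src, dst)) / n,
            sum(d[1] - s[1] for s, d in zip(src, dst)) / n)

second_points = [(0, 0), (4, 2), (7, 7)]
first_points, mapping = track_step(None, None, second_points,
                                   enhance, match_points, ransac_filter, fit_mapping)
```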
Computer program code for carrying out operations of the present disclosure may be written in any combination of one or more programming languages, including but not limited to object-oriented programming languages such as Java, Smalltalk, and C++, and conventional procedural programming languages such as the "C" programming language or similar languages. The program code may execute entirely on the user's computer, partly on the user's computer, as a stand-alone software package, partly on the user's computer and partly on a remote computer, or entirely on a remote computer or server. In the remote-computer case, the remote computer may be connected to the user's computer through any type of network, including a local area network (LAN) or a wide area network (WAN), or the connection may be made to an external computer (for example, through the Internet using an Internet service provider).
The flowchart and block diagrams in the figures illustrate the architecture, functionality, and operation of possible implementations of systems, methods and computer program products according to various embodiments of the present disclosure. In this regard, each block in the flowchart or block diagrams may represent a module, segment, or portion of code, which comprises one or more executable instructions for implementing the specified logical function(s). It should also be noted that, in some alternative implementations, the functions noted in the block may occur out of the order noted in the figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved. It will also be noted that each block of the block diagrams and/or flowchart illustration, and combinations of blocks in the block diagrams and/or flowchart illustration, can be implemented by special purpose hardware-based systems which perform the specified functions or acts, or combinations of special purpose hardware and computer instructions.
The modules or units described in the embodiments of the present disclosure may be implemented by software or hardware. Where the name of a module or unit does not in some cases constitute a limitation of the unit itself, for example, the receiving module may also be described as a "unit for obtaining at least two internet protocol addresses".
The functions described herein above may be performed, at least in part, by one or more hardware logic components. For example, without limitation, exemplary types of hardware logic components that may be used include: field Programmable Gate Arrays (FPGAs), Application Specific Integrated Circuits (ASICs), Application Specific Standard Products (ASSPs), systems on a chip (SOCs), Complex Programmable Logic Devices (CPLDs), and the like.
In the context of this disclosure, a machine-readable medium may be a tangible medium that can contain, or store a program for use by or in connection with an instruction execution system, apparatus, or device. The machine-readable medium may be a machine-readable signal medium or a machine-readable storage medium. A machine-readable medium may include, but is not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any suitable combination of the foregoing. More specific examples of a machine-readable storage medium would include an electrical connection based on one or more wires, a portable computer diskette, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing.
According to one or more embodiments of the present disclosure, there is provided an image tracking method including:
performing image enhancement on a preset first region of a first frame image, wherein the first frame image is the next adjacent frame of a second frame image, the second frame image comprises a preset second region subjected to image enhancement, and the first region corresponds to the second region;
acquiring a plurality of first feature points corresponding to a plurality of second feature points preset in a second frame image in the first frame image according to a preset feature matching algorithm, wherein at least part of the second feature points are located in a second area, and at least part of the first feature points are located in a first area;
deleting part of the feature points in the first feature points and the second feature points according to a preset algorithm;
and determining, according to the first feature points and the second feature points remaining after deletion, a mapping rule of points on the first frame image relative to points on the second frame image, for use in tracking a third frame image, which is the next adjacent frame of the first frame image.
According to one or more embodiments of the present disclosure, deleting a part of feature points from among the first feature points and the second feature points according to a preset algorithm includes:
and deleting part of the feature points in the first feature points and the second feature points by adopting a preset first threshold value according to a RANSAC algorithm.
According to one or more embodiments of the present disclosure, after deleting some feature points in the first feature points and the second feature points by using a preset first threshold according to a RANSAC algorithm, the image tracking method further includes:
and when the total number of the first feature points and the second feature points remaining after deletion is smaller than a first preset value, deleting part of the feature points among the first feature points and the second feature points by adopting a preset second threshold according to the RANSAC algorithm, so that the total number of the first feature points and the second feature points remaining after deletion is larger than the first preset value, the second threshold being smaller than the first threshold.
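The two-threshold retry can be sketched as follows; the per-pair agreement scores below stand in for the consensus a real RANSAC run would compute, and the interpretation of the threshold as a score cutoff (so that the smaller second threshold lets more pairs survive) is an assumption for illustration:

```python
def filter_pairs(scores, first_threshold, second_threshold, first_preset):
    """Keep pairs whose agreement score reaches the threshold; if fewer than
    `first_preset` survive, retry with the smaller, more permissive second
    threshold so that more pairs are retained."""
    kept = [s for s in scores if s >= first_threshold]
    if len(kept) < first_preset:
        kept = [s for s in scores if s >= second_threshold]
    return kept

scores = [0.95, 0.9, 0.8, 0.7, 0.6, 0.5]          # hypothetical per-pair scores
kept = filter_pairs(scores, first_threshold=0.85,
                    second_threshold=0.55, first_preset=4)
```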
According to one or more embodiments of the present disclosure, before the image enhancement is performed on the preset first region of the first frame image, the image tracking method further includes:
and acquiring a first area corresponding to a second area of the second frame image in the first frame image according to a preset corresponding rule.
According to one or more embodiments of the present disclosure, after determining a mapping rule of a point on the first frame image with respect to a point on the second frame image according to the first feature point and the second feature point remaining after the deletion, the image tracking method further includes:
updating a first region corresponding to a second region of the second frame image in the first frame image according to the mapping rule;
and taking the mapping rule as a corresponding rule of the first area position of the first frame image and the third area position of the third frame image.
According to one or more embodiments of the present disclosure, before acquiring, in the first frame image, a plurality of first feature points corresponding to a plurality of second feature points preset in the second frame image according to a preset feature matching algorithm, the method further includes:
when the number of second characteristic points in a second area of the second frame image is less than a second preset value, acquiring corner points in the second area;
when the number of the corner points is larger than a third preset value, acquiring a response value of the corner points, wherein the third preset value is the difference between the second preset value and the number of second feature points before the corner points are acquired;
reserving the corner points corresponding to the largest response values, the number of reserved corner points being the third preset value;
and taking the reserved corner points and second feature points in the second area as second feature points.
According to one or more embodiments of the present disclosure, the image tracking method further includes:
and when the number of the corner points in the second area is less than a third preset value, acquiring the corner points in the second frame image so as to enable the number of the corner points to be greater than the third preset value.
According to one or more embodiments of the present disclosure, there is provided an image tracking apparatus including:
the image enhancement module is used for performing image enhancement on a preset first region of a first frame image, the first frame image being the next adjacent frame of a second frame image, the second frame image comprising a preset second region subjected to image enhancement, and the first region corresponding to the second region;
the tracking module is used for acquiring a plurality of first feature points corresponding to a plurality of second feature points preset in a second frame image in the first frame image according to a preset feature matching algorithm, wherein at least part of the second feature points are located in the second area, and at least part of the first feature points are located in the first area;
the deleting module is used for deleting partial feature points in the first feature points and the second feature points according to a preset algorithm;
and the model generation module is used for determining a mapping rule of points on the first frame image relative to points on the second frame image according to the first characteristic points and the second characteristic points which are left after deletion so as to track the next adjacent frame image of the first frame image.
According to one or more embodiments of the present disclosure, the deleting module is specifically configured to delete a part of feature points in the first feature points and the second feature points by using a preset first threshold according to a RANSAC algorithm.
According to one or more embodiments of the present disclosure, the image tracking apparatus may further include:
and the second deleting module is used for deleting part of the feature points in the first feature points and the second feature points by adopting a preset second threshold according to a RANSAC algorithm when the total number of the first feature points and the second feature points left after deletion is smaller than a first preset value, so that the total number of the first feature points and the second feature points left after deletion is larger than the first preset value, and the second threshold is smaller than the first threshold.
According to one or more embodiments of the present disclosure, the image tracking apparatus may further include:
and the area acquisition module is used for acquiring, in the first frame image according to a preset corresponding rule, a first area corresponding to the second area of the second frame image.
According to one or more embodiments of the present disclosure, the image tracking apparatus may further include:
the updating module is used for updating a first area corresponding to a second area of the second frame image in the first frame image according to the mapping rule;
and the corresponding module is used for taking the mapping rule as a corresponding rule of the first area position of the first frame image and the third area position of the third frame image.
According to one or more embodiments of the present disclosure, the image tracking apparatus may further include:
the first corner point acquisition module is used for acquiring corner points in a second area of the second frame image when the number of second feature points in the second area is less than a second preset value;
the response value acquisition module is used for acquiring the response value of the corner points when the number of the corner points is greater than a third preset value, wherein the third preset value is the difference between the second preset value and the number of second feature points before the corner points are acquired;
the reservation module is used for reserving the corner points corresponding to the largest response values, the number of reserved corner points being the third preset value;
and the characteristic module is used for taking the reserved corner points and second characteristic points in the second area as second characteristic points.
According to one or more embodiments of the present disclosure, the image tracking apparatus may further include:
and the second corner acquisition module is used for acquiring corner points in the second frame image when the number of the corner points in the second area is less than a third preset value so as to enable the number of the corner points to be greater than the third preset value.
According to one or more embodiments of the present disclosure, there is provided an electronic device including:
one or more processors;
a memory;
one or more application programs, wherein the one or more application programs are stored in the memory and configured to be executed by the one or more processors, the one or more programs configured to: an image tracking method according to any of the above embodiments is performed.
According to one or more embodiments of the present disclosure, there is provided a computer-readable medium having stored thereon a computer program which, when executed by a processor, implements the image tracking method of any of the above-described embodiments.
The foregoing description is only exemplary of the preferred embodiments of the disclosure and is illustrative of the principles of the technology employed. It will be appreciated by those skilled in the art that the scope of the disclosure herein is not limited to the particular combination of features described above, but also encompasses other embodiments in which any combination of the features described above or their equivalents does not depart from the spirit of the disclosure. For example, the above features and (but not limited to) the features disclosed in this disclosure having similar functions are replaced with each other to form the technical solution.
Further, while operations are depicted in a particular order, this should not be understood as requiring that such operations be performed in the particular order shown or in sequential order. Under certain circumstances, multitasking and parallel processing may be advantageous. Likewise, while several specific implementation details are included in the above discussion, these should not be construed as limitations on the scope of the disclosure. Certain features that are described in the context of separate embodiments can also be implemented in combination in a single embodiment. Conversely, various features that are described in the context of a single embodiment can also be implemented in multiple embodiments separately or in any suitable subcombination.
Although the subject matter has been described in language specific to structural features and/or methodological acts, it is to be understood that the subject matter defined in the appended claims is not necessarily limited to the specific features or acts described above. Rather, the specific features and acts described above are disclosed as example forms of implementing the claims.

Claims (10)

1. An image tracking method, comprising:
performing image enhancement on a preset first region of a first frame image, wherein the first frame image is the frame immediately following a second frame image, the second frame image comprises a preset second region on which image enhancement has been performed, and the first region corresponds to the second region;
acquiring, from the first frame image according to a preset feature matching algorithm, a plurality of first feature points corresponding to a plurality of second feature points preset in the second frame image, wherein at least some of the second feature points are located in the second region, and at least some of the first feature points are located in the first region;
deleting some of the first feature points and second feature points according to a preset algorithm; and
determining, from the first feature points and second feature points remaining after the deletion, a mapping rule of points on the first frame image relative to points on the second frame image, for use in tracking a third frame image, the frame immediately following the first frame image;
wherein the first region is a local region of the first frame image, and the region type of the first region is a weak-texture region.
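The "mapping rule" of claim 1 can be illustrated with a minimal pure-Python sketch that fits a mapping from the matched point pairs. For simplicity the sketch uses a least-squares affine model in place of the perspective homography a real tracker would typically estimate; all function names are illustrative and not part of the patent.

```python
def solve3(m, v):
    """Solve a 3x3 linear system m @ x = v by Gaussian elimination."""
    a = [row[:] + [rv] for row, rv in zip(m, v)]  # augmented matrix
    for col in range(3):
        piv = max(range(col, 3), key=lambda r: abs(a[r][col]))  # partial pivoting
        a[col], a[piv] = a[piv], a[col]
        for r in range(col + 1, 3):
            f = a[r][col] / a[col][col]
            for c in range(col, 4):
                a[r][c] -= f * a[col][c]
    x = [0.0, 0.0, 0.0]
    for r in (2, 1, 0):  # back substitution
        x[r] = (a[r][3] - sum(a[r][c] * x[c] for c in range(r + 1, 3))) / a[r][r]
    return x


def fit_affine(src, dst):
    """Least-squares affine 'mapping rule': dst_x = a*x + b*y + c, dst_y = d*x + e*y + f.

    src: feature points on the second frame image; dst: the matched first
    feature points on the first frame image.
    """
    n = float(len(src))
    sxx = sum(x * x for x, _ in src)
    sxy = sum(x * y for x, y in src)
    syy = sum(y * y for _, y in src)
    sx = sum(x for x, _ in src)
    sy = sum(y for _, y in src)
    m = [[sxx, sxy, sx], [sxy, syy, sy], [sx, sy, n]]  # normal equations
    vx = [sum(s[0] * d[0] for s, d in zip(src, dst)),
          sum(s[1] * d[0] for s, d in zip(src, dst)),
          sum(d[0] for d in dst)]
    vy = [sum(s[0] * d[1] for s, d in zip(src, dst)),
          sum(s[1] * d[1] for s, d in zip(src, dst)),
          sum(d[1] for d in dst)]
    return tuple(solve3(m, vx)) + tuple(solve3(m, vy))


def apply_mapping(params, point):
    """Map a point on the second frame image to the first frame image."""
    a, b, c, d, e, f = params
    x, y = point
    return (a * x + b * y + c, d * x + e * y + f)
```

With at least three non-collinear matched pairs, `fit_affine` recovers the six affine parameters exactly for noise-free data; with noisy matches it returns the least-squares estimate, which is why the claims first delete outlier points before determining the mapping rule.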
2. The image tracking method according to claim 1, wherein the deleting some of the first feature points and second feature points according to a preset algorithm comprises:
deleting some of the first feature points and second feature points using a preset first threshold according to a RANSAC algorithm.
3. The image tracking method according to claim 2, wherein after deleting some of the first feature points and second feature points using a preset first threshold according to a RANSAC algorithm, the method further comprises:
when the total number of the first feature points and second feature points remaining after the deletion is smaller than a first preset value, deleting some of the first feature points and second feature points using a preset second threshold according to the RANSAC algorithm, so that the total number of the first feature points and second feature points remaining after the deletion is larger than the first preset value, wherein the second threshold is smaller than the first threshold.
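The two-stage deletion of claims 2-3 can be sketched as follows. This is an illustrative reading, not the patent's implementation: the "threshold" is treated as a minimum confidence score (e.g. RANSAC inlier support) below which a matched pair is deleted, so the smaller second threshold deletes fewer pairs and leaves more survivors, consistent with claim 3. Function and parameter names are hypothetical.

```python
def prune_matches(matches, scores, first_threshold, second_threshold, min_total):
    """Delete matched feature-point pairs whose score falls below a threshold.

    First pass uses `first_threshold` (claim 2). If fewer than `min_total`
    pairs survive, the pass is redone with the smaller `second_threshold`,
    which deletes fewer pairs and so retains more of them (claim 3).
    """
    kept = [m for m, s in zip(matches, scores) if s >= first_threshold]
    if len(kept) < min_total:
        kept = [m for m, s in zip(matches, scores) if s >= second_threshold]
    return kept
```

The fallback trades robustness for coverage: a stricter cut gives cleaner data for the mapping rule, but too few points make the fit unstable, so the method relaxes the cut only when forced to.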
4. The image tracking method according to claim 1, wherein before the image enhancement of the preset first region of the first frame image, the method further comprises:
acquiring, according to a preset correspondence rule, the first region in the first frame image that corresponds to the second region of the second frame image.
5. The image tracking method according to claim 1, wherein after determining the mapping rule of points on the first frame image relative to points on the second frame image from the first feature points and second feature points remaining after the deletion, the method further comprises:
updating, according to the mapping rule, the first region in the first frame image that corresponds to the second region of the second frame image; and
taking the mapping rule as the correspondence rule between the position of the first region of the first frame image and the position of the third region of the third frame image.
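Claim 5 reuses the mapping rule to carry the tracked region forward. A minimal sketch, assuming the mapping rule is a 3x3 homography applied to the region's corner points (the function name and the corner-based region representation are illustrative assumptions):

```python
def update_region(h, corners):
    """Map a region's corner points through a 3x3 homography matrix `h`.

    Warping the corners of the second region yields the updated first
    region, which then serves as the predicted region for the next frame.
    """
    mapped = []
    for x, y in corners:
        xh = h[0][0] * x + h[0][1] * y + h[0][2]
        yh = h[1][0] * x + h[1][1] * y + h[1][2]
        w = h[2][0] * x + h[2][1] * y + h[2][2]
        mapped.append((xh / w, yh / w))  # perspective divide
    return mapped
```

For a pure translation the bottom row of `h` is (0, 0, 1), the divide is by 1, and the corners simply shift; a general homography additionally rotates, scales, and skews the region.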
6. The image tracking method according to claim 1, wherein before the acquiring, from the first frame image according to a preset feature matching algorithm, a plurality of first feature points corresponding to a plurality of second feature points preset in the second frame image, the method further comprises:
when the number of second feature points in the second region of the second frame image is smaller than a second preset value, acquiring corner points in the second region;
when the number of the corner points is larger than a third preset value, acquiring a response value for each corner point, wherein the third preset value is the difference between the second preset value and the number of second feature points before the corner points are acquired;
retaining the corner points whose response values rank among the top third-preset-value largest response values; and
taking the retained corner points together with the second feature points in the second region as the second feature points.
7. The image tracking method of claim 6, further comprising:
when the number of corner points in the second region is smaller than the third preset value, acquiring corner points in the second frame image so that the number of corner points becomes larger than the third preset value.
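The top-up step of claims 6-7 can be sketched as below. Names are illustrative; the response values would come from a corner detector such as Harris, which the patent does not specify.

```python
def top_up_feature_points(existing, corners_with_response, second_preset_value):
    """Top up the region's feature-point set from detected corners.

    `existing` are the second feature points already in the region;
    `corners_with_response` is a list of (point, response) pairs. The
    deficit below plays the role of the 'third preset value': only that
    many corners, those with the largest response values, are retained.
    """
    deficit = second_preset_value - len(existing)
    if deficit <= 0:
        return list(existing)  # already enough feature points
    # Sort candidate corners by detector response, strongest first,
    # and keep only the top `deficit` of them.
    best = sorted(corners_with_response, key=lambda cr: cr[1], reverse=True)[:deficit]
    return list(existing) + [pt for pt, _ in best]
```

If the region itself yields fewer corners than the deficit, claim 7 widens the search to the whole second frame image before this retention step is applied.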
8. An image tracking apparatus, comprising:
an image enhancement module, configured to perform image enhancement on a preset first region of a first frame image, wherein the first frame image is the frame immediately following a second frame image, the second frame image comprises a preset second region on which image enhancement has been performed, and the first region corresponds to the second region;
a tracking module, configured to acquire, from the first frame image according to a preset feature matching algorithm, a plurality of first feature points corresponding to a plurality of second feature points preset in the second frame image, wherein at least some of the second feature points are located in the second region, and at least some of the first feature points are located in the first region;
a deletion module, configured to delete some of the first feature points and second feature points according to a preset algorithm; and
a model generation module, configured to determine, from the first feature points and second feature points remaining after the deletion, a mapping rule of points on the first frame image relative to points on the second frame image, for use in tracking the frame immediately following the first frame image;
wherein the first region is a local region of the first frame image, and the region type of the first region is a weak-texture region.
9. An electronic device, comprising:
one or more processors;
a memory;
one or more application programs, wherein the one or more application programs are stored in the memory and configured to be executed by the one or more processors to perform the image tracking method according to any one of claims 1-7.
10. A computer-readable medium, on which a computer program is stored which, when being executed by a processor, carries out the image tracking method of any one of claims 1 to 7.
CN202010790275.5A 2020-08-07 2020-08-07 Image tracking method and device, electronic equipment and computer readable medium Active CN111915532B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010790275.5A CN111915532B (en) 2020-08-07 2020-08-07 Image tracking method and device, electronic equipment and computer readable medium


Publications (2)

Publication Number Publication Date
CN111915532A CN111915532A (en) 2020-11-10
CN111915532B true CN111915532B (en) 2022-02-11


Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112783995B * 2020-12-31 2022-06-03 Hangzhou Hikrobot Technology Co., Ltd. V-SLAM map checking method, device and equipment

Citations (4)

Publication number Priority date Publication date Assignee Title
CN102999759A (en) * 2012-11-07 2013-03-27 Southeast University Optical-flow-based vehicle motion state estimation method
CN109218695A (en) * 2017-06-30 2019-01-15 China Telecom Corp., Ltd. Video image enhancement method, device, analysis system and storage medium
CN110188815A (en) * 2019-05-24 2019-08-30 Guangzhou Baiguoyuan Information Technology Co., Ltd. Feature point sampling method, device, equipment and storage medium
CN111144441A (en) * 2019-12-03 2020-05-12 Southeast University DSO photometric parameter estimation method and device based on feature matching

Family Cites Families (3)

Publication number Priority date Publication date Assignee Title
TWI420906B (en) * 2010-10-13 2013-12-21 Ind Tech Res Inst Tracking system and method for regions of interest and computer program product thereof
US10176592B2 (en) * 2014-10-31 2019-01-08 Fyusion, Inc. Multi-directional structured image array capture on a 2D graph
CN110555862A (en) * 2019-08-23 2019-12-10 Beijing Sumavision Technologies Co., Ltd. Target tracking method, device, electronic equipment and computer-readable storage medium




Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant
CP01 Change in the name or title of a patent holder

Address: 100041 B-0035, 2nd floor, Building 3, 30 Shixing Street, Shijingshan District, Beijing
Patentee before: BEIJING BYTEDANCE NETWORK TECHNOLOGY Co.,Ltd.
Patentee after: Tiktok vision (Beijing) Co.,Ltd.

CP01 Change in the name or title of a patent holder

Address: 100041 B-0035, 2nd floor, Building 3, 30 Shixing Street, Shijingshan District, Beijing
Patentee before: Tiktok vision (Beijing) Co.,Ltd.
Patentee after: Douyin Vision Co.,Ltd.