CN114627163A - Global image target tracking method and system based on rapid scene splicing - Google Patents

Global image target tracking method and system based on rapid scene splicing

Info

Publication number
CN114627163A
CN114627163A
Authority
CN
China
Prior art keywords
spliced
area
region
topological relation
splicing
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202210289942.0A
Other languages
Chinese (zh)
Inventor
王海滨
纪文峰
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Qingdao Genjian Intelligent Technology Co ltd
Original Assignee
Qingdao Genjian Intelligent Technology Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Qingdao Genjian Intelligent Technology Co ltd
Priority to CN202210289942.0A priority Critical patent/CN114627163A/en
Publication of CN114627163A publication Critical patent/CN114627163A/en
Pending legal-status Critical Current

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/20Analysis of motion
    • G06T7/292Multi-camera tracking
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T3/00Geometric image transformations in the plane of the image
    • G06T3/40Scaling of whole images or parts thereof, e.g. expanding or contracting
    • G06T3/4038Image mosaicing, e.g. composing plane images from plane sub-images
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2200/00Indexing scheme for image data processing or generation, in general
    • G06T2200/32Indexing scheme for image data processing or generation, in general involving image mosaicing

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Multimedia (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Image Analysis (AREA)

Abstract

The invention provides a global image target tracking method and system based on rapid scene splicing, comprising the following steps: acquiring block images of each region at a plurality of continuous moments; for the block images of all regions at each moment, calling the topological relation table and topological relation dictionary of each region, counting, based on the topological relation table, the number of adjacent but un-spliced regions for each un-spliced region, selecting the un-spliced region corresponding to the maximum value as the spliced region, splicing the block images of the spliced region and its adjacent regions according to the topological relation dictionary of the spliced region, and deleting the topological relation table of the spliced region; judging whether any un-spliced region remains and, if so, reselecting the spliced region among the un-spliced regions, splicing the block images of the spliced region and its adjacent regions, and deleting the topological relation table of the spliced region, until no un-spliced region remains, thereby obtaining a global image; and tracking the target with a tracking algorithm based on the global images at all moments. The splicing time is shortened and the splicing efficiency is improved.

Description

Global image target tracking method and system based on rapid scene splicing
Technical Field
The invention belongs to the technical field of image processing, and particularly relates to a global image target tracking method and system based on rapid scene splicing.
Background
The statements in this section merely provide background information related to the present disclosure and may not necessarily constitute prior art.
With the rapid development of deep learning, image stitching algorithms have matured. Many computer vision tasks are no longer satisfied with single-shot images and are moving towards the more difficult multi-shot and cross-shot settings, which places requirements on the speed of cross-shot splicing and of global image target tracking.
Disclosure of Invention
In order to solve the technical problems in the background art, the invention provides a global image target tracking method and system based on rapid scene splicing, which do not need to traverse all topological relation data when obtaining a global image, thereby shortening the splicing time and improving the splicing efficiency.
In order to achieve the purpose, the invention adopts the following technical scheme:
the invention provides a global image target tracking method based on rapid scene splicing, which comprises the following steps:
acquiring block images of each area at a plurality of continuous moments;
for the block images of all regions at each moment, calling the topological relation table and topological relation dictionary of each region, counting, based on the topological relation table, the number of adjacent but un-spliced regions for each un-spliced region, selecting the un-spliced region corresponding to the maximum value as the spliced region, splicing the block images of the spliced region and its adjacent regions according to the topological relation dictionary of the spliced region, and deleting the topological relation table of the spliced region; judging whether any un-spliced region remains and, if so, reselecting the spliced region among the un-spliced regions, splicing the block images of the spliced region and its adjacent regions, and deleting the topological relation table of the spliced region, until no un-spliced region remains, thereby obtaining a global image;
and tracking the target by adopting a tracking algorithm based on the global images at all the moments.
Further, when a plurality of un-spliced regions correspond to the maximum value, one of them is randomly selected as the spliced region.
Further, a key in the topological relation dictionary of a region represents the label of an adjacent region of that region.
Further, a value in the topological relation dictionary of a region represents the nodes on the common edge between the region and the corresponding adjacent region.
Further, there are at least four nodes per region, and there are at least two identical nodes on the common edge between a region and its adjacent regions.
The second aspect of the present invention provides a global image target tracking system based on fast scene splicing, which includes:
a block image acquisition module configured to: acquiring block images of each area at a plurality of continuous moments;
a stitching module configured to: for the block images of all regions at each moment, calling the topological relation table and topological relation dictionary of each region, counting, based on the topological relation table, the number of adjacent but un-spliced regions for each un-spliced region, selecting the un-spliced region corresponding to the maximum value as the spliced region, splicing the block images of the spliced region and its adjacent regions according to the topological relation dictionary of the spliced region, and deleting the topological relation table of the spliced region; judging whether any un-spliced region remains and, if so, reselecting the spliced region among the un-spliced regions and splicing its block images with those of its adjacent regions, until no un-spliced region remains, thereby obtaining a global image;
a target tracking module configured to: and tracking the target by adopting a tracking algorithm based on the global images at all the moments.
Further, a key in the topological relation dictionary of a region represents the label of an adjacent region of that region.
Further, a value in the topological relation dictionary of a region represents the nodes on the common edge between the region and the corresponding adjacent region.
A third aspect of the present invention provides a computer-readable storage medium, on which a computer program is stored, which when executed by a processor implements the steps in a global image target tracking method based on fast scene splicing as described above.
A fourth aspect of the present invention provides a computer device, including a memory, a processor, and a computer program stored in the memory and executable on the processor, where the processor executes the computer program to implement the steps in the method for tracking global image targets based on fast scene splicing as described above.
Compared with the prior art, the invention has the beneficial effects that:
the invention provides a global image target tracking method based on rapid scene splicing, which collects cross-shot region data, logically applies cross-shot region topological relation to a scene splicing and target tracking algorithm, and provides a splicing optimization algorithm, so that all topological relation data do not need to be traversed when a global image is obtained, the time for splicing is shortened, and the splicing efficiency is improved.
Drawings
The accompanying drawings, which are incorporated in and constitute a part of this specification, are included to provide a further understanding of the invention; they illustrate exemplary embodiments of the invention and, together with the description, serve to explain the invention without limiting it.
FIG. 1 is a flowchart of a global image target tracking method based on fast scene splicing according to a first embodiment of the present invention;
FIG. 2 is a schematic diagram of a global image target tracking method based on fast scene splicing according to a first embodiment of the present invention;
FIG. 3 is a schematic diagram of a partition area and a node according to a first embodiment of the present invention;
FIG. 4 is a schematic diagram of four-region stitching according to a first embodiment of the present invention;
fig. 5 is a schematic diagram of nine-region stitching according to the first embodiment of the present invention.
Detailed Description
The invention is further described with reference to the following figures and examples.
It is to be understood that the following detailed description is exemplary and is intended to provide further explanation of the invention as claimed. Unless defined otherwise, all technical and scientific terms used herein have the same meaning as commonly understood by one of ordinary skill in the art to which this invention belongs.
It is noted that the terminology used herein is for the purpose of describing particular embodiments only and is not intended to be limiting of exemplary embodiments according to the invention. As used herein, the singular forms "a", "an" and "the" are intended to include the plural forms as well, unless the context clearly indicates otherwise. It should further be understood that when the terms "comprises" and/or "comprising" are used in this specification, they specify the presence of the stated features, steps, operations, devices, components and/or combinations thereof.
Example one
The embodiment provides a global image target tracking method based on rapid scene splicing. As shown in fig. 1 and fig. 2, block images at a plurality of continuous moments acquired by the image acquisition devices arranged in each region are obtained. For the block images of all regions at each moment, the topological relation table and topological relation dictionary of each region are called; based on the topological relation table, the number of adjacent but un-spliced regions is counted for each un-spliced region; the un-spliced region corresponding to the maximum value is selected as the spliced region; the block images of the spliced region and its adjacent regions are spliced according to the topological relation dictionary of the spliced region; the topological relation table of the spliced region is deleted, and the spliced region is removed from the topological relation tables of the remaining regions; it is then judged whether any un-spliced region remains, and if so, the spliced region is reselected among the un-spliced regions, its block images are spliced with those of its adjacent regions, and its topological relation table is deleted, until no un-spliced region remains and a global image is obtained. Based on the global images at all moments, the target is tracked with a tracking algorithm. The method specifically comprises the following steps:
step 1, acquiring the label and the node of each region.
Specifically, a node marking instruction from the user is obtained, and the image acquisition device of each region is controlled to acquire an image; the block image acquired by each image acquisition device is transmitted to a display device and displayed, and the user labels the block image of each region and marks its nodes. The label and nodes marked by the user on the block image of each region are then acquired as the label and nodes of that region.
Specifically, for a given scene, the user can divide the scene into a plurality of regions, each region is provided with an image acquisition device, the mounting angles of the image acquisition devices are adjusted to be the same, and their data acquisition times are synchronized.
For example, the scene is divided into 4 regions, and the block images acquired by the image acquisition device of each region are labeled 0, 1, 2 and 3 respectively, so that the labels of the corresponding regions are 0, 1, 2 and 3.
Each block image is marked with at least four nodes (i.e. the positions of at least four pixels in each block image are marked as nodes, and the block images acquired by the image acquisition devices of two adjacent regions share at least two identical nodes); the nodes are marked as boundary points of the region division. As shown in fig. 3, N1, N2, etc. are the nodes marked for each labeled region. That is, each region has at least four nodes, and there are at least two identical nodes on the common edge between a region and its adjacent regions.
The nodes that the user marks on the block image acquired by the image acquisition device of a region are used as the nodes of all block images subsequently acquired by that device.
Step 2, constructing a topological relation table and a topological relation dictionary among all the regions based on the label and nodes of each region.
The topological relation refers to the spatial connection and adjacency between image elements. The shot information alone can only specify whether two shots are adjacent; it cannot describe along which edges the collected region images adjoin.
The topological relation dictionary is dictionary data, composed of key-value pairs, that stores the topological relation of the block regions. The topological relations among different regions are represented with a list and a dictionary data structure: the list data represents the topological adjacency among regions, the keys in the dictionary data represent the labels of adjacent regions, and the values in the dictionary represent the node information of the common edges with those adjacent regions.
Specifically, in the topological relation table of a region, the number of elements equals the number of regions, and each element value represents the adjacency relation with another region: if at least two identical nodes exist between two different regions, the element representing the adjacency between them in the topological relation table is 1; otherwise, the element is -1.
For example, the length of the list is defined to be N, where N = 4. The index of an element in the topological relation table indicates the label of the corresponding region, and the element value indicates the adjacency between that region and the current region: "1" means the region is adjacent to the current region, "0" means the region is the current region, and "-1" means the region is not adjacent to the current region. As shown in fig. 3, taking region 0 as an example, the topological relation list is [0, 1, 1, -1].
Each dictionary records the label and node information of one region and its adjacent regions; for example, the topological relation dictionary of the region with label 0 is {1: "N3 N4", 2: "N1 N4"}.
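By way of illustration only, the topological relation table and dictionary of this four-region example could be held in Python structures such as the following; only region 0's dictionary entry is stated in the text, so the node names used for regions 1 to 3 are assumptions.

# Illustrative Python structures (not reproduced from the patent): topological
# relation tables and dictionaries for the four-region layout of fig. 3/fig. 4.
# Table of region r: index = region label, 0 = the region itself,
# 1 = adjacent, -1 = not adjacent.
topo_table = {
    0: [0, 1, 1, -1],
    1: [1, 0, -1, 1],
    2: [1, -1, 0, 1],
    3: [-1, 1, 1, 0],
}

# Dictionary of region r: key = label of an adjacent region,
# value = nodes on the common edge with that region.  Only region 0's entry
# appears in the text; the node names for regions 1-3 are assumed here.
topo_dict = {
    0: {1: "N3 N4", 2: "N1 N4"},
    1: {0: "N3 N4", 3: "N4 N5"},
    2: {0: "N1 N4", 3: "N4 N6"},
    3: {1: "N4 N5", 2: "N4 N6"},
}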
Step 3, acquiring the block images at a plurality of continuous moments acquired by the image acquisition devices arranged in each region.
For example, the scene is divided into 4 regions labeled 0, 1, 2 and 3, and the acquired block images are recorded as Img_t^0, Img_t^1, Img_t^2 and Img_t^3, where t denotes the time step of data acquisition, i.e. the t-th frame (the t-th of the several continuous moments), and the superscript 0, 1, 2, 3 denotes the region; for example, Img_2^2 denotes the 2nd frame image acquired in region 2.
Step 4, splicing the block images collected in all regions: the block images collected at the same moment in step 3 are spliced using the SURF (Speeded-Up Robust Features) splicing algorithm according to the topological relation table and the topological relation dictionary obtained in step 2. This step specifically comprises the following:
based on the topological relation table, counting the number of the adjacent but un-spliced areas in the un-spliced area (i.e. the number of elements with the element value of 1 in the topological relation table corresponding to each un-spliced area), selecting the spliced area as area 0 (or area 1, area 2 or area 3) by using a splicing optimization algorithm, traversing the keys according to the corresponding topological relation dictionary data in step 2, further indexing the values, then, the SURF splicing algorithm is called to splice the spliced area and the blocked images of the corresponding adjacent areas, the topological relation table of the spliced area is deleted, meanwhile, elements between the topological relation tables corresponding to the regions which are not spliced and the spliced regions are changed into-1, and after the traversal of the topological relation dictionary data of the spliced regions is finished, a stitched image can be obtained with the stitched area and all areas adjacent to it, as shown in fig. 4.
The splicing optimization algorithm is as follows. Using the list or dictionary data from step 2, the number of adjacent but un-spliced regions is counted for each un-spliced region (for the list, the number of elements with value 1 in the region's topological relation list gives the number of regions adjacent to it but not yet spliced; for the dictionary data, the number of key-value pairs gives the number of adjacent regions, from which the regions already spliced in previous splices are subtracted, since the list elements tell the algorithm which regions have already been spliced while the dictionary data only contains the region's adjacency information). The region with the largest count is taken as the region to be spliced. The optimization algorithm is called before each splice to select the region to be spliced; once selected, the SURF splicing algorithm is called to splice it with its adjacent regions according to the topological information, and so on until the global scene is obtained. Regarding the selection of the splicing region, as shown in fig. 4, when images divided into four regions are spliced, any of regions 0, 1, 2 or 3 can be selected first, because at the initial moment each region is adjacent to 2 un-spliced regions; only one un-spliced region is left at the second splice, so splicing is completed in only two steps, as shown in fig. 4. As shown in fig. 5, when images divided into nine regions are spliced, region 4 is selected first, since its number of adjacent un-spliced regions is the maximum value of 4 at the initial moment; at the second splice, the number of adjacent un-spliced regions of the remaining four un-spliced regions is 0, and these four regions are simply spliced in an arbitrary order. Through the above steps, the images of the 4 regions acquired at moment t (0 ≤ t ≤ 100), namely Img_t^0, Img_t^1, Img_t^2 and Img_t^3, are spliced into a global image Img_t.
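As a sketch only, the splicing optimization loop of this step could be written as follows, reusing the illustrative topo_table, topo_dict and stitch_pair from the earlier snippets; the way partial mosaics are merged into the growing global image is a simplification rather than the exact procedure prescribed here.

def build_global_image(images, topo_table, topo_dict):
    # images: dict mapping region label -> block image at the current moment.
    # topo_table / topo_dict: the structures sketched above.
    tables = {r: t[:] for r, t in topo_table.items()}  # working copies of un-spliced regions
    global_img = None

    while tables:  # un-spliced regions remain
        # Count, for each un-spliced region, its adjacent but un-spliced regions.
        counts = {r: sum(1 for other, v in enumerate(tables[r])
                         if v == 1 and other in tables)
                  for r in tables}
        region = max(counts, key=counts.get)  # region with the maximum count

        # Splice the selected region with each not-yet-spliced adjacent region
        # listed in its topological relation dictionary.
        merged = images[region]
        for neighbour in topo_dict[region]:
            if neighbour in tables:
                merged = stitch_pair(merged, images[neighbour])
                del tables[neighbour]

        # Merge the partial mosaic into the growing global image (simplification).
        global_img = merged if global_img is None else stitch_pair(global_img, merged)

        del tables[region]            # delete the spliced region's topological relation table
        for t in tables.values():     # mark the spliced region as unavailable (-1)
            t[region] = -1

    return global_img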
Step 5, realizing target tracking on the global image Img_t obtained after splicing by using the Deepsort tracking algorithm.
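The text only names the Deepsort algorithm; a minimal usage sketch follows, assuming the third-party deep-sort-realtime package and a hypothetical detect() helper, neither of which is specified in this application.

from deep_sort_realtime.deepsort_tracker import DeepSort

tracker = DeepSort(max_age=30)

for t, global_img in enumerate(global_images):   # one spliced frame per moment t
    # detect() is an assumed external detector returning a list of
    # ([left, top, width, height], confidence, class) tuples.
    detections = detect(global_img)
    tracks = tracker.update_tracks(detections, frame=global_img)
    for track in tracks:
        if track.is_confirmed():
            print(t, track.track_id, track.to_ltrb())  # track ID and box in the global image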
The invention collects cross-shot region data and applies the logic of the cross-shot region topological relation to the scene splicing and target tracking algorithms. First, the global scene is divided into a plurality of regions and each region is calibrated; then a shot is erected to collect the image data of each region, with the shots mounted at the same angle and their data acquisition times synchronized across shots; then the boundary points of the divided regions are marked and the topological relation between the regions is constructed, the constructed topological relation being represented with a list and a dictionary data structure; finally, the block images collected at the same moment are spliced using a general splicing algorithm together with the topological relation data. We packaged the algorithm for practical testing. The method is general and highly practical. The algorithm combines the cross-shot region topological relation with the splicing and target tracking algorithms and applies the topological logic to a rapid scene splicing algorithm: specifically, the topological relations such as adjacency and nodes between regions are represented and stored with a list and a dictionary data structure, splicing optimization is proposed on this basis, and rapid splicing and target tracking of the scene are realized.
Example two
The embodiment provides a global image target tracking system based on rapid scene splicing, which specifically comprises the following modules:
a block image acquisition module configured to: acquiring block images of each area at a plurality of continuous moments;
a stitching module configured to: for the block images of all regions at each moment, calling the topological relation table and topological relation dictionary of each region, counting, based on the topological relation table, the number of adjacent but un-spliced regions for each un-spliced region, selecting the un-spliced region corresponding to the maximum value as the spliced region, splicing the block images of the spliced region and its adjacent regions according to the topological relation dictionary of the spliced region, and deleting the topological relation table of the spliced region; judging whether any un-spliced region remains and, if so, reselecting the spliced region among the un-spliced regions and splicing its block images with those of its adjacent regions, until no un-spliced region remains, thereby obtaining a global image;
a target tracking module configured to: and tracking the target by adopting a tracking algorithm based on the global images at all the moments.
Wherein a key in the topological relation dictionary of a region represents the label of an adjacent region of that region.
Wherein a value in the topological relation dictionary of a region represents the nodes on the common edge between the region and the corresponding adjacent region.
It should be noted that, each module in the present embodiment corresponds to each step in the first embodiment one to one, and the specific implementation process is the same, which is not described herein again.
EXAMPLE III
The present embodiment provides a computer-readable storage medium, on which a computer program is stored, which when executed by a processor implements the steps in the method for tracking global image targets based on fast scene splicing as described in the first embodiment.
Example four
The embodiment provides a computer device, which includes a memory, a processor, and a computer program stored in the memory and executable on the processor, where the processor executes the computer program to implement the steps in the global image target tracking method based on fast scene splicing as described in the first embodiment.
As will be appreciated by one skilled in the art, embodiments of the present invention may be provided as a method, system, or computer program product. Accordingly, the present invention may take the form of a hardware embodiment, a software embodiment, or an embodiment combining software and hardware aspects. Furthermore, the present invention may take the form of a computer program product embodied on one or more computer-usable storage media (including, but not limited to, disk storage, optical storage, and the like) having computer-usable program code embodied therein.
The present invention has been described with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems), and computer program products according to embodiments of the invention. It will be understood that each flow and/or block of the flow diagrams and/or block diagrams, and combinations of flows and/or blocks in the flow diagrams and/or block diagrams, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general purpose computer, special purpose computer, embedded processor, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, create means for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be stored in a computer-readable memory that can direct a computer or other programmable data processing apparatus to function in a particular manner, such that the instructions stored in the computer-readable memory produce an article of manufacture including instruction means which implement the function specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be loaded onto a computer or other programmable data processing apparatus to cause a series of operational steps to be performed on the computer or other programmable apparatus to produce a computer implemented process such that the instructions which execute on the computer or other programmable apparatus provide steps for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
It will be understood by those skilled in the art that all or part of the processes of the methods of the embodiments described above can be implemented by a computer program, which can be stored in a computer-readable storage medium, and when executed, can include the processes of the embodiments of the methods described above. The storage medium may be a magnetic disk, an optical disk, a Read-Only Memory (ROM), a Random Access Memory (RAM), or the like.
The above description is only a preferred embodiment of the present invention and is not intended to limit the present invention, and various modifications and changes may be made by those skilled in the art. Any modification, equivalent replacement, or improvement made within the spirit and principle of the present invention should be included in the protection scope of the present invention.

Claims (10)

1. A global image target tracking method based on rapid scene splicing is characterized by comprising the following steps:
acquiring block images of each area at a plurality of continuous moments;
for the block images of all regions at each moment, calling the topological relation table and topological relation dictionary of each region, counting, based on the topological relation table, the number of adjacent but un-spliced regions for each un-spliced region, selecting the un-spliced region corresponding to the maximum value as the spliced region, splicing the block images of the spliced region and its adjacent regions according to the topological relation dictionary of the spliced region, and deleting the topological relation table of the spliced region; judging whether any un-spliced region remains and, if so, reselecting the spliced region among the un-spliced regions, splicing the block images of the spliced region and its adjacent regions, and deleting the topological relation table of the spliced region, until no un-spliced region remains, thereby obtaining a global image;
and tracking the target by adopting a tracking algorithm based on the global images at all the moments.
2. The global image target tracking method based on rapid scene splicing as claimed in claim 1, wherein when there are a plurality of un-spliced regions corresponding to the maximum value, an un-spliced region corresponding to the maximum value is randomly selected as a spliced region.
3. The global image target tracking method based on fast scene splicing as claimed in claim 1, wherein a key in a topological relation dictionary of a region represents a label of an adjacent region of the region.
4. The global image target tracking method based on fast scene splicing as claimed in claim 1, wherein the value in the topological relation dictionary of a region represents the node of the common edge of the region and the adjacent region.
5. The global image target tracking method based on fast scene splicing as claimed in claim 4, wherein each region has at least four nodes, and at least two same nodes exist on the common edge between a region and its adjacent region.
6. A global image target tracking system based on fast scene splicing is characterized by comprising:
a block image acquisition module configured to: acquiring block images of each area at a plurality of continuous moments;
a stitching module configured to: for the block images of all regions at each moment, calling the topological relation table and topological relation dictionary of each region, counting, based on the topological relation table, the number of adjacent but un-spliced regions for each un-spliced region, selecting the un-spliced region corresponding to the maximum value as the spliced region, splicing the block images of the spliced region and its adjacent regions according to the topological relation dictionary of the spliced region, and deleting the topological relation table of the spliced region; judging whether any un-spliced region remains and, if so, reselecting the spliced region among the un-spliced regions and splicing its block images with those of its adjacent regions, until no un-spliced region remains, thereby obtaining a global image;
a target tracking module configured to: and tracking the target by adopting a tracking algorithm based on the global images at all the moments.
7. The global image target tracking system based on fast scene splicing as claimed in claim 6, wherein the key in the topological relation dictionary of a region represents the label of the adjacent region of the region.
8. The global image target tracking system based on fast scene splicing as claimed in claim 6, wherein the value in the topological relation dictionary of a region represents the node of the common edge of the region and the adjacent region.
9. A computer-readable storage medium, on which a computer program is stored, which, when being executed by a processor, implements the steps of a fast scene stitching based global image target tracking method according to any one of claims 1 to 5.
10. A computer device comprising a memory, a processor and a computer program stored in the memory and executable on the processor, wherein the processor implements the steps of the method for tracking global image target based on fast scene splicing according to any one of claims 1 to 5 when executing the program.
CN202210289942.0A 2022-03-23 2022-03-23 Global image target tracking method and system based on rapid scene splicing Pending CN114627163A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210289942.0A CN114627163A (en) 2022-03-23 2022-03-23 Global image target tracking method and system based on rapid scene splicing

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202210289942.0A CN114627163A (en) 2022-03-23 2022-03-23 Global image target tracking method and system based on rapid scene splicing

Publications (1)

Publication Number Publication Date
CN114627163A (en) 2022-06-14

Family

ID=81904319

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210289942.0A Pending CN114627163A (en) 2022-03-23 2022-03-23 Global image target tracking method and system based on rapid scene splicing

Country Status (1)

Country Link
CN (1) CN114627163A (en)

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116233615A (en) * 2023-05-08 2023-06-06 深圳世国科技股份有限公司 Scene-based linkage type camera control method and device
CN117522925A (en) * 2024-01-05 2024-02-06 成都合能创越软件有限公司 Method and system for judging object motion state in mobile camera under attention mechanism
CN117522925B (en) * 2024-01-05 2024-04-16 成都合能创越软件有限公司 Method and system for judging object motion state in mobile camera under attention mechanism

Similar Documents

Publication Publication Date Title
CN114627163A (en) Global image target tracking method and system based on rapid scene splicing
US10360494B2 (en) Convolutional neural network (CNN) system based on resolution-limited small-scale CNN modules
CN108399373B (en) The model training and its detection method and device of face key point
JP5797789B2 (en) Method and system for quasi-duplicate image retrieval
US10559090B2 (en) Method and apparatus for calculating dual-camera relative position, and device
US20130266211A1 (en) Stereo vision apparatus and method
CN104966270A (en) Multi-image stitching method
WO2023159558A1 (en) Real-time target tracking method, device, and storage medium
CN106504196A (en) A kind of panoramic video joining method and equipment based on space sphere
CN107976804A (en) A kind of design method of lens optical system, device, equipment and storage medium
CN110544202A (en) parallax image splicing method and system based on template matching and feature clustering
EP3043315A1 (en) Method and apparatus for generating superpixels for multi-view images
CN105608423A (en) Video matching method and device
CN115578260B (en) Attention method and system for directional decoupling of image super-resolution
CN107767414A (en) The scan method and system of mixed-precision
Guo et al. Deep network with spatial and channel attention for person re-identification
CN116797830A (en) Image risk classification method and device based on YOLOv7
CN110349166A (en) A kind of blood vessel segmentation method, device and equipment being directed to retinal images
CN106296580A (en) A kind of method and device of image mosaic
CN108460768A (en) The video perpetual object dividing method and device of stratification time domain cutting
Muresan et al. Improving local stereo algorithms using binary shifted windows, fusion and smoothness constraint
CN114419356A (en) Detection method, system, equipment and storage medium for densely-arranged power equipment
CN110490877A (en) Binocular stereo image based on Graph Cuts is to Target Segmentation method
JP4768358B2 (en) Image search method
CN106296568A (en) Determination method, device and the client of a kind of lens type

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination