CN113657219A - Video object detection tracking method and device and computing equipment - Google Patents
- Publication number
- CN113657219A (application number CN202110882116.2A)
- Authority
- CN
- China
- Prior art keywords
- moving object
- extracting
- tracking
- original video
- foreground
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F18/00—Pattern recognition
- G06F18/20—Analysing
- G06F18/24—Classification techniques
- G06F18/241—Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches
- G06F18/2411—Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches based on the proximity to a decision surface, e.g. support vector machines
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/10—Segmentation; Edge detection
- G06T7/11—Region-based segmentation
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/10—Segmentation; Edge detection
- G06T7/136—Segmentation; Edge detection involving thresholding
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/10—Segmentation; Edge detection
- G06T7/194—Segmentation; Edge detection involving foreground-background segmentation
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/20—Analysis of motion
- G06T7/215—Motion-based segmentation
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/10—Image acquisition modality
- G06T2207/10016—Video; Image sequence
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/20—Special algorithmic details
- G06T2207/20024—Filtering details
- G06T2207/20032—Median filtering
Landscapes
- Engineering & Computer Science (AREA)
- Theoretical Computer Science (AREA)
- Physics & Mathematics (AREA)
- Computer Vision & Pattern Recognition (AREA)
- General Physics & Mathematics (AREA)
- Data Mining & Analysis (AREA)
- Artificial Intelligence (AREA)
- Bioinformatics & Cheminformatics (AREA)
- Bioinformatics & Computational Biology (AREA)
- Evolutionary Biology (AREA)
- Evolutionary Computation (AREA)
- Life Sciences & Earth Sciences (AREA)
- General Engineering & Computer Science (AREA)
- Multimedia (AREA)
- Image Analysis (AREA)
Abstract
The application discloses a video object detection and tracking method, a corresponding apparatus, and a computing device. The method extracts, from an original video, moving objects that match a specified moving-object category, and specifically comprises: constructing a background model with a GMM algorithm, extracting moving-object regions from the original video with the background model, extracting features from the moving-object regions with a HOG operator, classifying the features with an SVM classifier, and retaining the moving objects that match the specified category; tracking each moving object with a KCF tracker; and calculating a confidence for the moving-object extraction and a confidence for the moving-object tracking, and outputting the tracking result with the higher confidence. The apparatus comprises a detection module, a tracking module and a decision module. The computing device comprises a memory, a processor and a computer program stored in the memory and executable by the processor, the processor implementing the above method when executing the computer program.
Description
Technical Field
The present application relates to the field of computer vision, and more particularly to the detection and tracking of video objects.
Background
Object tracking is a hot topic in computer vision. In recent years, the main algorithms for extracting moving objects have been the optical flow method, the frame-difference method, and background modeling. These methods are mature for object tracking, but each is limited in its field of application. A further approach is the Gaussian mixture background model, built on the single-Gaussian background model; because the model adapts as the background changes, it improves the robustness of the algorithm. In addition, recent research shows that the KCF tracker, a correlation-based tracker, achieves good performance: it analyzes frames in the Fourier domain to speed up processing, tracks objects in real time, and retains a degree of robustness in complex scenes.
Among these methods, the optical flow method is easily affected by lighting, computationally heavy, and poor in real-time performance. The frame-difference method extracts moving objects quickly by directly comparing corresponding pixel values between two adjacent frames, but it is sensitive to noise and prone to false or missed extractions. With background modeling, the background pixel values in a complex scene follow a multimodal distribution, and a single-model method cannot obtain a corresponding background model. Finally, although the KCF tracker has a degree of robustness in complex scenes, it cannot automatically initialize the object position in the first frame.
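As an illustration of the frame-difference method discussed above (a minimal pure-Python sketch, not the patented method; the threshold value is an assumption chosen for this toy example):

```python
# Minimal frame-difference sketch (illustrative only).
# Frames are grayscale images given as lists of lists of ints in [0, 255].

def frame_difference_mask(prev_frame, curr_frame, threshold=30):
    """Return a binary mask marking pixels whose value changed by more
    than `threshold` between two consecutive frames."""
    return [
        [1 if abs(c - p) > threshold else 0 for p, c in zip(prow, crow)]
        for prow, crow in zip(prev_frame, curr_frame)
    ]

prev_f = [[10, 10, 10],
          [10, 10, 10]]
curr_f = [[10, 200, 10],
          [10, 10, 90]]
mask = frame_difference_mask(prev_f, curr_f)
# The two changed pixels (deltas 190 and 80) exceed the threshold and are marked 1.
```

The sketch also makes the noted weakness visible: a single noisy pixel whose value jumps past the threshold is marked as motion just like a real object.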
Disclosure of Invention
It is an object of the present application to overcome the above problems, or at least to partially solve or mitigate them.
According to an aspect of the present application, there is provided a video object detection tracking method, including:
extracting, from an original video, moving objects that match a specified moving-object category, which specifically comprises: constructing a background model with a GMM algorithm, extracting moving-object regions from the original video with the background model, extracting features from the moving-object regions with a HOG operator, classifying the features with an SVM classifier, and retaining the moving objects that match the specified category;
tracking the moving object with a KCF tracker;
and calculating a confidence for the moving-object extraction and a confidence for the moving-object tracking, and outputting the tracking result with the higher confidence.
Optionally, extracting a moving-object region from the original video with the background model comprises:
extracting a foreground from the original video with the background model;
processing the foreground; and
extracting a moving-object region from the original video according to the processed foreground.
Optionally, processing the foreground comprises: performing median filtering, morphological closing and hole filling on the foreground in sequence.
Optionally, calculating a confidence for the moving-object extraction and a confidence for the moving-object tracking and outputting the tracking result with the higher confidence comprises:
calculating a Bhattacharyya coefficient for the extracted moving object as the extraction confidence, calculating a Bhattacharyya coefficient for the tracked moving object as the tracking confidence, and calculating an overlap value between the same object in the current frame and in the previous frame;
and, when the overlap value reaches a preset threshold, comparing the two Bhattacharyya coefficients and outputting the moving object corresponding to the higher one.
According to another aspect of the present application, there is provided a video object detection and tracking apparatus, comprising:
a detection module configured to extract, from an original video, moving objects that match a specified moving-object category, specifically by: constructing a background model with a GMM algorithm, extracting moving-object regions from the original video with the background model, extracting features from the moving-object regions with a HOG operator, classifying the features with an SVM classifier, and retaining the moving objects that match the specified category;
a tracking module configured to track the moving object with a KCF tracker; and
and a decision module configured to calculate a confidence for the moving-object extraction and a confidence for the moving-object tracking and to output the tracking result with the higher confidence.
Optionally, extracting a moving-object region from the original video with the background model comprises:
extracting a foreground from the original video with the background model;
processing the foreground; and
extracting a moving-object region from the original video according to the processed foreground.
Optionally, processing the foreground comprises: performing median filtering, morphological closing and hole filling on the foreground in sequence.
Optionally, the decision module comprises:
a calculation sub-module configured to calculate a Bhattacharyya coefficient for the extracted moving object as the extraction confidence, calculate a Bhattacharyya coefficient for the tracked moving object as the tracking confidence, and calculate an overlap value between the same object in the current frame and in the previous frame; and
a decision sub-module configured to compare the two Bhattacharyya coefficients when the overlap value reaches a preset threshold and to output the moving object corresponding to the higher one.
According to a third aspect of the present application, there is provided a computing device comprising a memory, a processor and a computer program stored in the memory and executable by the processor, wherein the processor, when executing the computer program, implements the method described above.
With the video object detection and tracking method, apparatus and computing device of the application, every frame is detected, so when a new object appears in the scene, region extraction is performed on the first frame containing that object, followed by tracking and decision in turn. In addition, combining Gaussian-mixture-model (GMM) detection with the Histogram of Oriented Gradients (HOG) and a Support Vector Machine (SVM) during moving-object extraction allows moving objects of a specified category to be extracted accurately and quickly in complex scenes.
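To make the HOG side of this combination concrete, the following is a minimal sketch of the Histogram of Oriented Gradients idea: gradient orientations within one cell are binned into a histogram weighted by gradient magnitude. This is illustrative only, not the patent's feature extractor; real HOG adds block normalization and a dense cell grid, and the bin count here is an assumption.

```python
import math

def cell_hog(cell, n_bins=9):
    """Orientation histogram for one grayscale cell (list of lists of ints)."""
    h, w = len(cell), len(cell[0])
    hist = [0.0] * n_bins
    for r in range(1, h - 1):
        for c in range(1, w - 1):
            gx = cell[r][c + 1] - cell[r][c - 1]   # central differences
            gy = cell[r + 1][c] - cell[r - 1][c]
            mag = math.hypot(gx, gy)
            ang = math.degrees(math.atan2(gy, gx)) % 180  # unsigned orientation
            hist[int(ang / (180 / n_bins)) % n_bins] += mag
    return hist

# A vertical edge: all gradients point horizontally, so the 0-degree bin dominates.
cell = [[0, 0, 100, 100]] * 4
hist = cell_hog(cell)
```

Concatenating such per-cell histograms into a feature vector is what an SVM classifier would then consume.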
The above and other objects, advantages and features of the present application will become more apparent to those skilled in the art from the following detailed description of specific embodiments thereof, taken in conjunction with the accompanying drawings.
Drawings
Some specific embodiments of the present application will be described in detail hereinafter by way of illustration and not limitation with reference to the accompanying drawings. The same reference numbers in the drawings identify the same or similar elements or components. Those skilled in the art will appreciate that the drawings are not necessarily drawn to scale. In the drawings:
FIG. 1 is a schematic flow chart diagram of a video object detection tracking method according to an embodiment of the present application;
FIG. 2 is a schematic diagram of a video object detection and tracking apparatus according to an embodiment of the present application;
FIG. 3 is a schematic block diagram of a computing device according to one embodiment of the present application;
FIG. 4 is a schematic structural diagram of a computer-readable storage medium according to an embodiment of the present application.
Detailed Description
Fig. 1 is a schematic flow chart of a video object detection and tracking method according to an embodiment of the present application. The method generally comprises the following steps S1 to S3, which are applied to each frame of the original video.
In step S1, to improve the accuracy of moving-object extraction, this embodiment combines the GMM, HOG and SVM detection methods. Extracting moving objects that match a specified moving-object category from the original video specifically comprises: constructing a background model with a GMM algorithm, extracting moving-object regions from the original video with the background model, extracting features from the moving-object regions with a HOG operator, classifying the feature vectors formed from those features with an SVM classifier, and retaining the moving objects that match the specified category.
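The Gaussian-mixture background idea behind step S1 can be sketched for a single pixel as follows. This is a simplified illustration, not the patent's implementation: each pixel keeps K weighted Gaussians; a new value matching a component within 2.5 sigma updates it online, otherwise the weakest component is replaced, and a pixel is flagged as foreground when it matches no well-established component. All parameter values are assumptions.

```python
class PixelGMM:
    def __init__(self, k=3, alpha=0.05, init_var=225.0):
        self.alpha = alpha                # learning rate (assumed value)
        self.init_var = init_var
        # (weight, mean, variance) triples
        self.comps = [(1.0 / k, 128.0, init_var) for _ in range(k)]

    def update(self, x):
        """Feed one pixel value; return True if it is foreground."""
        matched = None
        for i, (w, mu, var) in enumerate(self.comps):
            if (x - mu) ** 2 <= 6.25 * var:   # within 2.5 sigma
                matched = i
                break
        if matched is None:
            # No component explains the value: replace the weakest Gaussian.
            j = min(range(len(self.comps)), key=lambda i: self.comps[i][0])
            self.comps[j] = (0.05, float(x), self.init_var)
            foreground = True
        else:
            w, mu, var = self.comps[matched]
            mu += self.alpha * (x - mu)
            var += self.alpha * ((x - mu) ** 2 - var)
            w += self.alpha * (1.0 - w)
            self.comps[matched] = (w, mu, var)
            foreground = w < 0.3              # low-weight match => still foreground
        # Renormalize the mixture weights.
        total = sum(w for w, _, _ in self.comps)
        self.comps = [(w / total, mu, var) for w, mu, var in self.comps]
        return foreground

pixel = PixelGMM()
for _ in range(50):                # a long run of background values around 100
    pixel.update(100)
still_bg = pixel.update(100)       # the familiar value stays background
is_fg = pixel.update(250)          # a sudden bright value is flagged foreground
```

Running one such model per pixel over the whole frame yields the rough foreground mask that the subsequent HOG/SVM stage refines.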
In step S1, the original video is fed into the moving-object region-extraction module. A rough foreground is first extracted with Gaussian-mixture background modeling, and median filtering, morphological closing and hole filling are applied to the foreground to quickly extract the moving-object regions of interest (ROIs). A feature model is then trained on a small sample set, with features extracted by the HOG operator, and the trained model identifies within each ROI the moving objects that satisfy the specified category.
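The first of the three mask-cleanup operations named above, median filtering, can be sketched in pure Python (illustrative only; in practice library routines would be used, and closing and hole filling would follow):

```python
def median3x3(mask):
    """Apply a 3x3 median filter to a binary mask (list of lists of 0/1).
    Out-of-bounds neighbors are treated as 0."""
    h, w = len(mask), len(mask[0])
    def at(r, c):
        return mask[r][c] if 0 <= r < h and 0 <= c < w else 0
    out = []
    for r in range(h):
        row = []
        for c in range(w):
            window = sorted(at(r + dr, c + dc)
                            for dr in (-1, 0, 1) for dc in (-1, 0, 1))
            row.append(window[4])      # median of the 9 window values
        out.append(row)
    return out

noisy = [
    [0, 0, 0, 0, 0],
    [0, 1, 1, 1, 0],
    [0, 1, 1, 1, 0],
    [0, 0, 0, 0, 1],   # isolated speck in the corner
]
clean = median3x3(noisy)
# The isolated 1 at (3, 4) is removed; the interior of the blob survives.
```

This is why median filtering comes first: it suppresses single-pixel foreground noise before the morphological operations reshape the remaining regions.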
In step S2, the moving object is tracked with a KCF tracker, which solves the data-association problem.
In step S2, newly appearing objects must be attended to and tracked objects verified. If an object detected in step S1 is associated with no existing KCF tracker, a new KCF tracker is assigned to it; if a KCF tracker is tracking an object that is not detected for several consecutive frames (e.g., ten frames), the object is considered to have left the view and is removed from the tracking queue.
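The tracker-lifecycle bookkeeping just described can be sketched as follows. The tracker entries and the proximity test are simplified stand-ins, not the patent's KCF internals; the distance threshold is an assumption, while the ten-frame removal rule follows the example above.

```python
class TrackerEntry:
    def __init__(self, object_id, position):
        self.object_id = object_id
        self.position = position       # (x, y) centroid
        self.missed = 0                # consecutive frames without a detection

def update_tracker_queue(trackers, detections, next_id,
                         max_missed=10, max_dist=20):
    """Match detections to trackers by proximity, spawn trackers for
    unmatched detections, and drop trackers stale for max_missed frames."""
    unmatched = list(detections)
    for t in trackers:
        best = None
        for d in unmatched:
            dist = abs(d[0] - t.position[0]) + abs(d[1] - t.position[1])
            if dist <= max_dist and (best is None or dist < best[0]):
                best = (dist, d)
        if best:
            t.position = best[1]
            t.missed = 0
            unmatched.remove(best[1])
        else:
            t.missed += 1
    for d in unmatched:                # a new object enters the scene
        trackers.append(TrackerEntry(next_id, d))
        next_id += 1
    trackers = [t for t in trackers if t.missed < max_missed]
    return trackers, next_id

trackers, nid = update_tracker_queue([], [(5, 5)], next_id=0)
for _ in range(10):                    # the object then disappears for 10 frames
    trackers, nid = update_tracker_queue(trackers, [], nid)
# the stale tracker has been dropped from the queue
```

In the real system each entry would wrap a KCF tracker instance, but the spawn/verify/remove logic is the same.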
Step S2 also updates the currently active KCF trackers. Because the objects are moving, the regions of two objects in the scene may occlude or overlap each other; therefore, for each moving object extracted in step S1, its information (the kinds of information can be set according to actual needs, such as color, height and category) is associated with the data of the currently active trackers. The KCF tracker then finds the data most consistent with the current moving-object information, based on proximity and the tracker's internal model, to determine which object the current tracker is following. An object's information tends to change in later frames, but the change for the same object is very small, so similarity is computed by comparing the information of the moving object in the current frame with the information recorded when the tracker was initialized.
In step S3, a confidence for the moving-object extraction and a confidence for the moving-object tracking are calculated, and the tracking result with the higher confidence is output.
In step S3, a Bhattacharyya coefficient is calculated from the feature vector of the moving object detected in step S1 and used as the extraction confidence, and a Bhattacharyya coefficient is calculated from the feature vector of the moving object tracked in step S2 and used as the tracking confidence. The overlap between the same moving object in the current frame of step S2 and in the previous frame is also calculated. When the overlap satisfies a preset threshold (to ensure accuracy, the threshold needs to be fine-tuned for different videos), the two confidences are compared: if the tracking confidence is higher, the moving object tracked in step S2 is output; if the extraction confidence is higher, the moving object extracted in step S1 that matches the specified category is output.
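The Bhattacharyya coefficient used as the confidence measure above is computed between two normalized feature distributions. A minimal sketch (the histograms here are toy values, not real HOG features):

```python
import math

def bhattacharyya_coefficient(p, q):
    """BC(p, q) = sum_i sqrt(p_i * q_i); 1.0 for identical distributions,
    0.0 for non-overlapping ones. Inputs must each sum to 1."""
    return sum(math.sqrt(pi * qi) for pi, qi in zip(p, q))

detected = [0.2, 0.5, 0.3]
tracked  = [0.25, 0.45, 0.30]
bc_same = bhattacharyya_coefficient(detected, detected)
bc_pair = bhattacharyya_coefficient(detected, tracked)
# bc_pair lies between 0 and 1; in step S3 the higher-confidence result wins.
```

Comparing two such coefficients, once the frame-to-frame overlap check has passed, is exactly the decision rule of step S3.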
In summary, the video object detection and tracking method of this embodiment detects and tracks video objects automatically and extracts moving objects of a specified category accurately and quickly in complex scenes; training experiments confirm that the method offers good accuracy and robustness at acceptable computational overhead.
An embodiment of the present application further provides a video object detection and tracking apparatus, comprising:
a detection module 1 configured to extract, from an original video, moving objects that match a specified moving-object category, specifically by: constructing a background model with a GMM algorithm, extracting moving-object regions from the original video with the background model, extracting features from the moving-object regions with a HOG operator, classifying the features with an SVM classifier, and retaining the moving objects that match the specified category;
a tracking module 2 configured to track the moving object with a KCF tracker; and
and a decision module 3 configured to calculate a confidence for the moving-object extraction and a confidence for the moving-object tracking, and to output the tracking result with the higher confidence.
As a preferred embodiment of the present application, extracting a moving-object region from the original video with the background model comprises:
extracting a foreground from the original video with the background model;
processing the foreground; and
extracting a moving-object region from the original video according to the processed foreground.
As a preferred embodiment of the present application, processing the foreground comprises: performing median filtering, morphological closing and hole filling on the foreground in sequence.
As a preferred embodiment of the present application, the decision module 3 comprises:
a calculation sub-module configured to calculate a Bhattacharyya coefficient for the extracted moving object as the extraction confidence, calculate a Bhattacharyya coefficient for the tracked moving object as the tracking confidence, and calculate an overlap value between the same object in the current frame and in the previous frame; and
a decision sub-module configured to compare the two Bhattacharyya coefficients when the overlap value reaches a preset threshold and to output the moving object corresponding to the higher one.
Embodiments also provide a computing device; referring to FIG. 3, it comprises a memory 1120, a processor 1110 and a computer program stored in the memory 1120 and executable by the processor 1110. The computer program is stored in a space 1130 for program code in the memory 1120 and, when executed by the processor 1110, implements the method steps 1131 of any of the methods described herein.
An embodiment of the application also provides a computer-readable storage medium. Referring to FIG. 4, the computer-readable storage medium comprises a storage unit for program code, provided with a program 1131' for performing the steps of the method described herein; the program is executed by a processor.
An embodiment of the application also provides a computer program product containing instructions which, when run on a computer, cause the computer to carry out the steps of the method described herein.
In the above embodiments, the implementation may be wholly or partially realized by software, hardware, firmware, or any combination thereof. When implemented in software, may be implemented in whole or in part in the form of a computer program product. The computer program product includes one or more computer instructions. When loaded and executed by a computer, cause the computer to perform, in whole or in part, the procedures or functions described in accordance with the embodiments of the application. The computer may be a general purpose computer, a special purpose computer, a network of computers, or other programmable device. The computer instructions may be stored in a computer readable storage medium or transmitted from one computer readable storage medium to another, for example, from one website site, computer, server, or data center to another website site, computer, server, or data center via wired (e.g., coaxial cable, fiber optic, Digital Subscriber Line (DSL)) or wireless (e.g., infrared, wireless, microwave, etc.). The computer-readable storage medium can be any available medium that can be accessed by a computer or a data storage device, such as a server, a data center, etc., that incorporates one or more of the available media. The usable medium may be a magnetic medium (e.g., floppy Disk, hard Disk, magnetic tape), an optical medium (e.g., DVD), or a semiconductor medium (e.g., Solid State Disk (SSD)), among others.
Those of skill would further appreciate that the various illustrative components and algorithm steps described in connection with the embodiments disclosed herein may be implemented as electronic hardware, computer software, or combinations of both, and that the various illustrative components and steps have been described above generally in terms of their functionality in order to clearly illustrate this interchangeability of hardware and software. Whether such functionality is implemented as hardware or software depends upon the particular application and design constraints imposed on the implementation. Skilled artisans may implement the described functionality in varying ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the present application.
It will be understood by those skilled in the art that all or part of the steps in the method for implementing the above embodiments may be implemented by a program, and the program may be stored in a computer-readable storage medium, where the storage medium is a non-transitory medium, such as a random access memory, a read only memory, a flash memory, a hard disk, a solid state disk, a magnetic tape (magnetic tape), a floppy disk (floppy disk), an optical disk (optical disk), and any combination thereof.
The above description is only for the preferred embodiment of the present application, but the scope of the present application is not limited thereto, and any changes or substitutions that can be easily conceived by those skilled in the art within the technical scope of the present application should be covered within the scope of the present application. Therefore, the protection scope of the present application shall be subject to the protection scope of the claims.
Claims (9)
1. A video object detection and tracking method, comprising:
extracting, from an original video, moving objects that match a specified moving-object category, which specifically comprises: constructing a background model with a GMM algorithm, extracting moving-object regions from the original video with the background model, extracting features from the moving-object regions with a HOG operator, classifying the features with an SVM classifier, and retaining the moving objects that match the specified category;
tracking the moving object with a KCF tracker;
and calculating a confidence for the moving-object extraction and a confidence for the moving-object tracking, and outputting the tracking result with the higher confidence.
2. The method of claim 1, wherein extracting the moving-object region from the original video with the background model comprises:
extracting a foreground from the original video with the background model;
processing the foreground; and
extracting a moving-object region from the original video according to the processed foreground.
3. The method of claim 2, wherein processing the foreground comprises: performing median filtering, morphological closing and hole filling on the foreground in sequence.
4. The method of any one of claims 1 to 3, wherein calculating a confidence for the moving-object extraction and a confidence for the moving-object tracking and outputting the tracking result with the higher confidence comprises:
calculating a Bhattacharyya coefficient for the extracted moving object as the extraction confidence, calculating a Bhattacharyya coefficient for the tracked moving object as the tracking confidence, and calculating an overlap value between the same object in the current frame and in the previous frame;
and, when the overlap value reaches a preset threshold, comparing the two Bhattacharyya coefficients and outputting the moving object corresponding to the higher one.
5. A video object detection and tracking apparatus, comprising:
a detection module configured to extract, from an original video, moving objects that match a specified moving-object category, specifically by: constructing a background model with a GMM algorithm, extracting moving-object regions from the original video with the background model, extracting features from the moving-object regions with a HOG operator, classifying the features with an SVM classifier, and retaining the moving objects that match the specified category;
a tracking module configured to track the moving object with a KCF tracker; and
and a decision module configured to calculate a confidence for the moving-object extraction and a confidence for the moving-object tracking and to output the tracking result with the higher confidence.
6. The apparatus of claim 5, wherein extracting the moving-object region from the original video with the background model comprises:
extracting a foreground from the original video with the background model;
processing the foreground; and
extracting a moving-object region from the original video according to the processed foreground.
7. The apparatus of claim 6, wherein processing the foreground comprises: performing median filtering, morphological closing and hole filling on the foreground in sequence.
8. The apparatus of any of claims 5 to 7, wherein the decision module comprises:
a calculation sub-module configured to calculate a Bhattacharyya coefficient for the extracted moving object as the extraction confidence, calculate a Bhattacharyya coefficient for the tracked moving object as the tracking confidence, and calculate an overlap value between the same object in the current frame and in the previous frame; and
a decision sub-module configured to compare the two Bhattacharyya coefficients when the overlap value reaches a preset threshold and to output the moving object corresponding to the higher one.
9. A computing device comprising a memory, a processor, and a computer program stored in the memory and executable by the processor, wherein the processor implements the method of any of claims 1-4 when executing the computer program.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202110882116.2A CN113657219A (en) | 2021-08-02 | 2021-08-02 | Video object detection tracking method and device and computing equipment |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202110882116.2A CN113657219A (en) | 2021-08-02 | 2021-08-02 | Video object detection tracking method and device and computing equipment |
Publications (1)
Publication Number | Publication Date |
---|---|
CN113657219A true CN113657219A (en) | 2021-11-16 |
Family
ID=78478238
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202110882116.2A Pending CN113657219A (en) | 2021-08-02 | 2021-08-02 | Video object detection tracking method and device and computing equipment |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN113657219A (en) |
Cited By (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN114979691A (en) * | 2022-05-23 | 2022-08-30 | 上海影谱科技有限公司 | Statistical analysis method and system for sports event rebroadcasting equity advertisement |
Citations (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
KR20200060868A (en) * | 2018-11-23 | 2020-06-02 | 주식회사 월드씨엔에스 | multi-view monitoring system using object-oriented auto-tracking function |
2021
- 2021-08-02 CN CN202110882116.2A patent/CN113657219A/en active Pending
Patent Citations (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
KR20200060868A (en) * | 2018-11-23 | 2020-06-02 | 주식회사 월드씨엔에스 | multi-view monitoring system using object-oriented auto-tracking function |
Non-Patent Citations (1)
Title |
---|
EN ZENG DONG ET AL: "An Automatic Object Detection and Tracking Method Based on Video Surveillance", IEEE ICMA, pages 1140-1144 * |
Cited By (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN114979691A (en) * | 2022-05-23 | 2022-08-30 | 上海影谱科技有限公司 | Statistical analysis method and system for sports event rebroadcasting equity advertisement |
CN114979691B (en) * | 2022-05-23 | 2023-07-28 | 上海影谱科技有限公司 | Statistical analysis method and system for advertisement of retransmission rights of sports event |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN107424171B (en) | Block-based anti-occlusion target tracking method | |
US11205276B2 (en) | Object tracking method, object tracking device, electronic device and storage medium | |
US20190156499A1 (en) | Detection of humans in images using depth information | |
WO2013012091A1 (en) | Information processing apparatus, object tracking method, and program storage medium | |
CN109859250B (en) | Aviation infrared video multi-target detection and tracking method and device | |
CN111178161A (en) | Vehicle tracking method and system based on FCOS | |
CN109117746A (en) | Hand detection method and machine readable storage medium | |
CN107992790A (en) | Target long time-tracking method and system, storage medium and electric terminal | |
CN109712134B (en) | Iris image quality evaluation method and device and electronic equipment | |
CN113379789B (en) | Moving target tracking method in complex environment | |
CN113657219A (en) | Video object detection tracking method and device and computing equipment | |
CN113158773B (en) | Training method and training device for living body detection model | |
CN113902932A (en) | Feature extraction method, visual positioning method and device, medium and electronic equipment | |
CN114387642A (en) | Image segmentation method, device, equipment and storage medium | |
CN117115117B (en) | Pathological image recognition method based on small sample, electronic equipment and storage medium | |
CN113837006A (en) | Face recognition method and device, storage medium and electronic equipment | |
Chen et al. | Object tracking over a multiple-camera network | |
CN108776972B (en) | Object tracking method and device | |
KR101595334B1 (en) | Method and apparatus for movement trajectory tracking of moving object on animal farm | |
CN113869163B (en) | Target tracking method and device, electronic equipment and storage medium | |
US10140727B2 (en) | Image target relative position determining method, device, and system thereof | |
CN113449745B (en) | Method, device and equipment for identifying marker in calibration object image and readable medium | |
CN112085683B (en) | Depth map credibility detection method in saliency detection | |
CN113129332A (en) | Method and apparatus for performing target object tracking | |
WO2022241805A1 (en) | Video synopsis method, system and device |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||