CN113468998A - Portrait detection method, system and storage medium based on video stream - Google Patents
- Publication number
- CN113468998A (application CN202110699467.XA)
- Authority
- CN
- China
- Prior art keywords
- portrait
- frame
- image
- detection candidate
- candidate frame
- Prior art date
- Legal status (an assumption, not a legal conclusion; Google has not performed a legal analysis): Pending
Landscapes
- Image Analysis (AREA)
Abstract
The invention discloses a portrait detection method, system and storage medium based on video streams. The method compares two consecutive frames of video stream data, tracks the regions where they differ, crops each tracked region, and detects a portrait inside it until either a portrait is found or the tracked region moves out of the picture. By restricting detection to the tracked change regions and stopping once a portrait is found, the invention narrows the area searched for portraits, greatly reduces the number of portrait detections, saves system overhead on the detection-algorithm server, and improves detection efficiency.
Description
Technical Field
The invention belongs to the technical field of image recognition, and particularly relates to a portrait detection method and system based on video streams and a storage medium.
Background
With the continuous development of the monitoring and security field, users are no longer satisfied with merely storing camera video data; they increasingly hope to extract valuable information from it, for example capturing portraits from the video of fixed monitoring points at store entrances, community gates, campus gates and similar areas. Portrait detection on real-time video is usually performed on every frame or on skipped frames. Although this achieves the detection goal, the detection server must process a large volume of data, frame-skipping detection may miss portraits, and multiple portraits of the same person may be produced.
Disclosure of Invention
Aiming at at least one defect or improvement requirement of the prior art, the invention provides a portrait detection method, system and storage medium based on video streams.
To achieve the above object, according to a first aspect of the present invention, there is provided a method for detecting a portrait based on a video stream, comprising the steps of:
reading two adjacent frames of images in video stream data, respectively recording the two adjacent frames of images as a previous frame of image and a current frame of image, calculating to obtain a first difference image according to the two adjacent frames of images, carrying out contour detection on the first difference image to obtain a contour of the first difference image, and obtaining a first portrait detection candidate frame according to the contour of the first difference image;
detecting a portrait on the current frame image according to the first portrait detection candidate frame;
reading a next frame image adjacent to the two previous and next frame images, calculating to obtain a second difference image according to the next frame image and the current frame image, carrying out contour detection on the second difference image to obtain a contour of the second difference image, and obtaining a second portrait detection candidate frame according to the contour of the second difference image;
and if the second portrait detection candidate frame has a new portrait detection candidate frame relative to the first portrait detection candidate frame, detecting a portrait on the next frame of image according to the new portrait detection candidate frame.
Preferably, before the contour detection is performed on the first difference map, the method further includes the steps of: carrying out binarization and image expansion processing on the first difference map;
before the contour detection is carried out on the second difference map, the method further comprises the following steps: and carrying out binarization and image expansion processing on the second difference map.
Preferably, the step of obtaining a first portrait detection candidate frame according to the contour of the first difference map comprises: circumscribing the contour of the first difference map with a maximum rectangular frame, and taking the maximum rectangular frame as the first portrait detection candidate frame;
the step of obtaining a second portrait detection candidate frame according to the contour of the second difference map comprises: circumscribing the contour of the second difference map with a maximum rectangular frame, and taking the maximum rectangular frame as the second portrait detection candidate frame.
Preferably, the determining whether a new portrait detection candidate frame exists in the second portrait detection candidate frames relative to the first portrait detection candidate frames includes:
and calculating the position deviation of the second portrait detection candidate frame and the first portrait detection candidate frame, and judging whether a new portrait detection candidate frame exists in the second portrait detection candidate frame relative to the first portrait detection candidate frame according to the position deviation.
Preferably, the detecting a portrait on the current frame image according to the first portrait detection candidate frame includes:
marking each portrait detection candidate frame of the first portrait detection candidate frames as i_{n1}, where 1 ≤ n1 ≤ N1 and N1 is the total number of portrait detection candidate frames among the first portrait detection candidate frames;
detecting a portrait in the region corresponding to each of the first portrait detection candidate frames on the current frame image, and marking each detected portrait as F_{n2}, where 1 ≤ n2 ≤ N2 and N2 is the total number of portraits detected in the first portrait detection candidate frames; and associating each candidate frame label i_{n1} with its portrait label F_{n2} according to the correspondence between candidate frames and portraits.
Preferably, the detecting a portrait at the new portrait detection candidate frame includes the steps of:
marking each new portrait detection candidate frame as i_{N1+n3}, where 1 ≤ n3 ≤ N3 and N3 is the total number of new portrait detection candidate frames among the second portrait detection candidate frames;
detecting a portrait in the region corresponding to each new portrait detection candidate frame on the next frame image, and marking each detected portrait as F_{N2+n4}, where 1 ≤ n4 ≤ N4 and N4 is the total number of portraits newly detected in the second portrait detection candidate frames; and associating each candidate frame label i_{N1+n3} with its portrait label F_{N2+n4} according to the correspondence between candidate frames and portraits.
According to a second aspect of the present invention, there is provided a video stream-based portrait detection system, comprising:
the first obtaining module is used for reading adjacent front and rear frame images in video stream data, respectively recording the front and rear frame images as a front frame image and a current frame image, calculating to obtain a first difference image according to the front and rear frame images, carrying out contour detection on the first difference image to obtain a contour of the first difference image, and obtaining a first portrait detection candidate frame according to the contour of the first difference image;
a first detection module, configured to detect a portrait on the current frame image according to the first portrait detection candidate frame;
a second obtaining module, configured to read a next frame image adjacent to the previous and next frame images, calculate a second difference image according to the next frame image and the current frame image, perform contour detection on the second difference image, obtain a contour of the second difference image, and obtain a second portrait detection candidate frame according to the contour of the second difference image;
and the second detection module is used for detecting the portrait on the next frame of image according to the new portrait detection candidate frame if the second portrait detection candidate frame has a new portrait detection candidate frame relative to the first portrait detection candidate frame.
According to a third aspect of the invention, there is provided a computer readable storage medium having stored thereon a computer program which, when executed by a processor, performs any of the methods described above.
In general, compared with the prior art, the invention has the following beneficial effects: by detecting only the tracked change regions, the invention narrows the range in which portraits are detected and does not repeatedly detect portraits in regions where one has already been found, thereby greatly reducing the number of portrait detections, saving system overhead on the detection-algorithm server, and improving detection efficiency.
Drawings
Fig. 1 is a flowchart of a human image detection method based on video streaming according to an embodiment of the present invention.
Detailed Description
In order to make the objects, technical solutions and advantages of the present invention more apparent, the present invention is described in further detail below with reference to the accompanying drawings and embodiments. It should be understood that the specific embodiments described herein are merely illustrative of the invention and are not intended to limit the invention. In addition, the technical features involved in the embodiments of the present invention described below may be combined with each other as long as they do not conflict with each other.
As shown in fig. 1, a method for detecting a portrait based on a video stream according to an embodiment of the present invention includes:
s1, reading adjacent front and rear frame images in the video stream data, respectively recording the front and rear frame images as a front frame image and a current frame image, calculating to obtain a first difference image according to the front and rear frame images, carrying out contour detection on the first difference image to obtain a contour of the first difference image, and obtaining a first portrait detection candidate frame according to the contour of the first difference image.
Specifically, the method comprises the substeps of:
S11, reading the video stream data and applying grayscale processing to obtain the previous and current frames, recording the previous frame as P_r and the current frame as P_c, and calculating the difference between the two frames, recorded as P_Δ, where P_Δ = |P_c − P_r|. To distinguish it from the difference maps below, P_Δ is denoted the first difference map.
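The difference computation of S11 can be sketched as follows. This is a minimal NumPy version under the assumption of 8-bit grayscale frames; the patent names no library for this step, and in practice OpenCV's `cv2.cvtColor`/`cv2.absdiff` would be the usual equivalents.

```python
import numpy as np

def frame_difference(prev_gray: np.ndarray, curr_gray: np.ndarray) -> np.ndarray:
    """Return the absolute difference map P_delta = |P_c - P_r|."""
    # Promote to a signed type first so uint8 subtraction cannot wrap around.
    diff = curr_gray.astype(np.int16) - prev_gray.astype(np.int16)
    return np.abs(diff).astype(np.uint8)
```

The signed-type promotion matters: subtracting `uint8` arrays directly would wrap modulo 256 and corrupt the difference map.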
S12, performing binarization and image expansion (dilation) processing on the first difference map, detecting contours in the processed first difference map, and circumscribing each contour with a maximum rectangle to obtain the first portrait detection candidate frames, thereby determining the regions to be tracked.
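A hedged sketch of S12 for a single changed region: binarization, a simple 3x3 dilation, and the circumscribing rectangle of the changed pixels. In practice `cv2.threshold`, `cv2.dilate` and `cv2.findContours`/`cv2.boundingRect` would be used and would yield one rectangle per contour; this NumPy-only stand-in (with an assumed threshold of 25, and `np.roll` wrapping at image borders) only illustrates the idea.

```python
import numpy as np

def candidate_box(diff: np.ndarray, thresh: int = 25):
    """Return (x, y, w, h) of the rectangle circumscribing the thresholded,
    dilated difference map, or None if no pixel changed."""
    binary = (diff > thresh).astype(np.uint8)
    # 3x3 dilation via shifted copies (a stand-in for cv2.dilate; note that
    # np.roll wraps at the borders, which a real dilation would not).
    dilated = binary.copy()
    for dy in (-1, 0, 1):
        for dx in (-1, 0, 1):
            dilated |= np.roll(np.roll(binary, dy, axis=0), dx, axis=1)
    ys, xs = np.nonzero(dilated)
    if len(xs) == 0:
        return None
    return (int(xs.min()), int(ys.min()),
            int(xs.max() - xs.min() + 1), int(ys.max() - ys.min() + 1))
```

The dilation step is what merges nearby changed pixels into one trackable region before the bounding rectangle is taken.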
S2, a portrait is detected on the current frame image based on the first portrait detection candidate frame.
Specifically, the method comprises the substeps of:
S21, marking each portrait detection candidate frame of the first portrait detection candidate frames as i_{n1}, n1 = 1, 2, 3, …, N1, where N1 is the total number of portrait detection candidate frames among the first portrait detection candidate frames;
S22, detecting a portrait in the region corresponding to each of the first portrait detection candidate frames on the current frame image.
Specifically, the rectangular region corresponding to each first portrait detection candidate frame is cropped from P_c as the image to be detected. The portrait detection step passes this image to the default face detector in the dlib library to obtain the face positions, i.e., the face information. Each detected portrait is marked as F_{n2}, n2 = 1, 2, 3, …, N2, where N2 is the total number of faces detected in the first portrait detection candidate frames, and each candidate frame label i_{n1} is associated with its portrait label F_{n2} according to the correspondence between candidate frames and portraits. For example, assume the first portrait detection candidate frames comprise 5 frames i_1, i_2, …, i_5, and that 3 portraits F_1, F_2, F_3 are detected in them: if F_1 is detected in the region of i_2, F_2 in the region of i_3, and F_3 in the region of i_5, then F_1 is associated with i_2, F_2 with i_3, and F_3 with i_5.
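The per-region detection and label association of S22 can be sketched as follows. The patent uses dlib's default detector (`dlib.get_frontal_face_detector()`); here the detector is passed in as a callable so the cropping and i/F bookkeeping can be shown without assuming dlib is installed, and the label formats are illustrative.

```python
import numpy as np

def detect_in_candidates(frame, boxes, detector):
    """frame: grayscale image array; boxes: dict label -> (x, y, w, h);
    detector(crop) -> list of detections inside the crop.
    Returns (portraits, associations), where associations maps each candidate
    frame label i_n to the portrait labels F_n found inside it."""
    portraits = {}
    associations = {}
    face_count = 0
    for box_label, (x, y, w, h) in boxes.items():
        crop = frame[y:y + h, x:x + w]      # region to be detected
        for hit in detector(crop):
            face_count += 1
            face_label = f"F{face_count}"   # F_1, F_2, ... in detection order
            portraits[face_label] = hit
            associations.setdefault(box_label, []).append(face_label)
    return portraits, associations
```

With dlib, `detector` would be `dlib.get_frontal_face_detector()` applied to the crop; any callable with the same shape works.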
S3, reading the next frame image adjacent to the previous frame image and the next frame image, calculating according to the next frame image and the current frame image to obtain a second difference image, carrying out contour detection on the second difference image to obtain the contour of the second difference image, and obtaining a second portrait detection candidate frame according to the contour of the second difference image.
The current frame is now taken as P_r, and the newly read frame, after processing, as P_c; after difference calculation and the related processing, rectangular frames are obtained and marked as the second portrait detection candidate frames. The method of obtaining the second portrait detection candidate frames differs from that of the first only in the objects processed, and is not repeated here.
S4, if a new portrait detection candidate frame exists in the second portrait detection candidate frames relative to the first portrait detection candidate frames, a portrait is detected in the new portrait detection candidate frame.
Further, whether a new portrait detection candidate frame exists is determined from the calculated positional deviation between the candidate frames.
If the first and second portrait detection candidate frames each comprise multiple frames, the following determination is performed for each pair. The position of each new rectangular frame is compared with that of the previous rectangular frames, and a distance threshold T_r is set. If the deviation between the center points (or other preset points) of the two rectangles is less than T_r, they are considered to track the same region, and the new rectangular frame keeps the label i_n; otherwise, the new portrait detection candidate frame is marked as i_{N1+n3}, n3 = 1, 2, 3, …, N3, where N3 is the total number of new portrait detection candidate frames among the second portrait detection candidate frames.
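The center-point matching just described can be sketched as follows. T_r and the center-point criterion follow the description; the Euclidean distance is an assumption, since the text only speaks of a positional deviation and allows other preset points.

```python
import math

def center(box):
    x, y, w, h = box
    return (x + w / 2.0, y + h / 2.0)

def classify_boxes(old_boxes, new_boxes, t_r):
    """old_boxes: dict label -> (x, y, w, h); new_boxes: list of (x, y, w, h).
    Returns (matched, fresh): matched maps an old label i_n to the new box that
    tracks the same region; fresh lists boxes with no old box within t_r,
    i.e. the new candidate frames i_{N1+n3}."""
    matched, fresh = {}, []
    for nb in new_boxes:
        ncx, ncy = center(nb)
        best = None
        for label, ob in old_boxes.items():
            ocx, ocy = center(ob)
            d = math.hypot(ncx - ocx, ncy - ocy)
            if d < t_r and (best is None or d < best[1]):
                best = (label, d)
        if best is not None:
            matched[best[0]] = nb   # same region tracked: keep old label i_n
        else:
            fresh.append(nb)        # new portrait detection candidate frame
    return matched, fresh
```

Choosing the nearest old box (rather than the first within T_r) is also an assumption, made so that overlapping regions resolve deterministically.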
For each new label i_{N1+n3} in the second portrait detection candidate frames, a portrait is detected in the corresponding candidate frame, and each detected portrait is marked as F_{N2+n4}, n4 = 1, 2, 3, …, N4, where N4 is the total number of portraits newly detected in the second portrait detection candidate frames; each candidate frame label i_{N1+n3} is associated with its portrait label F_{N2+n4} according to the correspondence between candidate frames and portraits.
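Putting S1 through S4 together, the overall control flow is a rolling loop in which only the newly appearing candidate frames are searched. The sketch below reduces the per-step operations to injected callables so only this control flow is shown; in a real implementation `cv2.VideoCapture` would supply the frames, and the callables would be built from the difference, box-extraction and detection steps above.

```python
def run_pipeline(frames, make_boxes, detect, is_new):
    """frames: iterable of (grayscale) frames; make_boxes(prev, curr) -> boxes;
    detect(frame, boxes) -> list of portraits found in those boxes;
    is_new(old_boxes, box) -> True if box is a new candidate region."""
    it = iter(frames)
    prev = next(it)
    curr = next(it)
    old_boxes = make_boxes(prev, curr)           # S1: first candidate frames
    results = list(detect(curr, old_boxes))      # S2: detect in all of them
    for nxt in it:                               # S3/S4: roll the pair forward
        new_boxes = make_boxes(curr, nxt)
        fresh = [b for b in new_boxes if is_new(old_boxes, b)]
        results.extend(detect(nxt, fresh))       # detect only in new boxes
        prev, curr = curr, nxt
        old_boxes = new_boxes
    return results
```

This structure is what delivers the claimed saving: after the first pass, `detect` runs only on regions that were not already being tracked.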
The embodiment of the invention provides a portrait detection system based on video streaming, which comprises:
the first obtaining module is used for reading adjacent front and rear frame images in the video stream data, respectively recording the front and rear frame images as a front frame image and a current frame image, calculating to obtain a first difference image according to the front and rear frame images, carrying out contour detection on the first difference image to obtain a contour of the first difference image, and obtaining a first portrait detection candidate frame according to the contour of the first difference image;
the first detection module is used for detecting a portrait on the current frame image according to the first portrait detection candidate frame;
the second acquisition module is used for reading a next frame image adjacent to the previous frame image and the next frame image, calculating to obtain a second difference image according to the next frame image and the current frame image, carrying out contour detection on the second difference image to acquire a contour of the second difference image, and acquiring a second portrait detection candidate frame according to the contour of the second difference image;
and the second detection module is used for detecting the portrait on the next frame of image according to the new portrait detection candidate frame if the second portrait detection candidate frame has a new portrait detection candidate frame relative to the first portrait detection candidate frame.
Further, before the contour detection is performed on the first difference map, the method further comprises the following steps: carrying out binarization and image expansion processing on the first difference image;
before the contour detection is carried out on the second difference map, the method also comprises the following steps: and carrying out binarization and image expansion processing on the second difference map.
Further, the step of obtaining a first portrait detection candidate frame according to the contour of the first difference map comprises: circumscribing the contour of the first difference map with a maximum rectangular frame, and taking the maximum rectangular frame as the first portrait detection candidate frame;
the step of obtaining a second portrait detection candidate frame according to the contour of the second difference map comprises: circumscribing the contour of the second difference map with a maximum rectangular frame, and taking the maximum rectangular frame as the second portrait detection candidate frame.
The implementation principle and technical effect of the system are similar to those of the method, and are not described herein again.
The embodiment of the present invention further provides a computer-readable storage medium, on which a computer program is stored, where the computer program is executed by a processor to implement any of the above technical solutions of the embodiments of the method for detecting a portrait based on a video stream. The implementation principle and technical effect are similar to those of the above method, and are not described herein again.
It should be noted that in any of the above embodiments, the steps need not be executed in the order of their sequence numbers; unless the execution logic requires a particular order, they may be executed in any other feasible order.
It will be understood by those skilled in the art that the foregoing is only a preferred embodiment of the present invention, and is not intended to limit the invention, and that any modification, equivalent replacement, or improvement made within the spirit and principle of the present invention should be included in the scope of the present invention.
Claims (10)
1. A portrait detection method based on video streams, characterized by comprising the following steps:
reading two adjacent frames of images in video stream data, respectively recording the two adjacent frames of images as a previous frame of image and a current frame of image, calculating to obtain a first difference image according to the two adjacent frames of images, carrying out contour detection on the first difference image to obtain a contour of the first difference image, and obtaining a first portrait detection candidate frame according to the contour of the first difference image;
detecting a portrait on the current frame image according to the first portrait detection candidate frame;
reading a next frame image adjacent to the two previous and next frame images, calculating to obtain a second difference image according to the next frame image and the current frame image, carrying out contour detection on the second difference image to obtain a contour of the second difference image, and obtaining a second portrait detection candidate frame according to the contour of the second difference image;
and if the second portrait detection candidate frame has a new portrait detection candidate frame relative to the first portrait detection candidate frame, detecting a portrait on the next frame of image according to the new portrait detection candidate frame.
2. The method as claimed in claim 1, wherein before the contour detection is performed on the first difference map, the method further comprises: carrying out binarization and image expansion processing on the first difference map;
before the contour detection is carried out on the second difference map, the method further comprises the following steps: and carrying out binarization and image expansion processing on the second difference map.
3. The method as claimed in claim 1, wherein the obtaining of the first portrait detection candidate frame according to the contour of the first difference map comprises: circumscribing the contour of the first difference map with a maximum rectangular frame, and taking the maximum rectangular frame as the first portrait detection candidate frame;
the step of obtaining a second portrait detection candidate frame according to the contour of the second difference map comprises: circumscribing the contour of the second difference map with a maximum rectangular frame, and taking the maximum rectangular frame as the second portrait detection candidate frame.
4. The method of claim 1, wherein determining whether a new portrait detection candidate frame exists in the second portrait detection candidate frames relative to the first portrait detection candidate frames comprises:
and calculating the position deviation of the second portrait detection candidate frame and the first portrait detection candidate frame, and judging whether a new portrait detection candidate frame exists in the second portrait detection candidate frame relative to the first portrait detection candidate frame according to the position deviation.
5. The method as claimed in claim 1, wherein the detecting the portrait on the current frame image according to the first portrait detection candidate frame comprises the steps of:
marking each portrait detection candidate frame of the first portrait detection candidate frames as i_{n1}, 1 ≤ n1 ≤ N1, where N1 is the total number of portrait detection candidate frames among the first portrait detection candidate frames;
detecting a portrait in the region corresponding to each of the first portrait detection candidate frames on the current frame image, and marking each detected portrait as F_{n2}, 1 ≤ n2 ≤ N2, where N2 is the total number of portraits detected in the first portrait detection candidate frames; and associating each candidate frame label i_{n1} with its portrait label F_{n2} according to the correspondence between candidate frames and portraits.
6. The method of claim 5, wherein the detecting the portrait in the new portrait detection candidate frame comprises the steps of:
marking each new portrait detection candidate frame as i_{N1+n3}, 1 ≤ n3 ≤ N3, where N3 is the total number of new portrait detection candidate frames among the second portrait detection candidate frames;
detecting a portrait in the region corresponding to each new portrait detection candidate frame on the next frame image, and marking each detected portrait as F_{N2+n4}, 1 ≤ n4 ≤ N4, where N4 is the total number of portraits newly detected in the second portrait detection candidate frames; and associating each candidate frame label i_{N1+n3} with its portrait label F_{N2+n4} according to the correspondence between candidate frames and portraits.
7. A video stream based portrait detection system, comprising:
the first obtaining module is used for reading adjacent front and rear frame images in video stream data, respectively recording the front and rear frame images as a front frame image and a current frame image, calculating to obtain a first difference image according to the front and rear frame images, carrying out contour detection on the first difference image to obtain a contour of the first difference image, and obtaining a first portrait detection candidate frame according to the contour of the first difference image;
a first detection module, configured to detect a portrait on the current frame image according to the first portrait detection candidate frame;
a second obtaining module, configured to read a next frame image adjacent to the previous and next frame images, calculate a second difference image according to the next frame image and the current frame image, perform contour detection on the second difference image, obtain a contour of the second difference image, and obtain a second portrait detection candidate frame according to the contour of the second difference image;
and the second detection module is used for detecting the portrait on the next frame of image according to the new portrait detection candidate frame if the second portrait detection candidate frame has a new portrait detection candidate frame relative to the first portrait detection candidate frame.
8. The video stream-based portrait detection system of claim 7, wherein before the contour detection of the first difference map, further comprising the steps of: carrying out binarization and image expansion processing on the first difference map;
before the contour detection is carried out on the second difference map, the method further comprises the following steps: and carrying out binarization and image expansion processing on the second difference map.
9. The video stream-based portrait detection system of claim 7, wherein the step of obtaining the first portrait detection candidate box according to the contour of the first difference map comprises the steps of: circumscribing the contour of the first difference map by using a maximum rectangular frame, and taking the maximum rectangular frame as the first portrait detection candidate frame;
the step of obtaining a second human image detection candidate frame according to the contour of the second difference map comprises the following steps: and using a maximum rectangular frame to externally connect the outline of the second difference map, and taking the maximum rectangular frame as the second portrait detection candidate frame.
10. A computer-readable storage medium, on which a computer program is stored, which, when being executed by a processor, carries out the method according to any one of claims 1 to 6.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202110699467.XA CN113468998A (en) | 2021-06-23 | 2021-06-23 | Portrait detection method, system and storage medium based on video stream |
Publications (1)
Publication Number | Publication Date |
---|---|
CN113468998A true CN113468998A (en) | 2021-10-01 |
Family
ID=77872497
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202110699467.XA Pending CN113468998A (en) | 2021-06-23 | 2021-06-23 | Portrait detection method, system and storage medium based on video stream |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN113468998A (en) |
Citations (10)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN101216885A (en) * | 2008-01-04 | 2008-07-09 | 中山大学 | Passerby face detection and tracing algorithm based on video |
CN103020580A (en) * | 2011-09-23 | 2013-04-03 | 无锡中星微电子有限公司 | Rapid human face detection method |
CN104978574A (en) * | 2015-07-10 | 2015-10-14 | 鲲鹏通讯(昆山)有限公司 | Gesture tracking method based on cluttered background |
CN104992155A (en) * | 2015-07-02 | 2015-10-21 | 广东欧珀移动通信有限公司 | Method and apparatus for acquiring face positions |
WO2018031105A1 (en) * | 2016-08-12 | 2018-02-15 | Qualcomm Incorporated | Methods and systems of maintaining lost object trackers in video analytics |
US20180374233A1 (en) * | 2017-06-27 | 2018-12-27 | Qualcomm Incorporated | Using object re-identification in video surveillance |
CN111325075A (en) * | 2018-12-17 | 2020-06-23 | 北京华航无线电测量研究所 | Video sequence target detection method |
CN111383246A (en) * | 2018-12-29 | 2020-07-07 | 杭州海康威视数字技术股份有限公司 | Scroll detection method, device and equipment |
CN112489090A (en) * | 2020-12-16 | 2021-03-12 | 影石创新科技股份有限公司 | Target tracking method, computer-readable storage medium and computer device |
CN112907623A (en) * | 2021-01-25 | 2021-06-04 | 成都创智数联科技有限公司 | Statistical method and system for moving object in fixed video stream |
Non-Patent Citations (4)
Title |
---|
CONEYPO: "A Brief Introduction to the Concept of Object Tracking", cnblogs (CNBLOGS.COM/ADAMINXIE/P/13560758.HTML) *
ZHANG_XIAO_XIA: "Moving Target Object Tracking", docin.com *
Li Ya: "Object Detection and Behavior Recognition in Surveillance Video", China Masters' Theses Full-text Database, Information Science and Technology *
Lin Wen: "Research on a Novel Moving Face Detection Algorithm Based on the Inter-frame Difference Method", Computer Simulation *
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
SE01 | Entry into force of request for substantive examination | ||
RJ01 | Rejection of invention patent application after publication |
Application publication date: 20211001 |
|
RJ01 | Rejection of invention patent application after publication |