CN115731258A - Moving object recognition method and shooting device - Google Patents


Info

Publication number
CN115731258A
Authority
CN
China
Prior art keywords
area, detection, target, detection area, foreground
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202110979220.3A
Other languages
Chinese (zh)
Inventor
Name withheld at the inventor's request
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Suzhou Zhendi Intelligent Technology Co Ltd
Original Assignee
Suzhou Zhendi Intelligent Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Suzhou Zhendi Intelligent Technology Co Ltd filed Critical Suzhou Zhendi Intelligent Technology Co Ltd
Priority to CN202110979220.3A priority Critical patent/CN115731258A/en
Publication of CN115731258A publication Critical patent/CN115731258A/en
Pending legal-status Critical Current

Abstract

The invention provides a method for identifying a moving target, which comprises the following steps: S11: collecting or acquiring an image; S12: performing foreground detection on the image and outputting a foreground detection area; S13: selecting the resolution of the image according to the size of the foreground detection area; and S14: performing target detection on a second detection area comprising the foreground detection area on the image with the selected resolution, and outputting a target detection area. The design of the invention has speed and time advantages and solves the problem that a tracking target cannot be selected in some scenarios (for example, when no display screen is available, or when the user cannot draw a selection frame around the tracking target in an app). At the same time, the reliability of tracking-target selection is higher.

Description

Moving object recognition method and shooting device
Technical Field
The invention relates to the field of computer vision and video image processing, in particular to a moving target identification method and shooting equipment.
Background
In a target tracking system, a tracking target area must be determined before the target can be tracked. The target may be selected manually, by gesture control, or automatically by detection (a detected object is taken as the target, but a false detection then leads to tracking the wrong object). In some constrained scenarios no user interaction is possible and the user cannot select the target, so the problems of how to detect the tracking target and how to select it in such scenarios need to be solved.
Disclosure of Invention
In view of at least one of the drawbacks of the prior art, the present invention provides a method for moving object identification, the method comprising:
s11: collecting or acquiring an image;
s12: performing foreground detection on the image and outputting a foreground detection area;
s13: selecting the resolution of the image according to the size of the foreground detection area; and
s14: and performing target detection on a second detection area comprising the foreground detection area on the image with the selected resolution, and outputting a target detection area.
According to one aspect of the invention, the method further comprises:
s15: and carrying out target tracking on the target detection area.
According to an aspect of the present invention, wherein the step S12 comprises:
s121: acquiring a moving target area;
s122: carrying out connected region detection and filtering on the moving target region;
s123: calculating the minimum circumscribed rectangle of each connected region, and fusing the connected regions with the center distance smaller than the distance threshold; and
s124: and outputting the fused connected region as the foreground detection region.
According to an aspect of the present invention, wherein said step S121 comprises obtaining the moving object region according to ViBE algorithm.
According to an aspect of the present invention, wherein the step S122 comprises:
calculating the area of each connected region;
and filtering out a connected region with the area smaller than a first preset area threshold value.
According to an aspect of the present invention, wherein the step S124 includes: and outputting the fused external rectangle list of the connected region as the foreground detection region.
According to an aspect of the present invention, wherein the step S13 comprises:
s131: calculating the area of the foreground detection area;
s132: when the area of the foreground detection area is smaller than a second preset area threshold value, selecting the image with high resolution; and when the area of the foreground detection area is not smaller than a second preset area threshold value, selecting the image with low resolution.
According to one aspect of the invention, wherein the high resolution is a 4k resolution and the low resolution is a 1k resolution.
According to an aspect of the invention, wherein the second detection area is 1-5 times the foreground detection area.
According to an aspect of the invention, wherein said step S14 comprises performing any one or more of face detection, pedestrian detection or head-shoulder detection.
According to an aspect of the invention, further comprising:
performing gesture detection and recognition on the foreground detection area and/or the target detection area;
and when a preset gesture is detected, carrying out target tracking on a target detection area comprising the preset gesture.
According to one aspect of the invention, the method further comprises the following steps: in the initial frame, performing gesture detection and recognition based on the foreground detection area or the target detection area; and in a non-initial frame, performing gesture detection and recognition based on a foreground detection area or a target detection area and a tracking area of a previous frame.
According to an aspect of the present invention, wherein step S14 further comprises: and selecting a moving target based on the gesture recognition result, determining the size of a moving target area according to the foreground detection area or the target detection area, executing an adaptive target tracking algorithm, and outputting the tracking area of the moving target.
The present invention also provides a photographing apparatus including:
a holder;
the camera device is arranged on the holder;
a control unit coupled to the head and the camera and capable of performing the method of moving object recognition as described above.
According to an aspect of the invention, wherein the control unit is configured to control the pan/tilt head to follow the target detection area.
According to one aspect of the invention, wherein the photographing apparatus is a drone.
The present invention also provides a computer readable storage medium having stored thereon computer readable instructions which, when executed by a processor, perform a method of moving object identification as described above.
The design of the invention addresses the problems that some targets are easily false-detected in complex scenes and that user interaction is cumbersome or direct user selection is impossible. For example, in certain app-based intelligent following modes the user cannot manually select the following target; following can only be switched on and off, and the target can only be selected through gesture recognition or a physical button, so a false detection causes following to fail. The invention designs a moving-target selection and following system for such constrained scenarios: by introducing a foreground detection algorithm, the intelligence and reliability of target selection are improved and the false-detection probability is reduced. The designed algorithm meets the following requirements at long range and in many scenes, occupies few resources while following, and satisfies practical following requirements on some low-compute platforms.
Drawings
The accompanying drawings, which are included to provide a further understanding of the disclosure, illustrate embodiments of the disclosure and together with the description serve to explain the disclosure and are not to limit the disclosure. In the drawings:
FIG. 1 illustrates a flow diagram of a moving object identification method in accordance with one embodiment of the present invention;
FIG. 2 is a flow chart illustrating step S12 of a moving object recognition method according to an embodiment of the present invention;
FIG. 3 is a flow chart illustrating step S13 of a moving object identification method according to an embodiment of the present invention;
FIG. 4 is a diagram illustrating moving object recognition according to one embodiment of the present invention;
fig. 5 shows a block diagram of a photographing apparatus according to an embodiment of the present invention.
Detailed Description
In the following, only certain exemplary embodiments are briefly described. As those skilled in the art will recognize, the described embodiments may be modified in various different ways, all without departing from the spirit or scope of the present invention. Accordingly, the drawings and description are to be regarded as illustrative in nature, and not as restrictive.
In the description of the present invention, it is to be understood that the terms "center", "longitudinal", "lateral", "length", "width", "thickness", "upper", "lower", "front", "rear", "left", "right", "vertical", "horizontal", "top", "bottom", "inner", "outer", "clockwise", "counterclockwise", and the like indicate orientations and positional relationships as shown in the drawings. They are used merely for convenience and simplicity of description and do not indicate or imply that the devices or elements referred to must have a particular orientation or be constructed and operated in a particular orientation, and thus are not to be construed as limiting the present invention. Furthermore, the terms "first" and "second" are used for descriptive purposes only and are not to be construed as indicating or implying relative importance or implicitly indicating the number of technical features indicated. Thus, features defined as "first" or "second" may explicitly or implicitly include one or more of the described features. In the description of the present invention, "a plurality" means two or more unless specifically defined otherwise.
In the description of the present invention, it should be noted that, unless otherwise explicitly stated or limited, the terms "mounted" and "connected" are to be construed broadly: for example, as a fixed connection, a removable connection, or an integral connection; as a mechanical or electrical connection or mutual communication; and as a direct connection or an indirect connection through intervening media, or an internal communication between two elements. The specific meanings of the above terms in the present invention can be understood by those of ordinary skill in the art according to the specific situation.
In the present invention, unless otherwise expressly stated or limited, a first feature being "above" or "below" a second feature means that the two features are in direct contact, or that they are not in direct contact but contact each other via another feature between them. Moreover, the first feature being "on", "above" or "over" the second feature includes the first feature being directly above or obliquely above the second feature, or merely indicates that the first feature is at a higher level than the second feature. The first feature being "under", "below" or "beneath" the second feature includes the first feature being directly below or obliquely below the second feature, or merely indicates that the first feature is at a lower level than the second feature.
The following disclosure provides many different embodiments or examples for implementing different features of the invention. To simplify the disclosure of the present invention, the components and arrangements of specific examples are described below. Of course, they are merely examples and are not intended to limit the present invention. Moreover, the present invention may repeat reference numerals and/or reference letters in the various examples, which have been repeated for purposes of simplicity and clarity and do not in themselves dictate a relationship between the various embodiments and/or configurations discussed. In addition, the present invention provides examples of various specific processes and materials, but one of ordinary skill in the art may recognize applications of other processes and/or uses of other materials.
The preferred embodiments of the present invention will be described in conjunction with the accompanying drawings, and it will be understood that they are described herein for the purpose of illustration and explanation and not limitation.
Fig. 1 shows a flowchart of a moving object recognition method according to an embodiment of the present invention, where the method 10 includes:
In step S11, a video image is captured in real time by a photographing apparatus, or a video file containing images of the target to be identified is loaded from a storage medium.
In step S12, foreground detection is performed on the image acquired or captured in step S11, and a foreground detection area is output. In order to automatically select the target, a foreground detection algorithm may be used to obtain the moving target area. And in the initial tracking stage, only the moving target is taken as a tracking target.
First, the moving-object area is obtained with a foreground detection algorithm. Preferably, the Visual Background Extractor (ViBE) algorithm is selected. Compared with other detection algorithms, ViBE is a pixel-level algorithm with a small computational load, a small memory footprint, a high processing speed, good detection results, fast ablation of ghost areas, and a stable, reliable response to noise when separating and extracting moving targets, which makes it very suitable for embedding in camera equipment and other scenarios that demand low computation and memory. Depending on the application scenario, other foreground detection algorithms may also be used, such as the frame-difference method, background modeling, foreground modeling, optical flow, the average-background method, and non-parametric background estimation; all of these fall within the scope of the present invention.
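ViBE maintains a per-pixel sample model and is too involved to show briefly; as an illustrative sketch only (not the patent's implementation), the frame-difference method listed above as an alternative can be written in a few lines. The threshold value below is an assumption.

```python
import numpy as np

def foreground_mask(prev_frame, frame, thresh=25):
    """Frame-difference foreground detection: pixels whose absolute
    gray-level change exceeds `thresh` are marked as foreground (1)."""
    diff = np.abs(frame.astype(np.int16) - prev_frame.astype(np.int16))
    return (diff > thresh).astype(np.uint8)
```

In practice the raw mask is noisy, which is exactly why the connected-region filtering of steps S122 to S124 follows.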
Then connected regions are detected, and small connected regions regarded as noise are filtered out. The connected regions that satisfy the conditions (for example, whose area meets the minimum-area requirement) are retained, the minimum circumscribed rectangle of each connected region is calculated, and finally a coordinate list of potential foreground target regions is output.
Fig. 2 shows a flowchart of step S12 of the moving object identification method according to an embodiment of the present invention, and preferably, step S12 includes:
a moving target region is acquired in step S121. Because the foreground detection target is a foreground target which is detected to be relatively dynamic under a relatively static background, only a moving target is taken as a foreground target in the initial tracking stage.
In step S122, connected region detection is performed on the moving target region, one or more connected regions of the moving target are obtained by using the adjacency relationship between the pixels, and then filtering is performed. In order to filter out the noisy region, different filtering conditions may be set to further determine the region of the moving object. Preferably, the minimum area requirement may be set to a first preset area threshold with the area size as a filtering condition. During filtering, the area of each connected region is calculated firstly, and then the connected region with the area smaller than a first preset area threshold value is used as a noise region to be filtered.
In step S123, the minimum bounding rectangle of each connected region is calculated, and then compared, and if the center distance between two connected regions is smaller than the distance threshold, the two connected regions are fused and taken as a connected region.
In step S124, the filtered and fused connected region is output as the foreground detection region, and preferably, a circumscribed rectangle list of the fused connected region or a coordinate list of a rectangle detection frame may be output as the foreground detection region.
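Steps S122 to S124 can be sketched in plain Python as follows. The (x, y, w, h) box representation, the greedy pairwise fusion loop, and the thresholds are illustrative assumptions, not details fixed by the patent.

```python
def fuse_boxes(boxes, min_area, dist_thresh):
    """S122: drop connected-region boxes whose area is below min_area;
    S123: fuse boxes whose center distance is below dist_thresh;
    S124: return the fused circumscribed-rectangle list."""
    kept = [b for b in boxes if b[2] * b[3] >= min_area]
    merged = True
    while merged:                      # repeat until no pair can be fused
        merged = False
        for i in range(len(kept)):
            for j in range(i + 1, len(kept)):
                (x1, y1, w1, h1), (x2, y2, w2, h2) = kept[i], kept[j]
                c1 = (x1 + w1 / 2, y1 + h1 / 2)
                c2 = (x2 + w2 / 2, y2 + h2 / 2)
                if ((c1[0] - c2[0]) ** 2 + (c1[1] - c2[1]) ** 2) ** 0.5 < dist_thresh:
                    nx, ny = min(x1, x2), min(y1, y2)
                    nw = max(x1 + w1, x2 + w2) - nx
                    nh = max(y1 + h1, y2 + h2) - ny
                    kept[i] = (nx, ny, nw, nh)   # circumscribed rect of the pair
                    del kept[j]
                    merged = True
                    break
            if merged:
                break
    return kept
```

For example, two overlapping 10x10 boxes near the origin fuse into one box, while a distant box and a tiny noise box are respectively kept and discarded.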
Steps S121 to S124 describe the foreground detection of step S12 in detail; the invention does not limit the execution order of these steps. The remaining steps are explained below.
In step S13, to reduce the time consumed by the detection algorithm and to improve detection accuracy and detection distance, a preferred embodiment of the present invention uses an adaptive-area target detection algorithm. Step S12 acquires the motion area of the foreground object and outputs the foreground detection area. Step S13 first calculates the size of the foreground detection area, such as its area or pixel count, and then selects the resolution of the input image, so that the next step can perform local or global detection on images of different resolutions; this reduces the complexity of target detection and increases the detection distance. Specifically, if the foreground detection area is smaller than a specified size, detection is performed on the high-resolution image; if it is larger than the specified size, detection is performed on the low-resolution image. Once the resolution of the input image is determined, the subsequent target detection area is determined from the foreground target area, improving detection effectiveness.
Fig. 3 shows a flowchart of step S13 of the moving object identification method according to an embodiment of the present invention, and preferably, step S13 includes:
the area of the foreground detection region is calculated in step S131, and the size of the area is determined according to the pixel width and the pixel height of the foreground object, for example.
In step S132, when the area of the foreground detection area is smaller than a second preset area threshold, the high-resolution image is selected; when it is not smaller than the second preset area threshold, the low-resolution image is selected. Two separate thresholds can also be set for the high- and low-resolution decisions: for example, the high-resolution image is selected when the area is smaller than the second preset area threshold, and the low-resolution image is selected when the area is larger than a third preset area threshold.
Furthermore, when the moving object is far away, the foreground detection region occupies a small area, and the corresponding region may be cropped from the initial image collected or acquired in step S11, or from one of the images produced during the image conversion or processing of foreground detection, to obtain a higher-resolution image. When the moving object is close, the foreground detection region occupies a larger area, and the region may be cropped from the initial image, or from an image obtained during conversion or processing, to obtain a lower-resolution image.
High resolution and low resolution are relative concepts, chosen by weighing the requirements on detection efficiency, detection speed and detection precision. For example, the high resolution may be set to 4k and the low resolution to 1k, or other high- and low-resolution values may be customized for the scene; all of these fall within the protection scope of the invention.
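The decision of step S132 reduces to a comparison against the area threshold. A minimal sketch follows, in which the concrete pixel dimensions standing in for "4k" and "1k" are assumptions for illustration only:

```python
def select_resolution(fg_area, high_thresh,
                      high=(3840, 2160), low=(1280, 720)):
    """S132: a small (distant) foreground selects the high-resolution
    image; a large (near) foreground is detected at low resolution."""
    return high if fg_area < high_thresh else low
```

The threshold `high_thresh` plays the role of the second preset area threshold and would be tuned per platform.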
Before and after cropping, the image coordinates need to be converted into detection coordinates under a unified image resolution, or into a unified coordinate system such as the image coordinate system or the pixel coordinate system, to facilitate the subsequent steps.
Steps S131 to S132 determine the resolution of the input image for the subsequent target detection algorithm. Then, in step S14, target detection is performed on a second detection area comprising the foreground detection area on the image with the selected resolution, and a target detection area is output. Determining the subsequent target detection area from the foreground detection area improves detection effectiveness. Generally, the foreground detection area is smaller than the initial image collected or acquired in step S11; that is, step S12 has already determined an area close to the moving target. The size of the subsequent target area is chosen taking into account the target's moving speed and the change in its position or extent caused by movement or limb motion, and the second detection area is then set around the area close to the moving target, so that target detection is performed only on the second detection area and the complexity of the detection algorithm is reduced. Preferably, 1-5 times the foreground detection area may be used as the second detection area.
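Deriving the second detection area from the foreground detection area can be sketched as a box expansion about the center, clipped to the image bounds. The scale factor corresponds to the 1-5x range suggested above; the function name and signature are illustrative assumptions.

```python
def expand_box(box, scale, img_w, img_h):
    """Scale an (x, y, w, h) foreground box about its center by `scale`
    (1-5x per the text) and clip the result to the image boundaries."""
    x, y, w, h = box
    cx, cy = x + w / 2, y + h / 2
    nw, nh = w * scale, h * scale
    nx = max(0, cx - nw / 2)
    ny = max(0, cy - nh / 2)
    return (int(nx), int(ny),
            int(min(img_w, cx + nw / 2) - nx),
            int(min(img_h, cy + nh / 2) - ny))
```

Target detection is then run only inside the returned box rather than over the whole frame.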
Because the target detection algorithm based on foreground detection detects only within the second detection area, detection efficiency is improved and the false-detection rate is reduced: the detection area is small relative to the full initial image, which also reduces false detections that might otherwise occur in the background.
In some scenarios the moving object cannot be selected and identified in the usual ways; for example, it cannot be manually selected through an app selection frame, and other detection and recognition methods are unavailable. According to a preferred embodiment of the present invention, the detection and recognition performed in step S14 may include one of face detection, pedestrian detection, or head-shoulder detection, or a combination of several of them.
Head-shoulder detection checks whether a head-and-shoulder image exists in the input image without placing any constraint on the background, and if so, calibrates its position. Detecting and locating with head-and-shoulder images reduces the influence of factors such as varying postures, imaging conditions, uncertain accessories, and complex backgrounds, and can serve as the first step of face detection, location and recognition. Head-shoulder detection is also the basis of pedestrian detection: because the human body is a complex deformable object, with highly articulated limbs and a large range of motion, pedestrian detection is one of the difficult problems in target detection. Exploiting the fact that head-and-shoulder detection is not easily disturbed by other factors (such as gait, clothing color, or the environment) increases the accuracy of pedestrian detection. Therefore, combining head-shoulder detection with face detection or pedestrian detection can essentially solve, when the moving target is a human body, the problem that the tracking target cannot be selected and identified in certain scenarios.
According to a preferred embodiment of the present invention, the tracking target may also be determined through gesture detection and recognition; for example, the person being filmed may actively trigger the gesture detection function with a preset gesture. Preferably, gesture detection and recognition are performed on the foreground detection area obtained in step S12 and/or the target detection area obtained in step S14, separately, sequentially, or selectively. In a specific implementation, in the initial frame, gesture detection and recognition are performed based on the foreground detection area or the target detection area; in a non-initial frame, they are performed based on the foreground detection area or target detection area together with the tracking area of the previous frame.
In step S14, a moving target may be selected based on the gesture recognition result: the size of the moving target area is determined according to the foreground detection area or the target detection area, an adaptive target tracking algorithm is executed, and the tracking area of the moving target is output. Specifically, target detection is performed on the second detection area comprising the foreground detection area, the target detection area is output, and gesture detection and recognition are performed when a valid detection result exists in the target detection area. In other words, gesture detection and recognition are applied only to valid detection areas.
There are two modes for determining the gesture detection area: gesture detection based on the foreground detection area and/or the target detection area, or gesture detection based on a generic area or generic target. Detection based on a generic target uses, for example, the skin color of a human hand. Skin-color variation is mainly affected by luminance and chrominance, so according to the clustering property of skin color, skin can be well identified only in a color space that separates the luminance component from the chrominance components, thereby enabling the hand to be captured.
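As a sketch of the generic-target mode, skin-color segmentation in the YCrCb space (which separates the luma component Y from the chroma components Cr and Cb) reduces to a simple chroma threshold. The Cr/Cb bounds below are commonly used values, not ones specified in this document.

```python
import numpy as np

def skin_mask_ycrcb(img_ycrcb, cr_range=(133, 173), cb_range=(77, 127)):
    """Candidate hand/skin mask: in YCrCb, the chroma (Cr, Cb) of skin
    clusters tightly regardless of the luma Y, so thresholding the two
    chroma channels identifies skin pixels."""
    cr, cb = img_ycrcb[..., 1], img_ycrcb[..., 2]
    return ((cr >= cr_range[0]) & (cr <= cr_range[1]) &
            (cb >= cb_range[0]) & (cb <= cb_range[1])).astype(np.uint8)
```

The resulting mask would then be cleaned with the same connected-region filtering used for foreground detection before gesture recognition.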
According to a preferred embodiment of the present invention, the method 10 further comprises:
the target detection area is subjected to target tracking in step S15, the second detection area is used as the target detection area in step S14, or the target detection area is determined according to face detection, pedestrian detection or head and shoulder detection, or when a preset gesture is detected, an area including the preset gesture is used as the target detection area, and then the target tracking is performed in step S15.
To sum up, to solve the problem of tracking-target selection in constrained scenarios, the tracking target can be selected by gesture control, while the size of the candidate target is determined through foreground-detection region segmentation and a generic detection algorithm. This avoids the problem that, when the user manually draws a selection frame, the target contains too much background area and degrades tracker performance. If the size of the subsequent target were determined by detection alone, an inaccurate detection frame would cause tracking to fail or perform poorly.
Fig. 4 shows a schematic diagram of moving object recognition according to an embodiment of the present invention. As shown in (a) of fig. 4, an original image, for example an image captured by a video device, is first acquired. Foreground detection is then performed on the original image: if the time consumed by the detection algorithm is not a concern, foreground detection can be performed directly on the original image; if it must be considered, foreground detection can be performed on a low-resolution image. Foreground detection yields graph (b) in fig. 4, in which the moving target is a walking person. Next, connected-region detection is performed on the moving target region and filtering is applied according to the size of each connected region: for example, the area of each connected region is calculated, and regions whose area is smaller than a first preset area threshold are filtered out as noise. Deleting the invalid noisy regions gives graph (c) in fig. 4. Then the minimum circumscribed rectangles of the connected regions are calculated, regions whose center distance is smaller than the distance threshold are fused, and the fused connected regions are output as the foreground detection region, as shown in (d) of fig. 4. If the size of the foreground detection area is smaller than the specified size, subsequent target detection is performed on the high-resolution image; if it is larger than the specified size, subsequent target detection is performed on the low-resolution image.
For example, when the area of the foreground detection region is smaller than a second preset area threshold, selecting a high-resolution image; and when the area of the foreground detection area is not smaller than a second preset area threshold value, selecting an image with low resolution. And after the resolution of the input image is determined, performing target detection on a second detection area comprising the foreground detection area on the image with the selected resolution, and outputting the target detection area. That is, the area for subsequent target detection and the resolution of the image are determined through foreground detection, and finally, target detection of a moving target is performed on the image with the resolution in this area, so as to obtain a real and effective potential tracking target, as shown in (e) of fig. 4. If there are multiple potential moving targets, such as multiple walking people, it can be determined by gesture recognition or the like which moving target is selected as the final following target.
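The overall flow of fig. 4 can be condensed into one self-contained sketch with the final detector stubbed out. Every threshold, helper name, and the frame-difference stand-in for foreground detection are illustrative assumptions, not the patent's implementation.

```python
import numpy as np

def recognize_moving_target(prev, cur, detector,
                            diff_thresh=25, min_area=20, scale=2):
    """End-to-end sketch: frame-difference foreground mask -> bounding
    box of foreground pixels (S12) -> expanded second detection area
    (S14) -> target detection restricted to that area. `detector` is a
    stand-in for face/pedestrian/head-shoulder detection."""
    mask = np.abs(cur.astype(np.int16) - prev.astype(np.int16)) > diff_thresh
    ys, xs = np.nonzero(mask)
    if xs.size < min_area:              # S122: too few pixels, noise only
        return None
    x, y = xs.min(), ys.min()           # S123: circumscribed rectangle
    w, h = xs.max() - x + 1, ys.max() - y + 1
    H, W = cur.shape[:2]                # S14: expand to second detection area
    nx = max(0, x - (scale - 1) * w // 2)
    ny = max(0, y - (scale - 1) * h // 2)
    nw, nh = min(W - nx, w * scale), min(H - ny, h * scale)
    return detector(cur[ny:ny + nh, nx:nx + nw]), (nx, ny, nw, nh)
```

A real system would replace the mask step with ViBE and add the resolution selection of step S13, but the restriction of detection to the expanded box is the part that carries the efficiency claim.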
The present invention also provides a photographing apparatus 20, as shown in fig. 5, including:
a pan/tilt head 21;
the camera device 22, the said camera device 22 is installed on said cloud terrace 21;
a control unit 23, said control unit 23 being coupled to said head 21 and to said camera means 22 and being able to carry out the method of moving object recognition as described above.
According to a preferred embodiment of the present invention, the control unit 23 is configured to control the pan/tilt head 21 to follow the target detection area.
According to a preferred embodiment of the present invention, wherein the photographing apparatus is a drone.
The present invention also provides a computer readable storage medium having computer readable instructions stored thereon which, when executed by a processor, perform a method of moving object recognition as described above.
Finally, it should be noted that although the present invention has been described in detail with reference to the foregoing embodiments, those skilled in the art will appreciate that modifications may be made to the embodiments, or equivalents substituted for some of their features, without departing from the spirit and scope of the invention. Any modification, equivalent replacement, or improvement made within the spirit and principles of the present invention shall fall within the protection scope of the present invention.

Claims (10)

1. A method of moving object recognition, the method comprising:
S11: collecting or acquiring an image;
S12: performing foreground detection on the image and outputting a foreground detection area;
S13: selecting a resolution of the image according to the size of the foreground detection area; and
S14: performing target detection, on the image at the selected resolution, in a second detection area comprising the foreground detection area, and outputting a target detection area.
2. The method of claim 1, further comprising:
S15: performing target tracking on the target detection area.
3. The method of claim 1, wherein the step S12 comprises:
S121: acquiring a moving target region;
S122: performing connected-region detection and filtering on the moving target region;
S123: calculating the minimum circumscribed rectangle of each connected region and fusing connected regions whose center distance is smaller than a distance threshold; and
S124: outputting the fused connected regions as the foreground detection area.
4. The method of claim 3, wherein the step S121 comprises obtaining the moving target region by the ViBe algorithm; the step S122 comprises: calculating the area of each connected region and filtering out connected regions whose area is smaller than a first preset area threshold; and the step S124 comprises: outputting the list of circumscribed rectangles of the fused connected regions as the foreground detection area.
5. The method according to any one of claims 1-3, wherein the step S13 comprises:
S131: calculating the area of the foreground detection area; and
S132: selecting the high-resolution image when the area of the foreground detection area is smaller than a second preset area threshold, and selecting the low-resolution image when the area is not smaller than the second preset area threshold;
wherein the high resolution is 4K resolution and the low resolution is 1K resolution.
6. The method according to any one of claims 1-3, wherein the second detection area is 1-5 times the size of the foreground detection area, and the step S14 comprises performing any one or more of face detection, pedestrian detection, or head-shoulder detection.
7. The method of claim 1, further comprising:
performing gesture detection and recognition on the foreground detection area and/or the target detection area; and
when a preset gesture is detected, performing target tracking on the target detection area comprising the preset gesture;
wherein in an initial frame, gesture detection and recognition are performed based on the foreground detection area or the target detection area, and in a non-initial frame, gesture detection and recognition are performed based on the foreground detection area or the target detection area together with a tracking area of a previous frame.
8. The method of claim 7, wherein the step S14 further comprises: selecting a moving target based on the gesture recognition result, determining the size of the moving target area according to the foreground detection area or the target detection area, executing an adaptive target tracking algorithm, and outputting the tracking area of the moving target.
9. A photographing apparatus comprising:
a pan/tilt head;
a camera device mounted on the pan/tilt head; and
a control unit coupled to the pan/tilt head and to the camera device and capable of carrying out the method of moving object recognition according to any one of claims 1-8.
10. The photographing apparatus of claim 9, wherein the control unit is configured to control the pan/tilt head to follow the target detection area, and the photographing apparatus is a drone.
CN202110979220.3A 2021-08-25 2021-08-25 Moving object recognition method and shooting device Pending CN115731258A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110979220.3A CN115731258A (en) 2021-08-25 2021-08-25 Moving object recognition method and shooting device


Publications (1)

Publication Number Publication Date
CN115731258A true CN115731258A (en) 2023-03-03

Family

ID=85289593

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110979220.3A Pending CN115731258A (en) 2021-08-25 2021-08-25 Moving object recognition method and shooting device

Country Status (1)

Country Link
CN (1) CN115731258A (en)


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination