WO2021208253A1 - Method, device, and handheld camera for determining a tracking object - Google Patents

Method, device, and handheld camera for determining a tracking object

Info

Publication number
WO2021208253A1
Authority
WO
WIPO (PCT)
Prior art keywords
information
tracking
image frame
tracking object
captured image
Prior art date
Application number
PCT/CN2020/099830
Other languages
English (en)
French (fr)
Inventor
霍磊
梁峰
Original Assignee
上海摩象网络科技有限公司
Priority date
Filing date
Publication date
Application filed by 上海摩象网络科技有限公司
Publication of WO2021208253A1


Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 20/00 Scenes; Scene-specific elements
    • G06V 20/10 Terrestrial scenes
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 18/00 Pattern recognition
    • G06F 18/20 Analysing
    • G06F 18/22 Matching criteria, e.g. proximity measures
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00 Image analysis
    • G06T 7/20 Analysis of motion
    • G06T 7/246 Analysis of motion using feature-based methods, e.g. the tracking of corners or segments
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 10/00 Arrangements for image or video recognition or understanding
    • G06V 10/10 Image acquisition
    • G06V 10/12 Details of acquisition arrangements; Constructional details thereof
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 2207/00 Indexing scheme for image analysis or image enhancement
    • G06T 2207/10 Image acquisition modality
    • G06T 2207/10016 Video; Image sequence

Definitions

  • the embodiments of the present application relate to the field of image recognition technology, and in particular to a tracking object determination method, device, and handheld camera.
  • Computer vision refers to a simulation of biological vision using computers and related equipment. Its main task is to process the collected pictures or videos to obtain the three-dimensional information of the corresponding scene. Target detection and tracking is an important branch in the field of computer vision, which has a wide range of applications in military guidance, visual navigation, robotics, intelligent transportation, public safety and other fields.
  • handheld smart cameras can also apply target detection and tracking technology to track the target to be photographed.
  • however, the field of view of a handheld smart camera is relatively limited, so its position must be changed continuously to obtain panoramic images; moreover, the multi-target tracking management of handheld smart cameras is imperfect, so that when a large number of targets are to be photographed, the tracking information used for tracking them cannot be updated in time, resulting in tracking failures and recognition failures.
  • one of the technical problems solved by the embodiments of the present application is to provide a tracking object determination method, device, and handheld camera, so as to overcome the defect in the prior art that the tracking information for tracking the target to be photographed cannot be updated in time.
  • an embodiment of the present application provides a method for determining a tracking object, which includes: performing image recognition on a currently captured image frame to obtain identification information used to identify a recognized object in the currently captured image frame; matching the recognized object with a tracking object according to the identification information and tracking information used to identify the tracking object in a captured image frame, to obtain matching information; and updating the tracking information according to the matching information.
  • an embodiment of the present application provides a tracking object determination device, including: a memory, a processor, and a video collector, where the video collector is used to collect a target to be tracked in a target area, the memory is used to store program code, and the processor calls the program code, which, when executed, is used to perform the following operations: perform image recognition on the currently captured image frame to obtain identification information for identifying the recognized object in the currently captured image frame; match the identified object with the tracking object according to the identification information and the tracking information used to identify the tracking object in the captured image frame, to obtain matching information; and update the tracking information according to the matching information.
  • an embodiment of the present application provides a handheld camera, including the tracking object determination device in the foregoing embodiment, and further including a carrier, which is fixedly connected to the video collector and is used to carry at least part of the video collector.
  • in the embodiments of the present application, image recognition is first performed on the currently captured image frame to obtain the identification information used to identify the recognized object in the currently captured image frame; then, based on the identification information and the tracking information used to identify the tracking object in the captured image frame, the recognized object is matched with the tracking object to obtain matching information; finally, the tracking information is updated according to the matching information.
  • the current captured image frame can be compared with the previous image frame.
  • by searching for the matching relationship between the recognized object in the currently captured image frame and the tracking object in the captured image frame, the tracking information of the tracking object can be updated, the tracking state of the tracking target can be refreshed, and more real-time and accurate management and maintenance of the tracking target's information can be achieved.
  • FIG. 1 is a schematic flowchart of a method for determining a tracking object according to Embodiment 1 of this application;
  • FIG. 2 is a schematic flowchart of a method for determining a tracking object according to Embodiment 2 of the present application;
  • FIG. 3 is a schematic flowchart of a method for determining a tracking object according to Embodiment 3 of the present application;
  • FIG. 4 is a structural block diagram of a tracking object determination device provided in Embodiment 4 of this application;
  • FIGS. 5-7 are schematic structural block diagrams of a handheld camera provided in Embodiment 5 of this application.
  • target detection and tracking has been a rapidly developing direction in the field of computer vision in recent years.
  • with vision processing technology and artificial intelligence technology, handheld smart cameras can be used to track the target to be photographed and to perform object recognition and scene recognition based on the target, so that users can classify and manage the photos or videos taken and perform subsequent automatic processing.
  • however, home handheld smart cameras often have a limited field of view and need to constantly change position to obtain panoramic images.
  • when the targets are numerous, of many types, and scattered, the smaller field of view cannot well support real-time multi-target tracking: when multiple types and/or numbers of objects appear, the tracking information for tracking the targets to be shot cannot be updated in time, resulting in tracking failures, recognition failures, and other errors.
  • the method for determining tracking objects in the technical solutions provided in the embodiments of the present application improves user experience.
  • Embodiment 1 of the present application provides a method for determining a tracking object, as shown in FIG. 1, which is a schematic flowchart of a method for determining a tracking object provided by an embodiment of the application.
  • S101 Perform image recognition on a currently captured image frame, and obtain identification information used to identify a recognized object in the currently captured image frame.
  • the identification information is used to identify the recognition result corresponding to the recognition object identified by the preset image recognition algorithm.
  • the specific information content and recording method included in the identification information are not limited, and the type of image recognition algorithm is not limited.
  • the identification information may include color feature information, depth feature information, location feature information, etc. of the identified object.
  • the image recognition algorithm may be RCNN, SSD, YOLO, etc.
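As an illustration of the identification information described above, a detector's per-object output can be sketched as a small record. This is a minimal sketch under assumed conventions; the field names below are illustrative, not terms from the application:

```python
from dataclasses import dataclass

@dataclass
class Detection:
    """Hypothetical per-object identification record produced by a
    detector such as SSD or YOLO for one captured image frame."""
    box: tuple          # position feature: (x1, y1, x2, y2) in the frame
    color_hist: tuple   # color feature, e.g. a normalized color histogram
    score: float        # detector confidence for the recognized object

# One recognized object in the currently captured image frame.
det = Detection(box=(10, 20, 60, 100), color_hist=(0.2, 0.5, 0.3), score=0.91)
print(det.score)  # 0.91
```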
  • S102 According to the identification information and the tracking information used to identify the tracking object in the captured image frame, match the identification object with the tracking object to obtain matching information.
  • the captured image frame is one or more continuous image frames before the currently captured image frame.
  • the tracking target is an object obtained by recognizing at least one captured image frame with a preset image recognition algorithm and marking it; the tracking information is used to identify both the recognition result of the tracking target by the preset image recognition algorithm and the tracking state of the tracking target in one or more consecutive captured image frames before the currently captured image frame.
  • the matching information is used at least to indicate whether the identification object and the tracking object match.
  • if the recognition object matches the tracking object, it indicates that they are the same object, or that the possibility that they are the same object is extremely high, and the tracking object can be considered to appear in the currently captured image frame;
  • if the recognition object does not match the tracking object, it indicates that they are probably not the same object, and the tracking object can be considered not to appear in the currently captured image frame.
  • since both the identification information and the tracking information are recognition results obtained by identifying image frames with the preset image recognition algorithm, they can be used to match the identification object and the tracking object.
  • the matching algorithm in this embodiment is not limited here.
  • as the captured image frames change, the tracking state corresponding to the tracking target changes; according to the matching information, it can be determined whether the tracking target appears in the currently captured image frame, so the tracking information corresponding to the tracking object can be updated according to the matching information, and the updated tracking information can be used in image frames after the currently captured image frame.
  • the tracking object determination method of the present application first performs image recognition on the currently captured image frame to obtain identification information for identifying the recognized object in the currently captured image frame; then, according to the identification information and the tracking information of the tracking object in the captured image frame, matches the identified object with the tracking object to obtain matching information; and finally updates the tracking information according to the matching information.
  • the current captured image frame can be compared with the previous image frame, the tracking status of the tracking target can be updated, and real-time and accurate management of the tracking target can be realized.
  • the second embodiment of the application provides a method for determining a tracking object, as shown in FIG. 2, which is a schematic flowchart of a method for determining a tracking object provided by an embodiment of the application.
  • S201 Perform image recognition on the currently captured image frame to obtain identification information used to identify the recognition object in the currently captured image frame.
  • step S201 is the same as step S101 in the first embodiment, and will not be repeated here.
  • S202 According to the identification information and the tracking information used to identify the tracking object in the captured image frame, match the identification object with the tracking object to obtain matching information.
  • step S202 may include:
  • S202a According to the identification information and the tracking information, obtain similarity information for identifying the degree of similarity between the identification object and the tracking object, and position information for identifying the positional relationship between the identification object and the tracking object.
  • the similarity information is used to indicate the similarity between the identified object and the tracked object, and the calculation method of the similarity is not limited.
  • the image similarity algorithm can be a histogram matching algorithm, Manhattan distance algorithm, Chebyshev distance algorithm, cosine distance algorithm, Pearson correlation coefficient algorithm, Hamming distance algorithm, Jaccard distance algorithm, Bray-Curtis distance algorithm, Mahalanobis distance algorithm, JS divergence algorithm, etc.
  • the obtained similarity can be at least one of the similarity of the color features of the recognized object and the tracking object, the similarity of the feature points, and the similarity of the shape.
  • for example, when using the histogram matching algorithm to calculate the similarity between the recognized object and the tracked object, first extract the color histograms of the recognized object and the tracked object; then calculate the distance between the two color histograms (such as the Bhattacharyya distance or the histogram intersection distance), and obtain the similarity information between the recognized object and the tracked object according to the distance.
  • the color histogram is used to describe the proportion of different colors in the entire image, and does not care about the spatial position of each color.
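The histogram comparison above can be sketched with the Bhattacharyya coefficient, from which the Bhattacharyya distance is derived (the distance is the negative log of the coefficient). This is a minimal illustration assuming normalized histograms of equal bin count, not the application's exact formula:

```python
import math

def bhattacharyya_coefficient(h1, h2):
    """Bhattacharyya coefficient of two normalized color histograms:
    1.0 for identical histograms, 0.0 for fully disjoint ones."""
    return sum(math.sqrt(a * b) for a, b in zip(h1, h2))

# Toy 3-bin histograms; real color histograms use many more bins.
h_track = (0.5, 0.3, 0.2)   # tracking object's color histogram
h_recog = (0.4, 0.4, 0.2)   # recognized object's color histogram
print(round(bhattacharyya_coefficient(h_track, h_recog), 4))
```

A higher coefficient means the recognized object and the tracking object are more likely the same object in terms of color distribution.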
  • the calculation method of the positional relationship between the recognition object and the tracking object is not limited.
  • the position information includes intersection-over-union sub-information and distance sub-information, where the intersection-over-union sub-information is used to identify the intersection-over-union corresponding to the identification object and the tracking object, and the distance sub-information is used to identify the distance between the identified object and the tracked object.
  • the intersection-over-union measures the degree of overlap between the identified object and the tracked object. It takes a value between 0 and 1: the higher the value, the higher the degree of overlap between the recognition object and the tracking object; when the intersection-over-union is 0, the recognition object and the tracking object do not overlap; when it is 1, they completely overlap.
  • the area of the intersection region and the area of the union region of the recognition object and the tracking object in the image frame can be obtained first, and the ratio of the intersection area to the union area can then be calculated, thereby obtaining the intersection-over-union corresponding to the recognition object and the tracking object.
  • the distance between the recognition object and the tracking object may be the distance between the center point of the tracking object in the image frame and the center point of the recognition object in the image frame.
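The intersection-over-union and center-distance computations described above can be sketched as follows; the (x1, y1, x2, y2) corner-coordinate box convention is an assumption for illustration:

```python
import math

def iou(a, b):
    """Intersection-over-union of two boxes given as (x1, y1, x2, y2)."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)   # intersection area
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    union = area_a + area_b - inter                 # union area
    return inter / union if union else 0.0

def center_distance(a, b):
    """Euclidean distance between the center points of two boxes."""
    cax, cay = (a[0] + a[2]) / 2, (a[1] + a[3]) / 2
    cbx, cby = (b[0] + b[2]) / 2, (b[1] + b[3]) / 2
    return math.hypot(cax - cbx, cay - cby)

track = (0, 0, 10, 10)
recog = (5, 0, 15, 10)      # shifted right by half a box width
print(iou(track, recog))    # 50 / 150, i.e. partial overlap
```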
  • S202b According to the similarity information and the position information, match the identified object with the tracked object to obtain matching information.
  • the recognition object and the tracking object may be matched together according to the similarity information and the position information.
  • step S202 may further include: according to the identification information and the tracking information, the greedy algorithm and the Hungarian algorithm are used in sequence to match the identified object with the tracked object to obtain matching information.
  • the Hungarian algorithm is a combinatorial optimization algorithm that solves the task assignment problem in polynomial time.
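A minimal sketch of the greedy first pass, under an assumed cost-matrix convention (lower cost means a better track/detection pair); in practice the pairs left unmatched by this pass would be handed to a Hungarian solver, for example scipy.optimize.linear_sum_assignment, for a globally optimal second pass:

```python
def greedy_match(cost, threshold):
    """Greedy pass: repeatedly take the lowest-cost (track, detection)
    pair whose cost is below `threshold`, never reusing a track or a
    detection. cost[i][j] is the cost of pairing track i with detection j."""
    pairs, used_tracks, used_dets = [], set(), set()
    candidates = sorted(
        (cost[i][j], i, j)
        for i in range(len(cost)) for j in range(len(cost[0]))
    )
    for c, i, j in candidates:
        if c < threshold and i not in used_tracks and j not in used_dets:
            pairs.append((i, j))
            used_tracks.add(i)
            used_dets.add(j)
    return pairs

# Two tracks vs. two detections: track 0 clearly matches detection 1.
cost = [[0.9, 0.1],
        [0.2, 0.8]]
print(greedy_match(cost, threshold=0.5))  # [(0, 1), (1, 0)]
```

The greedy pass is fast and resolves the obvious matches; the Hungarian algorithm then settles the ambiguous remainder optimally.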
  • step S203 may further include:
  • the tracking object determination method of the present application can comprehensively consider the similarity information and position information between the identified object and the tracked object, and/or use the greedy algorithm and the Hungarian algorithm to match the identified object with the tracked object, thereby achieving more accurate matching; and for the two different matching results of the recognized object and the tracked object, follow-up processing schemes are provided, enabling more effective management of tracked objects.
  • the third embodiment of the present application provides a method for determining a tracking object, as shown in FIG. 3, which is a schematic flowchart of a method for determining a tracking object provided by an embodiment of the application.
  • S301 Perform image recognition on the currently captured image frame to obtain identification information used to identify the recognition object in the currently captured image frame.
  • step S301 is the same as step S101 in the first embodiment, and will not be repeated here.
  • S302 According to the identification information and the tracking information used to identify the tracking object in the captured image frame, match the identification object with the tracking object to obtain matching information.
  • step S302 is the same as step S102 in the first embodiment or step S202 in the second embodiment, and will not be repeated here.
  • step S303 may include:
  • S303a Obtain overlapping information according to the identification information and the tracking information.
  • the overlap information is used to identify the degree of image overlap between the tracking object and other objects in the previous image frame of the currently captured image frame, and the degree of image overlap between the recognition object of the currently captured image frame and other objects.
  • if the image overlap degree is high, it indicates that the tracking object or the recognition object overlaps with other objects in the image frame, or is too close to other objects to be distinguished, and is not suitable to be tracked and shot as the tracking target of the shooting device.
  • when the recognition object does not match the tracking object and its image overlap degree is less than the overlap threshold, it indicates that the recognition object may be suitable for tracking and shooting as the tracking target of the camera, so the recognition object can be determined as a new tracking object.
  • the overlap threshold can be set according to specific needs.
  • during shooting, the state of one or more tracking objects in the captured image frames may change, for example, an object may disappear from the image frame or overlap with other objects; to facilitate the management of tracking objects, the tracking information may also include a status identifier corresponding to the tracking object, where there are more than two types of status identifiers, so that tracking objects can be classified and managed.
  • the status identifier corresponding to the tracking object may also be switched according to the real-time shooting situation, and step S303b may include: updating the status identifier corresponding to the tracking object according to the matching information and the overlap information.
  • the tracking information may also include appearance duration information used to identify the continuous appearance duration value of the tracking target in the captured image frames, or disappearance duration information used to identify the continuous disappearance duration value of the tracking target in the captured image frames.
  • updating tracking information includes:
  • Sub-step S1 according to the matching information, update the appearance duration information or disappearance duration information.
  • Sub-step S2 according to the updated appearance duration information or the updated disappearance duration information, and the overlap information, update the status identifier corresponding to the tracking object.
  • the updated appearance time information is used to identify the continuous appearance time value of the tracking object in the captured image frame;
  • the updated disappearance time information is used to identify the continuous disappearance time value of the tracking object in the captured image frame;
  • the overlap information is used to Identifies the continuous overlap duration value of the tracking object in the captured image frame.
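Sub-step S1 above can be sketched as a pair of per-object counters updated from the matching information; the function name and the (appearance, disappearance) tuple layout are illustrative assumptions:

```python
def update_durations(matched, seen_frames, gone_frames):
    """If the tracking object matched a recognition result in this frame,
    extend the continuous-appearance count and reset the continuous-
    disappearance count; otherwise do the opposite."""
    if matched:
        return seen_frames + 1, 0
    return 0, gone_frames + 1

seen, gone = 4, 0
seen, gone = update_durations(False, seen, gone)  # object missed this frame
print(seen, gone)  # 0 1
```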
  • the status identifier corresponding to the tracking object may be updated.
  • the preset conditions can be set according to specific needs.
  • the status identifier corresponding to the tracking object may be one of the first identifier, the second identifier, the third identifier, the fourth identifier, and the fifth identifier.
  • the specific instructions are as follows:
  • when the status identifier corresponding to the tracking object is the first identifier, the tracking object is in a continuously tracked state, and the camera will track it, for example, by adjusting the shooting angle according to the movement of the tracking object.
  • when the status identifier corresponding to the tracking object is the second identifier, the tracking object is in a long-term lost state: it is not recognized in the L consecutive captured image frames before the currently captured image frame, but is in a continuously tracked state in the (L+1)-th captured image frame before the currently captured image frame, that is, its status identifier in that frame is the first identifier, where L is greater than or equal to 0.
  • when the status identifier corresponding to the tracking object is the third identifier, the tracking object is in an overlapping state: in X consecutive image frames before the currently captured image frame it overlaps with other objects, or is too close to other objects to be distinguished, but it is continuously tracked in the (X+1)-th captured image frame before the currently captured image frame, that is, its status identifier in that frame is the first identifier, where X is greater than or equal to 0.
  • when the status identifier corresponding to the tracking object is the fourth identifier, the tracking object is in a short-term lost state: it is not recognized in the Y consecutive captured image frames before the currently captured image frame, but is in a continuously tracked state in the (Y+1)-th captured image frame before the currently captured image frame, that is, its status identifier in that frame is the first identifier, where Y is greater than or equal to 0 and Y is less than the aforementioned L value.
  • when the status identifier corresponding to the tracking object is the fifth identifier, the tracking object is in a newly discovered state, and its status identifier has never been determined to be the first identifier; that is, a tracking object whose status identifier is the fifth identifier has not been tracked and photographed in the currently captured image frame or in the previous Z consecutive image frames, where Z is an integer greater than or equal to 0.
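The five status identifiers and the threshold-driven transitions discussed below can be sketched as a small state machine. The enum names and the frame-count thresholds (taken from the example values in the text) are assumptions for illustration, not fixed by the application:

```python
from enum import Enum, auto

class Status(Enum):
    """Hypothetical names for the five status identifiers."""
    TRACKED = auto()          # first identifier: continuously tracked
    LONG_TERM_LOST = auto()   # second identifier
    OVERLAPPING = auto()      # third identifier
    SHORT_TERM_LOST = auto()  # fourth identifier
    NEW = auto()              # fifth identifier: newly discovered

def update_status(status, seen_frames, gone_frames, overlap_frames):
    """One update step driven by the duration counters.
    Thresholds are example frame counts from the text (assumptions)."""
    FIRST_DISAPPEAR, THIRD_DISAPPEAR = 10, 5
    FIRST_OVERLAP = 8
    FIRST_APPEAR, SECOND_APPEAR = 5, 3
    if status is Status.TRACKED:
        if overlap_frames > FIRST_OVERLAP:
            return Status.OVERLAPPING       # persistently overlapping
        if gone_frames > FIRST_DISAPPEAR:
            return Status.LONG_TERM_LOST    # gone for a long time
        if gone_frames > THIRD_DISAPPEAR:
            return Status.SHORT_TERM_LOST   # gone for a short time
    elif status is Status.LONG_TERM_LOST and seen_frames > FIRST_APPEAR:
        return Status.TRACKED               # reappeared, resume tracking
    elif status is Status.SHORT_TERM_LOST and seen_frames > SECOND_APPEAR:
        return Status.TRACKED               # reappeared quickly
    return status

print(update_status(Status.TRACKED, 0, 11, 0))  # Status.LONG_TERM_LOST
```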
  • in some embodiments, when the status identifier corresponding to the tracking object is the first identifier and the continuous disappearance duration value is greater than the first disappearance threshold, the status identifier corresponding to the tracking object is updated to the second identifier.
  • the first disappearance threshold can be represented by a time value or the number of consecutive image frames, which can be set according to specific requirements.
  • the first disappearance threshold may be represented by 10 consecutive image frames.
  • when the status identifier corresponding to the tracking object is the first identifier and the continuous disappearance duration value of the tracking object in the captured image frames is greater than the first disappearance threshold, it indicates that the tracking object may have left the shooting frame and the probability of it being photographed subsequently is relatively low, so the status identifier of the tracking object can be updated to the second identifier.
  • in some embodiments, when the status identifier corresponding to the tracking object is the second identifier and the continuous appearance duration value is greater than the first appearance threshold, the status identifier corresponding to the tracking object is updated to the first identifier.
  • the first appearance threshold can be represented by a time value or a number of consecutive image frames, and can be set according to specific requirements.
  • for example, the first appearance threshold can be represented by 5 consecutive image frames.
  • when the status identifier corresponding to the tracking object is the second identifier and the duration of the tracking object's continuous appearance in the captured image frames is greater than the first appearance threshold, it indicates that the tracking object has reappeared in the captured image frames and can be tracked and photographed again, so its status identifier can be updated to the first identifier.
  • in some embodiments, when the status identifier corresponding to the tracking object is the second identifier and the continuous disappearance duration value is greater than the second disappearance threshold, the tracking of the object is cancelled.
  • the second disappearance threshold can be represented by a time value or the number of consecutive image frames, which can be set according to specific requirements.
  • the second disappearance threshold can be represented by 5 consecutive image frames.
  • when the status identifier corresponding to the tracking object is the second identifier and the duration of the tracking object's continuous disappearance is greater than the second disappearance threshold, it indicates that the tracking object may have left the shooting frame and the probability of it being photographed subsequently is very low; the target can then no longer be used as a tracking object, and the status identifier or tracking information corresponding to it can be deleted to reduce the occupation of computing and storage resources.
  • in some embodiments, when the status identifier corresponding to the tracking object is the first identifier and the continuous overlap duration value is greater than the first overlap threshold, the status identifier corresponding to the tracking object is updated to the third identifier.
  • the first overlap threshold can be represented by a time value or the number of consecutive image frames, which can be set according to specific requirements.
  • the first overlap threshold may be represented by 8 consecutive image frames.
  • when the status identifier corresponding to the tracking object is the first identifier and the continuous overlap duration value of the tracking object in the captured image frames is greater than the first overlap threshold, it indicates that the tracking object continues to overlap with other objects, or is too close to them to be distinguished, and it is temporarily difficult for the camera to track and photograph it; therefore, the status identifier of the tracking object can be updated to the third identifier.
  • in some embodiments, when the status identifier corresponding to the tracking object is the second identifier and the sum of its continuous overlap duration value and continuous disappearance duration value is greater than the second overlap threshold, the tracking of the object is cancelled.
  • the second overlap threshold is greater than the first overlap threshold, and can be represented by a duration value or the number of consecutive image frames, which can be set according to specific requirements.
  • the first overlap threshold may be represented by 5 consecutive image frames
  • the second overlap threshold may be represented by 10 consecutive image frames.
  • when the status identifier corresponding to the tracking object is the second identifier and the sum of the continuous overlap duration value and the continuous disappearance duration value corresponding to the tracking object is greater than the second overlap threshold, it indicates that it will be difficult for the camera to track and photograph the tracking object subsequently; the target can no longer be used as a tracking object, and the status identifier or tracking information corresponding to it can be deleted to reduce the occupation of computing and storage resources.
  • in some embodiments, when the status identifier corresponding to the tracking object is the third identifier and the sum of its continuous overlap duration value and continuous disappearance duration value is greater than the third overlap threshold, the tracking of the object is cancelled.
  • the third overlap threshold may be represented by a duration value or the number of consecutive image frames, and the third overlap threshold is greater than the first overlap threshold.
  • the third overlap threshold may be the same or different from the second overlap threshold, which can be set according to specific requirements.
  • the first overlap threshold may be represented by 5 consecutive image frames
  • the third overlap threshold may be represented by 8 consecutive image frames.
  • when the status identifier corresponding to the tracking object is the third identifier and the sum of the continuous overlap duration value and the continuous disappearance duration value corresponding to the tracking object is greater than the third overlap threshold, it indicates that it will be difficult for the camera to track and photograph the tracking object subsequently; the target can no longer be used as a tracking object, and the status identifier or tracking information corresponding to it can be deleted to reduce the occupation of computing and storage resources.
  • in some embodiments, when the status identifier corresponding to the tracking object is the first identifier and the continuous disappearance duration value is greater than the third disappearance threshold, the status identifier corresponding to the tracking object is updated to the fourth identifier.
  • the third disappearance threshold is smaller than the first disappearance threshold, and the third disappearance threshold can be represented by a time value or the number of consecutive image frames, which can be set according to specific requirements.
  • the first disappearance threshold may be represented by 8 consecutive image frames
  • the third disappearance threshold may be represented by 5 consecutive image frames.
  • when the status identifier corresponding to the tracking object is the first identifier and the continuous disappearance duration value of the tracking object in the captured image frames is greater than the third disappearance threshold, it indicates that the tracking object may have left the shooting frame and it is uncertain whether it will continue to be photographed, so its status identifier can be updated to the fourth identifier.
  • in some embodiments, when the status identifier corresponding to the tracking object is the fourth identifier and the continuous appearance duration value is greater than the second appearance threshold, the status identifier corresponding to the tracking object is updated to the first identifier.
  • the second appearance threshold may be represented by a time value or the number of consecutive image frames.
  • the second appearance threshold may be the same as or different from the first appearance threshold, and may be set according to specific requirements.
  • the second occurrence threshold can be represented by 3 consecutive image frames.
  • the status identifier corresponding to the tracking object is the fourth identifier
  • the duration of the tracking object's continuous appearance in the captured image frame is greater than the second appearance threshold, which indicates that the tracking object has reappeared in the captured image frame and can be tracked again, so the state identifier of the tracking object can be updated to the first identifier.
  • the status identifier corresponding to the tracking object is the fourth identifier
  • the continuous disappearance duration value is greater than the fourth disappearance threshold
  • the fourth disappearance threshold is smaller than the first disappearance threshold, and can be represented by a time value or the number of consecutive image frames, and can be set according to specific requirements.
  • the fourth disappearance threshold may be equal to the difference between the first disappearance threshold and the second disappearance threshold.
  • the first disappearance threshold may be represented by 8 continuous image frames
  • the third disappearance threshold may be represented by 5 continuous image frames
  • the fourth disappearance threshold may be represented by 3 continuous image frames.
  • the state identifier corresponding to the tracking object is the fourth identifier
  • the continuous disappearance duration value of the tracking object in the captured image frame is greater than the fourth disappearance threshold, which indicates that the tracking object has disappeared from the captured image frame for a long time and the probability of it being photographed again is relatively low, so the status identifier of the tracking object can be updated to the second identifier.
  • the status identifier corresponding to the tracking object is the fourth identifier
  • the continuous overlap duration value is greater than the fourth overlap threshold
  • the fourth overlap threshold can be represented by a time value or the number of consecutive image frames, which can be set according to specific requirements.
  • the fourth overlap threshold may be represented by 5 consecutive image frames.
  • the status identifier corresponding to the tracking object is the fourth identifier
  • the continuous overlap duration value corresponding to the tracking object is greater than the fourth overlap threshold, which indicates that the tracking object continues to overlap with, or is too close to be distinguished from, other objects and is temporarily difficult for the photographing device to track and photograph, so the status identifier of the tracking object can be updated to the third identifier.
  • the status identifier corresponding to the tracking target is the fifth identifier
  • the continuous occurrence time value is greater than the third occurrence threshold
  • the status identifier corresponding to the tracking target is updated as the first identifier
  • the third occurrence threshold can be represented by a time value or the number of consecutive image frames, which can be set according to specific requirements.
  • the third occurrence threshold can be represented by 5 consecutive image frames.
  • the status identifier corresponding to the tracking target is the fifth identifier; if the duration of the tracking target's continuous appearance in the captured image frame is greater than the third occurrence threshold, it indicates that the tracking target is likely to continue to appear in the shooting frame in the future.
  • the status identifier of the tracking target can be updated as the first identifier.
  • the tracking target is cancelled.
  • the fifth overlap threshold can be represented by a time value or the number of consecutive image frames, which can be set according to specific requirements.
  • the fifth overlap threshold can be represented by 5 consecutive image frames.
  • the status identifier corresponding to the tracking target is the fifth identifier
  • the continuous overlap duration value of the tracking target in the captured image frame is greater than the fifth overlap threshold, which indicates that the tracking target will be difficult for the camera to track and photograph in the subsequent time. The target is then no longer used as a tracking target, and the status identifier or tracking information corresponding to the target is deleted to reduce the occupation of computing resources and storage resources.
  • the tracking object determination method of the present application further considers the degree of overlap between the tracking object or the recognition object and other objects in the image frame when updating the tracking information, which helps improve the accuracy of real-time tracking;
  • in the tracking process, by categorizing tracking objects and using state identifiers to distinguish different states, it is convenient to manage the tracking objects and to reduce the consumption of computing resources and storage resources.
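As a rough illustration only, the status-identifier transitions described above can be sketched as a small state machine. The state labels, counter names, and the exact rule set below are simplifications assumed for this example; the frame-count thresholds reuse the example values given in the text, and each could equally be a time value:

```python
# Example frame-count thresholds taken from the text; all are configurable.
THIRD_VANISH_THRESH = 5    # first  -> fourth: continuous disappearance
SECOND_APPEAR_THRESH = 3   # fourth -> first:  continuous reappearance
FOURTH_VANISH_THRESH = 3   # fourth -> second: continued disappearance
FOURTH_OVERLAP_THRESH = 5  # fourth -> third:  continued overlap
THIRD_OVERLAP_THRESH = 8   # third  -> drop:   overlap + disappearance sum
THIRD_APPEAR_THRESH = 5    # fifth  -> first:  continued appearance
FIFTH_OVERLAP_THRESH = 5   # fifth  -> drop:   continued overlap

def next_state(state, appear, vanish, overlap):
    """Return the updated status identifier, or None when the tracking
    object should be dropped (its status identifier and tracking
    information deleted).  `appear`, `vanish`, and `overlap` are the
    continuous appearance / disappearance / overlap duration counters,
    here measured in consecutive frames."""
    if state == "first" and vanish > THIRD_VANISH_THRESH:
        return "fourth"
    if state == "third" and overlap + vanish > THIRD_OVERLAP_THRESH:
        return None  # difficult to track in subsequent time: delete
    if state == "fourth":
        if appear > SECOND_APPEAR_THRESH:
            return "first"
        if vanish > FOURTH_VANISH_THRESH:
            return "second"
        if overlap > FOURTH_OVERLAP_THRESH:
            return "third"
    if state == "fifth":
        if appear > THIRD_APPEAR_THRESH:
            return "first"
        if overlap > FIFTH_OVERLAP_THRESH:
            return None  # cancelled as a tracking target
    return state
```

Dropping a track (returning `None`) mirrors the deletion of its status identifier and tracking information to free computing and storage resources.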
  • the fourth embodiment of the present application provides a tracking object determining device, as shown in FIG. 4, which is a structural block diagram of a tracking object determining device provided in an embodiment of the application.
  • the tracking object determination device of this embodiment includes: a memory 401, a processor 402, and a video collector 403.
  • the video collector 403 is used to collect the object to be tracked in the target area; the memory 401 is used to store program code; and the processor 402 is used to call the program code. When the program code is executed, it is used to perform the following operations: perform image recognition on the currently captured image frame to obtain identification information used to identify the recognized object in the currently captured image frame; match the identified object with the tracking object according to the identification information and the tracking information used to identify the tracking object in the captured image frame, to obtain matching information; and update the tracking information according to the matching information.
  • the processor 402 calls the program code, and when the program code is executed, it is used to perform the following operations: matching the identified object with the tracking object according to the identification information and the tracking information used to identify the tracking object in the captured image frame, to obtain matching information, includes: obtaining, according to the identification information and the tracking information, similarity information used to identify the degree of similarity between the identified object and the tracked object, and position information used to identify the positional relationship between the identified object and the tracked object; and matching the recognition object with the tracking object according to the similarity information and the position information, to obtain matching information.
  • the processor 402 calls the program code, and when the program code is executed, it is used to perform the following operations: the position information includes intersection-over-union sub-information and distance sub-information, where the intersection-over-union sub-information is used to identify the intersection ratio between the identification object and the tracking object, and the distance sub-information is used to identify the distance between the recognition object and the tracking object.
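For illustration, the two position sub-informations — a cross-union (intersection-over-union) ratio and a distance between the recognition object and the tracking object — might be computed from axis-aligned bounding boxes as follows; the `(x1, y1, x2, y2)` box layout and the use of box centers for the distance are assumptions of this sketch:

```python
import math

def iou(box_a, box_b):
    """Intersection-over-union (cross-union ratio) of two axis-aligned
    boxes given as (x1, y1, x2, y2)."""
    ix1, iy1 = max(box_a[0], box_b[0]), max(box_a[1], box_b[1])
    ix2, iy2 = min(box_a[2], box_b[2]), min(box_a[3], box_b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    union = area_a + area_b - inter
    return inter / union if union > 0 else 0.0

def center_distance(box_a, box_b):
    """Euclidean distance between the centers of two boxes."""
    ax, ay = (box_a[0] + box_a[2]) / 2, (box_a[1] + box_a[3]) / 2
    bx, by = (box_b[0] + box_b[2]) / 2, (box_b[1] + box_b[3]) / 2
    return math.hypot(ax - bx, ay - by)
```

A higher IoU and a smaller center distance both suggest that a recognized object and a tracked object are the same target.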
  • the processor 402 calls the program code, and when the program code is executed, it is used to perform the following operations: matching the identified object with the tracking object according to the identification information and the tracking information used to identify the tracking object in the captured image frame, to obtain matching information, includes: according to the identification information and the tracking information, sequentially using the greedy algorithm and the Hungarian algorithm to match the identified object with the tracking object to obtain the matching information.
  • the processor 402 calls the program code, and when the program code is executed, it is used to perform the following operations: updating the tracking information according to the matching information includes: when the recognition object does not match the tracking object, determining the recognition object as a new tracking object.
  • the processor 402 calls the program code, and when the program code is executed, it is used to perform the following operations: updating the tracking information according to the matching information includes: obtaining overlap information according to the identification information and the tracking information, where the overlap information is used to identify the degree of image overlap between the tracking object and other objects in the previous image frame of the currently captured image frame, and the degree of image overlap between the recognition object and other objects in the currently captured image frame; and updating the tracking information according to the matching information and the overlap information.
  • the processor 402 calls the program code, and when the program code is executed, it is used to perform the following operations: updating the tracking information according to the matching information and the overlap information includes: when the recognition object does not match the tracking object and the degree of image overlap is less than the overlap threshold, determining the recognition object as a new tracking object.
  • the processor 402 calls the program code, and when the program code is executed, it is used to perform the following operations: the tracking information includes the status identifier corresponding to the tracking object, where the number of types of the status identifier is greater than 2; correspondingly, updating the tracking information according to the matching information and the overlap information includes: updating the status identifier corresponding to the tracking object according to the matching information and the overlap information.
  • the processor 402 calls the program code, and when the program code is executed, it is used to perform the following operations: the tracking information further includes appearance duration information used to identify the continuous appearance duration value of the tracked object in the captured image frame, or disappearance duration information used to identify the continuous disappearance duration value of the tracking object in the captured image frame; correspondingly, updating the tracking information according to the matching information and the overlap information includes: updating the appearance duration information or the disappearance duration information according to the matching information; and updating the status identifier corresponding to the tracking object according to the updated appearance duration information or the updated disappearance duration information, as well as the overlap information.
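Updating the appearance/disappearance duration information from the per-frame matching result might look like the following sketch (the counter field names are illustrative assumptions):

```python
def update_durations(track, matched):
    """Advance the continuous appearance / disappearance duration
    counters carried in the tracking information, according to whether
    the tracking object was matched to a recognized object this frame.
    A match resets the disappearance counter, and vice versa."""
    if matched:
        track["appear"] += 1
        track["vanish"] = 0
    else:
        track["vanish"] += 1
        track["appear"] = 0
    return track
```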
  • For detailed technical content, please refer to the above-mentioned Embodiment 1 to Embodiment 3.
  • the tracking object determination device of the present application first performs image recognition on the currently captured image frame to obtain identification information used to identify the recognized object in the currently captured image frame;
  • then, according to the identification information and the tracking information used to identify the tracking object in the captured image frame, the identified object is matched with the tracking object to obtain matching information; and the tracking information is updated according to the matching information.
  • the current captured image frame can be compared with the previous image frame.
  • by searching for the matching relationship between the recognized object in the currently captured image frame and the tracking object in the captured image frames, the tracking information of the tracking object can be updated, the tracking status of the tracking object can be updated, and the tracking object can be managed and maintained in a more real-time and accurate manner.
  • the fifth embodiment of the present application provides a handheld camera, as shown in FIG. 5, which is a structural block diagram of a handheld camera provided by an embodiment of the present application.
  • the handheld camera of this embodiment includes the tracking object determination device of the fourth embodiment, and the handheld camera further includes a carrier, which is fixedly connected to the video collector and is used to carry at least a part of the video collector.
  • the carrier includes, but is not limited to, a handheld PTZ 1.
  • the handheld PTZ 1 is a handheld three-axis PTZ.
  • the video capture device includes, but is not limited to, a handheld three-axis pan/tilt camera.
  • the handheld pan/tilt head 1 of the embodiment of the present invention includes a handle 11 and a photographing device 12 loaded on the handle 11.
  • the photographing device 12 may include a three-axis pan/tilt camera; in other embodiments, it may include a two-axis pan/tilt camera or a pan/tilt camera with more than three axes.
  • the handle 11 is provided with a display screen 13 for displaying the shooting content of the shooting device 12.
  • the invention does not limit the type of the display screen 13.
  • by setting the display screen 13 on the handle 11 of the handheld PTZ 1, the display screen can show the shooting content of the shooting device 12, so that the user can quickly browse the pictures or videos shot by the shooting device 12 through the display screen 13, thereby improving the interaction and fun between the handheld PTZ 1 and the user and meeting the diverse needs of the user.
  • the handle 11 is also provided with an operating function part for controlling the camera 12.
  • by operating the operating function part, the operation of the camera 12 can be controlled, for example, turning the camera 12 on and off and controlling the shooting of the camera 12.
  • the operation function part may be in the form of a button, a knob or a touch screen.
  • the operating function unit includes a shooting button 14 for controlling the shooting of the shooting device 12, a power/function button 15 for controlling the opening and closing of the shooting device 12 and other functions, and a universal key 16 for controlling the movement of the pan/tilt.
  • the operating function unit may also include other control buttons, such as image storage buttons, image playback control buttons, etc., which can be set according to actual needs.
  • the operation function part and the display screen 13 are arranged on the same side of the handle 11.
  • the operation function part and the display screen 13 shown in the figure are both arranged on the front of the handle 11.
  • so that the appearance layout of the handheld PTZ 1 is more reasonable and attractive.
  • the side of the handle 11 is provided with a function operation key A, which allows the user to quickly create an intelligently edited clip with one key.
  • the handle 11 is further provided with a card slot 17 for inserting a storage element.
  • the card slot 17 is provided on the side of the handle 11 adjacent to the display screen 13, and a memory card is inserted into the card slot 17 to store the image captured by the camera 12 in the memory card. Moreover, arranging the card slot 17 on the side does not affect the use of other functions, and the user experience is better.
  • a power supply battery for supplying power to the handle 11 and the imaging device 12 may be provided inside the handle 11.
  • the power supply battery can be a lithium battery with large capacity and small size to realize the miniaturized design of the handheld pan/tilt 1.
  • the handle 11 is also provided with a charging interface/USB interface 18.
  • the charging interface/USB interface 18 is provided at the bottom of the handle 11 to facilitate connection with an external power source or storage device, so as to charge the power supply battery or perform data transmission.
  • the handle 11 is further provided with a sound pickup hole 19 for receiving audio signals, and the sound pickup hole 19 is internally connected with a microphone.
  • there may be one or more sound pickup holes 19. The handle also includes an indicator light 20 for displaying status. The user can realize audio interaction with the display screen 13 through the sound pickup hole 19.
  • the indicator light 20 can serve as a reminder, and the user can obtain the power status of the handheld PTZ 1 and the current execution function status through the indicator light 20.
  • the sound pickup hole 19 and the indicator light 20 can also be arranged on the front of the handle 11, which is more in line with the user's usage habits and operation convenience.
  • the imaging device 12 includes a pan/tilt support and a camera mounted on the pan/tilt support.
  • the camera can be a camera, or can be an imaging element composed of a lens and an image sensor (such as CMOS or CCD), etc., which can be specifically selected according to needs.
  • the camera may be integrated on the pan/tilt bracket, so that the photographing device 12 is a pan/tilt camera; it may also be an external photographing device, which is detachably connected or clamped to be mounted on the pan/tilt bracket.
  • the pan/tilt support is a three-axis pan/tilt support
  • the photographing device 12 is a three-axis pan/tilt camera.
  • the three-axis pan/tilt bracket includes a yaw axis assembly 22, a roll axis assembly 23 movably connected to the yaw axis assembly 22, and a pitch axis assembly 24 movably connected to the roll axis assembly 23.
  • the camera is mounted on the pitch axis assembly 24.
  • the yaw axis assembly 22 drives the camera 12 to rotate in the yaw direction.
  • the pan-tilt support can also be a two-axis pan-tilt, a four-axis pan-tilt, etc., which can be specifically selected according to needs.
  • a mounting part is also provided; the mounting part is arranged at one end of the connecting arm connected to the roll axis assembly, the yaw axis assembly can be arranged in the handle, and the yaw axis assembly drives the camera 12 to rotate in the yaw direction.
  • the handle 11 is provided with an adapter 26 for coupling with the mobile device 2 (such as a mobile phone), and the adapter 26 is detachably connected to the handle 11.
  • the adapter 26 protrudes from the side of the handle for connecting to the mobile device 2.
  • the end of the handheld PTZ 1 that is docked with the adapter 26 is used to be supported on the mobile device 2.
  • the handle 11 is provided with an adapter 26 for connecting with the mobile device 2 to connect the handle 11 and the mobile device 2 to each other.
  • the handle 11 can be used as a base of the mobile device 2.
  • the user can hold the other end of the mobile device 2 to pick up and operate the handheld PTZ 1 together with it; the connection is convenient and fast, and the product is attractive.
  • a communication connection between the handheld pan-tilt 1 and the mobile device 2 can be realized, and the camera 12 and the mobile device 2 can transmit data.
  • the adapter 26 and the handle 11 are detachably connected, that is, the adapter 26 and the handle 11 can be mechanically connected or removed. Further, the adapter 26 is provided with an electrical contact portion, and the handle 11 is provided with an electrical contact fitting portion that is matched with the electrical contact portion.
  • the adapter 26 can be removed from the handle 11.
  • the adapter 26 is installed on the handle 11 to complete the mechanical connection between the adapter 26 and the handle 11, while the connection between the electrical contact part and the electrical contact mating part ensures the electrical connection between the two, so as to realize data transmission between the camera 12 and the mobile device 2 through the adapter 26.
  • the side of the handle 11 is provided with a receiving groove 27, and the adapter 26 is slidably clamped in the receiving groove 27. After the adapter 26 is installed in the receiving groove 27, a part of the adapter 26 protrudes from the receiving groove 27, and the part of the adapter 26 protruding from the receiving groove 27 is used to connect with the mobile device 2.
  • when the adapter 26 is inserted into the receiving groove 27 in the reverse direction, the adapter 26 is flush with the receiving groove 27 and is stored in the receiving groove 27 of the handle 11.
  • when the mobile device 2 needs to be connected, the adapter 26 can be inserted into the receiving groove 27 from the adapter part, so that the adapter 26 protrudes from the receiving groove 27 and the mobile device 2 and the handle 11 can be interconnected;
  • when the adapter 26 is not needed, it can be taken out of the receiving slot 27 of the handle 11, inserted into the receiving slot 27 in the reverse direction, and thus housed in the handle 11.
  • the adapter 26 is flush with the receiving groove 27 of the handle 11. After the adapter 26 is stored in the handle 11, the surface of the handle 11 can be ensured to be flat, and the adapter 26 is stored in the handle 11 to make it easier to carry.
  • the receiving groove 27 is semi-opened on one side surface of the handle 11, which makes it easier for the adapter 26 to be slidably connected to the receiving groove 27.
  • the adapter 26 can also be detachably connected to the receiving slot 27 of the handle 11 by means of a snap connection, a plug connection, or the like.
  • the receiving groove 27 is provided on the side of the handle 11.
  • the receiving groove 27 is clamped and covered by the cover 28, which is convenient for the user to operate and does not affect the overall appearance of the front and sides of the handle.
  • the electrical contact part and the electrical contact mating part may be electrically connected in a contact contact manner.
  • the electrical contact part can be selected as a telescopic probe, can also be selected as an electrical plug-in interface, or can be selected as an electrical contact.
  • the electrical contact portion and the electrical contact mating portion can also be directly connected to each other in a surface-to-surface contact manner.
  • A1. A method for determining a tracking object, characterized in that it comprises: performing image recognition on a currently captured image frame to obtain identification information used to identify the recognized object in the currently captured image frame;
  • according to the identification information and the tracking information used to identify the tracking object in the captured image frame, matching the identification object with the tracking object to obtain matching information; and
  • according to the matching information, updating the tracking information.
  • A2. The tracking object determination method according to A1, wherein the matching the identification object with the tracking object according to the identification information and the tracking information used to identify the tracking object in the captured image frame, to obtain matching information, includes:
  • obtaining, according to the identification information and the tracking information, similarity information for identifying the degree of similarity between the identification object and the tracking object, and position information for identifying the positional relationship between the identification object and the tracking object; and
  • according to the similarity information and the position information, matching the identification object with the tracking object to obtain matching information.
  • A3. The tracking object determination method according to A2, wherein the position information includes intersection-over-union sub-information and distance sub-information, wherein the intersection-over-union sub-information is used to identify an intersection ratio corresponding to the identification object and the tracking object, and the distance sub-information is used to identify the distance between the recognition object and the tracking object.
  • A4. The tracking object determination method according to A1, wherein the matching the identification object with the tracking object according to the identification information and the tracking information used to identify the tracking object in the captured image frame, to obtain matching information, includes:
  • according to the identification information and the tracking information, sequentially using the greedy algorithm and the Hungarian algorithm to match the identification object with the tracking object to obtain matching information.
  • A5. The tracking object determination method according to A1, wherein the updating the tracking information according to the matching information includes:
  • when the recognition object does not match the tracking object, determining the recognition object as the new tracking object.
  • A6. The tracking object determination method according to A1, wherein the updating the tracking information according to the matching information includes:
  • obtaining overlap information according to the identification information and the tracking information, where the overlap information is used to identify the degree of image overlap between the tracking object and other objects in the previous image frame of the currently captured image frame, and the degree of overlap between the recognized object and other objects in the currently captured image frame; and
  • updating the tracking information according to the matching information and the overlap information.
  • A7. The tracking object determination method according to A6, wherein the updating the tracking information according to the matching information and the overlap information includes:
  • when the recognition object does not match the tracking object and the image overlap degree is less than the overlap degree threshold, determining the recognition object as the new tracking object.
  • A8. The method for determining a tracking object according to A6, wherein the tracking information includes a status identifier corresponding to the tracking object, wherein the number of types of the status identifier is greater than 2; correspondingly, the updating the tracking information according to the matching information and the overlap information includes:
  • updating the status identifier corresponding to the tracking object according to the matching information and the overlap information.
  • A9. The tracking object determination method according to A8, wherein the tracking information further includes appearance duration information used to identify the continuous appearance duration value of the tracking target in the captured image frame, or disappearance duration information used to identify the continuous disappearance duration value of the tracking target in the captured image frame; correspondingly, the updating the tracking information according to the matching information and the overlap information includes:
  • updating the appearance duration information or the disappearance duration information according to the matching information, and updating the status identifier corresponding to the tracking object according to the updated duration information and the overlap information.
  • A10. A tracking object determination device, characterized by comprising: a memory, a processor, and a video collector, wherein the video collector is used to collect the object to be tracked in the target area; the memory is used to store program code; and the processor calls the program code, and when the program code is executed, it is used to perform the following operations: perform image recognition on the currently captured image frame to obtain identification information used to identify the recognized object in the currently captured image frame;
  • according to the identification information and the tracking information used to identify the tracking object in the captured image frame, matching the identification object with the tracking object to obtain matching information; and
  • according to the matching information, updating the tracking information.
  • A11. The tracking object determining device according to A10, wherein the matching the identification object with the tracking object according to the identification information and the tracking information used to identify the tracking object in the captured image frame,
  • to obtain matching information, includes:
  • obtaining, according to the identification information and the tracking information, similarity information for identifying the degree of similarity between the identification object and the tracking object, and position information for identifying the positional relationship between the identification object and the tracking object; and
  • according to the similarity information and the position information, matching the identification object with the tracking object to obtain matching information.
  • A12. The tracking object determining device according to A11, wherein the position information includes intersection-over-union sub-information and distance sub-information, wherein the intersection-over-union sub-information is used to identify the intersection ratio corresponding to the identification object and the tracking object, and the distance sub-information is used to identify the distance between the recognition object and the tracking object.
  • A13. The tracking object determining device according to A10, wherein the matching the identification object with the tracking object according to the identification information and the tracking information used to identify the tracking object in the captured image frame,
  • to obtain matching information, includes:
  • according to the identification information and the tracking information, sequentially using the greedy algorithm and the Hungarian algorithm to match the identification object with the tracking object to obtain matching information.
  • A14. The tracking object determining device according to A10, wherein the updating the tracking information according to the matching information includes: when the recognition object does not match the tracking object, determining the recognition object as the new tracking object.
  • A15. The tracking object determining device according to A10, wherein the updating the tracking information according to the matching information includes:
  • obtaining overlap information according to the identification information and the tracking information, where the overlap information is used to identify the degree of image overlap between the tracking object and other objects in the previous image frame of the currently captured image frame, and the degree of overlap between the recognized object and other objects in the currently captured image frame; and
  • updating the tracking information according to the matching information and the overlap information.
  • A16. The tracking object determining device according to A15, wherein the updating the tracking information according to the matching information and the overlap information includes:
  • when the recognition object does not match the tracking object and the image overlap degree is less than the overlap degree threshold, determining the recognition object as the new tracking object.
  • A17. The tracking object determining device according to A15, wherein the tracking information includes a status identifier corresponding to the tracking object, wherein the number of types of the status identifier is greater than 2; correspondingly, the updating the tracking information according to the matching information and the overlap information includes:
  • updating the status identifier corresponding to the tracking object according to the matching information and the overlap information.
  • A18. The tracking object determining device according to A17, wherein the tracking information further includes appearance duration information used to identify the continuous appearance duration value of the tracking target in the captured image frame, or disappearance duration information used to identify the continuous disappearance duration value of the tracking target in the captured image frame; correspondingly, the updating the tracking information according to the matching information and the overlap information includes:
  • updating the appearance duration information or the disappearance duration information according to the matching information, and updating the status identifier corresponding to the tracking object according to the updated duration information and the overlap information.
  • A19. A handheld camera, characterized by comprising the tracking object determining device according to any one of A10 to A18, and further comprising: a carrier, which is fixedly connected to the video collector and is used to carry at least a part of the video collector.
  • A20 The handheld camera according to A19, wherein the carrier includes but is not limited to a handheld pan/tilt.
  • A21 The handheld camera according to A20, wherein the handheld pan/tilt is a handheld three-axis pan/tilt.
  • A22 The handheld camera according to A21, wherein the video capture device includes, but is not limited to, a handheld three-axis pan-tilt camera.
  • each component/step described in the embodiments of the present invention can be split into more components/steps, or two or more components/steps or partial operations of components/steps can be combined into new components/steps, to achieve the purpose of the embodiments of the present invention.
  • the above method according to the embodiments of the present invention can be implemented in hardware or firmware, or implemented as software or computer code that can be stored in a recording medium (such as a CD-ROM, RAM, floppy disk, hard disk, or magneto-optical disk), or implemented as computer code originally stored in a remote recording medium or a non-transitory machine-readable medium, downloaded over a network, and stored in a local recording medium, so that the method described here can be processed by such software stored on a recording medium using a general-purpose computer, a special-purpose processor, or programmable or dedicated hardware (such as an ASIC or FPGA).
  • a computer, a processor, a microprocessor controller, or programmable hardware includes a storage component (for example, RAM, ROM, flash memory, etc.) that can store or receive software or computer code; when the software or computer code is accessed and executed by the computer, processor, or hardware, the target tracking and shooting method described here is realized.
  • when a general-purpose computer accesses the code for implementing the method shown here, the execution of the code converts the general-purpose computer into a special-purpose computer for executing the method shown here.

Abstract

本申请实施例提供一种跟踪对象确定方法、设备和手持相机,跟踪对象确定方法包括:对当前拍摄图像帧进行图像识别,获得用于标识当前拍摄图像帧中的识别对象的识别信息;根据所述识别信息和用于标识已拍摄图像帧中的跟踪对象的跟踪信息,对所述识别对象与所述跟踪对象进行匹配,获得匹配信息;根据所述匹配信息,更新所述跟踪信息。由此,在多目标追踪拍摄过程中,可以将当前拍摄图像帧与上一图像帧进行对比,通过寻找当前拍摄图像帧中识别对象与已拍摄图像帧中跟踪对象之间的匹配关系,更新跟踪对象的跟踪信息和跟踪状态,实现更实时且准确地管理跟踪对象、维护跟踪对象的信息。

Description

一种跟踪对象确定方法、设备和手持相机 技术领域
本申请实施例涉及图像识别技术领域,尤其涉及一种跟踪对象确定方法、设备和手持相机。
背景技术
计算机视觉是指使用计算机及相关设备对生物视觉的一种模拟。它的主要任务就是通过采集的图片或视频进行处理以获得相应场景的三维信息。目标检测跟踪是计算机视觉领域的一种重要分支,其在军事制导,视觉导航,机器人,智能交通,公共安全等领域有着广泛的应用。
随着视觉处理技术和人工智能技术的发展,手持智能相机也可以应用目标检测跟踪技术追踪待拍摄目标。但是,相比于工业相机,手持智能相机的视野比较局限,需要不断转化位置来获得全景图像;并且手持智能相机的多目标跟踪管理方法不完善,使得在追踪待拍摄目标的数量较多,或者追踪待拍摄目标的种类较多时,不能及时更新追踪待拍摄目标的跟踪信息,导致出现追踪失败、识别失败的情况。
发明内容
有鉴于此,本发明实施例所解决的技术问题之一在于提供一种跟踪对象确定方法、设备和手持相机,用以克服现有技术中不能及时更新追踪待拍摄目标的跟踪信息的缺陷。
本申请实施例提供了一种跟踪对象确定方法,包括:对当前拍摄图像帧进行图像识别,获得用于标识当前拍摄图像帧中的识别对象的识别信息;根据所述识别信息和用于标识已拍摄图像帧中的跟踪对象的跟踪信息,对所述识别对象与所述跟踪对象进行匹配,获得匹配信息;根据所述匹配信息,更新所述跟踪信息。
本申请实施例提供了一种跟踪对象确定设备,包括:存储器、处理器、视频采集器,所述视频采集器用于采集目标区域的待跟踪目标;所述存储器用于存储程序代码;所述处理器,调用所述程序代码,当程序代码被执行时,用于执行以下操作:对当前拍摄图像帧进行图像识别,获得用于标识当前拍摄图像帧中的识别对象的识别信息;根据所述识别信息和用于标识已拍摄图像帧中的跟踪对象的跟踪信息,对所述识别对象与所述跟踪对象进行匹配,获得匹配信息;根据所述匹配信息,更新所述跟踪信息。
本申请实施例提供了一种手持相机,包括上述实施例中的所述的跟踪对象确定设备,还包括:承载器,所述承载器与所述视频采集器固定连接,用于承载所述视频采集器的至少一部分。
本申请实施例中,首先对当前拍摄图像帧进行图像识别,获得用于标识当前拍摄图像帧中的识别对象的识别信息;然后根据识别信息和用于标识已拍摄图像帧中的跟踪对象的跟踪信息,对识别对象与跟踪对象进行匹配,获得匹配信息;根据匹配信息,更新跟踪信息。使得在多目标追踪拍摄过程中,可以将当前拍摄图像帧与上一图像帧进行对比,通过寻找当前拍摄图像帧中识别对象与已拍摄图像帧中跟踪对象之间的匹配关系,更新跟踪对象的跟踪信息和跟踪状态,实现更实时且准确地管理跟踪对象、维护跟踪对象的信息。
附图说明
后文将参照附图以示例性而非限制性的方式详细描述本申请实施例的一些具体实施例。附图中相同的附图标记标示了相同或类似的部件或部分。本领域技术人员应该理解,这些附图未必是按比例绘制的。附图中:
图1为本申请实施例一提供的一种跟踪对象确定方法的示意性流程图;
图2为本申请实施例二提供的一种跟踪对象确定方法的示意性流程图;
图3为本申请实施例三提供的一种跟踪对象确定方法的示意性流程图;
图4为本申请实施例四提供的一种跟踪对象确定设备的结构框图;
图5-7为本申请实施例五提供的一种手持相机的示意性结构框图。
具体实施方式
在本发明使用的术语是仅仅出于描述特定实施例的目的,而非旨在限制本发明。在本发明和所附权利要求书中所使用的单数形式的“一种”、“所述”和“该”也旨在包括多数形式,除非上下文清楚地表示其他含义。还应当理解,本文中使用的术语“和/或”是指并包括一个或多个相关联的列出项目的任何或所有可能组合。
应当理解,本申请说明书以及权利要求书中使用的“第一”“第二”以及类似的词语并不表示任何顺序、数量或者重要性,而只是用来区分不同的组成部分。同样,“一个”或者“一”等类似词语也不表示数量限制,而是表示存在至少一个。
目标检测跟踪是计算机视觉领域近年来发展较快的一个方向。随着视觉处理技术和人工智能技术的发展,手持智能相机可以用于追踪待拍摄目标,并根据该目标进行物体识别和场景识别,以便于用户对拍摄的照片或视频进行分类和管理,以及后续的自动处理。但相比于工业相机,家用手持智能相机的摄像头视野往往比较局限,需要不断转变位置来获得全景图像,一旦目标数量较多、种类较多且较为分散时,较小的视野不能很好地支持实时多目标追踪的需求;当出现多个种类和/或多个数量的物体时,不能及时更新追踪待拍摄目标的跟踪信息,导致出现追踪失败、识别失败等错误情况。
鉴于上述技术方案中的不足,本申请实施例提供了一种跟踪对象确定方法,能够及时更新跟踪对象的跟踪信息,从而改善了用户体验。
下面结合本发明实施例附图进一步说明本发明实施例具体实现。
实施例一
本申请实施例一提供一种跟踪对象确定方法,如图1所示,图1为本申请实施例提供的一种跟踪对象确定方法的示意性流程图。
本实施例的跟踪对象确定方法包括:
S101、对当前拍摄图像帧进行图像识别,获得用于标识当前拍摄图像帧中的识别对象的识别信息。
本实施例中,识别信息用于标识利用预设的图像识别算法识别出的识别对象对应的识别结果,识别信息所包括的具体信息内容以及记录方式不限,且图像识别算法的种类也不限。
例如,识别信息可以包括该识别对象的颜色特征信息、深度特征信息、位置特征信息等。又例如,图像识别算法可以为RCNN、SSD、YOLO等。
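作为一个非规范性的示意(字段名均为笔者假设,并非本申请限定的数据结构),识别信息可以组织为每个识别对象一条记录,包含位置特征(包围框)、颜色特征等:

```python
# 假设的识别信息结构示意:每个识别对象对应一条记录(字段名为示例,非专利限定)
recognition_info = [
    {"box": (120, 80, 260, 300),      # 位置特征:包围框 (x1, y1, x2, y2)
     "color_hist": [0.4, 0.1, 0.5],   # 颜色特征:归一化直方图(示意)
     "label": "person", "score": 0.92},
    {"box": (400, 150, 520, 320),
     "color_hist": [0.2, 0.7, 0.1],
     "label": "dog", "score": 0.88},
]
```

后续的匹配步骤即在这样的识别记录与跟踪记录之间进行。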
S102、根据识别信息和用于标识已拍摄图像帧中的跟踪对象的跟踪信息,对识别对象与跟踪对象进行匹配,获得匹配信息。
本实施例中,已拍摄图像帧为当前拍摄图像帧之前的一个或者多个连续图像帧。
本实施例中,跟踪目标为利用预设的图像识别算法对至少一已拍摄图像帧进行识别并标记获得的对象;跟踪信息用于标识预设的图像识别算法对跟踪目标的识别结果,以及跟踪目标在当前拍摄图像帧之前的一个或者多个连续已拍摄图像帧中的跟踪状态。
本实施例中,匹配信息至少用于表示识别对象和跟踪对象是否匹配。当识别对象与跟踪对象匹配时,表明识别对象与跟踪对象为同一对象,或者识别对象与跟踪对象为同一对象的可能性极高,可认为跟踪对象在当前拍摄图像帧中出现;当识别对象与跟踪对象不匹配时,表明识别对象与跟踪对象很可能不为同一对象,可认为跟踪对象未在当前拍摄图像帧中出现。
本实施例中,由于识别信息和跟踪信息均为利用预设的图像识别算法对图像帧进行识别获得的识别结果,因此可根据识别信息和跟踪信息,对识别对象与跟踪对象进行匹配,所采用的匹配算法本实施例中在此不做限制。
S103、根据匹配信息,更新跟踪信息。
本实施例中,由于完成当前拍摄图像帧的图像识别后,跟踪目标对应的跟踪状态会发生变化,而根据匹配信息可以确定跟踪目标是否在当前拍摄图像帧中出现,因此可以根据匹配信息更新跟踪对象对应的跟踪信息,以使得在当前拍摄图像帧之后的一图像帧中可使用跟踪目标更新后的跟踪信息。
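上述 S101–S103 的每帧处理流程可以概括为如下概念性示意(detect、match、update 均为占位函数,并非本申请公开的接口):

```python
def process_frame(frame, tracks, detect, match, update):
    """每帧:识别 -> 匹配 -> 更新跟踪信息(概念性示意)。"""
    detections = detect(frame)                  # S101:图像识别,获得识别信息
    matches = match(detections, tracks)         # S102:识别对象与跟踪对象匹配,获得匹配信息
    return update(tracks, detections, matches)  # S103:根据匹配信息更新跟踪信息

# 用占位函数演示调用方式
tracks = {"t1": {"appear": 3}}
result = process_frame(
    frame=None,
    tracks=tracks,
    detect=lambda f: ["d1"],
    match=lambda d, t: [("d1", "t1")],
    update=lambda t, d, m: {"matched": m, "tracks": t},
)
```

三个占位函数在真实系统中分别对应检测网络、匹配算法与跟踪状态维护逻辑。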
由以上实施例可见,本申请的跟踪对象确定方法,首先对当前拍摄图像帧进行图像识别,获得用于标识当前拍摄图像帧中的识别对象的识别信息;然后根据识别信息和用于标识已拍摄图像帧中的跟踪对象的跟踪信息,对识别对象与跟踪对象进行匹配,获得匹配信息;根据匹配信息,更新跟踪信息。使得在多目标追踪拍摄过程中,可以将当前拍摄图像帧与上一图像帧进行对比,更新跟踪目标的跟踪状态,实现实时且准确的进行跟踪对象的管理。
实施例二
本申请实施例二提供一种跟踪对象确定方法,如图2所示,图2为本申请实施例提供的一种跟踪对象确定方法的示意性流程图。
本实施例的跟踪对象确定方法包括:
S201、对当前拍摄图像帧进行图像识别,获得用于标识当前拍摄图像帧中的识别对象的识别信息。
本实施例中,步骤S201与实施例一中的步骤S101相同,在此不再赘述。
S202、根据识别信息和用于标识已拍摄图像帧中的跟踪对象的跟踪信息,对识别对象与跟踪对象进行匹配,获得匹配信息。
本实施例中,为了获得更准确的匹配信息,步骤S202可包括:
S202a、根据识别信息和跟踪信息,获得用于标识识别对象与跟踪对象之间相似程度的相似度信息,以及用于标识识别对象和跟踪对象之间位置关系的位置信息。
其中,相似度信息用于表示识别对象与跟踪对象之间的相似度,相似度的计算方法不限。
例如,图像相似度算法可以为直方图匹配算法、曼哈顿距离算法、切比雪夫距离算法、余弦距离算法、皮尔逊相关系数算法、汉明距离算法、杰卡德距离算法、布雷柯蒂斯距离算法、马氏距离算法、JS散度算法等,获得的相似度可以是识别对象和跟踪对象的颜色特征的相似度、特征点的相似度、形状的相似度中的至少其一。
可选的,为了获得更为准确的相似度计算结果,在使用直方图匹配算法计算识别对象和跟踪对象的相似度时,可首先提取识别对象和跟踪对象的颜色直方图;然后计算两个颜色直方图的距离(如巴氏距离,直方图相交距离等),并根据该距离获得识别对象与跟踪对象的相似度信息。其中,颜色直方图用于描述不同色彩在整幅图像中所占的比例,而并不关心每种色彩所处的空间位置。
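以直方图匹配为例,下面给出一个概念性的相似度计算示意(自包含的简化实现,像素表示与分桶方式均为笔者假设,并非本申请限定的算法):

```python
import math

def color_histogram(pixels, bins=8):
    """按 R/G/B 三通道分桶统计并整体归一化:只描述色彩占比,不关心空间位置。"""
    hist = [0.0] * (bins * 3)
    for r, g, b in pixels:
        for c, v in enumerate((r, g, b)):
            hist[c * bins + min(v * bins // 256, bins - 1)] += 1.0
    total = 3.0 * len(pixels)
    return [h / total for h in hist]

def bhattacharyya_similarity(h1, h2):
    """巴氏系数:1 表示两直方图完全一致,0 表示完全不相交。"""
    return sum(math.sqrt(a * b) for a, b in zip(h1, h2))

red_patch = color_histogram([(255, 0, 0)] * 100)
green_patch = color_histogram([(0, 255, 0)] * 100)
sim_self = bhattacharyya_similarity(red_patch, red_patch)
sim_cross = bhattacharyya_similarity(red_patch, green_patch)
```

纯红与纯绿图块的相似度显著低于图块与自身的相似度,体现了用直方图距离刻画“识别对象与跟踪对象是否相似”的思路。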
本实施例中,识别对象和跟踪对象之间位置关系的计算方法不限。
可选的,为了更为精准的确定识别对象和跟踪对象之间位置关系,位置信息包括交并比子信息和距离子信息,其中,交并比子信息用于标识识别对象和跟踪对象对应的交并比,距离子信息用于标识识别对象和跟踪对象之间的距离。
交并比可以衡量识别对象和跟踪对象的重叠程度。例如,交并比取值为0~1之间的值时,代表识别对象和跟踪对象的重叠程度,交并比的数值越高,则表明识别对象和跟踪对象的重叠程度越高;当交并比为0时,表明识别对象和跟踪对象不重叠;交并比为1时,表明识别对象和跟踪对象完全重叠。
可选的,可首先获得识别对象和跟踪对象在图像帧中的交集区域面积和并集区域面积,然后计算交集区域面积与并集区域面积的比值,由此获得识别对象和跟踪对象对应的交并比。
可选的,识别对象和跟踪对象之间的距离可以为跟踪对象在图像帧中的中心点与识别对象在图像帧中的中心点之间的距离。
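交并比子信息与距离子信息的计算可以示意如下(以轴对齐包围框 (x1, y1, x2, y2) 表示,仅作概念演示):

```python
import math

def iou(a, b):
    """交并比 = 交集面积 / 并集面积,取值范围 0~1。"""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    union = (a[2] - a[0]) * (a[3] - a[1]) + (b[2] - b[0]) * (b[3] - b[1]) - inter
    return inter / float(union)

def center_distance(a, b):
    """两包围框中心点之间的欧氏距离。"""
    ax, ay = (a[0] + a[2]) / 2.0, (a[1] + a[3]) / 2.0
    bx, by = (b[0] + b[2]) / 2.0, (b[1] + b[3]) / 2.0
    return math.hypot(ax - bx, ay - by)
```

交并比为 0 表示两框不重叠,为 1 表示完全重叠;中心点距离则补充刻画两对象在画面中的远近。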
S202b、根据相似度信息和位置信息,对识别对象与跟踪对象进行匹配,获得匹配信息。
本实施例中,为了更准确的对跟踪对象进行匹配,可根据相似度信息和位置信息共同对识别对象与跟踪对象进行匹配。
本实施例中,步骤S202还可以包括:根据识别信息和跟踪信息,依次使用贪心算法和匈牙利算法对识别对象与跟踪对象进行匹配,获得匹配信息。
其中,贪心算法进行匹配时,总是做出在当前看来是最好的选择。在匹配的过程中,每一步只考虑一个识别对象与跟踪对象进行匹配,或只考虑一个跟踪对象与识别对象进行匹配,每一步都要确保能获得局部最优解,直到把所有数据枚举完。匈牙利算法是一种在多项式时间内求解任务分配问题的组合优化算法。
通过依次使用贪心算法和匈牙利算法对识别对象与跟踪对象进行匹配,不仅可以综合考虑跟踪对象和识别对象的相似度和位置信息,而且采用两种算法还可进一步提高识别对象与跟踪对象匹配的准确性。
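“先贪心、后全局最优分配”的两阶段匹配流程可以示意如下(为保持自包含,这里用小规模穷举代替匈牙利算法求最优分配,仅作概念演示;阈值与代价定义均为笔者假设):

```python
from itertools import permutations

def match_objects(cost, greedy_threshold=0.2):
    """cost[i][j] 为识别对象 i 与跟踪对象 j 的匹配代价,越小越可能是同一对象。"""
    pairs, used_i, used_j = {}, set(), set()
    # 第一阶段:贪心匹配——按代价升序,低于阈值的配对直接确定(每步取局部最优)
    flat = sorted((cost[i][j], i, j)
                  for i in range(len(cost)) for j in range(len(cost[0])))
    for c, i, j in flat:
        if c <= greedy_threshold and i not in used_i and j not in used_j:
            pairs[i] = j
            used_i.add(i)
            used_j.add(j)
    # 第二阶段:对剩余对象求总代价最小的全局分配
    # (实际系统可用匈牙利算法;此处用穷举示意,仅适用于小规模)
    rest_i = [i for i in range(len(cost)) if i not in used_i]
    rest_j = [j for j in range(len(cost[0])) if j not in used_j]
    best, best_cost = None, float("inf")
    for perm in permutations(rest_j, min(len(rest_i), len(rest_j))):
        total = sum(cost[i][j] for i, j in zip(rest_i, perm))
        if total < best_cost:
            best, best_cost = perm, total
    for i, j in zip(rest_i, best or ()):
        pairs[i] = j
    return pairs
```

当所有代价都高于贪心阈值时,第二阶段的全局分配可以避免贪心策略“先抢走最低代价”导致的次优整体结果。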
S203、根据匹配信息,更新跟踪信息。
本实施例中,由于识别对象与跟踪对象可能匹配成功,也可能匹配失败,为了有效利用匹配信息,提高对多目标进行跟踪拍摄的效果,步骤S203还可以包括:
S203a、当识别对象与跟踪对象匹配时,更新跟踪对象的跟踪信息。
S203b、当识别对象与跟踪对象不匹配时,将识别对象确定为新的跟踪对象。
由以上实施例可见,本申请的跟踪对象确定方法,可综合考虑识别对象与跟踪对象之间的相似度信息和位置信息,和/或,采用贪心算法和匈牙利算法对识别对象与跟踪对象进行匹配,可实现对识别对象与跟踪对象更为精准的匹配;并且针对识别对象与跟踪对象的两种不同匹配结果均设置了后续处理方案,可更为有效地对跟踪对象进行管理。
实施例三
本申请实施例三提供一种跟踪对象确定方法,如图3所示,图3为本申请实施例提供的一种跟踪对象确定方法的示意性流程图。
本实施例的跟踪对象确定方法包括:
S301、对当前拍摄图像帧进行图像识别,获得用于标识当前拍摄图像帧中的识别对象的识别信息。
本实施例中,步骤S301与实施例一中的步骤S101相同,在此不再赘述。
S302、根据识别信息和用于标识已拍摄图像帧中的跟踪对象的跟踪信息,对识别对象与跟踪对象进行匹配,获得匹配信息。
本实施例中,步骤S302与实施例一中的步骤S102或者实施例二中的步骤S202相同,在此不再赘述。
S303、根据匹配信息,更新跟踪信息。
本实施例中,为了更为准确的确定多目标拍摄的跟踪目标,在更新跟踪信息时还可进一步考虑图像帧中跟踪对象与其他对象的重合情况。具体的,步骤S303可包括:
S303a、根据识别信息和跟踪信息,获得重叠信息。
其中,重叠信息用于标识当前拍摄图像帧的前一已拍摄图像帧中的跟踪对象与其他对象的图像重叠度,以及当前拍摄图像帧的识别对象与其他对象的图像重叠度。当图像重叠度较高时,表明跟踪对象或者识别对象与图像帧中的其他对象重合,或者与其他对象相距很近无法分辨开,并不适合作为拍摄装置的跟踪目标进行跟踪拍摄。
S303b、根据匹配信息和重叠信息,更新跟踪信息。
可选的,当识别对象与跟踪对象不匹配,且图像重叠度小于重叠度阈值时,表明识别对象有可能适合作为拍摄装置的跟踪目标进行跟踪拍摄,因此可将识别对象确定为新的跟踪对象。其中,重叠度阈值可根据具体需求进行设置。
可选的,由于在多目标跟踪拍摄过程中,一个或者多个跟踪对象在已拍摄图像帧中的状态会发生变化,例如在图像帧中消失,或者与其他对象重叠,为了方便对跟踪对象的管理,跟踪信息还可包括跟踪对象对应的状态标识,其中,状态标识的种类大于2,即可将跟踪对象进行分类管理。对应的,跟踪对象对应的状态标识也可根据实时拍摄情况进行切换,步骤S303b可包括:根据匹配信息和重叠信息,更新跟踪对象对应的状态标识。
可选的,为了更为准确且全面的对跟踪对象进行管理,降低计算资源和存储资源的消耗,跟踪信息还可包括用于标识跟踪目标在已拍摄图像帧中连续出现时长值的出现时长信息,或者,用于标识跟踪目标在已拍摄图像帧中连续消失时长值的消失时长信息。对应的,根据匹配信息和重叠信息,更新跟踪信息包括:
子步骤S1、根据匹配信息,更新出现时长信息或者消失时长信息。
子步骤S2、根据更新后的出现时长信息或者更新后的消失时长信息,以及重叠信息,更新跟踪对象对应的状态标识。
其中,更新后的出现时长信息用于标识跟踪对象在已拍摄图像帧中连续出现时长值;更新后的消失时长信息用于标识跟踪对象在已拍摄图像帧中连续消失时长值;重叠信息用于标识跟踪对象在已拍摄图像帧中连续重叠时长值。
可选的,若跟踪对象的连续出现时长值、连续消失时长值、连续重叠时长值满足预设条件,则可更新跟踪对象对应的状态标识。其中,预设条件可根据具体需求进行设定。
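出现/消失/重叠时长(以连续帧数计)的更新逻辑可以示意如下(字段名为笔者假设,仅作概念演示):

```python
def update_durations(track, matched, overlapped):
    """根据本帧的匹配结果与重叠结果,更新连续出现/消失/重叠帧数。"""
    if matched:                 # 本帧匹配成功:连续出现计数 +1,消失计数清零
        track["appear"] += 1
        track["vanish"] = 0
    else:                       # 本帧未匹配到:连续消失计数 +1,出现计数清零
        track["vanish"] += 1
        track["appear"] = 0
    # 本帧与其他对象重叠则重叠计数 +1,否则清零
    track["overlap"] = track["overlap"] + 1 if overlapped else 0
    return track

track = {"appear": 0, "vanish": 0, "overlap": 0}
for frame_matched in (True, True, False):   # 连续两帧匹配成功后丢失一帧
    update_durations(track, frame_matched, overlapped=False)
```

这些计数随后与各阈值比较,用于决定是否切换跟踪对象的状态标识。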
可选的,跟踪对象对应的状态标识可以为第一标识、第二标识、第三标识、第四标识、第五标识中的一种。具体说明如下:
当跟踪对象对应的状态标识为第一标识时,跟踪对象处于被跟踪拍摄状态,拍摄装置会对处于持续被跟踪状态的跟踪对象进行跟踪拍摄,例如会根据跟踪对象的运动情况调整拍摄角度。
当跟踪对象对应的状态标识为第二标识时,跟踪对象处于长时间丢失状态,处于长时间丢失状态的跟踪对象在当前拍摄图像帧之前的L个连续已拍摄图像帧中均未被识别出,但是在当前拍摄图像帧之前的第L+1个已拍摄图像帧中处于持续被跟踪状态,即当前拍摄图像帧之前的第L+1个已拍摄图像帧中该跟踪对象对应的状态标识为第一标识,其中L大于或等于0。
当跟踪对象对应的状态标识为第三标识时,跟踪对象处于重叠状态,处于重叠状态的跟踪对象在当前拍摄图像帧之前的X个连续已拍摄图像帧中与其他对象重合,或者与其他对象相距很近无法分辨开,但是在当前拍摄图像帧之前的第X+1个已拍摄图像帧中处于持续被跟踪状态,即当前拍摄图像帧之前的第X+1个已拍摄图像帧中该跟踪对象对应的状态标识为第一标识,其中X大于或等于0。
当跟踪对象对应的状态标识为第四标识时,跟踪对象处于短时间丢失状态,处于短时间丢失状态的跟踪对象在当前拍摄图像帧之前的Y个连续已拍摄图像帧中均未被识别出,但是在当前拍摄图像帧之前的第Y+1个已拍摄图像帧中处于持续被跟踪状态,即当前拍摄图像帧之前的第Y+1个已拍摄图像帧中该跟踪对象对应的状态标识为第一标识,其中Y大于或等于0,且Y小于前述L的值。
当跟踪对象对应的状态标识为第五标识时,跟踪对象处于新发现状态,且对应的状态标识从未被确定为第一标识,即状态标识为第五标识的跟踪对象在当前拍摄图像帧以及之前的Z个连续图像帧中均未被进行跟踪拍摄,Z为大于或等于0的整数。
可选的,当跟踪对象对应的状态标识为第一标识时,若连续消失时长值大于第一消失阈值,则更新跟踪对象对应的状态标识为第二标识。
第一消失阈值可以用时间值或连续图像帧的帧数表示,其可根据具体需求进行设置。例如,第一消失阈值可以用10个连续图像帧表示。
当跟踪对象对应的状态标识为第一标识时,若该跟踪对象在已拍摄图像帧中的连续消失时长值大于第一消失阈值时,表明该跟踪对象可能已经在拍摄画面之外,后续被拍摄到的概率相对较低,则可更新该跟踪对象的状态标识为第二标识。
可选的,当跟踪对象对应的状态标识为第二标识时,若连续出现时长值大于第一出现阈值,则更新跟踪对象对应的状态标识为第一标识。
第一出现阈值可以用时间值或连续图像帧的帧数表示,其可根据具体需求进行设置。例如,第一出现阈值可以用5个连续图像帧表示。
当跟踪对象对应的状态标识为第二标识时,若该跟踪对象在已拍摄图像帧中持续出现的时长值大于第一出现阈值,表明该跟踪对象已经重新出现在已拍摄图像帧中,可再次将该跟踪对象切换至持续被跟踪状态,即可更新该跟踪对象的状态标识为第一标识。
可选的,当跟踪对象对应的状态标识为第二标识时,若连续消失时长值大于第二消失阈值,则取消跟踪对象。
第二消失阈值可以用时间值或连续图像帧的帧数表示,其可根据具体需求进行设置。例如,第二消失阈值可以用5个连续图像帧表示。
当跟踪对象对应的状态标识为第二标识时,若跟踪对象连续消失的时长值大于第二消失阈值,则表明该跟踪对象可能已经在拍摄画面之外,后续被拍摄到的概率很低,可不再将该目标作为跟踪对象,并删除该目标对应的状态标识或跟踪信息,以减少计算资源和存储资源的占用。
可选的,当跟踪对象对应的状态标识为第一标识时,若连续重叠时长值大于第一重叠阈值,则更新跟踪对象对应的状态标识为第三标识。
第一重叠阈值可以用时间值或连续图像帧的帧数表示,其可根据具体需求进行设置。例如,第一重叠阈值可以用8个连续图像帧表示。
当跟踪对象对应的状态标识为第一标识时,若该跟踪对象在已拍摄图像帧中的连续重叠时长值大于第一重叠阈值时,表明该跟踪对象持续与其他对象重合或者与其他对象相距很近无法分辨开,暂时难以被拍摄装置进行跟踪拍摄,因此可更新该跟踪对象的状态标识为第三标识。
可选的,当跟踪对象对应的状态标识为第二标识时,若连续重叠时长值与连续消失时长值的和大于第二重叠阈值,则取消跟踪对象。
第二重叠阈值大于第一重叠阈值,可以用时长值或连续图像帧的帧数表示,其可根据具体需求进行设置。例如,第一重叠阈值可以用5个连续图像帧表示,第二重叠阈值可以用10个连续图像帧表示。
当跟踪对象对应的状态标识为第二标识时,若跟踪对象对应的连续重叠时长值与连续消失时长值的和大于第二重叠阈值,则表明该跟踪对象后续时间内难以被拍摄装置进行跟踪拍摄,可不再将该目标作为跟踪对象,并删除该目标对应的状态标识或跟踪信息,以减少计算资源和存储资源的占用。
可选的,当跟踪对象对应的状态标识为第三标识时,若连续重叠时长值与连续消失时长值的和大于第三重叠阈值,则取消跟踪对象。
第三重叠阈值可以用时长值或连续图像帧的帧数表示,且第三重叠阈值大于第一重叠阈值,第三重叠阈值可以与第二重叠阈值相同或者不同,其可根据具体需求进行设置。例如,第一重叠阈值可以用5个连续图像帧表示,第三重叠阈值可以用8个连续图像帧表示。
当跟踪对象对应的状态标识为第三标识时,若跟踪对象对应的连续重叠时长值与连续消失时长值的和大于第三重叠阈值,则表明该跟踪对象后续时间内难以被拍摄装置进行跟踪拍摄,可不再将该目标作为跟踪对象,并删除该目标对应的状态标识或跟踪信息,以减少计算资源和存储资源的占用。
可选的,当跟踪对象对应的状态标识为第一标识时,若连续消失时长值大于第三消失阈值,则更新跟踪对象对应的状态标识为第四标识。
第三消失阈值小于第一消失阈值,且第三消失阈值可以用时间值或连续图像帧的帧数表示,其可根据具体需求进行设置。例如,第一消失阈值可以用8个连续图像帧表示,第三消失阈值可以用5个连续图像帧表示。
当跟踪对象对应的状态标识为第一标识时,若该跟踪对象在已拍摄图像帧中的连续消失时长值大于第三消失阈值时,表明该跟踪对象可能已经在拍摄画面之外,不确定后续是否仍可能被拍摄到,因此可更新该跟踪对象的状态标识为第四标识。
可选的,当跟踪对象对应的状态标识为第四标识时,若连续出现时长值大于第二出现阈值,则更新跟踪对象对应的状态标识为第一标识。
第二出现阈值可以用时间值或连续图像帧的帧数表示,第二出现阈值可以与第一出现阈值相同或者不同,可以根据具体需求进行设定。例如,第二出现阈值可以用3个连续图像帧表示。
当跟踪对象对应的状态标识为第四标识时,若该跟踪对象在已拍摄图像帧中持续出现的时长值大于第二出现阈值,表明该跟踪对象已经重新出现在已拍摄图像帧中,可再次将该跟踪对象切换至持续被跟踪状态,即可更新该跟踪对象的状态标识为第一标识。
可选的,当跟踪对象对应的状态标识为第四标识时,若连续消失时长值大于第四消失阈值,则更新跟踪对象对应的状态标识为第二标识。
第四消失阈值小于第一消失阈值,且可以用时间值或连续图像帧的帧数表示,可以根据具体需求进行设定。
第四消失阈值可以等于第一消失阈值与第二消失阈值的差值。例如,第一消失阈值可以用8个连续图像帧表示,第三消失阈值可以用5个连续图像帧表示,则第四消失阈值可以用3个连续图像帧表示。
当跟踪对象对应的状态标识为第四标识时,若该跟踪对象在已拍摄图像帧中连续消失时长值大于第四消失阈值,表明该跟踪对象已经较长时间在已拍摄图像帧中消失,后续被拍摄到的概率相对较低,可更新该跟踪对象的状态标识为第二标识。
可选的,当跟踪对象对应的状态标识为第四标识时,若连续重叠时长值大于第四重叠阈值,则更新跟踪对象对应的状态标识为第三标识。
第四重叠阈值可以用时间值或连续图像帧的帧数表示,其可以根据具体需求进行设定。例如,第四重叠阈值可以用5个连续图像帧表示。
当跟踪对象对应的状态标识为第四标识时,若跟踪对象对应的连续重叠时长值大于第四重叠阈值,则表明该跟踪对象持续与其他对象重合或者与其他对象相距很近无法分辨开,暂时难以被拍摄装置进行跟踪拍摄,因此可更新该跟踪对象的状态标识为第三标识。
可选的,当跟踪目标对应的状态标识为第五标识时,若连续出现时长值大于第三出现阈值,则更新跟踪目标对应的状态标识为第一标识。
第三出现阈值可以用时间值或连续图像帧的帧数表示,其可根据具体需求进行设置。例如,第三出现阈值可以用5个连续图像帧表示。
当跟踪目标对应的状态标识为第五标识时,若该跟踪目标在已拍摄图像帧中持续出现的时长值大于第三出现阈值,表明该跟踪目标后续很可能持续出现在拍摄画面中,可将该跟踪目标切换至持续被跟踪状态,即可更新该跟踪目标的状态标识为第一标识。
可选的,当跟踪目标对应的状态标识为第五标识时,若连续重叠时长值大于第五重叠阈值,则取消跟踪目标。
第五重叠阈值可以用时间值或连续图像帧的帧数表示,其可根据具体需求进行设置。例如,第五重叠阈值可以用5个连续图像帧表示。
当跟踪目标对应的状态标识为第五标识时,若该跟踪目标在已拍摄图像帧中连续重叠时长值大于第五重叠阈值,则表明该跟踪目标后续时间内难以被拍摄装置进行跟踪拍摄,可不再将该目标作为跟踪目标,并删除该目标对应的状态标识或跟踪信息,以减少计算资源和存储资源的占用。
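上述五种状态标识之间的切换规则可以整理为一个状态机示意(阈值取自上文示例帧数,常量命名、条件组织与“取消跟踪”状态的表示均为笔者假设,并非本申请限定的实现):

```python
# 五种状态:持续被跟踪 / 长时间丢失 / 重叠 / 短时间丢失 / 新发现,外加"取消跟踪"
TRACKED, LONG_LOST, OVERLAPPED, SHORT_LOST, NEW, CANCELLED = range(6)

TH = {
    "vanish_long": 8,    # 第一消失阈值(示例:8 帧)
    "vanish_short": 5,   # 第三消失阈值(示例:5 帧)
    "reappear": 5,       # 第一出现阈值(示例:5 帧)
    "reappear_fast": 3,  # 第二出现阈值(示例:3 帧)
    "overlap": 8,        # 第一重叠阈值(示例:8 帧)
    "drop": 10,          # 第二/第三重叠阈值(示例:10 帧)
    "confirm_new": 5,    # 第三出现阈值(示例:5 帧)
    "small": 5,          # 第四/第五重叠阈值(示例:5 帧)
}

def next_state(state, appear, vanish, overlap):
    """根据连续出现/消失/重叠帧数,返回跟踪对象的新状态标识。"""
    if state == TRACKED:
        if vanish > TH["vanish_long"]:
            return LONG_LOST
        if vanish > TH["vanish_short"]:
            return SHORT_LOST
        if overlap > TH["overlap"]:
            return OVERLAPPED
    elif state == LONG_LOST:
        if appear > TH["reappear"]:
            return TRACKED
        if overlap + vanish > TH["drop"]:
            return CANCELLED
    elif state == SHORT_LOST:
        if appear > TH["reappear_fast"]:
            return TRACKED
        if vanish > TH["vanish_long"] - TH["vanish_short"]:  # 第四消失阈值
            return LONG_LOST
        if overlap > TH["small"]:                            # 第四重叠阈值
            return OVERLAPPED
    elif state == OVERLAPPED:
        if overlap + vanish > TH["drop"]:
            return CANCELLED
    elif state == NEW:
        if appear > TH["confirm_new"]:
            return TRACKED
        if overlap > TH["small"]:                            # 第五重叠阈值
            return CANCELLED
    return state
```

状态被置为 CANCELLED 后,对应的跟踪信息即可删除,以减少计算资源和存储资源的占用。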
由以上实施例可见,本申请的跟踪对象确定方法在更新跟踪信息的过程中进一步考虑跟踪对象或者识别对象与图像帧中的其他对象的重叠程度,有利于提高实时跟踪的准确性;在多目标追踪过程中,通过对跟踪对象进行分类,并采用状态标识区分不同的状态,便于对跟踪对象的管理,降低计算资源和存储资源的消耗。
实施例四
本申请实施例四提供一种跟踪对象确定设备,如图4所示,图4为本申请实施例提供的一种跟踪对象确定设备的结构框图。
本实施例的跟踪对象确定设备包括:存储器401、处理器402、视频采集器403,视频采集器403用于采集目标区域的待跟踪对象;存储器401用于存储程序代码;处理器402,调用程序代码,当程序代码被执行时,用于执行以下操作:对当前拍摄图像帧进行图像识别,获得用于标识当前拍摄图像帧中的识别对象的识别信息;根据识别信息和用于标识已拍摄图像帧中的跟踪对象的跟踪信息,对识别对象与跟踪对象进行匹配,获得匹配信息;根据匹配信息,更新跟踪信息。
可选的,处理器402,调用程序代码,当程序代码被执行时,用于执行以下操作:根据识别信息和用于标识已拍摄图像帧中的跟踪对象的跟踪信息,对识别对象与跟踪对象进行匹配,获得匹配信息包括:根据识别信息和跟踪信息,获得用于标识识别对象与跟踪对象之间相似程度的相似度信息,以及用于标识识别对象和跟踪对象之间位置关系的位置信息;根据相似度信息和位置信息,对识别对象与跟踪对象进行匹配,获得匹配信息。
可选的,处理器402,调用程序代码,当程序代码被执行时,用于执行以下操作:位置信息包括交并比子信息和距离子信息,其中,交并比子信息用于标识识别对象和跟踪对象对应的交并比,距离子信息用于标识识别对象和跟踪对象之间的距离。
可选的,处理器402,调用程序代码,当程序代码被执行时,用于执行以下操作:根据识别信息和用于标识已拍摄图像帧中的跟踪对象的跟踪信息,对识别对象与跟踪对象进行匹配,获得匹配信息包括:根据识别信息和跟踪信息,依次使用贪心算法和匈牙利算法对识别对象与跟踪对象进行匹配,获得匹配信息。
可选的,处理器402,调用程序代码,当程序代码被执行时,用于执行以下操作:根据匹配信息,更新跟踪信息包括:当识别对象与跟踪对象不匹配时,将识别对象确定为新的跟踪对象。
可选的,处理器402,调用程序代码,当程序代码被执行时,用于执行以下操作:根据匹配信息,更新跟踪信息包括:根据所述识别信息和所述跟踪信息,获得重叠信息,其中,所述重叠信息用于标识所述当前拍摄图像帧的前一已拍摄图像帧中的所述跟踪对象与其他对象的图像重叠度,以及所述当前拍摄图像帧的所述识别对象与其他对象的图像重叠度;根据所述匹配信息和所述重叠信息,更新所述跟踪信息。
可选的,处理器402,调用程序代码,当程序代码被执行时,用于执行以下操作:根据匹配信息和重叠信息,更新跟踪信息包括:当识别对象与跟踪对象不匹配,且图像重叠度小于重叠度阈值时,将识别对象确定为新的跟踪对象。
可选的,处理器402,调用程序代码,当程序代码被执行时,用于执行以下操作:跟踪信息包括跟踪对象对应的状态标识,其中,状态标识的种类大于2;对应的,根据匹配信息和重叠信息,更新跟踪信息包括:根据匹配信息和重叠信息,更新跟踪对象对应的状态标识。
可选的,处理器402,调用程序代码,当程序代码被执行时,用于执行以下操作:跟踪信息还包括用于标识跟踪对象在已拍摄图像帧中连续出现时长值的出现时长信息,或者,用于标识跟踪对象在已拍摄图像帧中连续消失时长值的消失时长信息;对应的,根据匹配信息和重叠信息,更新跟踪信息包括:根据匹配信息,更新出现时长信息或者消失时长信息;根据更新后的出现时长信息或者更新后的消失时长信息,以及重叠信息,更新跟踪对象对应的状态标识。
本实施例中,详细技术内容,请参见上述实施例一至实施例三。
由以上实施例可见,本申请的跟踪对象确定设备,首先对当前拍摄图像帧进行图像识别,获得用于标识当前拍摄图像帧中的识别对象的识别信息;然后根据识别信息和用于标识已拍摄图像帧中的跟踪对象的跟踪信息,对识别对象与跟踪对象进行匹配,获得匹配信息;根据匹配信息,更新跟踪信息。使得在多目标追踪拍摄过程中,可以将当前拍摄图像帧与上一图像帧进行对比,通过寻找当前拍摄图像帧中识别对象与已拍摄图像帧中跟踪对象之间的匹配关系,更新跟踪对象的跟踪信息和跟踪状态,实现更实时且准确地管理跟踪对象、维护跟踪对象的信息。
实施例五
本申请实施例五提供一种手持相机,如图5所示,图5为本申请实施例提供的一种手持相机的结构框图。
本实施例的手持相机包括上述实施例四的跟踪对象确定设备,该手持相机还包括:承载器,承载器与视频采集器固定连接,用于承载视频采集器的至少一部分。
可选的,承载器包括但不限于手持云台1。
可选的,手持云台1为手持三轴云台。
可选的,视频采集器包括但不限于手持三轴云台用摄像头。
下面对手持相机的基本构造进行简单介绍。
本发明实施例的手持云台1,包括:手柄11和装载于手柄11的拍摄装置12。在本实施例中,拍摄装置12可以包括三轴云台相机,在其他实施例中也可以包括两轴或三轴以上的云台相机。
手柄11设有用于显示拍摄装置12的拍摄内容的显示屏13。本发明不对显示屏13的类型进行限定。
通过在手持云台1的手柄11设置显示屏13,该显示屏可以显示拍摄装置12的拍摄内容,以实现用户能够通过该显示屏13快速浏览拍摄装置12所拍摄的图片或是视频,从而提高手持云台1与用户的互动性及趣味性,满足用户的多样化需求。
在一个实施例中,手柄11还设有用于控制拍摄装置12的操作功能部,通过操作操作功能部,能够控制拍摄装置12的工作,例如,控制拍摄装置12的开启与关闭、控制拍摄装置12的拍摄、控制拍摄装置12云台部分的姿态变化等,以便于用户对拍摄装置12进行快速操作。其中,操作功能部可以为按键、旋钮或者触摸屏的形式。
在一个实施例中,操作功能部包括用于控制拍摄装置12拍摄的拍摄按键14和用于控制拍摄装置12启闭和其他功能的电源/功能按键15,以及控制云台移动的万向键16。当然,操作功能部还可以包括其他控制按键,如影像存储按键、影像播放控制按键等等,可以根据实际需求进行设定。
在一个实施例中,操作功能部和显示屏13设于手柄11的同一面,图中所示操作功能部和显示屏13均设于手柄11的正面,符合人机工程学,同时使整个手持云台1的外观布局更合理美观。
进一步地,手柄11的侧面设置有功能操作键A,用于方便用户快速地使用智能一键成片功能。相机开启时,点按机身右侧橙色侧面键开启该功能,则每隔一段时间自动拍摄一段视频,总共拍摄N段(N≥2);连接移动设备(例如手机)后,选择“一键成片”功能,系统智能筛选拍摄片段并匹配合适模板,快速生成精彩作品。
在一可选的实施方式中,手柄11还设有用于插接存储元件的卡槽17。在本实施例中,卡槽17设于手柄11上与显示屏13相邻的侧面,在卡槽17中插入存储卡,即可将拍摄装置12拍摄的影像存储在存储卡中。并且,将卡槽17设置在侧部,不会影响到其他功能的使用,用户体验较佳。
在一个实施例中,手柄11内部可以设置用于对手柄11及拍摄装置12供电的供电电池。供电电池可以采用锂电池,容量大、体积小,以实现手持云台1的小型化设计。
在一个实施例中,手柄11还设有充电接口/USB接口18。在本实施例中,充电接口/USB接口18设于手柄11的底部,便于连接外部电源或存储装置,从而对供电电池进行充电或进行数据传输。
在一个实施例中,手柄11还设有用于接收音频信号的拾音孔19,拾音孔19内部连通麦克风。拾音孔19可以包括一个,也可以包括多个。手柄11还包括用于显示状态的指示灯20。用户可以通过拾音孔19与显示屏13实现音频交互。另外,指示灯20可以达到提醒作用,用户可以通过指示灯20获得手持云台1的电量情况和目前执行功能情况。此外,拾音孔19和指示灯20也均可以设于手柄11的正面,更符合用户的使用习惯以及操作便捷性。
在一个实施例中,拍摄装置12包括云台支架和搭载于云台支架的拍摄器。拍摄器可以为相机,也可以为由透镜和图像传感器(如CMOS或CCD)等组成的摄像元件,具体可根据需要选择。拍摄器可以集成在云台支架上,从而拍摄装置12为云台相机;也可以为外部拍摄设备,可拆卸地连接或夹持而搭载于云台支架。
在一个实施例中,云台支架为三轴云台支架,而拍摄装置12为三轴云台相机。三轴云台支架包括偏航轴组件22、与偏航轴组件22活动连接的横滚轴组件23、以及与横滚轴组件23活动连接的俯仰轴组件24,拍摄器搭载于俯仰轴组件24。偏航轴组件22带动拍摄装置12沿偏航方向转动。当然,在其他例子中,云台支架也可以为两轴云台、四轴云台等,具体可根据需要选择。
在一个实施例中,还设置有安装部,安装部设置于与横滚轴组件连接的连接臂的一端,而偏航轴组件可以设置于手柄中,偏航轴组件带动拍摄装置12一起沿偏航方向转动。
在一可选的实施方式中,手柄11设有用于与移动设备2(如手机)耦合连接的转接件26,转接件26与手柄11可拆卸连接。转接件26自手柄的侧部凸伸而出以用于连接移动设备2,当转接件26与移动设备2连接后,手持云台1与转接件26对接并用于被支撑于移动设备2的端部。
在手柄11设置用于与移动设备2连接的转接件26,进而将手柄11和移动设备2相互连接,手柄11可作为移动设备2的一个底座,用户可以通过握持移动设备2的另一端来一同把手持云台1拿起操作,连接方便快捷,产品美观性强。此外,手柄11通过转接件26与移动设备2耦合连接后,能够实现手持云台1与移动设备2之间的通信连接,拍摄装置12与移动设备2之间能够进行数据传输。
在一个实施例中,转接件26与手柄11可拆卸连接,即转接件26和手柄11之间可以实现机械方面的连接或拆除。进一步地,转接件26设有电接触部,手柄11设有与电接触部配合的电接触配合部。
这样,当手持云台1不需要与移动设备2连接时,可以将转接件26从手柄11上拆除。当手持云台1需要与移动设备2连接时,再将转接件26装到手柄11上,完成转接件26和手柄11之间的机械连接,同时通过电接触部和电接触配合部的连接保证两者之间的电性连接,以实现拍摄装置12与移动设备2之间能够通过转接件26进行数据传输。
在一个实施例中,手柄11的侧部设有收容槽27,转接件26滑动卡接于收容槽27内。当转接件26装到收容槽27后,转接件26部分凸出于收容槽27,转接件26凸出收容槽27的部分用于与移动设备2连接。
在一个实施例中,参见图5所示,当所述转接件26反向装入收容槽27时,转接件26与收容槽27齐平,进而将转接件26收纳在手柄11的收容槽27内。
因此,当手持云台1需要和移动设备2连接时,可以将转接件26自转接部装入收容槽27内,使得转接件26凸出于收容槽27,以便移动设备2与手柄11相互连接。
当移动设备2使用完毕后,或者需要将移动设备2拔下时,可以将转接件26从手柄11的收容槽27内取出,然后反向装入收容槽27内,进而将转接件26收纳在手柄11内。当转接件26收纳在手柄11内后,转接件26与手柄11的收容槽27齐平,可以保证手柄11的表面平整,而且将转接件26收纳在手柄11内更便于携带。
在一个实施例中,收容槽27是半开放式地开设在手柄11的一侧表面,这样更便于转接件26与收容槽27进行滑动卡接。当然,在其他例子中,转接件26也可以采用卡扣连接、插接等方式与手柄11的收容槽27可拆卸连接。
在一个实施例中,收容槽27设置于手柄11的侧面,在不使用转接功能时,通过盖板28卡接覆盖该收容槽27,这样便于用户操作,同时也不影响手柄的正面和侧面的整体外观。
在一个实施例中,电接触部与电接触配合部之间可以采用触点接触的方式实现电连接。例如,电接触部可以选择为伸缩探针,也可以选择为电插接口,还可以选择为电触点。当然,在其他例子中,电接触部与电接触配合部之间也可以直接采用面与面的接触方式实现电连接。
A1、一种跟踪对象确定方法,其特征在于,包括:
对当前拍摄图像帧进行图像识别,获得用于标识当前拍摄图像帧中的识别对象的识别信息;
根据所述识别信息和用于标识已拍摄图像帧中的跟踪对象的跟踪信息,对所述识别对象与所述跟踪对象进行匹配,获得匹配信息;
根据所述匹配信息,更新所述跟踪信息。
A2、根据A1所述的跟踪对象确定方法,其特征在于,所述根据所述识别信息和用于标识已拍摄图像帧中的跟踪对象的跟踪信息,对所述识别对象与所述跟踪对象进行匹配,获得匹配信息包括:
根据所述识别信息和所述跟踪信息,获得用于标识所述识别对象与所述跟踪对象之间相似程度的相似度信息,以及用于标识所述识别对象和所述跟踪对象之间位置关系的位置信息;
根据所述相似度信息和所述位置信息,对所述识别对象与所述跟踪对象进行匹配,获得匹配信息。
A3、根据A2所述的跟踪对象确定方法,其特征在于,所述位置信息包括交并比子信息和距离子信息,其中,所述交并比子信息用于标识所述识别对象和所述跟踪对象对应的交并比,所述距离子信息用于标识所述识别对象和所述跟踪对象之间的距离。
A4、根据A1所述的跟踪对象确定方法,其特征在于,所述根据所述识别信息和用于标识已拍摄图像帧中的跟踪对象的跟踪信息,对所述识别对象与所述跟踪对象进行匹配,获得匹配信息包括:
根据所述识别信息和所述跟踪信息,依次使用贪心算法和匈牙利算法对所述识别对象与所述跟踪对象进行匹配,获得匹配信息。
A5、根据A1所述的跟踪对象确定方法,其特征在于,所述根据所述匹配信息,更新所述跟踪信息包括:
当所述识别对象与所述跟踪对象不匹配时,将所述识别对象确定为新的所述跟踪对象。
A6、根据A1所述的跟踪对象确定方法,其特征在于,所述根据所述匹配信息,更新所述跟踪信息包括:
根据所述识别信息和所述跟踪信息,获得重叠信息,其中,所述重叠信息用于标识所述当前拍摄图像帧的前一已拍摄图像帧中的所述跟踪对象与其他对象的图像重叠度,以及所述当前拍摄图像帧的所述识别对象与其他对象的图像重叠度;
根据所述匹配信息和所述重叠信息,更新所述跟踪信息。
A7、根据A6所述的跟踪对象确定方法,其特征在于,所述根据所述匹配信息和所述重叠信息,更新所述跟踪信息包括:
当所述识别对象与所述跟踪对象不匹配,且所述图像重叠度小于重叠度阈值时,将所述识别对象确定为新的所述跟踪对象。
A8、根据A6所述的跟踪对象确定方法,其特征在于,所述跟踪信息包括所述跟踪对象对应的状态标识,其中,所述状态标识的种类大于2;对应的,所述根据所述匹配信息和所述重叠信息,更新所述跟踪信息包括:
根据所述匹配信息和所述重叠信息,更新所述跟踪对象对应的所述状态标识。
A9、根据A8所述的跟踪对象确定方法,其特征在于,所述跟踪信息还包括用于标识所述跟踪目标在已拍摄图像帧中连续出现时长值的出现时长信息,或者,用于标识所述跟踪目标在已拍摄图像帧中连续消失时长值的消失时长信息;对应的,所述根据所述匹配信息和所述重叠信息,更新所述跟踪信息包括:
根据所述匹配信息,更新所述出现时长信息或者所述消失时长信息;
根据更新后的所述出现时长信息或者更新后的所述消失时长信息,以及所述重叠信息,更新所述跟踪对象对应的状态标识。
A10、一种跟踪对象确定设备,其特征在于,包括:存储器、处理器、视频采集器,所述视频采集器用于采集目标区域的待跟踪目标;所述存储器用于存储程序代码;所述处理器,调用所述程序代码,当程序代码被执行时,用于执行以下操作:
对当前拍摄图像帧进行图像识别,获得用于标识当前拍摄图像帧中的识别对象的识别信息;
根据所述识别信息和用于标识已拍摄图像帧中的跟踪对象的跟踪信息,对所述识别对象与所述跟踪对象进行匹配,获得匹配信息;
根据所述匹配信息,更新所述跟踪信息。
A11、根据A10所述跟踪对象确定设备,其特征在于,所述根据所述识别信息和用于标识已拍摄图像帧中的跟踪对象的跟踪信息,对所述识别对象与所述跟踪对象进行匹配,获得匹配信息包括:
根据所述识别信息和所述跟踪信息,获得用于标识所述识别对象与所述跟踪对象之间相似程度的相似度信息,以及用于标识所述识别对象和所述跟踪对象之间位置关系的位置信息;
根据所述相似度信息和所述位置信息,对所述识别对象与所述跟踪对象进行匹配,获得匹配信息。
A12、根据A11所述跟踪对象确定设备,其特征在于,所述位置信息包括交并比子信息和距离子信息,其中,所述交并比子信息用于标识所述识别对象和所述跟踪对象对应的交并比,所述距离子信息用于标识所述识别对象和所述跟踪对象之间的距离。
A13、根据A10所述跟踪对象确定设备,其特征在于,所述根据所述识别信息和用于标识已拍摄图像帧中的跟踪对象的跟踪信息,对所述识别对象与所述跟踪对象进行匹配,获得匹配信息包括:
根据所述识别信息和所述跟踪信息,依次使用贪心算法和匈牙利算法对所述识别对象与所述跟踪对象进行匹配,获得匹配信息。
A14、根据A10所述跟踪对象确定设备,其特征在于,所述根据所述匹配信息,更新所述跟踪信息包括:
当所述识别对象与所述跟踪对象不匹配时,将所述识别对象确定为新的所述跟踪对象。
A15、根据A10所述跟踪对象确定设备,其特征在于,所述根据所述匹配信息,更新所述跟踪信息包括:
根据所述识别信息和所述跟踪信息,获得重叠信息,其中,所述重叠信息用于标识所述当前拍摄图像帧的前一已拍摄图像帧中的所述跟踪对象与其他对象的图像重叠度,以及所述当前拍摄图像帧的所述识别对象与其他对象的图像重叠度;
根据所述匹配信息和所述重叠信息,更新所述跟踪信息。
A16、根据A15所述跟踪对象确定设备,其特征在于,所述根据所述匹配信息和所述重叠信息,更新所述跟踪信息包括:
当所述识别对象与所述跟踪对象不匹配,且所述图像重叠度小于重叠度阈值时,将所述识别对象确定为新的所述跟踪对象。
A17、根据A15所述跟踪对象确定设备,其特征在于,所述跟踪信息包括所述跟踪对象对应的状态标识,其中,所述状态标识的种类大于2;对应的,所述根据所述匹配信息和所述重叠信息,更新所述跟踪信息包括:
根据所述匹配信息和所述重叠信息,更新所述跟踪对象对应的所述状态标识。
A18、根据A17所述跟踪对象确定设备,其特征在于,所述跟踪信息还包括用于标识所述跟踪目标在已拍摄图像帧中连续出现时长值的出现时长信息,或者,用于标识所述跟踪目标在已拍摄图像帧中连续消失时长值的消失时长信息;对应的,所述根据所述匹配信息和所述重叠信息,更新所述跟踪信息包括:
根据所述匹配信息,更新所述出现时长信息或者所述消失时长信息;
根据更新后的所述出现时长信息或者更新后的所述消失时长信息,以及所述重叠信息,更新所述跟踪对象对应的状态标识。
A19、一种手持相机,其特征在于,包括根据A10至A18中任一项所述的跟踪对象确定设备,还包括:承载器,所述承载器与所述视频采集器固定连接,用于承载所述视频采集器的至少一部分。
A20、根据A19所述的手持相机,其特征在于,所述承载器包括但不限于手持云台。
A21、根据A20所述的手持相机,其特征在于,所述手持云台为手持三轴云台。
A22、根据A21所述的手持相机,其特征在于,所述视频采集器包括但不限于手持三轴云台用摄像头。
需要指出,根据实施的需要,可将本发明实施例中描述的各个部件/步骤拆分为更多部件/步骤,也可将两个或多个部件/步骤或者部件/步骤的部分操作组合成新的部件/步骤,以实现本发明实施例的目的。
上述根据本发明实施例的方法可在硬件、固件中实现,或者被实现为可存储在记录介质(诸如CD ROM、RAM、软盘、硬盘或磁光盘)中的软件或计算机代码,或者被实现为通过网络下载的、原始存储在远程记录介质或非暂时机器可读介质中并将被存储在本地记录介质中的计算机代码,从而在此描述的方法可被处理为存储在使用通用计算机、专用处理器或者可编程或专用硬件(诸如ASIC或FPGA)的记录介质上的这样的软件。可以理解,计算机、处理器、微处理器控制器或可编程硬件包括可存储或接收软件或计算机代码的存储组件(例如,RAM、ROM、闪存等),当所述软件或计算机代码被计算机、处理器或硬件访问且执行时,实现在此描述的目标追踪拍摄方法。此外,当通用计算机访问用于实现在此示出的方法的代码时,代码的执行将通用计算机转换为用于执行在此示出的方法的专用计算机。
本领域普通技术人员可以意识到,结合本文中所公开的实施例描述的各示例的单元及方法步骤,能够以电子硬件、或者计算机软件和电子硬件的结合来实现。这些功能究竟以硬件还是软件方式来执行,取决于技术方案的特定应用和设计约束条件。专业技术人员可以对每个特定的应用来使用不同方法来实现所描述的功能,但是这种实现不应认为超出本发明实施例的范围。
以上实施方式仅用于说明本发明实施例,而并非对本发明实施例的限制,有关技术领域的普通技术人员,在不脱离本发明实施例的精神和范围的情况下,还可以做出各种变化和变型,因此所有等同的技术方案也属于本发明实施例的范畴,本发明实施例的专业保护范围应由权利要求限定。

Claims (10)

  1. 一种跟踪对象确定方法,其特征在于,包括:
    对当前拍摄图像帧进行图像识别,获得用于标识当前拍摄图像帧中的识别对象的识别信息;
    根据所述识别信息和用于标识已拍摄图像帧中的跟踪对象的跟踪信息,对所述识别对象与所述跟踪对象进行匹配,获得匹配信息;
    根据所述匹配信息,更新所述跟踪信息。
  2. 根据权利要求1所述的跟踪对象确定方法,其特征在于,所述根据所述识别信息和用于标识已拍摄图像帧中的跟踪对象的跟踪信息,对所述识别对象与所述跟踪对象进行匹配,获得匹配信息包括:
    根据所述识别信息和所述跟踪信息,获得用于标识所述识别对象与所述跟踪对象之间相似程度的相似度信息,以及用于标识所述识别对象和所述跟踪对象之间位置关系的位置信息;
    根据所述相似度信息和所述位置信息,对所述识别对象与所述跟踪对象进行匹配,获得匹配信息。
  3. 根据权利要求2所述的跟踪对象确定方法,其特征在于,所述位置信息包括交并比子信息和距离子信息,其中,所述交并比子信息用于标识所述识别对象和所述跟踪对象对应的交并比,所述距离子信息用于标识所述识别对象和所述跟踪对象之间的距离。
  4. 根据权利要求1所述的跟踪对象确定方法,其特征在于,所述根据所述识别信息和用于标识已拍摄图像帧中的跟踪对象的跟踪信息,对所述识别对象与所述跟踪对象进行匹配,获得匹配信息包括:
    根据所述识别信息和所述跟踪信息,依次使用贪心算法和匈牙利算法对所述识别对象与所述跟踪对象进行匹配,获得匹配信息。
  5. 根据权利要求1所述的跟踪对象确定方法,其特征在于,所述根据所述匹配信息,更新所述跟踪信息包括:
    当所述识别对象与所述跟踪对象不匹配时,将所述识别对象确定为新的所述跟踪对象。
  6. 根据权利要求1所述的跟踪对象确定方法,其特征在于,所述根据所述匹配信息,更新所述跟踪信息包括:
    根据所述识别信息和所述跟踪信息,获得重叠信息,其中,所述重叠信息用于标识所述当前拍摄图像帧的前一已拍摄图像帧中的所述跟踪对象与其他对象的图像重叠度,以及所述当前拍摄图像帧的所述识别对象与其他对象的图像重叠度;
    根据所述匹配信息和所述重叠信息,更新所述跟踪信息。
  7. 根据权利要求6所述的跟踪对象确定方法,其特征在于,所述根据所述匹配信息和所述重叠信息,更新所述跟踪信息包括:
    当所述识别对象与所述跟踪对象不匹配,且所述图像重叠度小于重叠度阈值时,将所述识别对象确定为新的所述跟踪对象。
  8. 根据权利要求6所述的跟踪对象确定方法,其特征在于,所述跟踪信息包括所述跟踪对象对应的状态标识,其中,所述状态标识的种类大于2;对应的,所述根据所述匹配信息和所述重叠信息,更新所述跟踪信息包括:
    根据所述匹配信息和所述重叠信息,更新所述跟踪对象对应的所述状态标识。
  9. 根据权利要求8所述的跟踪对象确定方法,其特征在于,所述跟踪信息还包括用于标识所述跟踪目标在已拍摄图像帧中连续出现时长值的出现时长信息,或者,用于标识所述跟踪目标在已拍摄图像帧中连续消失时长值的消失时长信息;对应的,所述根据所述匹配信息和所述重叠信息,更新所述跟踪信息包括:
    根据所述匹配信息,更新所述出现时长信息或者所述消失时长信息;
    根据更新后的所述出现时长信息或者更新后的所述消失时长信息,以及所述重叠信息,更新所述跟踪对象对应的状态标识。
  10. 一种跟踪对象确定设备,其特征在于,包括:存储器、处理器、视频采集器,所述视频采集器用于采集目标区域的待跟踪目标;所述存储器用于存储程序代码;所述处理器,调用所述程序代码,当程序代码被执行时,用于执行以下操作:
    对当前拍摄图像帧进行图像识别,获得用于标识当前拍摄图像帧中的识别对象的识别信息;
    根据所述识别信息和用于标识已拍摄图像帧中的跟踪对象的跟踪信息,对所述识别对象与所述跟踪对象进行匹配,获得匹配信息;
    根据所述匹配信息,更新所述跟踪信息。
PCT/CN2020/099830 2020-04-15 2020-07-02 一种跟踪对象确定方法、设备和手持相机 WO2021208253A1 (zh)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN202010297067.1A CN111738053B (zh) 2020-04-15 2020-04-15 一种跟踪对象确定方法、设备和手持相机
CN202010297067.1 2020-04-15

Publications (1)

Publication Number Publication Date
WO2021208253A1 true WO2021208253A1 (zh) 2021-10-21

Family

ID=72646725

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2020/099830 WO2021208253A1 (zh) 2020-04-15 2020-07-02 一种跟踪对象确定方法、设备和手持相机

Country Status (2)

Country Link
CN (1) CN111738053B (zh)
WO (1) WO2021208253A1 (zh)

Families Citing this family (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112291480B (zh) * 2020-12-03 2022-06-21 维沃移动通信有限公司 跟踪对焦方法、跟踪对焦装置、电子设备和可读存储介质
JP2023511239A (ja) * 2020-12-31 2023-03-17 商▲湯▼国▲際▼私人有限公司 操作イベント認識方法、及び装置
CN113268422B (zh) * 2021-05-24 2024-05-03 康键信息技术(深圳)有限公司 基于分级量化的卡顿检测方法、装置、设备及存储介质
CN117011736A (zh) * 2022-04-28 2023-11-07 北京字跳网络技术有限公司 一种多目标跟踪方法、装置、设备及可读存储介质

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20180046866A1 (en) * 2016-08-11 2018-02-15 Sungjin Lee Method of Detecting a Moving Object by Reconstructive Image Processing
CN108269269A (zh) * 2016-12-30 2018-07-10 纳恩博(北京)科技有限公司 目标跟踪方法和装置
CN110245630A (zh) * 2019-06-18 2019-09-17 广东中安金狮科创有限公司 监控数据处理方法、装置及可读存储介质
CN110472608A (zh) * 2019-08-21 2019-11-19 石翊鹏 图像识别跟踪处理方法及系统
CN110516578A (zh) * 2019-08-20 2019-11-29 开放智能机器(上海)有限公司 一种基于人脸识别和目标跟踪的监视系统
CN110619658A (zh) * 2019-09-16 2019-12-27 北京地平线机器人技术研发有限公司 对象跟踪方法、对象跟踪装置和电子设备

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115643485A (zh) * 2021-11-25 2023-01-24 荣耀终端有限公司 拍摄的方法和电子设备
CN115643485B (zh) * 2021-11-25 2023-10-24 荣耀终端有限公司 拍摄的方法和电子设备
CN114333294A (zh) * 2021-11-30 2022-04-12 上海电科智能系统股份有限公司 一种基于非全覆盖的多元多种物体感知识别跟踪方法
WO2023179697A1 (zh) * 2022-03-24 2023-09-28 阿里云计算有限公司 目标跟踪方法、装置、设备及存储介质

Also Published As

Publication number Publication date
CN111738053B (zh) 2022-04-01
CN111738053A (zh) 2020-10-02

Similar Documents

Publication Publication Date Title
WO2021208253A1 (zh) 一种跟踪对象确定方法、设备和手持相机
CN110555883B (zh) 相机姿态追踪过程的重定位方法、装置及存储介质
CN108596976B (zh) 相机姿态追踪过程的重定位方法、装置、设备及存储介质
CN109891874A (zh) 一种全景拍摄方法及装置
US20110216159A1 (en) Imaging control device and imaging control method
CN109584362B (zh) 三维模型构建方法及装置、电子设备和存储介质
WO2019104681A1 (zh) 拍摄方法和装置
CN111724412A (zh) 确定运动轨迹的方法、装置及计算机存储介质
WO2021208252A1 (zh) 一种跟踪目标确定方法、装置和手持相机
CN111127509A (zh) 目标跟踪方法、装置和计算机可读存储介质
US20120120267A1 (en) Electronic apparatus, control method, program, and image-capturing system
CN108632543A (zh) 图像显示方法、装置、存储介质及电子设备
CN112492215B (zh) 拍摄控制方法、装置和电子设备
WO2023072088A1 (zh) 对焦方法及装置
WO2021208251A1 (zh) 人脸跟踪方法及人脸跟踪设备
CN115525140A (zh) 手势识别方法、手势识别装置及存储介质
CN104867112A (zh) 照片处理方法及装置
CN104506770A (zh) 拍摄图像的方法及装置
CN112188089A (zh) 距离获取方法及装置、焦距调节方法及装置、测距组件
WO2021208256A1 (zh) 一种视频处理方法、设备及手持相机
CN114549578A (zh) 目标跟踪方法、装置及存储介质
WO2021208255A1 (zh) 一种视频片段标记方法、设备及手持相机
CN111753606A (zh) 一种智能模型的升级方法及装置
WO2021208257A1 (zh) 跟踪状态确定方法、设备及手持相机
WO2021208258A1 (zh) 基于跟踪目标的搜索方法、设备及其手持相机

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 20931408

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 20931408

Country of ref document: EP

Kind code of ref document: A1