WO2021208258A1 - Tracking target-based search method and device, and handheld camera thereof - Google Patents

Tracking target-based search method and device, and handheld camera thereof

Info

Publication number
WO2021208258A1
WO2021208258A1 · PCT/CN2020/099835 · CN2020099835W
Authority
WO
WIPO (PCT)
Prior art keywords
area
search
tracking
image frame
algorithm
Prior art date
Application number
PCT/CN2020/099835
Other languages
English (en)
French (fr)
Inventor
张永波
梁峰
Original Assignee
上海摩象网络科技有限公司
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 上海摩象网络科技有限公司
Publication of WO2021208258A1

Links

Images

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00 Image analysis
    • G06T 7/20 Analysis of motion
    • G06T 7/246 Analysis of motion using feature-based methods, e.g. the tracking of corners or segments
    • G06T 7/251 Analysis of motion using feature-based methods, e.g. the tracking of corners or segments involving models
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00 Image analysis
    • G06T 7/20 Analysis of motion
    • G06T 7/223 Analysis of motion using block-matching
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 2207/00 Indexing scheme for image analysis or image enhancement
    • G06T 2207/10 Image acquisition modality
    • G06T 2207/10016 Video; Image sequence

Definitions

  • the embodiments of the present application relate to the field of computer vision technology, and in particular, to a search method, equipment and handheld camera based on tracking a target.
  • Target detection and tracking is a fast-developing direction in the field of computer vision in recent years.
  • with visual processing technology and artificial intelligence technology, household handheld cameras can also be used to track the target to be photographed and to perform object recognition and scene recognition based on it, so that users can classify and manage the photos or videos taken and carry out other subsequent automatic processing operations.
  • however, current single-target tracking algorithms share a problem: when at least one of the shape, lighting conditions, scene, or position of the target to be tracked changes, the tracking effect of tracking shooting is seriously affected and tracking shooting may fail.
  • one of the technical problems solved by the embodiments of the present invention is to provide a tracking target-based search method, device, and handheld camera, so as to overcome the technical defects that are prone to failure in tracking and shooting in the prior art.
  • An embodiment of the present application provides a search method based on a tracking target, including: determining a first search area corresponding to a first search algorithm in a first image frame according to the effective identification information of the tracking area corresponding to the tracking target, and using the first search algorithm to search in the first search area; and determining at least one second search area corresponding to a second search algorithm in a second image frame according to the effective identification information of the tracking area corresponding to the tracking target, and using the second search algorithm to search in the second search area.
  • Another embodiment provides a tracking target-based search device, which includes a memory, a processor, and a video collector. The video collector is used to collect the target to be tracked in a target area; the memory is used to store program code; and the processor is used to call and execute the program code. When executed, the program code performs the following operations: determining a first search area corresponding to the first search algorithm in the first image frame according to the effective identification information of the tracking area corresponding to the tracking target, and using the first search algorithm to search in the first search area; and determining at least one second search area corresponding to the second search algorithm in the second image frame according to the effective identification information of the tracking area corresponding to the tracking target, and using the second search algorithm to search in the second search area.
  • Another embodiment provides a handheld camera, which includes the tracking target-based search device described in the foregoing embodiment and further includes a carrier fixedly connected to the video collector to carry at least a part of the video collector.
  • the search method provided by the embodiments of the present application determines, according to the effective identification information of the tracking area corresponding to the tracking target, the first search area corresponding to the first search algorithm in the image frame and searches it, and likewise determines the second search area corresponding to the second search algorithm in the image frame and searches it; by combining the first search algorithm and the second search algorithm, search accuracy can be improved and the probability of losing the tracking target can be reduced.
  • FIG. 1 is a schematic flowchart of a search method based on tracking targets according to an embodiment of the application
  • FIG. 2 is a flowchart of an embodiment of a first search algorithm in a tracking target-based search method provided by an embodiment of this application;
  • FIG. 3 is a flowchart of an embodiment of a second search algorithm in a tracking target-based search method provided by an embodiment of the application;
  • FIG. 4 is a schematic diagram of a second search area generated based on a second search algorithm according to an embodiment of the application;
  • FIG. 5 is a flowchart of an embodiment of a third search algorithm in a tracking target-based search method provided by an embodiment of this application;
  • FIG. 6 is a schematic framework diagram of a tracking target-based search device provided by an embodiment of this application.
  • FIG. 7 to 9 are schematic structural diagrams of a handheld camera provided by an embodiment of the application.
  • the technical solutions provided by the embodiments of the present application improve the accuracy of the tracking search by improving the existing tracking algorithm, and improve the use experience of tracking and shooting.
  • FIG. 1 is a schematic flowchart of the tracking target-based search method provided in the first embodiment of this application.
  • the above tracking target-based search method can be applied to various shooting devices or any electronic devices with shooting functions.
  • it can be applied to portable shooting devices such as pocket cameras, sports cameras, and handheld cameras.
  • it can also be applied to electronic devices with shooting functions, such as smartphones and tablets; the present invention does not limit this.
  • the tracking target-based search method in the embodiment of the present application mainly includes the following steps:
  • Step S11 Determine a first search area corresponding to the first search algorithm in the first image frame according to the effective identification information of the tracking area corresponding to the tracking target, and use the first search algorithm to search in the first search area.
  • the effective identification information can be used to identify the location, shape, size and other identification features of the tracking area, but it is not limited to this.
  • the effective identification information can also be used to identify other identification features of the tracking area, such as color and material.
  • the tracking area is determined based on the effective frame in the image frame, where the effective frame is used to identify the position, shape, and size of the tracking target in the first image frame. The effective frame is usually rectangular and can change size as the tracked object moves nearer or farther (for example, an effective frame used to frame a human face changes correspondingly with the distance and size of the photographed face); when the position and area size of the effective frame change, the center position and area size of the tracking area are adjusted accordingly.
  • the center position of the tracking area coincides with the center point of the effective frame
  • the area size of the tracking area is a predetermined multiple of the area size of the effective frame.
  • the side length of the tracking area is 4 times the side length of the effective frame (that is, 4 times the side length of the tracking target). It should be noted that the center position and the area size (ie, the side length) of the tracking area can also be adjusted and set according to actual requirements, and the present invention does not limit this.
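The relationship between the effective frame and the tracking area described above can be sketched in Python. This is a minimal illustration: the function name, the `(cx, cy, w, h)` tuple layout, and the `multiple=4.0` default are my own choices, not from the patent.

```python
def tracking_area_from_effective_frame(cx, cy, w, h, multiple=4.0):
    # The tracking area shares the effective frame's center point (cx, cy);
    # its side lengths are a predetermined multiple (4x in the patent's
    # example) of the effective frame's side lengths w and h.
    return (cx, cy, w * multiple, h * multiple)
```

As the effective frame grows or shrinks with the tracked object, recomputing this keeps the tracking area centered and proportionally sized.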
  • the first search algorithm is used to determine a first search area with the current position of the tracking target as the center, and search and identify the main body part of the tracking target accordingly.
  • the center point of the first search area coincides with the center position of the tracking area, and the search range of the first search area is the same as or different from the area size of the tracking area, depending on the value of the adjustment parameter.
  • the first search algorithm can be executed once for each of at least 12 consecutive first image frames (the total time is estimated to be about 1 second), but it is not limited to this; the number of first image frames can also be adjusted according to actual needs.
  • the size of the search range of the first search area correspondingly generated in any two adjacent first image frames of the first search algorithm is not equal, so as to improve the search accuracy.
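A hedged sketch of how the first search algorithm could derive its search area from the tracking area. The function name and tuple layout are illustrative assumptions; the patent only specifies a concentric area whose side lengths are scaled by a per-frame adjustment parameter n.

```python
def first_search_area(tracking_area, n):
    # The first search area is concentric with the tracking area; each
    # side length is n times the corresponding side of the tracking area.
    # Varying n between adjacent frames gives unequal search ranges.
    cx, cy, w, h = tracking_area
    return (cx, cy, w * n, h * n)
```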
  • Step S12 According to the effective identification information of the tracking area corresponding to the tracking target, determine at least one second search area corresponding to the second search algorithm in the second image frame, and use the second search algorithm to search in the second search area.
  • the definitions of the effective identification information and the tracking area are the same as those described in step S11 above, and will not be repeated here.
  • the second search algorithm is used to determine at least one second search area in the surrounding area of the tracking target, so as to search for and identify the surrounding parts of the tracking target.
  • the search range of the second search area is the same as the area size of the tracking area, while the center point of the second search area differs from the center position of the tracking area.
  • the first image frame and the second image frame may be image frames with the same frame number (that is, the same image frame in the image sequence).
  • the third image frame in the image sequence is both the first image frame and the second image frame.
  • the first image frame and the second image frame may also be two image frames with different frame numbers in the same image sequence.
  • the third frame in the image sequence is the first image frame
  • the fourth frame is the second image frame.
  • the execution sequence of the first search algorithm (ie, step S11) and the second search algorithm (ie, step S12) can be adjusted according to actual needs.
  • the second search algorithm is used to search in the second search area at least once.
  • in this case, the first image frame and the second image frame are image frames with different frame numbers.
  • the second search algorithm can be used multiple times to search multiple subsequent second image frames in the image sequence (for example, the 13th to 20th frames).
  • alternatively, while the first search algorithm is used to search in the first search area, the second search algorithm may be used to search in the second search area.
  • in this case, the first image frame and the second image frame have the same frame number, and search accuracy can be improved by applying the two search algorithms to the same image frame at the same time.
  • the first search algorithm and the second search algorithm can be used to search for the third image frame in the image sequence.
  • the first search algorithm or the second search algorithm may be used once for each image frame in the image sequence.
  • the search algorithms can also be alternated every other frame; that is, the first search algorithm is used to search the second image frame in the image sequence, and the second search algorithm is used to search the third image frame in the image sequence.
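The every-other-frame alternation just described can be sketched as a toy dispatcher. The frame indexing convention and the string labels are my own, not from the patent.

```python
def pick_algorithm(frame_index):
    # Alternate every other frame: even-numbered frames use the first
    # search algorithm, odd-numbered frames use the second (matching the
    # example where frame 2 uses the first and frame 3 the second).
    return "first" if frame_index % 2 == 0 else "second"
```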
  • the embodiments of the present application are based on the effective identification information of the tracking area corresponding to the tracking target, and combined with the first search algorithm and the second search algorithm for searching, the search accuracy can be improved and the probability of tracking loss can be reduced.
  • the second embodiment of the present application provides a search method based on tracking targets. Please refer to FIG. 2, this embodiment of the present application describes an exemplary processing flow of determining the first search area in the first image frame in step S11 shown in FIG. 1. As shown in the figure, the search method of the embodiment of the present application mainly includes the following:
  • Step S21 Determine the center position and the area size of the tracking area in the first image frame according to the effective identification information of the tracking area.
  • the effective identification information of the tracking area is used to identify the center position (center point) and the area size of the tracking area, but it is not limited to this, and can also be used to identify other identification features of the tracking area.
  • Step S22 Determine the first search area corresponding to the first search algorithm in the first image frame according to the center position and area size of the tracking area.
  • the tracking area is a rectangular area
  • the side length of the first search area is n times the corresponding side length of the tracking area, where n is an adjustment parameter, and the center point of the first search area coincides with the center position of the tracking area.
  • the first search area generated based on the tracking area is also a rectangular area, where the four side lengths of the first search area correspond one-to-one to the four side lengths of the tracking area; for example, the lengths of the left and right sides of the first search area are n times the lengths of the left and right sides of the tracking area, and the lengths of the upper and lower sides of the first search area are n times the lengths of the upper and lower sides of the tracking area.
  • the center point of the first search area is the center position of the tracking area.
  • the absolute difference between the adjustment parameter n and 1.0 is less than or equal to 0.3; that is, the adjustment parameter n enlarges or reduces the area size relative to the tracking area by no more than 0.3 times.
  • the first search algorithm may be used to search for multiple consecutive first image frames in the image sequence, and the values of adjustment parameters corresponding to two adjacent first image frames are different.
  • the first search algorithm can be used to search for 12 consecutive first image frames in the image sequence (search once per frame).
  • preferably, the difference between the adjustment parameters corresponding to two adjacent first image frames does not exceed 0.3; enlargement adjustment parameters (values greater than 1.0) and reduction adjustment parameters (values less than 1.0) are set alternately, and constant adjustment parameters (values equal to 1.0) can be inserted arbitrarily, for example, between adjacent enlargement and reduction adjustment parameters.
  • for example, each of 12 consecutive first image frames in the image sequence (for example, frames 2 to 13) is searched once using the first search algorithm, where the side length of the first search area generated in frame 2 is 1.0 times the corresponding side length of the tracking area, the side length generated in frame 3 is 1.1 times, the side length generated in frame 4 is 1.0 times, the side length generated in frame 5 is 0.9 times, and so on.
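One way to generate the alternating parameter sequence in this example. This is an assumption-laden sketch: the patent gives only the 1.0 / 1.1 / 1.0 / 0.9 pattern, and the helper name and looping scheme are mine.

```python
def adjustment_schedule(num_frames=12):
    # Interleave constant (1.0) frames with enlarge (1.1) / shrink (0.9)
    # frames: 1.0, 1.1, 1.0, 0.9, 1.0, 1.1, ... so that adjacent
    # parameters never differ by more than 0.3.
    scaled = [1.1, 0.9]
    return [1.0 if i % 2 == 0 else scaled[(i // 2) % 2]
            for i in range(num_frames)]
```

Adjacent values in the resulting list differ by 0.1, satisfying the 0.3 bound stated above.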
  • in summary, the second embodiment of the present application uses the first search algorithm to generate and search, in the first image frame, first search areas that share the tracking area's center point but differ in area size, so as to search for and identify the main body part of the tracking target, thereby improving both search efficiency and search accuracy.
  • the third embodiment of the present application provides a search method based on tracking targets. Please refer to FIG. 3, this embodiment of the present application describes an exemplary processing flow of determining the second search area in the second image frame in step S12 shown in FIG. 1. As shown in the figure, the search method of the embodiment of the present application mainly includes the following:
  • Step S31 Determine the center position and the area of the tracking area in the second image frame according to the effective identification information of the tracking area.
  • the effective identification information of the tracking area is used to identify the center position (center point) and area of the tracking area, but it is not limited to this and can also be used to identify other identification features of the tracking area.
  • Step S32 Determine a second search area corresponding to the second search algorithm in the second image frame according to the center position and area of the tracking area.
  • the tracking area is a rectangular area
  • the search range of the second search area equals the area of the tracking area.
  • specifically, the side length of a square with the same area as the tracking area can be determined from the area of the tracking area; that is, the area of the rectangular tracking area is converted into the side length a of a square of equal area.
  • the shortest distance between the center points of two adjacent second search areas can be set to 0.3 times the side length of the square (that is, 0.3a), and the shortest distance between the center point of each second search area and the center position of the tracking area is at least 0.3 times the side length of the square. However, it is not limited to this; generally speaking, it is preferable that the shortest distance between the center points of two adjacent second search areas not exceed 0.5 times the side length of the square.
  • the process of determining the second search area corresponding to the second search algorithm in the second image frame includes: taking the center position of the tracking area as the origin of a rectangular coordinate system, the center point of each second search area is located at position (±0.3ma, ±0.3m'a) in the rectangular coordinate system, where m and m' are integers and a is the side length of a square with the same area as the tracking area.
  • in this way, the shortest distance between the center points of adjacent second search areas is 0.3a, and the shortest distance between the center point of each second search area and the center position of the tracking area is at least 0.3a.
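The grid of candidate centers at (±0.3ma, ±0.3m'a) can be sketched as follows. This is illustrative only: the function name, the tuple layout, and the default range of m, m' ∈ {-1, 0, 1} are my assumptions.

```python
import math

def second_search_centers(tracking_area, m_range=1, step=0.3):
    # 'a' is the side of a square whose area equals the tracking area's;
    # candidate centers sit at (0.3*m*a, 0.3*m'*a) offsets from the
    # tracking area's center, with (0, 0) excluded so every candidate
    # center is at least 0.3a from the tracking area's center.
    cx, cy, w, h = tracking_area
    a = math.sqrt(w * h)
    return [
        (cx + step * m * a, cy + step * mp * a)
        for m in range(-m_range, m_range + 1)
        for mp in range(-m_range, m_range + 1)
        if (m, mp) != (0, 0)
    ]
```

With the default range this yields the eight candidate centers immediately surrounding the tracking area's center, each 0.3a from its nearest neighbors.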
  • in summary, the second search algorithm can be used to generate, in the second image frame, multiple second search areas with different positions but the same search range, and to search for and identify the surrounding parts of the tracking target accordingly, thereby improving the accuracy of the search algorithm.
  • the fourth embodiment of the present application provides a tracking target-based search method. Please refer to FIG. 5.
  • the tracking target-based search method of the embodiment of the present application mainly includes the following:
  • Step S51 Determine the area size of the tracking area in the third image frame according to the effective identification information of the tracking area corresponding to the tracking target.
  • the effective identification information of the tracking area is used to identify the area of the tracking area, but it is not limited to this, and can also be used to identify other identification features of the tracking area.
  • Step S52 Determine the area size of the third search area corresponding to the third search algorithm in the third image frame according to the area size of the tracking area, and randomly determine the center point of the third search area in the third image frame.
  • the area size of the third search area is the same as the area size of the tracking area.
  • the center point of the third search area is any position in the entire third image frame, and the number of the third search area is at least one.
  • Step S53 Use the third search algorithm to search in the third search area.
  • the third search algorithm is used to randomly determine one or more third search areas at arbitrary positions in the third image frame and perform a global random search, so as to search for and identify any one of the main body part, the surrounding parts, and the background parts of the tracking target.
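A minimal sketch of the third search algorithm's random area placement. The function signature, the fixed seed, and uniform sampling are my assumptions; the patent only says the center points are determined randomly anywhere in the frame and that the areas match the tracking area's size.

```python
import random

def third_search_areas(frame_w, frame_h, area_w, area_h, count=1, seed=0):
    # Third search algorithm: search areas the size of the tracking area
    # whose centers are drawn uniformly at random anywhere in the frame,
    # giving a global random search.
    rng = random.Random(seed)
    return [
        (rng.uniform(0, frame_w), rng.uniform(0, frame_h), area_w, area_h)
        for _ in range(count)
    ]
```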
  • the first search algorithm, the second search algorithm, and the third search algorithm may be executed alternately, and correspondingly, the frame numbers of the first image frame, the second image frame, and the third image frame are all different.
  • alternatively, any two of the first search algorithm, the second search algorithm, and the third search algorithm may be performed simultaneously; correspondingly, the corresponding two of the first image frame, the second image frame, and the third image frame have the same frame number.
  • the first search algorithm, the second search algorithm, and the third search algorithm are performed simultaneously, and correspondingly, the frame numbers of the first image frame, the second image frame, and the third image frame are all the same.
  • alternatively, step S52 may be: determining the area size of the third search area corresponding to the third search algorithm in the third image frame according to the area size of the third image frame, taking the center position of the third image frame as the center point of the third search area, and thereby determining one third search area in the third image frame (that is, the third search area is the entire third image frame); in step S53, the third search algorithm then performs a global search on the entire third image frame.
  • the fourth embodiment of the present application uses the third search algorithm to randomly generate and search the third search area at different positions in the third image frame.
  • Fig. 6 shows the main architecture of a tracking target-based search device according to the fifth embodiment of the present invention.
  • the tracking target-based search device mainly includes a memory 602, a processor 604, and a video collector 606.
  • the video collector 606 is used for collecting the tracking target in the target area, the memory 602 is used for storing program code, and the processor 604 is used for calling and executing the program code.
  • when executed, the program code performs the following operations: according to the effective identification information of the tracking area corresponding to the tracking target, determine a first search area corresponding to the first search algorithm in the first image frame, and use the first search algorithm to search in the first search area; and according to the effective identification information of the tracking area corresponding to the tracking target, determine at least one second search area corresponding to the second search algorithm in the second image frame, and use the second search algorithm to search in the second search area.
  • program code is further used to perform the following operations:
  • according to the effective identification information of the tracking area, determine the center position and the area size of the tracking area in the first image frame, and, according to that center position and area size, determine the first search area corresponding to the first search algorithm in the first image frame.
  • the tracking area is a rectangular area
  • the side length of the first search area is n times the corresponding side length of the tracking area, where n is an adjustment parameter, and
  • the values of the adjustment parameters corresponding to the adjacent image frames are different.
  • program code is further used to perform the following operations:
  • according to the effective identification information of the tracking area, determine the center position and the area of the tracking area in the second image frame, and, according to that center position and area, determine the second search area corresponding to the second search algorithm in the second image frame.
  • program code is further used to perform the following operations:
  • according to the area of the tracking area, determine the side length of a square with the same area as the tracking area, and set the shortest distance between the center points of adjacent second search areas to 0.3 times the side length of the square
  • the shortest distance between the center point of each second search area and the center position of the tracking area is at least 0.3 times the side length of the square
  • the search range of each second search area equals the area of the tracking area.
  • the second search algorithm is used to search in the second search area at least once, wherein the respective frame numbers of the first image frame and the second image frame are different.
  • while the first search algorithm is used to search in the first search area, the second search algorithm is used to search in the second search area, wherein the frame numbers of the first image frame and the second image frame are the same.
  • program code is further used to perform the following operations:
  • the effective identification information of the tracking area corresponding to the tracking target determine the area size of the tracking area in the third image frame; according to the area size of the tracking area, determine the third search in the third image frame The area size of the third search area corresponding to the algorithm, and randomly determine the center point of the third search area in the third image frame; and use the third search algorithm to search in the third search area.
  • the program code is further configured to perform the following operation: for each of 12 consecutive first image frames, use the first search algorithm to search.
  • the sixth embodiment of the present invention provides a handheld camera, which includes the tracking target-based search device described in the foregoing embodiment, and further includes a carrier fixedly connected to the video collector to carry at least a part of the video collector.
  • the handheld camera is a handheld pan-tilt camera.
  • the carrier includes at least a handheld pan/tilt, and the handheld pan/tilt includes but is not limited to a handheld three-axis pan/tilt.
  • the video capture device includes, but is not limited to, a handheld three-axis pan/tilt camera.
  • taking a handheld gimbal camera as an example of the handheld camera, the basic structure of the handheld gimbal camera is briefly introduced below.
  • the handheld pan/tilt camera of the embodiment of the present invention (as shown in FIG. 7) includes a handle 11 and a photographing device 12 loaded on the handle 11.
  • the photographing device 12 may include a three-axis pan-tilt camera; in other embodiments, it may include a two-axis pan-tilt camera or one with more than three axes.
  • the handle 11 is provided with a display screen 13 for displaying the shooting content of the shooting device 12.
  • the invention does not limit the type of the display screen 13.
  • by setting the display screen 13 on the handle 11 of the handheld PTZ camera, the shooting content of the shooting device 12 can be displayed, so that the user can quickly browse the pictures or videos shot by the shooting device 12 through the display screen 13, thereby improving the interaction and fun between the handheld PTZ camera and the user and meeting users' diverse needs.
  • the handle 11 is further provided with an operating function unit for controlling the shooting device 12; by operating this unit, the user can, for example, turn the shooting device 12 on and off, control its shooting, and control posture changes of its pan-tilt part, allowing quick operation of the shooting device 12.
  • the operation function part may be in the form of a button, a knob or a touch screen.
  • specifically, the operating function unit includes a photographing button 14 for controlling the shooting of the photographing device 12, a power/function button 15 for controlling the turning on and off of the photographing device 12 and other functions, and a universal key 16 for controlling the pan/tilt.
  • other control buttons, such as image storage buttons and image playback control buttons, may also be included and can be set according to actual needs.
  • the operation function part and the display screen 13 are arranged on the same side of the handle 11.
  • the operation function part and the display screen 13 shown in the figure are both arranged on the front of the handle 11, which conforms to ergonomics.
  • at the same time, the overall appearance and layout of the handheld PTZ camera are more reasonable and attractive.
  • in addition, the side of the handle 11 is provided with a function operation key A, which enables the user to quickly and intelligently generate a finished video with one key.
  • the handle 11 is further provided with a card slot 17 for inserting a storage element.
  • the card slot 17 is provided on the side of the handle 11 adjacent to the display screen 13, and a memory card is inserted into the card slot 17 to store the images taken by the camera 12 in the memory card. .
  • arranging the card slot 17 on the side does not affect the use of other functions, and the user experience is better.
  • a power supply battery for supplying power to the handle 11 and the imaging device 12 may be provided inside the handle 11.
  • the power supply battery can be a lithium battery with large capacity and small size to realize the miniaturization design of the handheld pan/tilt camera.
  • the handle 11 is also provided with a charging interface/USB interface 18.
  • the charging interface/USB interface 18 is provided at the bottom of the handle 11 to facilitate connection with an external power source or storage device, so as to charge the power supply battery or perform data transmission.
  • the handle 11 is further provided with a sound pickup hole 19 for receiving audio signals, and the sound pickup hole 19 communicates with a microphone inside.
  • the sound pickup hole 19 may include one or more. It also includes an indicator light 20 for displaying status. The user can realize audio interaction with the display screen 13 through the sound pickup hole 19.
  • the indicator light 20 can serve as a reminder, and the user can obtain the power status of the handheld pan/tilt camera and the current execution function status through the indicator light 20.
  • the sound pickup hole 19 and the indicator light 20 can also be arranged on the front of the handle 11, which is more in line with the user's usage habits and operation convenience.
  • the imaging device 12 includes a pan-tilt support and a camera mounted on the pan-tilt support.
  • the imager may be a camera, or an image pickup element composed of a lens and an image sensor (such as CMOS or CCD), etc., which can be specifically selected according to needs.
  • the camera may be integrated on the pan-tilt support, so that the photographing device 12 is a pan-tilt camera; it may also be an external photographing device, which can be detachably connected or clamped to be mounted on the pan-tilt support.
  • the pan/tilt support is a three-axis pan/tilt support
  • the photographing device 12 is a three-axis pan/tilt camera.
  • the three-axis pan/tilt head bracket includes a yaw axis assembly 22, a roll axis assembly 23 movably connected to the yaw axis assembly 22, and a pitch axis assembly 24 movably connected to the roll axis assembly 23.
  • the camera is mounted on the pitch axis assembly 24.
  • the yaw axis assembly 22 drives the camera 12 to rotate in the yaw direction.
  • the pan/tilt support can also be a two-axis pan/tilt, a four-axis pan/tilt, etc., which can be specifically selected according to needs.
  • a mounting portion is further provided, the mounting portion is provided at one end of the connecting arm connected to the roll shaft assembly, and the yaw shaft assembly may be set in the handle, and the yaw shaft assembly drives The camera 12 rotates in the yaw direction together.
  • the handle 11 is provided with an adapter 26 for coupling with a mobile device 2 (such as a mobile phone), and the adapter 26 is detachably connected to the handle 11.
  • the adapter 26 protrudes from the side of the handle 11 for connecting to the mobile device 2.
  • the handheld pan/tilt camera It is docked with the adapter 26 and used to be supported at the end of the mobile device 2.
  • the handle 11 is provided with an adapter 26 for connecting with the mobile device 2 to connect the handle 11 and the mobile device 2 to each other.
  • the handle 11 can be used as a base of the mobile device 2.
  • the user can hold the other end of the mobile device 2 Let's pick up and operate the handheld pan/tilt camera together.
  • the connection is convenient and fast, and the product is beautiful.
  • a communication connection between the handheld pan-tilt camera and the mobile device 2 can be realized, and data can be transmitted between the camera 12 and the mobile device 2.
  • the adapter 26 and the handle 11 are detachably connected, that is, the adapter 26 and the handle 11 can be mechanically connected or removed. Further, the adapter 26 is provided with an electrical contact portion, and the handle 11 is provided with an electrical contact matching portion that matches with the electrical contact portion.
  • the adapter 26 can be removed from the handle 11.
  • the adapter 26 is then mounted on the handle 11 to complete the mechanical connection between the adapter 26 and the handle 11, and at the same time through the electrical contact part and the electrical contact mating part. The connection ensures the electrical connection between the two, so as to realize the data transmission between the camera 12 and the mobile device 2 through the adapter 26.
  • the side of the handle 11 is provided with a receiving groove 27, and the adapter 26 is slidably clamped in the receiving groove 27. After the adapter 26 is installed in the receiving slot 27, the adapter 26 partially protrudes from the receiving slot 27, and the portion of the adapter 26 protruding from the receiving slot 27 is used to connect with the mobile device 2.
  • the adapter 26 when the adapter 26 is inserted into the receiving groove 27 from the adapter 26, the adapter 26 is flush with the receiving groove 27, Furthermore, the adapter 26 is stored in the receiving groove 27 of the handle 11.
  • the adapter 26 can be installed in the receiving slot 27 so that the adapter 26 protrudes from the receiving slot 27 so that the mobile device 2 can be connected to the handle 11 are connected to each other.
  • the adapter 26 can be taken out of the receiving slot 27 of the handle 11, and then inserted into the receiving slot from the adapter 26 in the reverse direction 27, the adapter 26 is further stored in the handle 11.
  • the adapter 26 is flush with the receiving groove 27 of the handle 11. After the adapter 26 is stored in the handle 11, the surface of the handle 11 can be ensured to be flat, and the adapter 26 is stored in the handle 11 to make it easier to carry.
  • the receiving groove 27 is semi-opened on one side surface of the handle 11, which makes it easier for the adapter 26 to be slidably connected to the receiving groove 27.
  • the adapter 26 can also be detachably connected to the receiving slot 27 of the handle 11 by means of a snap connection, a plug connection, or the like.
  • the receiving groove 27 is provided on the side of the handle 11.
  • the receiving groove 27 is clamped and covered by the cover 28, which is convenient for the user to operate, and does not affect the front and sides of the handle. The overall appearance.
  • the electrical contact part and the electrical contact mating part may be electrically connected in a contact contact manner.
  • the electrical contact portion can be selected as a telescopic probe, can also be selected as an electrical plug-in interface, or can be selected as an electrical contact.
  • the electrical contact portion and the electrical contact mating portion can also be directly connected to each other in a surface-to-surface contact manner.
  • the controller can be implemented in any suitable manner.
  • the controller may take the form of, for example, a microprocessor or processor together with a computer-readable medium storing computer-readable program code (such as software or firmware) executable by the (micro)processor, logic gates, switches, an application-specific integrated circuit (ASIC), a programmable logic controller, or an embedded microcontroller. Examples of controllers include, but are not limited to, the following microcontrollers: ARC 625D, Atmel AT91SAM, Microchip PIC18F26K20, and Silicon Labs C8051F320. A memory controller may also be implemented as part of the memory's control logic.
  • in addition to implementing a controller purely as computer-readable program code, it is entirely possible to logic-program the method steps so that the controller realizes the same functions in the form of logic gates, switches, application-specific integrated circuits, programmable logic controllers, embedded microcontrollers, and the like. Such a controller can therefore be regarded as a hardware component, and the devices included in it for realizing various functions can also be regarded as structures within that hardware component. Indeed, a device for realizing various functions can be regarded both as a software module implementing the method and as a structure within a hardware component.
  • a typical implementation device is a computer.
  • the computer may be, for example, a personal computer, a laptop computer, a cellular phone, a camera phone, a smartphone, a personal digital assistant, a media player, a navigation device, an email device, a game console, a tablet computer, a wearable device, or any combination of these devices.
  • these computer program instructions can also be stored in a computer-readable memory that can direct a computer or other programmable data-processing equipment to work in a specific manner, so that the instructions stored in the computer-readable memory produce an article of manufacture including an instruction device, the instruction device implementing the functions specified in one or more processes of the flowchart and/or one or more blocks of the block diagram.
  • these computer program instructions can also be loaded onto a computer or other programmable data-processing equipment, so that a series of operational steps are executed on the computer or other programmable equipment to produce computer-implemented processing; the instructions executed on the computer or other programmable equipment thus provide steps for implementing the functions specified in one or more processes of the flowchart and/or one or more blocks of the block diagram.
  • this application can be provided as a method, a system, or a computer program product. Therefore, this application may take the form of an entirely hardware embodiment, an entirely software embodiment, or an embodiment combining software and hardware. Moreover, this application may take the form of a computer program product implemented on one or more computer-usable storage media (including but not limited to disk storage, CD-ROM, and optical storage) containing computer-usable program code.
  • this application may be described in the general context of computer-executable instructions executed by a computer, such as program modules. Generally, program modules include routines, programs, objects, components, data structures, and the like that perform particular tasks or implement particular abstract data types.
  • this application can also be practiced in distributed computing environments, in which tasks are performed by remote processing devices connected through a communication network. In a distributed computing environment, program modules can be located in both local and remote computer storage media, including storage devices.

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Image Analysis (AREA)

Abstract

Embodiments of the present application provide a tracking-target-based search method and device, and a handheld camera. The method includes: according to effective identification information of a tracking area corresponding to a tracking target, determining one first search area corresponding to a first search algorithm in a first image frame, and searching in the first search area using the first search algorithm; and, according to the effective identification information of the tracking area corresponding to the tracking target, determining at least one second search area corresponding to a second search algorithm in a second image frame, and searching in the second search area using the second search algorithm. In this way, the tracking accuracy for the tracking target can be improved and the probability of losing the tracking target reduced.

Description

Tracking-target-based search method and device, and handheld camera thereof — Technical Field
Embodiments of the present application relate to the field of computer vision technology, and in particular to a tracking-target-based search method and device, and a handheld camera.
Background
Object detection and tracking is a direction of computer vision that has developed rapidly in recent years. With the development of visual processing and artificial-intelligence technology, consumer handheld cameras can also be used to track a target to be photographed and to perform operations such as object recognition and scene recognition on that target, so that users can classify and manage the captured photos or videos and carry out other subsequent automatic processing.
However, current single-object tracking algorithms share a common problem: when at least one of the shape, lighting conditions, scene, or position of the target to be tracked changes, the tracking effect of tracking photography is severely affected, which can cause tracking photography to fail.
Summary of the Invention
In view of this, one of the technical problems solved by embodiments of the present invention is to provide a tracking-target-based search method and device, and a handheld camera thereof, so as to overcome the technical defect in the prior art that tracking photography is prone to failure.
An embodiment of the present application provides a tracking-target-based search method, including: according to effective identification information of a tracking area corresponding to a tracking target, determining one first search area corresponding to a first search algorithm in a first image frame, and searching in the first search area using the first search algorithm; and, according to the effective identification information of the tracking area corresponding to the tracking target, determining at least one second search area corresponding to a second search algorithm in a second image frame, and searching in the second search area using the second search algorithm.
Another embodiment of the present application provides a tracking-target-based search device, comprising a memory, a processor, and a video collector. The video collector is configured to collect a target to be tracked in a target area; the memory is configured to store program code; and the processor is configured to call and execute the program code. When executed, the program code performs the following operations: according to effective identification information of a tracking area corresponding to a tracking target, determining one first search area corresponding to a first search algorithm in a first image frame, and searching in the first search area using the first search algorithm; and, according to the effective identification information of the tracking area corresponding to the tracking target, determining at least one second search area corresponding to a second search algorithm in a second image frame, and searching in the second search area using the second search algorithm.
Yet another embodiment of the present application provides a handheld camera, which includes the tracking-target-based search device described in the above embodiments and further includes a carrier, the carrier being fixedly connected to the video collector and configured to carry at least a part of the video collector.
With the search scheme provided by the embodiments of the present application, the first search area corresponding to the first search algorithm in an image frame is determined and searched, and the second search area corresponding to the second search algorithm in an image frame is determined and searched, both according to the effective identification information of the tracking area corresponding to the tracking target. By combining the first search algorithm with the second search algorithm, the search accuracy can be improved and the probability of losing the tracking target reduced.
Brief Description of the Drawings
Some specific embodiments of the present application will be described in detail below, by way of example and not limitation, with reference to the accompanying drawings. Identical reference numerals in the drawings denote identical or similar components or parts. Those skilled in the art should understand that these drawings are not necessarily drawn to scale. In the drawings:
Fig. 1 is a schematic flowchart of a tracking-target-based search method provided by an embodiment of the present application;
Fig. 2 is a flowchart of an embodiment of the first search algorithm in a tracking-target-based search method provided by an embodiment of the present application;
Fig. 3 is a flowchart of an embodiment of the second search algorithm in a tracking-target-based search method provided by an embodiment of the present application;
Fig. 4 is a schematic diagram of second search areas generated based on the second search algorithm in an embodiment of the present application;
Fig. 5 is a flowchart of an embodiment of the third search algorithm in a tracking-target-based search method provided by an embodiment of the present application;
Fig. 6 is a schematic block diagram of a tracking-target-based search device provided by an embodiment of the present application;
Figs. 7 to 9 are schematic structural diagrams of a handheld camera provided by an embodiment of the present application.
Detailed Description
The terminology used in the present invention is for the purpose of describing particular embodiments only and is not intended to limit the invention. The singular forms "a", "the", and "said" used in the present invention and the appended claims are also intended to include the plural forms, unless the context clearly indicates otherwise. It should also be understood that the term "and/or" as used herein refers to and encompasses any and all possible combinations of one or more of the associated listed items.
It should be understood that the terms "first", "second", and similar words used in the specification and claims of the present application do not denote any order, quantity, or importance, but are used only to distinguish different components. Likewise, words such as "a" or "an" do not denote a limitation of quantity but rather the presence of at least one.
In recent years, tracking-photography technology for handheld cameras has developed rapidly. However, during tracking photography with a handheld camera, changes in the shape, lighting conditions, scene, or position of the tracking target easily degrade the tracking effect, which in turn leads to loss of the tracking target.
In view of the shortcomings of the above technical solutions, the technical solutions provided by the embodiments of the present application improve upon existing tracking algorithms, and can thereby improve the accuracy of tracking search and the experience of tracking photography.
Specific implementations of the embodiments of the present invention are further described below with reference to the accompanying drawings.
Embodiment 1
Embodiment 1 of the present application provides a tracking-target-based search method. Fig. 1 is a schematic flowchart of the tracking-target-based search method provided by Embodiment 1 of the present application.
In this embodiment, the above tracking-target-based search method can be applied to various photographing devices or any electronic device with a photographing function. For example, it can be applied to portable photographing devices such as pocket cameras, action cameras, and handheld cameras, and can also be applied to electronic devices with a photographing function such as smartphones and tablets; the present invention places no limitation on this.
As shown in the figure, the tracking-target-based search method of this embodiment mainly includes the following steps:
Step S11: according to the effective identification information of the tracking area corresponding to the tracking target, determine one first search area corresponding to the first search algorithm in the first image frame, and search in the first search area using the first search algorithm.
Optionally, the effective identification information may be used to identify recognition features of the tracking area such as its position, shape, and size, but is not limited thereto; the effective identification information may also identify other recognition features of the tracking area, such as color or material.
Optionally, the tracking area is determined based on a valid box in the image frame, where the valid box is used to identify the position, shape, and size of the tracking target in the first image frame. The valid box is usually rectangular and changes correspondingly as the tracked object moves nearer or farther (for example, a valid box framing a face changes as the photographed face appears larger or smaller); when the position and size of the valid box change, the center position and size of the tracking area are adjusted accordingly.
In this embodiment, the center position of the tracking area coincides with the center point of the valid box, and the size of the tracking area is a predetermined multiple of the size of the valid box. For example, the side length of the tracking area is 4 times the side length of the valid box (i.e., 4 times the side length of the tracking target). It should be noted that the center position and size (i.e., side lengths) of the tracking area can also be adjusted according to actual requirements; the present invention places no limitation on this.
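As a minimal sketch of the valid-box-to-tracking-area relationship described above (the patent prescribes no code; the function and parameter names here are illustrative, and `scale` stands in for the "predetermined multiple", 4x in the text's example):

```python
def tracking_area_from_valid_box(cx, cy, w, h, scale=4.0):
    """Derive the tracking area from a valid box.

    The tracking area shares the valid box's center (cx, cy); each of its
    side lengths is a fixed multiple of the corresponding box side
    (4x in the example given in the text).
    """
    return cx, cy, w * scale, h * scale
```

When the valid box moves or resizes, calling this again with the new box yields the correspondingly adjusted tracking area.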
Optionally, the first search algorithm is used to determine one first search area centered on the current position of the tracking target, so as to search for and identify the main body of the tracking target.
Optionally, the center point of the first search area coincides with the center position of the tracking area, and the search range of the first search area may be the same as or different from the size of the tracking area, depending on the value of the adjustment parameter.
The specific implementation of the first search algorithm will be detailed later with reference to Fig. 2.
Preferably, the first search algorithm may be executed once for each of at least 12 consecutive first image frames (by measurement, this takes about 1 second in total), but this is not limiting; the number of first image frames may also be adjusted according to actual requirements.
Optionally, the search ranges of the first search areas generated by the first search algorithm in any two adjacent first image frames are unequal, so as to improve search accuracy.
Step S12: according to the effective identification information of the tracking area corresponding to the tracking target, determine at least one second search area corresponding to the second search algorithm in the second image frame, and search in the second search area using the second search algorithm.
The definitions of the effective identification information and the tracking area are the same as those in step S11 above and are not repeated here.
Optionally, the second search algorithm is used to determine at least one second search area in the region surrounding the tracking target, so as to search for and identify the surroundings of the tracking target.
Optionally, the search area of each second search area is the same as the area of the tracking area, and the center point of each second search area differs from the center position of the tracking area.
The specific implementation of the second search algorithm will be detailed later with reference to Fig. 3.
In one embodiment, the first image frame and the second image frame may be image frames with the same frame number (i.e., the same image frame in the image sequence). For example, the 3rd frame of the image sequence is both the first image frame and the second image frame.
In another embodiment, the first image frame and the second image frame may also be two image frames with different frame numbers in the same image sequence. For example, the 3rd frame of the image sequence is the first image frame and the 4th frame is the second image frame.
Optionally, the execution order of the first search algorithm (step S11) and the second search algorithm (step S12) can be adjusted according to actual requirements.
In one embodiment, after the first search algorithm has been used at least once to search in the first search area, the second search algorithm may be used at least once to search in the second search area; in this case, the first image frame and the second image frame have different frame numbers. Alternating the different search algorithms in this way can improve the processing efficiency of the computer.
For example, after multiple first image frames in the image sequence (e.g., the 1st to 12th first image frames) have been searched with multiple runs of the first search algorithm, multiple subsequent second image frames in the image sequence (e.g., the 13th to 20th second image frames) may be searched with multiple runs of the second search algorithm.
In another embodiment, while the first search algorithm is searching in the first search area, the second search algorithm may be used to search in the second search area; in this case, the first image frame and the second image frame have the same frame number. Searching the same image frame with both search algorithms simultaneously can improve search accuracy.
For example, the 3rd image frame in the image sequence may be searched using both the first search algorithm and the second search algorithm.
In yet another embodiment, each image frame in the image sequence may be searched once with either the first search algorithm or the second search algorithm. For example, the search algorithm may alternate every other frame; that is, the 2nd image frame of the image sequence is searched with the first search algorithm, and the 3rd image frame is searched with the second search algorithm.
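The per-frame alternation just described can be sketched as follows (Python; the frame-parity rule is only one of the several schedules the text allows, and all names are illustrative):

```python
def pick_search_algorithm(frame_idx):
    """Alternate search algorithms every other frame.

    Even-numbered frames use the first (concentric) search algorithm,
    odd-numbered frames the second (surrounding) search algorithm.
    """
    return "first" if frame_idx % 2 == 0 else "second"
```

A block schedule (e.g., frames 1-12 first algorithm, frames 13-20 second) or running both algorithms on the same frame would be equally valid instantiations of the schemes above.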
In summary, the embodiments of the present application search based on the effective identification information of the tracking area corresponding to the tracking target, using the first search algorithm in combination with the second search algorithm, which can improve search accuracy and reduce the probability of tracking loss.
Embodiment 2
Embodiment 2 of the present application provides a tracking-target-based search method. Referring to Fig. 2, this embodiment describes an exemplary processing flow for determining the first search area in the first image frame in step S11 shown in Fig. 1. As shown in the figure, the search method of this embodiment mainly includes the following:
Step S21: according to the effective identification information of the tracking area, determine the center position and size of the tracking area in the first image frame.
In this embodiment, the effective identification information of the tracking area is used to identify the center position (center point) and size of the tracking area, but is not limited thereto and may also identify other recognition features of the tracking area.
Step S22: according to the center position and size of the tracking area, determine the first search area corresponding to the first search algorithm in the first image frame.
Optionally, the tracking area is a rectangular area, each side length of the first search area is n times the corresponding side length of the tracking area, where n is an adjustment parameter, and the center point of the first search area coincides with the center position of the tracking area.
Specifically, in this embodiment, the first search area generated from the tracking area is also a rectangular area whose four side lengths correspond one-to-one to the four side lengths of the tracking area. For example, the left and right side lengths of the first search area are n times the left and right side lengths of the tracking area, the top and bottom side lengths of the first search area are n times the top and bottom side lengths of the tracking area, and the center point of the first search area is the center position of the tracking area.
Preferably, |n-1| is less than or equal to 0.3; that is, the adjustment parameter n enlarges or shrinks the size of the tracking area by no more than a factor of 0.3.
Optionally, the first search algorithm may be used to search multiple consecutive first image frames in the image sequence, with different adjustment-parameter values for any two adjacent first image frames.
Preferably, the first search algorithm may be used to search 12 consecutive first image frames in the image sequence (one search per frame).
Preferably, the difference between the adjustment parameters of two adjacent first image frames does not exceed 0.3; enlarging adjustment parameters (values greater than 1.0) and shrinking adjustment parameters (values less than 1.0) are arranged alternately, while unchanged adjustment parameters (values equal to 1.0) may be placed anywhere, for example interposed between adjacent enlarging and shrinking parameters.
In one embodiment, according to the 12 adjustment parameters in the adjustment-parameter group {1.0, 1.1, 1.0, 0.9, 1.0, 1.2, 1.0, 0.8, 1.0, 1.3, 1.0, 1.3}, the first search algorithm may be applied once to each of 12 consecutive first image frames in the image sequence (e.g., the 2nd to the 13th frame): the side lengths of the first search area generated in the 2nd frame are 1.0 times the corresponding side lengths of the tracking area, those generated in the 3rd frame are 1.1 times, those generated in the 4th frame are 1.0 times, those generated in the 5th frame are 0.9 times, and so on.
It should be noted that the number and ordering of the adjustment parameters in the adjustment-parameter group are not limited to the above embodiment and may be adjusted according to actual search requirements.
In summary, Embodiment 2 of the present application uses the first search algorithm to generate and search, in the first image frame, a first search area that has the same center point as the tracking area but a different size, so as to search for and identify the main body of the tracking target, thereby improving both search efficiency and search accuracy.
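The first-search-area construction above can be sketched as follows (a minimal Python illustration, not the patent's actual implementation; the adjustment-parameter group is the example given in the text, and the function name is hypothetical):

```python
# Example adjustment-parameter group from the text: |n - 1| <= 0.3 for each n,
# enlarging and shrinking values alternating, with 1.0 interposed between them.
ADJUSTMENTS = [1.0, 1.1, 1.0, 0.9, 1.0, 1.2, 1.0, 0.8, 1.0, 1.3, 1.0, 1.3]

def first_search_area(center, size, run_idx, adjustments=ADJUSTMENTS):
    """First search algorithm: a region concentric with the tracking area.

    Each side length is n times the tracking area's corresponding side,
    where n is the adjustment parameter for this run.
    """
    n = adjustments[run_idx % len(adjustments)]
    assert abs(n - 1.0) <= 0.3, "adjustment parameter out of range"
    (cx, cy), (w, h) = center, size
    return cx, cy, w * n, h * n
```

Because consecutive runs use different parameters, the search range differs between any two adjacent first image frames, as the text requires.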
Embodiment 3
Embodiment 3 of the present application provides a tracking-target-based search method. Referring to Fig. 3, this embodiment describes an exemplary processing flow for determining the second search area in the second image frame in step S12 shown in Fig. 1. As shown in the figure, the search method of this embodiment mainly includes the following:
Step S31: according to the effective identification information of the tracking area, determine the center position and area of the tracking area in the second image frame.
In this embodiment, the effective identification information of the tracking area is used to identify the center position (center point) and area of the tracking area, but is not limited thereto and may also identify other recognition features of the tracking area.
Step S32: according to the center position and area of the tracking area, determine the second search area corresponding to the second search algorithm in the second image frame.
Optionally, the tracking area is a rectangular area, and the search area of each second search area equals the area of the tracking area.
Optionally, according to the area of the tracking area, the side length of a square whose area equals the area of the tracking area may be determined; that is, the rectangular area of the tracking area is converted into the side length a of a square of the same area.
Preferably, the shortest distance between the center points of adjacent second search areas may be set to 0.3 times the square's side length (i.e., 0.3a), and the shortest distance between the center point of a second search area and the center position of the tracking area is at least 0.3 times the square's side length. This is not limiting; in general, it is preferable that the shortest distance between the center points of two adjacent second search areas not exceed 0.5 times the square's side length.
In one embodiment, the process of determining the second search areas corresponding to the second search algorithm in the second image frame includes: taking the center position of the tracking area as the origin of a rectangular coordinate system, the center point of each second search area is located at position (±0.3ma, ±0.3m'a) in that coordinate system, where m and m' are integers and a is the side length of the square whose area equals the area of the tracking area.
For example, from {(-0.3a, -0.3a), (0.0a, -0.3a), (0.3a, -0.3a), (0.3a, 0.0a), (0.3a, 0.3a), (0.3a, 0.0a), (0.3a, -0.3a), (0.0a, -0.3a), (-0.6a, -0.6a), ...}, the center-point positions of the second search areas in the second image frame can be obtained (see the small dots shown in Fig. 4), and at least one second search area corresponding to the second search algorithm in the second image frame is determined from each center-point position together with the area of the tracking area.
As can be seen from Fig. 4, the shortest distance between the center points of the second search areas (the small dots in Fig. 4) is 0.3a, and the shortest distance between each such center point and the center position of the tracking area (the center point of the valid box) is at least 0.3a.
In summary, Embodiment 3 of the present application uses the second search algorithm to generate, in the second image frame, multiple second search areas with different positions but the same search area, so as to search for and identify the surroundings of the tracking target, thereby improving the accuracy of the search algorithm.
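The center-point grid described above can be sketched as follows (a hedged Python illustration, assuming for simplicity one ring of offsets; the patent's Fig. 4 pattern may include further rings such as (-0.6a, -0.6a), and all names are illustrative):

```python
import math

def second_search_centers(center, tracking_area, m_range=1):
    """Second search algorithm: candidate centers around the tracking area.

    a is the side of a square whose area equals the tracking area's area.
    Candidate centers sit at (0.3*m*a, 0.3*m'*a) offsets from the tracking
    center for integer m, m', excluding the origin so that every center is
    at least 0.3*a away from the tracking area's center.
    """
    a = math.sqrt(tracking_area)
    cx, cy = center
    step = 0.3 * a
    return [
        (cx + step * m, cy + step * mp)
        for m in range(-m_range, m_range + 1)
        for mp in range(-m_range, m_range + 1)
        if (m, mp) != (0, 0)
    ]
```

Each returned point would then be paired with a region whose search area equals the tracking area's area, per step S32.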
Embodiment 4
Embodiment 4 of the present application provides a tracking-target-based search method. Referring to Fig. 5, the tracking-target-based search method of this embodiment mainly includes the following:
Step S51: according to the effective identification information of the tracking area corresponding to the tracking target, determine the size of the tracking area in the third image frame.
Optionally, the effective identification information of the tracking area is used to identify the area of the tracking area, but is not limited thereto and may also identify other recognition features of the tracking area.
Step S52: according to the size of the tracking area, determine the size of the third search area corresponding to the third search algorithm in the third image frame, and randomly determine the center point of the third search area in the third image frame.
Optionally, the size of the third search area is the same as the size of the tracking area.
Optionally, the center point of the third search area may be at any position in the whole third image frame, and the number of third search areas is at least one.
Step S53: search in the third search area using the third search algorithm.
In summary, the third search algorithm is used to randomly determine one or more third search areas at arbitrary positions in the third image frame and to perform a global random search, thereby searching for and identifying one of the main body, the surroundings, and the background of the tracking target.
In one embodiment, the first, second, and third search algorithms may be executed alternately; correspondingly, the frame numbers of the first, second, and third image frames are all different.
In another embodiment, any two of the first, second, and third search algorithms are performed simultaneously; correspondingly, the frame numbers of the corresponding two of the first, second, and third image frames are the same.
In yet another embodiment, the first, second, and third search algorithms are all performed simultaneously; correspondingly, the frame numbers of the first, second, and third image frames are all the same.
Optionally, the above step S52 may also be: according to the size of the third image frame, determine the size of the third search area corresponding to the third search algorithm in the third image frame, and determine the center position of the third image frame as the center point of the third search area, so as to determine one third search area corresponding to the third search algorithm in the third image frame (that is, the third search area is the entire third image frame); in step S53 the third search algorithm then performs a global search over the entire third image frame.
In summary, Embodiment 4 of the present application uses the third search algorithm to randomly generate and search third search areas at different positions in the third image frame; combining the third search algorithm with the aforementioned first and second search algorithms further improves tracking accuracy and reduces the probability of losing the target.
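A minimal sketch of the random placement in step S52 (Python; function and parameter names are illustrative, and clamping the center so the region stays fully inside the frame is a detail the text leaves open):

```python
import random

def third_search_area(frame_size, tracking_size, rng=None):
    """Third search algorithm: a randomly placed region of tracking-area size.

    The center point may fall anywhere in the third image frame; the
    region's size equals the tracking area's size.
    """
    rng = rng or random.Random()
    frame_w, frame_h = frame_size
    w, h = tracking_size
    cx = rng.uniform(0.0, frame_w)   # random center anywhere in the frame
    cy = rng.uniform(0.0, frame_h)
    return cx, cy, w, h
```

Calling this several times per frame yields the "one or more" third search areas for the global random search; the full-frame variant of step S52 instead fixes the center at the frame's center and the size at the frame's size.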
Embodiment 5
Fig. 6 shows the main architecture of the tracking-target-based search device of Embodiment 5 of the present invention.
As shown in the figure, the search device provided by this embodiment of the present invention mainly includes a memory 602, a processor 604, and a video collector 606.
The video collector 606 is configured to collect the tracking target in the target area, the memory 602 is configured to store program code, and the processor 604 is configured to call and execute the program code.
In this embodiment, when the program code is executed by the processor, it can be used to perform the following operations:
according to the effective identification information of the tracking area corresponding to the tracking target, determining one first search area corresponding to the first search algorithm in the first image frame, and searching in the first search area using the first search algorithm; and, according to the effective identification information of the tracking area corresponding to the tracking target, determining at least one second search area corresponding to the second search algorithm in the second image frame, and searching in the second search area using the second search algorithm.
In an optional embodiment, the program code is further used to perform the following operations:
according to the effective identification information of the tracking area, determining the center position and size of the tracking area in the first image frame, and, according to the center position and size of the tracking area, determining the first search area corresponding to the first search algorithm in the first image frame.
In an optional embodiment, the tracking area is a rectangular area, each side length of the first search area is n times the corresponding side length of the tracking area, where n is an adjustment parameter and |n-1| is less than or equal to 0.3, and the center point of the first search area coincides with the center position of the tracking area.
In an optional embodiment, the adjustment-parameter values corresponding to adjacent image frames are different.
In an optional embodiment, the program code is further used to perform the following operations:
according to the effective identification information of the tracking area, determining the center position and area of the tracking area in the second image frame, and, according to the center position and area of the tracking area, determining the second search area corresponding to the second search algorithm in the second image frame.
In an optional embodiment, the program code is further used to perform the following operations:
according to the area of the tracking area, determining the side length of a square whose area equals the area of the tracking area, and setting the shortest distance between the center points of the second search areas to 0.3 times the square's side length, the shortest distance between the center point of a second search area and the center position of the tracking area being at least 0.3 times the square's side length, and the search area of each second search area being the area of the tracking area.
In an optional embodiment, after the first search algorithm has been used at least once to search in the first search area, the second search algorithm is used at least once to search in the second search area, where the frame numbers of the first image frame and the second image frame are different.
In an optional embodiment, while the first search algorithm is searching in the first search area, the second search algorithm is used to search in the second search area, where the frame numbers of the first image frame and the second image frame are the same.
In an optional embodiment, the program code is further used to perform the following operations:
according to the effective identification information of the tracking area corresponding to the tracking target, determining the size of the tracking area in a third image frame; according to the size of the tracking area, determining the size of the third search area corresponding to the third search algorithm in the third image frame, and randomly determining the center point of the third search area in the third image frame; and searching in the third search area using the third search algorithm.
In an optional embodiment, the program code is further used to perform the following operation: for each of 12 consecutive first image frames, searching using the first search algorithm.
Embodiment 6
Embodiment 6 of the present invention provides a handheld camera, which includes the tracking-target-based search device described in Embodiment 5 above and further includes a carrier fixedly connected to the video collector for carrying at least a part of the video collector.
Optionally, the handheld camera is a handheld gimbal camera.
Optionally, the carrier includes at least a handheld gimbal, including but not limited to a handheld three-axis gimbal.
Optionally, the video collector includes but is not limited to a camera for a handheld three-axis gimbal.
Taking a handheld gimbal camera as an example of the handheld camera, the basic structure of the handheld gimbal camera is briefly introduced below.
Referring to Figs. 7 to 9, the handheld gimbal camera of this embodiment of the present invention (shown in Fig. 7) includes a handle 11 and a photographing device 12 mounted on the handle 11. In this embodiment, the photographing device 12 may include a three-axis gimbal camera; in other embodiments it includes a gimbal camera with two axes or more than three axes.
The handle 11 is provided with a display screen 13 for displaying the content shot by the photographing device 12. The present invention does not limit the type of the display screen 13.
By arranging the display screen 13 on the handle 11 of the handheld gimbal camera, the display screen can show the content shot by the photographing device 12, so that the user can quickly browse the pictures or videos taken by the photographing device 12 through the display screen 13. This improves the interactivity and enjoyment between the handheld gimbal camera and the user and meets the user's diverse needs.
In one embodiment, the handle 11 is further provided with an operating function part for controlling the photographing device 12. By operating the operating function part, the operation of the photographing device 12 can be controlled, for example, switching the photographing device 12 on and off, controlling its shooting, and controlling the attitude changes of its gimbal part, so that the user can operate the photographing device 12 quickly. The operating function part may take the form of buttons, knobs, or a touch screen.
In one embodiment, the operating function part includes a shooting button 14 for controlling the shooting of the photographing device 12, a power/function button 15 for switching the photographing device 12 on and off and for other functions, and a universal key 16 for controlling the movement of the gimbal. Of course, the operating function part may also include other control buttons, such as image-storage buttons and image-playback control buttons, which can be set according to actual requirements.
In one embodiment, the operating function part and the display screen 13 are arranged on the same face of the handle 11; in the figures both are arranged on the front of the handle 11, which is ergonomic and makes the overall appearance and layout of the handheld gimbal camera more rational and attractive.
Further, the side of the handle 11 is provided with a function operation key A, which enables the user to quickly and intelligently produce a video with one key. When the camera is on, pressing the orange side key on the right of the body activates the function, whereupon a video clip is shot automatically at intervals, for a total of N clips (N ≥ 2). After connecting a mobile device such as a mobile phone and selecting the "one-key video" function, the system intelligently screens the captured clips, matches a suitable template, and quickly generates a polished work.
In an optional implementation, the handle 11 is further provided with a card slot 17 for inserting a storage element. In this embodiment, the card slot 17 is provided on the side of the handle 11 adjacent to the display screen 13; by inserting a memory card into the card slot 17, the images shot by the photographing device 12 can be stored on the memory card. Moreover, arranging the card slot 17 on the side does not affect the use of other functions, and the user experience is better.
In one embodiment, a power-supply battery for powering the handle 11 and the photographing device 12 may be provided inside the handle 11. The power-supply battery may be a lithium battery, which has large capacity and small volume, to achieve a miniaturized design of the handheld gimbal camera.
In one embodiment, the handle 11 is further provided with a charging interface/USB interface 18. In this embodiment, the charging interface/USB interface 18 is provided at the bottom of the handle 11 to facilitate connection to an external power source or storage device, so as to charge the power-supply battery or transfer data.
In one embodiment, the handle 11 is further provided with a sound pickup hole 19 for receiving audio signals, and the sound pickup hole 19 communicates with a microphone inside. There may be one sound pickup hole 19 or several. An indicator light 20 for displaying status is also included. The user can achieve audio interaction with the display screen 13 through the sound pickup hole 19. In addition, the indicator light 20 can serve as a reminder: through it the user can learn the battery level of the handheld gimbal camera and the function currently being executed. Furthermore, both the sound pickup hole 19 and the indicator light 20 may be arranged on the front of the handle 11, which better suits the user's habits and operating convenience.
In one embodiment, the photographing device 12 includes a gimbal bracket and an imager mounted on the gimbal bracket. The imager may be a camera, or an imaging element composed of a lens and an image sensor (such as a CMOS or CCD), selected as needed. The imager may be integrated on the gimbal bracket, making the photographing device 12 a gimbal camera; it may also be an external photographing device that is detachably connected or clamped onto the gimbal bracket.
In one embodiment, the gimbal bracket is a three-axis gimbal bracket, and the photographing device 12 is a three-axis gimbal camera. The three-axis gimbal bracket includes a yaw axis assembly 22, a roll axis assembly 23 movably connected to the yaw axis assembly 22, and a pitch axis assembly 24 movably connected to the roll axis assembly 23; the imager is mounted on the pitch axis assembly 24. The yaw axis assembly 22 drives the photographing device 12 to rotate in the yaw direction. Of course, in other examples the gimbal bracket may also be a two-axis gimbal, a four-axis gimbal, and so on, selected as needed.
In one embodiment, a mounting part is further provided at one end of the connecting arm connected to the roll axis assembly, while the yaw axis assembly may be arranged in the handle; the yaw axis assembly drives the photographing device 12 to rotate together in the yaw direction.
In an optional implementation, the handle 11 is provided with an adapter 26 for coupling with a mobile device 2 (such as a mobile phone), the adapter 26 being detachably connected to the handle 11. The adapter 26 protrudes from the side of the handle 11 for connecting to the mobile device 2; after the adapter 26 is connected to the mobile device 2, the handheld gimbal camera docks with the adapter 26 and is supported at the end of the mobile device 2.
Providing the handle 11 with the adapter 26 for connecting to the mobile device 2 connects the handle 11 and the mobile device 2 to each other: the handle 11 can serve as a base for the mobile device 2, and the user can pick up and operate the handheld gimbal camera by holding the other end of the mobile device 2. The connection is convenient and quick, and the product is attractive. In addition, after the handle 11 is coupled to the mobile device 2 through the adapter 26, a communication connection between the handheld gimbal camera and the mobile device 2 can be established, and data can be transferred between the photographing device 12 and the mobile device 2.
In one embodiment, the adapter 26 is detachably connected to the handle 11; that is, the adapter 26 and the handle 11 can be mechanically connected or removed. Further, the adapter 26 is provided with an electrical contact part, and the handle 11 is provided with an electrical contact mating part that mates with the electrical contact part.
Thus, when the handheld gimbal camera does not need to be connected to the mobile device 2, the adapter 26 can be removed from the handle 11. When the handheld gimbal camera needs to be connected to the mobile device 2, the adapter 26 is mounted on the handle 11 again, completing the mechanical connection between the adapter 26 and the handle 11, while the connection between the electrical contact part and the electrical contact mating part ensures the electrical connection between the two, so that data can be transferred between the photographing device 12 and the mobile device 2 through the adapter 26.
In one embodiment, the side of the handle 11 is provided with a receiving groove 27, and the adapter 26 is slidably clipped into the receiving groove 27. After the adapter 26 is mounted in the receiving groove 27, part of the adapter 26 protrudes from the receiving groove 27, and the protruding part is used to connect to the mobile device 2.
In one embodiment, as shown in Fig. 9, when the adapter 26 is inserted into the receiving groove 27 the other way around, the adapter 26 sits flush with the receiving groove 27 and is thus stowed in the receiving groove 27 of the handle 11.
Therefore, when the handheld gimbal camera needs to be connected to the mobile device 2, the adapter 26 can be mounted in the receiving groove 27 so that it protrudes from the groove, allowing the mobile device 2 and the handle 11 to be connected to each other.
After the mobile device 2 has been used, or when it needs to be unplugged, the adapter 26 can be taken out of the receiving groove 27 of the handle 11 and then inserted into the receiving groove 27 in the reverse direction, so that the adapter 26 is stowed in the handle 11. With the adapter 26 flush with the receiving groove 27 of the handle 11 once stowed, the surface of the handle 11 remains flat, and stowing the adapter 26 in the handle 11 makes the device easier to carry.
In one embodiment, the receiving groove 27 is opened semi-openly on one side surface of the handle 11, which makes it easier to slide and clip the adapter 26 into the receiving groove 27. Of course, in other examples, the adapter 26 may also be detachably connected to the receiving groove 27 of the handle 11 by a snap connection, a plug connection, or the like.
In one embodiment, the receiving groove 27 is arranged on the side of the handle 11, and when the adapter function is not in use, the receiving groove 27 is clipped over and covered by a cover plate 28, which is convenient for the user to operate and does not affect the overall appearance of the front and sides of the handle.
In one embodiment, the electrical contact part and the electrical contact mating part may be electrically connected by touching contacts. For example, the electrical contact part may be a retractable probe, an electrical plug interface, or an electrical contact. Of course, in other examples, the electrical contact part and the electrical contact mating part may also be electrically connected directly in a surface-to-surface contact manner.
A1、一种基于跟踪目标的搜索方法,其特征在于,所述方法包括:
根据跟踪目标对应的跟踪区域的有效标识信息,确定第一搜索算法在第一图像帧中对应的一个第一搜索区域,并使用所述第一搜索算法在所述第一搜索区域进行搜索;以及根据所述跟踪目标对应的跟踪区域的有效标识信息,确定第二搜索算法在第二图像帧中对应的至少一个第二搜索区域,并使用所述第二搜索算法在所述第二搜索区域进行搜索。
A2、根据A1所述的搜索方法,其特征在于,所述根据跟踪目标对应的跟踪区域的有效标识信息,确定第一搜索算法在第一图像帧中对应的一个第一搜索区域包括:
根据所述跟踪区域的所述有效标识信息,确定在所述第一图像帧中所述跟踪区域的中心位置和区域尺寸;以及
根据所述跟踪区域的中心位置和区域尺寸,确定在所述第一图像帧中所述第一搜索算法对应的所述第一搜索区域。
A3、根据A2所述的搜索方法,其特征在于,所述跟踪区域为矩形区域,所述第一搜索区域的边长为所述跟踪区域对应边长的n倍,其中,n为调整参数,|n-1|小于或等于0.3;所述第一搜索区域的中心点与所述跟踪区域的中心位置重合。
A4、根据A3所述的搜索方法,其特征在于,相邻的所述第一图像帧对应的所述调整参数的取值不同。
A5、根据A1所述的搜索方法,其特征在于,所述根据所述跟踪目标对应的跟踪区域的有效标识信息,确定第二搜索算法在第二图像帧中对应的至少一个第二搜索区域包括:
根据所述跟踪区域的所述有效标识信息,确定在所述第二图像帧中所述跟踪区域的中心位置和区域面积;以及
根据所述跟踪区域的中心位置和区域面积,确定在所述第二图像帧中所 述第二搜索算法对应的所述第二搜索区域。
A6、根据A5所述的搜索方法,其特征在于,所述根据所述跟踪区域的中心位置和区域面积,确定在所述第二图像帧中所述第二搜索算法对应的所述第二搜索区域包括:
根据所述跟踪区域的区域面积,确定面积与所述跟踪区域的区域面积相同的正方形的边长,并将所述第二搜索区域的中心点之间的最短距离设置为所述正方形的边长的0.3倍,所述第二搜索区域的中心点与所述跟踪区域的中心位置之间的最短距离至少为所述正方形边长的0.3倍,所述第二搜索区域的搜索面积为所述跟踪区域的区域面积。
A7、根据A1所述的搜索方法,其特征在于,
至少一次使用所述第一搜索算法在所述第一搜索区域进行搜索之后,至少一次使用所述第二搜索算法在所述第二搜索区域进行搜索,其中,所述第一图像帧与所述第二图像帧各自的帧号为不同。
A8、根据A1所述的搜索方法,其特征在于,
在使用所述第一搜索算法在所述第一搜索区域进行搜索的过程中,使用所述第二搜索算法在所述第二搜索区域进行搜索,其中,所述第一图像帧与所述第二图像帧各自的帧号为相同。
A9、根据A1所述的搜索方法,其特征在于,所述方法还包括:
根据所述跟踪目标对应的跟踪区域的有效标识信息,确定在第三图像帧中所述跟踪区域的区域尺寸;
根据所述跟踪区域的区域尺寸,确定在所述第三图像帧中第三搜索算法对应的第三搜索区域的区域尺寸,并随机在所述第三图像帧中确定所述第三搜索区域的中心点;以及
使用所述第三搜索算法在所述第三搜索区域进行搜索。
A10. The search method according to A1, characterized in that
the first search algorithm is used to search each of 12 consecutive first image frames.
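The frame-wise scheduling of A7 and A10 could be sketched as below. Running the first (local) algorithm on a burst of 12 consecutive frames follows A10; switching to the second (scattered) algorithm afterwards, and alternating bursts from then on, are assumptions made for illustration:

```python
def schedule(frame_numbers, burst=12):
    """Assign a search algorithm to each frame number.

    Runs the first (local) search on each of `burst` consecutive frames,
    then the second (scattered) search on the next `burst` frames, and so
    on. The burst length 12 follows A10; alternating bursts is assumed.
    """
    plan = []
    for i, f in enumerate(frame_numbers):
        algo = "first" if (i // burst) % 2 == 0 else "second"
        plan.append((f, algo))
    return plan
```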
A11. A search device based on a tracking target, characterized by comprising a memory, a processor, and a video collector, wherein the video collector is configured to collect a target to be tracked in a target area; the memory is configured to store program code; and the processor is configured to call and execute the program code, and when the program code is executed, to perform the following operations:
determining, according to valid identification information of a tracking area corresponding to the tracking target, one first search area corresponding to a first search algorithm in a first image frame, and searching in the first search area using the first search algorithm; and
determining, according to the valid identification information of the tracking area corresponding to the tracking target, at least one second search area corresponding to a second search algorithm in a second image frame, and searching in the second search area using the second search algorithm.
A12. The search device according to A11, characterized in that the program code is further configured to perform the following operations:
determining, according to the valid identification information of the tracking area, a center position and an area size of the tracking area in the first image frame; and
determining, according to the center position and the area size of the tracking area, the first search area corresponding to the first search algorithm in the first image frame.
A13. The search device according to A11, characterized in that the tracking area is a rectangular area, and a side length of the first search area is n times the corresponding side length of the tracking area, where n is an adjustment parameter and |n-1| is less than or equal to 0.3; the center point of the first search area coincides with the center position of the tracking area.
A14. The search device according to A13, characterized in that adjacent first image frames correspond to different values of the adjustment parameter.
A15. The search device according to A11, characterized in that the program code is further configured to perform the following operations:
determining, according to the valid identification information of the tracking area, a center position and an area of the tracking area in the second image frame; and
determining, according to the center position and the area of the tracking area, the second search area corresponding to the second search algorithm in the second image frame.
A16. The search device according to A15, characterized in that the program code is further configured to perform the following operations:
determining, according to the area of the tracking area, the side length of a square whose area is equal to the area of the tracking area, and setting the shortest distance between the center points of the second search areas to 0.3 times the side length of the square, wherein the shortest distance between the center point of each second search area and the center position of the tracking area is at least 0.3 times the side length of the square, and the search area of each second search area is equal to the area of the tracking area.
A17. The search device according to A11, characterized in that
after the first search algorithm is used to search in the first search area at least once, the second search algorithm is used to search in the second search area at least once, wherein the first image frame and the second image frame have different frame numbers.
A18. The search device according to A11, characterized in that
while the first search algorithm is being used to search in the first search area, the second search algorithm is used to search in the second search area, wherein the first image frame and the second image frame have the same frame number.
A19. The search device according to A11, characterized in that the program code is further configured to perform the following operations:
determining, according to the valid identification information of the tracking area corresponding to the tracking target, an area size of the tracking area in a third image frame;
determining, according to the area size of the tracking area, an area size of a third search area corresponding to a third search algorithm in the third image frame, and randomly determining a center point of the third search area in the third image frame; and
searching in the third search area using the third search algorithm.
A20. The search device according to A11, characterized in that the program code is further configured to perform the following operation:
searching with the first search algorithm for each of 12 consecutive first image frames.
A21. A handheld camera, characterized by comprising the tracking-target-based search device according to any one of A11 to A20, and further comprising: a carrier fixedly connected to the video collector and configured to carry at least a part of the video collector.
A22. The handheld camera according to A21, characterized in that the carrier comprises a handheld gimbal.
A23. The handheld camera according to A22, characterized in that the carrier is a handheld three-axis gimbal.
A24. The handheld camera according to A21, characterized in that the video collector comprises a camera for a handheld three-axis gimbal.
Thus, particular embodiments of the subject matter have been described. Other embodiments are within the scope of the appended claims. In some cases, the actions recited in the claims may be performed in a different order and still achieve desirable results. In addition, the processes depicted in the accompanying figures do not necessarily require the particular order shown, or sequential order, to achieve desirable results. In certain implementations, multitasking and parallel processing may be advantageous.
In the 1990s, an improvement to a technology could clearly be distinguished as an improvement in hardware (for example, an improvement to circuit structures such as diodes, transistors, and switches) or an improvement in software (an improvement to a method flow). With the development of technology, however, improvements to many of today's method flows can be regarded as direct improvements to hardware circuit structures. Designers almost always obtain the corresponding hardware circuit structure by programming the improved method flow into a hardware circuit. Therefore, it cannot be said that an improvement to a method flow cannot be implemented with a hardware entity module. For example, a programmable logic device (PLD), such as a field programmable gate array (FPGA), is an integrated circuit whose logic function is determined by the user's programming of the device. Designers themselves program a digital system to be "integrated" onto a single PLD, without asking a chip manufacturer to design and fabricate a dedicated integrated circuit chip. Moreover, instead of manually fabricating integrated circuit chips, this programming is nowadays mostly implemented with "logic compiler" software, which is similar to the software compiler used in program development; the original code to be compiled must be written in a specific programming language, called a hardware description language (HDL). There is not just one HDL but many, such as ABEL (Advanced Boolean Expression Language), AHDL (Altera Hardware Description Language), Confluence, CUPL (Cornell University Programming Language), HDCal, JHDL (Java Hardware Description Language), Lava, Lola, MyHDL, PALASM, and RHDL (Ruby Hardware Description Language); at present, VHDL (Very-High-Speed Integrated Circuit Hardware Description Language) and Verilog are the most commonly used. Those skilled in the art will also appreciate that a hardware circuit implementing a logical method flow can easily be obtained simply by lightly programming the method flow in one of the above hardware description languages and programming it into an integrated circuit.
The controller may be implemented in any suitable manner. For example, the controller may take the form of a microprocessor or processor together with a computer-readable medium storing computer-readable program code (such as software or firmware) executable by the (micro)processor, logic gates, switches, an application-specific integrated circuit (ASIC), a programmable logic controller, or an embedded microcontroller. Examples of controllers include, but are not limited to, the following microcontrollers: ARC 625D, Atmel AT91SAM, Microchip PIC18F26K20, and Silicon Labs C8051F320. A memory controller may also be implemented as part of the control logic of a memory. Those skilled in the art also know that, in addition to implementing the controller purely as computer-readable program code, it is entirely possible, by logically programming the method steps, to cause the controller to implement the same functions in the form of logic gates, switches, application-specific integrated circuits, programmable logic controllers, embedded microcontrollers, and the like. Such a controller may therefore be regarded as a hardware component, and the means included within it for implementing various functions may also be regarded as structures within the hardware component. Or, even, the means for implementing various functions may be regarded both as software modules implementing a method and as structures within the hardware component.
The systems, apparatuses, modules, or units set forth in the above embodiments may specifically be implemented by a computer chip or entity, or by a product having a certain function. A typical implementation device is a computer. Specifically, the computer may be, for example, a personal computer, a laptop computer, a cellular phone, a camera phone, a smart phone, a personal digital assistant, a media player, a navigation device, an email device, a game console, a tablet computer, a wearable device, or a combination of any of these devices.
The present application is described with reference to flowcharts and/or block diagrams of methods, devices (systems), and computer program products according to embodiments of the present application. It should be understood that each flow and/or block in the flowcharts and/or block diagrams, and combinations of flows and/or blocks in the flowcharts and/or block diagrams, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general-purpose computer, a special-purpose computer, an embedded processor, or another programmable data processing device to produce a machine, such that the instructions executed by the processor of the computer or other programmable data processing device produce means for implementing the functions specified in one or more flows of the flowcharts and/or one or more blocks of the block diagrams.
These computer program instructions may also be stored in a computer-readable memory capable of directing a computer or other programmable data processing device to work in a particular manner, such that the instructions stored in the computer-readable memory produce an article of manufacture including instruction means, which implement the functions specified in one or more flows of the flowcharts and/or one or more blocks of the block diagrams.
These computer program instructions may also be loaded onto a computer or other programmable data processing device, such that a series of operational steps are performed on the computer or other programmable device to produce computer-implemented processing, so that the instructions executed on the computer or other programmable device provide steps for implementing the functions specified in one or more flows of the flowcharts and/or one or more blocks of the block diagrams.
It should also be noted that the terms "comprise", "include", or any other variation thereof are intended to cover a non-exclusive inclusion, such that a process, method, article, or device that comprises a list of elements includes not only those elements but also other elements not expressly listed, or elements inherent to such a process, method, article, or device. Without further limitation, an element defined by the phrase "comprising a ..." does not exclude the presence of additional identical elements in the process, method, article, or device that comprises the element.
Those skilled in the art should understand that embodiments of the present application may be provided as a method, a system, or a computer program product. Therefore, the present application may take the form of an entirely hardware embodiment, an entirely software embodiment, or an embodiment combining software and hardware aspects. Moreover, the present application may take the form of a computer program product implemented on one or more computer-usable storage media (including, but not limited to, disk storage, CD-ROM, optical storage, and the like) containing computer-usable program code.
The present application may be described in the general context of computer-executable instructions executed by a computer, such as program modules. Generally, program modules include routines, programs, objects, components, data structures, and the like that perform particular tasks or implement particular abstract data types. The present application may also be practiced in distributed computing environments in which tasks are performed by remote processing devices connected through a communication network. In a distributed computing environment, program modules may be located in both local and remote computer storage media, including storage devices.
The embodiments in this specification are described in a progressive manner; for identical or similar parts between the embodiments, reference may be made to one another, and each embodiment focuses on its differences from the other embodiments. In particular, since the system embodiments are substantially similar to the method embodiments, they are described relatively briefly, and for relevant parts, reference may be made to the description of the method embodiments.
The above descriptions are merely embodiments of the present application and are not intended to limit the present application. Various modifications and changes may be made to the present application by those skilled in the art. Any modification, equivalent replacement, improvement, and the like made within the spirit and principles of the present application shall be included within the scope of the claims of the present application.

Claims (10)

  1. A search method based on a tracking target, characterized in that the method comprises:
    determining, according to valid identification information of a tracking area corresponding to the tracking target, one first search area corresponding to a first search algorithm in a first image frame, and searching in the first search area using the first search algorithm; and
    determining, according to the valid identification information of the tracking area corresponding to the tracking target, at least one second search area corresponding to a second search algorithm in a second image frame, and searching in the second search area using the second search algorithm.
  2. The search method according to claim 1, characterized in that the determining, according to the valid identification information of the tracking area corresponding to the tracking target, one first search area corresponding to the first search algorithm in the first image frame comprises:
    determining, according to the valid identification information of the tracking area, a center position and an area size of the tracking area in the first image frame; and
    determining, according to the center position and the area size of the tracking area, the first search area corresponding to the first search algorithm in the first image frame.
  3. The search method according to claim 2, characterized in that the tracking area is a rectangular area, and a side length of the first search area is n times the corresponding side length of the tracking area, where n is an adjustment parameter and |n-1| is less than or equal to 0.3; the center point of the first search area coincides with the center position of the tracking area.
  4. The search method according to claim 3, characterized in that adjacent first image frames correspond to different values of the adjustment parameter.
  5. The search method according to claim 1, characterized in that the determining, according to the valid identification information of the tracking area corresponding to the tracking target, at least one second search area corresponding to the second search algorithm in the second image frame comprises:
    determining, according to the valid identification information of the tracking area, a center position and an area of the tracking area in the second image frame; and
    determining, according to the center position and the area of the tracking area, the second search area corresponding to the second search algorithm in the second image frame.
  6. The search method according to claim 5, characterized in that the determining, according to the center position and the area of the tracking area, the second search area corresponding to the second search algorithm in the second image frame comprises:
    determining, according to the area of the tracking area, the side length of a square whose area is equal to the area of the tracking area, and setting the shortest distance between the center points of the second search areas to 0.3 times the side length of the square, wherein the shortest distance between the center point of each second search area and the center position of the tracking area is at least 0.3 times the side length of the square, and the search area of each second search area is equal to the area of the tracking area.
  7. The search method according to claim 1, characterized in that
    after the first search algorithm is used to search in the first search area at least once, the second search algorithm is used to search in the second search area at least once, wherein the first image frame and the second image frame have different frame numbers.
  8. The search method according to claim 1, characterized in that
    while the first search algorithm is being used to search in the first search area, the second search algorithm is used to search in the second search area, wherein the first image frame and the second image frame have the same frame number.
  9. The search method according to claim 1, characterized in that the method further comprises:
    determining, according to the valid identification information of the tracking area corresponding to the tracking target, an area size of the tracking area in a third image frame;
    determining, according to the area size of the tracking area, an area size of a third search area corresponding to a third search algorithm in the third image frame, and randomly determining a center point of the third search area in the third image frame; and
    searching in the third search area using the third search algorithm.
  10. The search method according to claim 1, characterized in that
    the first search algorithm is used to search each of 12 consecutive first image frames.
PCT/CN2020/099835 2020-04-15 2020-07-02 Search method and device based on tracking target, and handheld camera thereof WO2021208258A1

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN202010296054.2A CN111563913B 2020-04-15 2020-04-15 Search method and device based on tracking target, and handheld camera thereof
CN202010296054.2 2020-04-15




