CN115225815B - Intelligent target tracking shooting method, server, shooting system, equipment and medium


Info

Publication number
CN115225815B
Authority
CN
China
Prior art keywords
tracking
target set
target
information
tracking target
Prior art date
Legal status
Active
Application number
CN202210699461.7A
Other languages
Chinese (zh)
Other versions
CN115225815A (en)
Inventor
张巍
庞博
陈家鹏
付震
尚阳星
Current Assignee
Southwest University of Science and Technology
Original Assignee
Southwest University of Science and Technology
Priority date
Filing date
Publication date
Application filed by Southwest University of Science and Technology
Priority to CN202210699461.7A
Publication of CN115225815A
Application granted
Publication of CN115225815B
Legal status: Active


Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06V: IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00: Arrangements for image or video recognition or understanding
    • G06V10/70: Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/74: Image or video pattern matching; Proximity measures in feature spaces
    • G06V10/75: Organisation of the matching processes, e.g. simultaneous or sequential comparisons of image or video features; Coarse-fine approaches, e.g. multi-scale approaches; using context analysis; Selection of dictionaries
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06V: IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00: Scenes; Scene-specific elements
    • G06V20/40: Scenes; Scene-specific elements in video content
    • G06V20/41: Higher-level, semantic clustering, classification or understanding of video scenes, e.g. detection, labelling or Markovian modelling of sport events or news items

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Software Systems (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Multimedia (AREA)
  • Health & Medical Sciences (AREA)
  • Computational Linguistics (AREA)
  • Artificial Intelligence (AREA)
  • Computing Systems (AREA)
  • Databases & Information Systems (AREA)
  • Evolutionary Computation (AREA)
  • General Health & Medical Sciences (AREA)
  • Medical Informatics (AREA)
  • Studio Devices (AREA)
  • Image Analysis (AREA)
  • Closed-Circuit Television Systems (AREA)

Abstract

The application discloses an intelligent target tracking shooting method, a server, a shooting system, equipment and a medium. The method comprises the following steps: receiving a video data stream of a desired tracking target sent by a pan-tilt camera device; performing tracking identification on each video frame in the video data stream according to a preset first tracking algorithm to obtain a first tracking target set; performing tracking identification on each video frame in the video data stream according to a preset second tracking algorithm to obtain a second tracking target set; obtaining a tracking result target set from the first tracking target set and the second tracking target set; and deriving control information from the position information of the tracking result targets within the video frames and sending the control information to the pan-tilt camera device, so that the desired tracking target is tracked and shot. With this intelligent target tracking shooting method, a moving target can be tracked automatically, so that the pan-tilt camera device can shoot the moving target clearly.

Description

Intelligent target tracking shooting method, server, shooting system, equipment and medium
Technical Field
The application relates to the technical field of artificial intelligence, in particular to an intelligent target tracking shooting method, a server, a shooting system, equipment and a medium.
Background
As living standards keep rising, people's leisure demands grow more varied, and sports make up a large part of leisure life. People often like to use cameras to record the highlights of their sports; however, recording with a camera is troublesome and usually requires considerable manpower to film continuously.
In the related art, an automatic tracking camera is used to shoot a moving target automatically. However, existing automatic tracking cameras all perform coarse positioning based on GPS and then track and shoot the moving target with a wide-angle camera; the motion video shot in this way is blurred, and the entire motion process of the moving target cannot be recorded completely.
Disclosure of Invention
The present application aims to solve at least one of the technical problems existing in the prior art. To this end, the application proposes an intelligent target tracking shooting method that can track a moving target automatically, so that a pan-tilt camera device can shoot the moving target clearly.
The application also provides a server.
The application also provides a target intelligent tracking shooting system.
The application also provides electronic equipment.
The present application also proposes a computer-readable storage medium.
According to an embodiment of the first aspect of the present application, a target intelligent tracking shooting method is applied to a server connected with a pan-tilt camera device, and the method includes:
receiving a video data stream of a desired tracking target sent by the pan-tilt camera device; wherein the video data stream comprises a plurality of video frames;
performing tracking identification on each video frame in the video data stream according to a preset first tracking algorithm to obtain a first tracking target set; wherein the first tracking algorithm tracks and identifies the desired tracking target based on its color information;
performing tracking identification on each video frame in the video data stream according to a preset second tracking algorithm to obtain a second tracking target set; wherein the second tracking algorithm tracks and identifies the desired tracking target based on its size information and position information;
obtaining a tracking result target set from the first tracking target set and the second tracking target set; wherein the tracking result target set is matched with the desired tracking target;
and deriving control information from the position information of the tracking result targets within the video frames of the tracking result target set, and sending the control information to the pan-tilt camera device, so that the pan-tilt camera device adjusts its working attitude information according to the control information to track and shoot the desired tracking target.
The intelligent target tracking shooting method achieves at least the following beneficial effects: the video data stream of the desired tracking target shot by the pan-tilt camera device is tracked with a first tracking algorithm and a second tracking algorithm to obtain a first tracking target set and a second tracking target set respectively, and a tracking result target set matched with the desired tracking target is then determined from the two sets; control information is derived from the position information of the tracking result targets within the video frames and sent to the pan-tilt camera device, so that the device adjusts its working attitude information according to the control information to track and shoot the desired tracking target. With this arrangement, intelligent tracking of the desired tracking target is achieved, the pan-tilt camera device can shoot it clearly, and the user experience is improved.
According to some embodiments of the present application, obtaining a tracking result target set from the first tracking target set and the second tracking target set includes:
within a preset time threshold, calculating the difference between the position information of each tracking target in the first tracking target set and the position information of each tracking target in the second tracking target set at the same moment, to obtain a plurality of pieces of position difference information;
and if the pieces of position difference information are all greater than or equal to a preset position threshold, taking the first tracking target set as the tracking result target set.
According to some embodiments of the present application, obtaining a tracking result target set from the first tracking target set and the second tracking target set further includes:
if the pieces of position difference information are all smaller than the preset position threshold, calculating the difference between the size information of the tracking targets in the first tracking target set and the size information of the tracking targets in the second tracking target set, to obtain size difference information;
and obtaining the tracking result target set according to the size difference information, a preset size threshold, the first tracking target set and the second tracking target set.
According to some embodiments of the present application, obtaining the tracking result target set according to the size difference information, the preset size threshold, the first tracking target set and the second tracking target set includes:
if the size difference information is greater than the preset size threshold, taking the second tracking target set as the tracking result target set.
According to some embodiments of the present application, obtaining the tracking result target set according to the size difference information, the preset size threshold, the first tracking target set and the second tracking target set further includes:
and if the size difference information is less than or equal to the size threshold, taking the first tracking target set as the tracking result target set.
According to an embodiment of the second aspect of the present application, a server is connected with a pan-tilt camera device, and the server includes:
a receiving module, configured to receive a video data stream of the desired tracking target sent by the pan-tilt camera device; wherein the video data stream comprises a plurality of video frames;
a first tracking module, configured to perform tracking identification on each video frame in the video data stream according to a preset first tracking algorithm to obtain a first tracking target set; wherein the first tracking algorithm tracks and identifies the desired tracking target based on its color information;
a second tracking module, configured to perform tracking identification on each video frame in the video data stream according to a preset second tracking algorithm to obtain a second tracking target set; wherein the second tracking algorithm tracks and identifies the desired tracking target based on its size information and position information;
a tracking processing module, configured to obtain a tracking result target set from the first tracking target set and the second tracking target set; wherein the tracking result target set is matched with the desired tracking target;
and a control processing module, configured to derive control information from the position information of the tracking result targets within the video frames of the tracking result target set and send the control information to the pan-tilt camera device, so that the pan-tilt camera device adjusts its working attitude information according to the control information to track and shoot the desired tracking target.
The server according to the embodiments of the present application achieves at least the following beneficial effects: the video data stream of the desired tracking target shot by the pan-tilt camera device is tracked with a first tracking algorithm and a second tracking algorithm to obtain a first tracking target set and a second tracking target set respectively, and a tracking result target set matched with the desired tracking target is then determined from the two sets; control information is derived from the position information of the tracking result targets within the video frames and sent to the pan-tilt camera device, so that the device adjusts its working attitude information according to the control information to track and shoot the desired tracking target. With this arrangement, intelligent tracking of the desired tracking target is achieved, the pan-tilt camera device can shoot it clearly, and the user experience is improved.
According to an embodiment of the third aspect of the present application, a target intelligent tracking shooting system includes:
a user terminal, configured to send the desired tracking target; wherein a positioning device is arranged on the desired tracking target;
a differential positioning base station, configured to receive the preliminary positioning information sent by the positioning device, and to generate and send differential positioning information according to the preliminary positioning information;
the server as in the embodiment of the second aspect;
and a pan-tilt camera device, configured to receive the differential positioning information and adjust its working attitude information accordingly;
the pan-tilt camera device is further configured to send the video data stream of the desired tracking target to the server, and to adjust its working attitude information according to the control information output by the server, so as to track and shoot the desired tracking target.
The target intelligent tracking shooting system achieves at least the following beneficial effects: the user sends the desired tracking target to the server through the user terminal; the server tracks the video data stream of the desired tracking target shot by the pan-tilt camera device with a first tracking algorithm and a second tracking algorithm to obtain a first tracking target set and a second tracking target set respectively, and then determines a tracking result target set matched with the desired tracking target from the two sets; control information is derived from the position information of the tracking result targets within the video frames and sent to the pan-tilt camera device, so that the device adjusts its working attitude information according to the control information to track and shoot the desired tracking target. With this arrangement, intelligent tracking of the desired tracking target is achieved, the pan-tilt camera device can shoot it clearly, and the user experience is improved.
According to some embodiments of the present application, the pan-tilt camera device includes:
a pan-tilt camera, configured to track and shoot the desired tracking target;
an inertial sensor, configured to detect and output initial attitude information of the pan-tilt camera;
a positioner, configured to detect and output initial positioning information of the pan-tilt camera;
and an attitude controller, configured to adjust the working attitude information of the pan-tilt camera according to the control information output by the server.
An electronic device according to an embodiment of a fourth aspect of the present application includes:
at least one memory;
at least one processor;
at least one program;
the program is stored in the memory, and the processor executes at least one program to implement:
the method as in the embodiment of the first aspect.
According to a fifth aspect of the embodiments of the present application, there is provided a computer-readable storage medium storing computer-executable instructions for causing a computer to perform:
the method as in the embodiment of the first aspect.
Additional aspects and advantages of the application will be set forth in part in the description which follows, and in part will be obvious from the description, or may be learned by practice of the application.
Drawings
The application is further described with reference to the accompanying drawings and examples, in which:
fig. 1 is a block diagram of a target intelligent tracking shooting system according to an embodiment of the present application;
FIG. 2 is a schematic diagram of the transformation between the inertial coordinate system and the body coordinate system according to an embodiment of the present application;
fig. 3 is a flowchart of a target intelligent tracking shooting method provided in an embodiment of the present application;
FIG. 4 is a flowchart of a specific method of step S800 in FIG. 3;
FIG. 5 is another flowchart of the specific method of step S800 in FIG. 3;
FIG. 6 is a flowchart illustrating a specific method of step S840 in FIG. 5;
FIG. 7 is another flowchart of a specific method of step S840 in FIG. 5;
FIG. 8 is a block diagram of a particular module of the server of FIG. 1;
fig. 9 is a schematic hardware structure of an electronic device according to an embodiment of the present application.
Detailed Description
Embodiments of the present application are described in detail below, examples of which are illustrated in the accompanying drawings, wherein like or similar reference numerals refer to like or similar elements or elements having like or similar functions throughout. The embodiments described below by referring to the drawings are exemplary only for the purpose of explaining the present application and are not to be construed as limiting the present application.
In the description of the present application, it should be understood that references to orientation descriptions, such as directions of up, down, front, back, left, right, etc., are based on the orientation or positional relationship shown in the drawings, are merely for convenience of describing the present application and simplifying the description, and do not indicate or imply that the apparatus or element referred to must have a specific orientation, be configured and operated in a specific orientation, and thus should not be construed as limiting the present application.
In the description of the present application, "a" or "an" means one or more and "a plurality of" means two or more; "greater than", "less than", "exceeding", etc. are understood to exclude the stated number, while "above", "below", "within", etc. are understood to include it. The descriptions "first" and "second" serve only to distinguish technical features and should not be construed as indicating or implying relative importance, implicitly indicating the number of technical features indicated, or implicitly indicating the precedence of the technical features indicated.
In the description of the present application, unless explicitly defined otherwise, terms such as arrangement, installation, connection, etc. should be construed broadly and the specific meaning of the terms in the present application can be reasonably determined by a person skilled in the art in combination with the specific contents of the technical solution.
In the description of the present application, a description with reference to the terms "one embodiment," "some embodiments," "illustrative embodiments," "examples," "specific examples," or "some examples," etc., means that a particular feature, structure, material, or characteristic described in connection with the embodiment or example is included in at least one embodiment or example of the present application. In this specification, schematic representations of the above terms do not necessarily refer to the same embodiments or examples. Furthermore, the particular features, structures, materials, or characteristics described may be combined in any suitable manner in any one or more embodiments or examples.
First, several terms used in the embodiments of the present application are explained:
artificial intelligence (artificial intelligence, AI): is a new technical science for researching and developing theories, methods, technologies and application systems for simulating, extending and expanding the intelligence of people; artificial intelligence is a branch of computer science that attempts to understand the nature of intelligence and to produce a new intelligent machine that can react in a manner similar to human intelligence, research in this field including robotics, language recognition, image recognition, natural language processing, and expert systems. Artificial intelligence can simulate the information process of consciousness and thinking of people. Artificial intelligence is also a theory, method, technique, and application system that utilizes a digital computer or digital computer-controlled machine to simulate, extend, and expand human intelligence, sense the environment, acquire knowledge, and use knowledge to obtain optimal results.
Real-time kinematic (RTK): RTK, or real-time differential positioning, is a surveying method that obtains centimeter-level positioning accuracy in real time in the field, improving field work efficiency. RTK is a real-time differential GPS technique based on carrier-phase observation and a milestone in the development of surveying technology; it consists of a reference station receiver, a data link and a rover receiver. The rover GPS receiver receives GPS satellite signals while also receiving, through a wireless receiving device, the data transmitted by the reference station, and then calculates the rover's three-dimensional coordinates and their accuracy in real time according to the principle of relative positioning.
Inertial sensor: the inertial sensor is a sensor for detecting and measuring acceleration, inclination, impact, vibration, rotation and multi-degree-of-freedom motion, and is an important component for solving navigation, orientation and motion carrier control.
An accelerometer: an accelerometer is an inertial sensor capable of measuring the acceleration force of an object.
Angular velocity sensor: angular velocity sensors, also known as gyroscopes, are capable of measuring the angular velocity of an object. Angular velocity is commonly used for position and attitude control of moving objects, as well as other applications requiring accurate angular measurements.
YOLO algorithm (You Only Look Once): an object recognition algorithm whose name means "you only need to look once to identify what objects are in the picture". The YOLO algorithm can quickly identify the category and position of specific objects in a picture, but it cannot determine the correspondence of each object between two consecutive pictures, and therefore cannot continuously track a particular object through a video stream. Changes in object size or scene have little influence on the recognition capability of the YOLO algorithm.
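As an illustration only (the patent does not specify an implementation), per-frame person detection with a YOLO model might look like the following sketch, which assumes the open-source ultralytics package and its pretrained yolov8n.pt weights:

```python
# Illustrative sketch only: the patent does not name a YOLO implementation.
# The open-source `ultralytics` package is assumed as a stand-in, and only
# person detections (COCO class 0) are kept, matching the skier use case.
from ultralytics import YOLO

model = YOLO("yolov8n.pt")  # assumed pretrained model file

def detect_people(frame):
    """Return (x, y, w, h) boxes for every person found in one video frame."""
    results = model(frame, verbose=False)[0]
    boxes = []
    for box in results.boxes:
        if int(box.cls) == 0:  # class 0 = "person" in COCO
            x1, y1, x2, y2 = box.xyxy[0].tolist()
            boxes.append((x1, y1, x2 - x1, y2 - y1))
    return boxes
```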
Scale-adaptive mean-shift algorithm (Adaptive Scale Mean-Shift, ASMS): a target tracking algorithm that performs visual target tracking based on color features and adapts to changes in target size.
However, the ASMS algorithm has the following drawbacks:
First, its adaptation to target size is not strong: when the target moves away from or toward the camera at high speed, its size in the video changes rapidly, and tracking with the ASMS algorithm alone is then prone to error.
Second, it is not suited to long-term tracking: when the background of the target changes greatly, the algorithm easily loses the target.
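For illustration, the color-feature tracking step can be sketched with OpenCV's CamShift, a scale-adaptive mean-shift relative of ASMS (not the ASMS implementation itself); the initial target window is assumed to be given:

```python
# Minimal sketch of scale-adaptive color tracking with OpenCV's CamShift,
# used here as a readily available relative of ASMS (not ASMS itself).
import cv2

def make_color_model(frame, window):
    """Build a hue histogram of the target region; window = (x, y, w, h) in ints."""
    x, y, w, h = window
    roi = frame[y:y + h, x:x + w]
    hsv = cv2.cvtColor(roi, cv2.COLOR_BGR2HSV)
    hist = cv2.calcHist([hsv], [0], None, [180], [0, 180])
    cv2.normalize(hist, hist, 0, 255, cv2.NORM_MINMAX)
    return hist

def track_step(frame, window, hist):
    """Shift the window toward the region that best matches the color model."""
    hsv = cv2.cvtColor(frame, cv2.COLOR_BGR2HSV)
    back_proj = cv2.calcBackProject([hsv], [0], hist, [0, 180], 1)
    criteria = (cv2.TERM_CRITERIA_EPS | cv2.TERM_CRITERIA_COUNT, 10, 1.0)
    _, window = cv2.CamShift(back_proj, window, criteria)
    return window  # new (x, y, w, h), size adapted to the target
```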
Sports make up a large part of people's leisure life, and people often like to use cameras to record the highlights of their sports. However, recording with a camera is troublesome and usually requires considerable manpower to film continuously.
In the related art, an automatic tracking camera is used to shoot a moving target automatically. For example, Huawei's auto-tracking camera can visually locate and shoot indoor scenes to record everyday life. However, it cannot be applied to high-speed motion scenes, and since its visual approach works only in well-lit environments, its performance degrades greatly outdoors. The SoloShot3 auto-tracking camera, developed abroad, is suited to high-speed motion scenes: it performs coarse positioning based on GPS and then tracks and shoots the moving target through a wide-angle camera. But because GPS positioning is coarse, shooting can only be done at wide angle, so the moving target is blurred in the video and the user's motion cannot be recorded completely.
In view of this, the present application provides an intelligent target tracking shooting method, a server, a shooting system, equipment and a medium, which can track a moving target automatically, so that a pan-tilt camera device can shoot the moving target clearly and the user experience is improved.
The embodiments of the present application provide a target intelligent tracking shooting method, which relates to the technical field of artificial intelligence. The method can be applied to a terminal, to a server side, or to software running in a terminal or server side. In some embodiments, the terminal may be a smart phone, tablet, notebook, desktop, smart watch, or the like; the server side may be configured as an independent physical server, a server cluster or distributed system composed of multiple physical servers, or a cloud server providing basic cloud computing services such as cloud services, cloud databases, cloud computing, cloud functions, cloud storage, network services, cloud communication, middleware services, domain name services, security services, content delivery networks (Content Delivery Network, CDN), big data and artificial intelligence platforms; the software may be an application that implements the target intelligent tracking shooting method, but is not limited to the above forms.
Embodiments of the present disclosure are operational with numerous general purpose or special purpose computer system environments or configurations. For example: personal computers, server computers, hand-held or portable devices, tablet devices, multiprocessor systems, microprocessor-based systems, set top boxes, programmable consumer electronics, network PCs, minicomputers, mainframe computers, distributed computing environments that include any of the above systems or devices, and the like. The application may be described in the general context of computer-executable instructions, such as program modules, being executed by a computer. Generally, program modules include routines, programs, objects, components, data structures, etc. that perform particular tasks or implement particular abstract data types. The application may also be practiced in distributed computing environments where tasks are performed by remote processing devices that are linked through a communications network. In a distributed computing environment, program modules may be located in both local and remote computer storage media including memory storage devices.
Embodiments of the present application are further described below with reference to the accompanying drawings.
Referring to fig. 1, some embodiments of the present application provide an intelligent tracking shooting system for a target, which includes a user terminal 100, a differential positioning base station 200, a server 300, and a pan-tilt camera 400.
The user terminal 100 is configured to send the desired tracking target; the desired tracking target is specified by a user through the user terminal, and a positioning device is arranged on it.
In some embodiments, the user terminal 100 includes a display. Based on the differential positioning information sent by the differential positioning base station 200 and the footage acquired by the pan-tilt camera device 400, the user terminal 100 can display the motion trail and motion footage of the desired tracking target. The user terminal 100 may be a mobile phone APP or another terminal device, such as a PC. The desired tracking target is provided with a positioning device and a data transmission device; the positioning device positions the desired tracking target, and the data transmission device connects to the differential positioning base station 200 for information interaction with it.
It should be noted that the positioning device and the data transmission device may be provided integrally, for example both on a wearable device of the desired tracking target; other arrangements are also possible. The present application is not particularly limited in this regard.
The differential positioning base station 200 is configured to receive the preliminary positioning information sent by the positioning device, and generate and send differential positioning information according to the preliminary positioning information.
In some embodiments, the differential positioning base station 200 includes a router, a data transmission device, an image transmission device, and an RTK-GPS base station. The router connects to the user terminal 100, so that the user can freely view the captured video and GPS data through the user terminal 100 and monitor the desired tracking target. The data transmission device connects to the data transmission device of the desired tracking target to realize information interaction between the desired tracking target and the differential positioning base station 200. The image transmission device is communicatively connected to the server 300 and pushes the video data stream received from the pan-tilt camera device 400 to the server 300. The RTK-GPS base station generates differential positioning information from the preliminary positioning information sent by the positioning device of the desired tracking target and sends the differential positioning information back to it. RTK-GPS (or RTK-Beidou) can achieve centimeter-level positioning, whereas ordinary GPS positioning accuracy is on the order of 10 m, so using RTK-GPS equipment allows the pan-tilt camera device 400 to track a target accurately.
The server 300 is configured to track and identify a desired tracking target, and output control information according to a tracking result to control the pan/tilt camera 400 to track and shoot the desired tracking target.
The pan-tilt camera device 400 is configured to receive the differential positioning information and adjust its working attitude information accordingly; it is also configured to send the video data stream of the desired tracking target to the server 300 and to adjust its working attitude information according to the control information output by the server 300, so as to track and shoot the desired tracking target.
According to the target intelligent tracking shooting system, a user sends the desired tracking target to the server 300 through the user terminal 100; the server 300 tracks the video data stream of the desired tracking target shot by the pan-tilt camera device 400 with a first tracking algorithm and a second tracking algorithm to obtain a first tracking target set and a second tracking target set respectively, and then determines, from the two sets, a tracking result target set matched with the desired tracking target; control information is then derived from the position information of the tracking result targets within the video frames and sent to the pan-tilt camera device 400, so that the device adjusts its working attitude information according to the control information to track and shoot the desired tracking target. With this arrangement, intelligent tracking of the desired tracking target is achieved, the pan-tilt camera device 400 can shoot it clearly, and the user experience is improved.
In some embodiments, the pan-tilt camera device 400 includes a pan-tilt camera, an inertial sensor, a positioner, and an attitude controller.
The pan-tilt camera is used for tracking and shooting the desired tracking target.
The inertial sensor is used for detecting and outputting the initial attitude information of the pan-tilt camera.
The positioner is used for detecting and outputting the initial positioning information of the pan-tilt camera.
The attitude controller is used for adjusting the working attitude information of the pan-tilt camera according to the control information output by the server 300.
Referring to fig. 2, in some embodiments, the inertial sensor includes an accelerometer and a gyroscope. A is the target to be tracked, I is the earth's inertial coordinate system, and B is the body coordinate system of the pan-tilt camera device. The rotation matrix R between the inertial coordinate system I and the body coordinate system B can be calculated from the gyroscope and the accelerometer, and the coordinate P_B of the desired tracking target A in the body coordinate system B is then obtained according to formula (1):

P_B = R · P (1)

where R denotes the rotation matrix between the inertial coordinate system I and the body coordinate system B, calculated from the gyroscope and the accelerometer, and P is the coordinate of the target A to be tracked in the inertial coordinate system I, obtained through GPS positioning.
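As a worked example of formula (1), the sketch below assembles R from roll, pitch and yaw angles (which in the system would come from fusing the gyroscope and accelerometer) and transforms a GPS-derived coordinate into the body frame; the numeric values and the ZYX Euler convention are assumptions for illustration:

```python
# Worked example of formula (1): P_B = R · P. All numeric values are
# hypothetical; in the system, the attitude angles would come from fusing the
# gyroscope and accelerometer, and P from the RTK-GPS positioning.
import numpy as np

def inertial_to_body(roll, pitch, yaw):
    """Rotation matrix R mapping inertial-frame (I) vectors into the body
    frame (B), assuming a ZYX Euler-angle convention."""
    cr, sr = np.cos(roll), np.sin(roll)
    cp, sp = np.cos(pitch), np.sin(pitch)
    cy, sy = np.cos(yaw), np.sin(yaw)
    Rx = np.array([[1, 0, 0], [0, cr, -sr], [0, sr, cr]])
    Ry = np.array([[cp, 0, sp], [0, 1, 0], [-sp, 0, cp]])
    Rz = np.array([[cy, -sy, 0], [sy, cy, 0], [0, 0, 1]])
    return (Rz @ Ry @ Rx).T  # body-to-inertial, transposed to give I -> B

P = np.array([12.0, -3.5, 1.8])           # target A in frame I (from GPS)
R = inertial_to_body(0.01, -0.02, 1.57)   # attitude from the inertial sensor
P_B = R @ P                               # formula (1)
```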
In the related art, because initial attitude information of the pan-tilt camera device is lacking, the installation position and initial attitude of the device must first be calculated from the relative position of the moving target before tracking can begin. In the present application, by introducing the inertial sensor and the positioner, the initial positioning information and initial attitude information of the pan-tilt camera device can be obtained quickly, so that initialization is simplified, or the moving target can be tracked and shot without initialization at all.
Based on the target intelligent tracking shooting system shown in fig. 1 and referring to fig. 3, some embodiments of the present application provide a target intelligent tracking shooting method applied to a server connected with a pan-tilt camera device. The method includes step S500, step S600, step S700, step S800 and step S900. It should be understood that the method includes, but is not limited to, these five steps, which are described in detail below.
Step S500, receiving a video data stream of the desired tracking target sent by the pan-tilt camera device; wherein the video data stream comprises a plurality of video frames.
In step S500, the desired tracking target is a target specified by the user; it may be a moving object, an athlete, or another target, which the present application does not specifically limit, as long as the target is provided with a positioning device and can output preliminary positioning information. In the embodiments of the present application, a skier is taken as the desired tracking target to describe the method.
The video data stream includes a plurality of video frames, each of which is an image. It is the stream shot in real time by the pan-tilt camera device on the desired tracking target; if the desired tracking target is a skier, the video data stream is video data of the skier's run.
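For illustration, receiving such a stream frame by frame might look like the following sketch; the RTSP endpoint is a hypothetical stand-in, since the patent does not fix a transport protocol:

```python
# Minimal sketch of step S500: pull video frames from the pan-tilt camera.
# The RTSP URL is hypothetical; the patent does not specify a transport.
import cv2

cap = cv2.VideoCapture("rtsp://192.168.1.10:554/stream")  # assumed endpoint

def frames():
    """Yield successive video frames from the camera's data stream."""
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        yield frame
```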
Step S600, performing tracking identification on each video frame in the video data stream according to a preset first tracking algorithm to obtain a first tracking target set; the first tracking algorithm tracks and identifies the desired tracking target based on its color information.
Step S700, performing tracking identification on each video frame in the video data stream according to a preset second tracking algorithm to obtain a second tracking target set; the second tracking algorithm tracks and identifies the desired tracking target based on its size information and position information.
Step S800, obtaining a tracking result target set from the first tracking target set and the second tracking target set; wherein the tracking result target set is matched with the desired tracking target.
In steps S600 to S800, the first tracking algorithm may be the ASMS algorithm and the second tracking algorithm the YOLO algorithm; the first tracking algorithm may also be another visual target tracking algorithm, and the second tracking algorithm another object recognition algorithm. The method is described below taking the ASMS and YOLO algorithms as examples.
The ASMS algorithm performs visual target tracking based on the color features of the target. However, its adaptation to target size is not strong: when the target moves away from or toward the pan-tilt camera at high speed, its size in the video changes rapidly, and tracking with the ASMS algorithm alone then easily produces errors. Moreover, the ASMS algorithm is not suited to long-term tracking: it easily loses the target when the background changes greatly, and it is not suited to scenes that move from far to near. In the present application the desired tracking target is a skier, and in such a skiing scene the optical zoom of the pan-tilt camera and changes in light intensity readily alter the target's color in the video; the ASMS algorithm can be used to track the skier, but relying on it alone would again be likely to lose the skier.
The YOLO algorithm can accurately identify the size and position of an object, and changes in target size or scene have little effect on it. However, the YOLO algorithm cannot determine the correspondence of each object between two consecutive pictures and cannot continuously track a specific object through a video stream. In this application the desired tracking target is a skier; that is, for the YOLO algorithm, the tracking task requires only recognizing a human body.
Given the respective strengths and weaknesses of the YOLO and ASMS algorithms, the present application combines the two to realize intelligent tracking of the skier. The specific process is as follows:
each video frame in the video data stream is tracked and identified with the ASMS algorithm to obtain a first tracking target set, and with the YOLO algorithm to obtain a second tracking target set; a tracking result target set is then derived from the first and second tracking target sets. With this arrangement, the desired tracking target can be located accurately and quickly, intelligent tracking is realized, and losing the target is avoided.
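A minimal sketch of this dual-tracker loop, reusing the illustrative helpers from the earlier sketches (frames, make_color_model, track_step, detect_people) and assuming the initial target window is supplied by the user:

```python
# Sketch of steps S600-S700: run both trackers over the same stream and
# collect the first (ASMS-style) and second (YOLO-style) target sets.
# Reuses frames, make_color_model, track_step and detect_people from the
# sketches above; initial_window is assumed to come from user selection.

def build_target_sets(initial_window):
    first_set, second_set = [], []  # per-frame (x, y, w, h) boxes
    hist, window = None, initial_window
    for frame in frames():
        if hist is None:
            hist = make_color_model(frame, window)
        window = track_step(frame, window, hist)
        first_set.append(window)    # color-based tracking result
        people = detect_people(frame)
        # Taking the first detection is a simplification; in practice the
        # detection nearest the color tracker's window would be chosen.
        second_set.append(people[0] if people else None)
    return first_set, second_set
```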
Step S900, deriving control information from the position information of the tracking result targets within the video frames of the tracking result target set, and sending the control information to the pan-tilt camera device, so that the pan-tilt camera device adjusts its working attitude information according to the control information to track and shoot the desired tracking target.
In step S900, the main subject generally needs to be kept in the middle of the field of view while shooting. The position of the tracking result target within the video frame is therefore evaluated, and control information is output in real time to adjust the working attitude of the pan-tilt camera, realizing tracking shooting of the desired tracking target. For example, if a skier drifts too far toward the left of the field of view, the output control information rotates the pan-tilt camera to the left so that the skier returns to the center of the view; this avoids losing the target and also yields a better composed video, improving the user experience.
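For illustration, this centering behavior amounts to proportional control on the offset between the target's box center and the frame center; the gains and the command interface below are hypothetical:

```python
# Sketch of step S900: turn the target's position in the frame into pan/tilt
# commands that re-center it. Gains and the send_command stub are assumed.

PAN_GAIN, TILT_GAIN = 0.05, 0.05  # degrees per pixel of offset (assumed)

def control_from_box(box, frame_w, frame_h):
    """Return (pan, tilt) adjustments that move the target toward center."""
    x, y, w, h = box
    dx = (x + w / 2) - frame_w / 2  # +dx: target right of center -> pan right
    dy = (y + h / 2) - frame_h / 2  # +dy: target below center -> tilt down
    return PAN_GAIN * dx, TILT_GAIN * dy

def send_command(pan, tilt):
    """Placeholder for the link to the attitude controller (protocol unspecified)."""
    ...
```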
According to the target intelligent tracking shooting method, the video data stream of the desired tracking target shot by the pan-tilt camera device is tracked with a first tracking algorithm and a second tracking algorithm to obtain a first tracking target set and a second tracking target set respectively; a tracking result target set matched with the desired tracking target is then determined from the two sets; control information is derived from the position information of the tracking result targets within the video frames and sent to the pan-tilt camera device, so that the device adjusts its working attitude information according to the control information to track and shoot the desired tracking target. With this arrangement, intelligent tracking of the desired tracking target is achieved, the pan-tilt camera device can shoot it clearly, and the user experience is improved.
Referring to fig. 4, in some embodiments of the present application, step S800 includes step S810 and step S820. It should be appreciated that step S800 includes, but is not limited to, step S810 and step S820, which are described in detail below in conjunction with fig. 4.
Step S810, within a preset time threshold, calculating, for each moment, the difference between the position information of each tracking target in the first tracking target set and the position information of each tracking target in the second tracking target set at the same moment, to obtain a plurality of pieces of position difference information.
Step S820, if the pieces of position difference information are all greater than or equal to a preset position threshold, taking the first tracking target set as the tracking result target set.
In steps S810 and S820, the differences between the positions of the targets in the two sets at the same moments over a period of time are calculated to judge whether the target tracked by the ASMS algorithm and the target tracked by the YOLO algorithm are close to each other. If the position difference information is greater than or equal to the preset position threshold, the two are not close, and in this case the first tracking target set obtained by the ASMS algorithm is taken as the tracking result target set for that period.
Referring to fig. 5, in some embodiments of the present application, step S800 further includes step S830 and step S840. It should be understood that step S800 includes, but is not limited to, step S830 and step S840. These two steps are described in detail below in conjunction with fig. 5.
In step S830, if the pieces of position difference information are all smaller than the preset position threshold, the difference between the size information of the tracking targets in the first tracking target set and the size information of the tracking targets in the second tracking target set is calculated to obtain size difference information.
In step S840, a tracking result target set is obtained according to the size difference information, a preset size threshold, the first tracking target set and the second tracking target set.
In steps S830 and S840, when the pieces of position difference information within a period of time are all smaller than the preset position threshold, the target tracked by the ASMS algorithm and the target tracked by the YOLO algorithm are close to each other and the two identified target trajectories are synchronized. The difference between the size information of the tracking targets in the two sets is then calculated to obtain size difference information, and the tracking result target set is obtained from the size difference information, the preset size threshold, the first tracking target set and the second tracking target set.
Referring to fig. 6, in some embodiments, step S840 includes step S841, and it should be understood that step S840 includes, but is not limited to, step S841, and step S841 is described in detail below in conjunction with fig. 6.
In step S841, if the size difference information is greater than the preset size threshold, the second tracking target set is taken as the tracking result target set.
When, over a period of time, the pieces of position difference information are all smaller than the preset position threshold, the targets tracked by the ASMS and YOLO algorithms are close and their identified trajectories are synchronized. The object identified by the YOLO algorithm is then considered to be the target currently tracked by the ASMS algorithm. Since the YOLO algorithm identifies the size and position of an object more accurately, if the size difference between the object identified by the YOLO algorithm and the target tracked by the ASMS algorithm exceeds the preset size threshold, the ASMS tracking result is updated with the YOLO result; that is, for that period, the second tracking target set is taken as the tracking result target set.
Referring to fig. 7, in some embodiments of the present application, step S840 further includes step S842, and it should be understood that step S840 includes, but is not limited to, step S842, and step S842 is described in detail below in conjunction with fig. 7.
In step S842, if the size difference information is less than or equal to the size threshold, the first tracking target set is taken as the tracking result target set.
When, over a period of time, the pieces of position difference information are all smaller than the preset position threshold, the targets tracked by the ASMS and YOLO algorithms are close and their identified trajectories are synchronized, so the object identified by the YOLO algorithm is considered to be the target currently tracked by the ASMS algorithm. If the size difference information is less than or equal to the size threshold, the two results are essentially the same and are considered consistent, so the first tracking target set can be taken as the tracking result target set for that period. Of course, since the two identified targets are consistent, the second tracking target set could equally be used as the tracking result target set.
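Taken together, the selection logic of steps S810 to S842 can be sketched as a single function over one time window of frame-aligned results; the thresholds are illustrative assumptions, and the mixed case (some position differences at or above the threshold, some below) is not specified in the text:

```python
# Sketch of the fusion rule described above. Thresholds are illustrative;
# boxes are (x, y, w, h) per frame, with the two sets aligned frame by frame
# over one time window.
import math

POS_THRESHOLD = 50.0   # pixels, assumed preset position threshold
SIZE_THRESHOLD = 20.0  # pixels, assumed preset size threshold

def center_dist(a, b):
    """Distance between the centers of two boxes."""
    ax, ay = a[0] + a[2] / 2, a[1] + a[3] / 2
    bx, by = b[0] + b[2] / 2, b[1] + b[3] / 2
    return math.hypot(ax - bx, ay - by)

def select_result_set(first_set, second_set):
    pairs = [(a, b) for a, b in zip(first_set, second_set) if b is not None]
    if not pairs:
        return first_set  # no detections at all: keep the color tracker
    dists = [center_dist(a, b) for a, b in pairs]
    if all(d >= POS_THRESHOLD for d in dists):
        # Trackers never agree on position in this window: keep the color tracker.
        return first_set
    if all(d < POS_THRESHOLD for d in dists):
        # Trajectories are synchronized; compare sizes next.
        size_diff = max(abs(a[2] - b[2]) + abs(a[3] - b[3]) for a, b in pairs)
        # Large size divergence: adopt YOLO's more accurate size and position.
        return second_set if size_diff > SIZE_THRESHOLD else first_set
    return first_set  # mixed case is unspecified in the text; default to color
```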
Referring to fig. 8, some embodiments of the present application further provide a server 300, where the server 300 is connected to a pan-tilt camera device, and the server 300 includes a receiving module 310, a first tracking module 320, a second tracking module 330, a tracking processing module 340, and a control processing module 350.
A receiving module 310, configured to receive a video data stream of a desired tracking target sent by a pan-tilt camera; wherein the video data stream comprises a plurality of video frames.
The first tracking module 320 is configured to perform tracking identification on each video frame in the video data stream according to a preset first tracking algorithm to obtain a first tracking target set; the first tracking algorithm tracks and identifies the desired tracking target based on its color information.
The second tracking module 330 is configured to perform tracking identification on each video frame in the video data stream according to a preset second tracking algorithm to obtain a second tracking target set; the second tracking algorithm tracks and identifies the desired tracking target based on its size information and position information.
The tracking processing module 340 is configured to obtain a tracking result target set from the first tracking target set and the second tracking target set; wherein the tracking result target set is matched with the desired tracking target.
The control processing module 350 is configured to derive control information from the position information of the tracking result targets within the video frames of the tracking result target set and send the control information to the pan-tilt camera device, so that the pan-tilt camera device adjusts its working attitude information according to the control information to track and shoot the desired tracking target.
The server 300 of the embodiments of the present application tracks the video data stream of the desired tracking target with a first tracking algorithm and a second tracking algorithm to obtain a first tracking target set and a second tracking target set, and then determines a tracking result target set matched with the desired tracking target from the two sets; control information is derived from the position information of the tracking result targets within the video frames and sent to the pan-tilt camera device, so that the device adjusts its working attitude information according to the control information to track and shoot the desired tracking target. With this arrangement, intelligent tracking of the desired tracking target is achieved, the pan-tilt camera device can shoot it clearly, and the user experience is improved.
It should be noted that, the server in this embodiment corresponds to the foregoing target intelligent tracking shooting method, and the specific tracking method refers to the foregoing target intelligent tracking shooting method, which is not described herein again.
The disclosed embodiments also provide a computer device comprising:
at least one memory;
at least one processor;
at least one program;
the program is stored in the memory, and the processor executes at least one program to implement the method for intelligent tracking shooting of a target according to the present disclosure. The computer device can be any intelligent terminal including a mobile phone, a tablet computer, a personal digital assistant (Personal Digital Assistant, PDA), a vehicle-mounted computer and the like.
The computer device according to the embodiment of the present application will be described in detail with reference to fig. 9.
As shown in fig. 9, fig. 9 illustrates a hardware structure of a computer device of another embodiment, the computer device includes:
the processor 1000 may be implemented by a general purpose central processing unit (Central Processing Unit, CPU), a microprocessor, an application specific integrated circuit (Application Specific Integrated Circuit, ASIC), or one or more integrated circuits, etc. for executing relevant programs to implement the technical solutions provided by the embodiments of the present disclosure;
The memory 1100 may be implemented in the form of a read-only memory (Read Only Memory, ROM), a static storage device, a dynamic storage device, or a random access memory (Random Access Memory, RAM). The memory 1100 may store an operating system and other application programs; when the technical solutions provided by the embodiments of the present disclosure are implemented by software or firmware, the relevant program code is stored in the memory 1100 and invoked by the processor 1000 to execute the target intelligent tracking shooting method of the embodiments of the present disclosure;
an input/output interface 1200 for implementing information input and output;
the communication interface 1300 is configured to implement communication interaction between the device and other devices, and may implement communication in a wired manner (e.g. USB, network cable, etc.), or may implement communication in a wireless manner (e.g. mobile network, WIFI, bluetooth, etc.);
bus 1400 that transfers information between the various components of the device (e.g., processor 1000, memory 1100, input/output interface 1200, and communication interface 1300);
wherein the processor 1000, the memory 1100, the input/output interface 1200 and the communication interface 1300 are communicatively coupled to each other within the device via a bus 1400.
The disclosed embodiments also provide a storage medium that is a computer-readable storage medium storing computer-executable instructions for causing a computer to execute the above-described target intelligent tracking shooting method.
The memory, as a non-transitory computer readable storage medium, may be used to store non-transitory software programs as well as non-transitory computer executable programs. In addition, the memory may include high-speed random access memory, and may also include non-transitory memory, such as at least one magnetic disk storage device, flash memory device, or other non-transitory solid state storage device. In some embodiments, the memory optionally includes memory remotely located relative to the processor, the remote memory being connectable to the processor through a network. Examples of such networks include, but are not limited to, the internet, intranets, local area networks, mobile communication networks, and combinations thereof.
The embodiments described in the present disclosure are intended to describe the technical solutions of the embodiments of the present disclosure more clearly and do not limit those technical solutions; as those skilled in the art will appreciate, with the evolution of technology and the emergence of new application scenarios, the technical solutions provided by the embodiments of the present disclosure are equally applicable to similar technical problems.
It will be appreciated by those skilled in the art that the technical solutions shown in the figures do not limit the embodiments of the present disclosure, which may include more or fewer steps than shown, may combine certain steps, or may use different steps.
The apparatus embodiments described above are merely illustrative; the units illustrated as separate components may or may not be physically separate, that is, they may be located in one place or distributed over a plurality of network elements. Some or all of the modules may be selected according to actual needs to achieve the purpose of the solution of this embodiment.
Those of ordinary skill in the art will appreciate that all or some of the steps of the methods, systems, functional modules/units in the devices disclosed above may be implemented as software, firmware, hardware, and suitable combinations thereof.
The terms "first," "second," "third," "fourth," and the like in the description of the present application and in the above-described figures, if any, are used for distinguishing between similar objects and not necessarily for describing a particular sequential or chronological order. It is to be understood that the data so used may be interchanged where appropriate such that embodiments of the present application described herein may be implemented in sequences other than those illustrated or otherwise described herein. Furthermore, the terms "comprises," "comprising," and "having," and any variations thereof, are intended to cover a non-exclusive inclusion, such that a process, method, system, article, or apparatus that comprises a list of steps or elements is not necessarily limited to those steps or elements expressly listed but may include other steps or elements not expressly listed or inherent to such process, method, article, or apparatus.
It should be understood that in this application, "at least one" means one or more, and "a plurality" means two or more. The term "and/or" describes an association relationship between associated objects and indicates that three relationships may exist; for example, "A and/or B" may represent: only A is present, only B is present, or both A and B are present, where A and B may be singular or plural. The character "/" generally indicates an "or" relationship between the associated objects. "At least one of" and similar expressions mean any combination of the listed items, including any combination of single items or plural items. For example, "at least one of a, b or c" may represent: a, b, c, "a and b", "a and c", "b and c", or "a and b and c", where a, b and c may be single or plural.
In the several embodiments provided in this application, it should be understood that the disclosed apparatus and method may be implemented in other ways. For example, the apparatus embodiments described above are merely illustrative; the division into units is merely a logical functional division, and there may be other divisions in actual implementation; for example, multiple units or components may be combined or integrated into another system, or some features may be omitted or not performed. In addition, the coupling, direct coupling, or communication connection shown or discussed between components may be an indirect coupling or communication connection via some interfaces, devices, or units, and may be electrical, mechanical, or in other forms.
The units described as separate components may or may not be physically separate, and components shown as units may or may not be physical units; they may be located in one place or distributed over a plurality of network units. Some or all of the units may be selected according to actual needs to achieve the purpose of the solution of this embodiment.
In addition, each functional unit in each embodiment of the present application may be integrated in one processing unit, or each unit may exist alone physically, or two or more units may be integrated in one unit. The integrated units may be implemented in hardware or in software functional units.
The integrated units, if implemented in the form of software functional units and sold or used as stand-alone products, may be stored in a computer-readable storage medium. Based on such understanding, the technical solution of the present application, in essence, or the part contributing to the prior art, or all or part of the technical solution, may be embodied in the form of a software product; the software product is stored in a storage medium and includes multiple instructions for causing a computer device (which may be a personal computer, a server, a network device, or the like) to perform all or part of the steps of the methods of the embodiments of the present application. The aforementioned storage medium includes: a USB flash drive, a removable hard disk, a read-only memory (ROM), a random access memory (RAM), a magnetic disk, an optical disk, or other media capable of storing a program.
Preferred embodiments of the present disclosure are described above with reference to the accompanying drawings; they do not, however, limit the scope of the claims. Any modifications, equivalent substitutions, and improvements made by those skilled in the art without departing from the scope and spirit of the embodiments of the present disclosure shall fall within the scope of the claims of the embodiments of the present disclosure.

Claims (10)

1. An intelligent target tracking shooting method, characterized in that the method is applied to a server, wherein the server is connected with a pan-tilt camera device, and the method comprises the following steps:
receiving a video data stream of an expected tracking target sent by the pan-tilt camera device; wherein the video data stream comprises a plurality of video frames;
tracking and identifying each video frame in the video data stream according to a preset first tracking algorithm to obtain a first tracking target set; the first tracking algorithm carries out tracking identification processing on the expected tracking target based on the color information of the expected tracking target;
tracking and identifying each video frame in the video data stream according to a preset second tracking algorithm to obtain a second tracking target set; the second tracking algorithm carries out tracking identification processing on the expected tracking target based on the size information and the position information of the expected tracking target;
obtaining a tracking result target set according to the first tracking target set and the second tracking target set; wherein the tracking result target set is matched with the expected tracking target;
and obtaining control information according to the position information, in the video frames, of the tracking result targets in the tracking result target set, and sending the control information to the pan-tilt camera device, so that the pan-tilt camera device adjusts working posture information according to the control information to track and shoot the expected tracking target.
2. The method of claim 1, wherein the obtaining a tracking result target set from the first tracking target set and the second tracking target set comprises:
calculating, within a preset time threshold, the difference between the position information of each tracking target in the first tracking target set and the position information of the corresponding tracking target in the second tracking target set at the same moment, to obtain a plurality of pieces of position difference information;
and if each of the plurality of pieces of position difference information is larger than or equal to a preset position threshold, taking the first tracking target set as the tracking result target set.
3. The method of claim 2, wherein the obtaining a tracking result target set from the first tracking target set and the second tracking target set further comprises:
if each of the plurality of pieces of position difference information is smaller than the preset position threshold, calculating the difference between the size information of the tracking targets in the first tracking target set and the size information of the tracking targets in the second tracking target set, to obtain size difference information;
and obtaining a tracking result target set according to the size difference information, a preset size threshold value, the first tracking target set and the second tracking target set.
4. The method of claim 3, wherein the obtaining a tracking result target set from the size difference information, the preset size threshold, the first tracking target set, and the second tracking target set includes:
and if the size difference information is larger than the size threshold, taking the second tracking target set as the tracking result target set.
5. The method according to claim 3 or 4, wherein the obtaining the tracking result target set according to the size difference information, the preset size threshold, the first tracking target set, and the second tracking target set further comprises:
and if the size difference information is smaller than or equal to the size threshold, taking the first tracking target set as the tracking result target set.
6. A server, wherein the server is connected with a pan-tilt camera device, the server comprising:
the receiving module is used for receiving the video data stream of the expected tracking target sent by the pan-tilt camera device; wherein the video data stream comprises a plurality of video frames;
the first tracking module is used for carrying out tracking identification processing on each video frame in the video data stream according to a preset first tracking algorithm to obtain a first tracking target set; the first tracking algorithm carries out tracking identification processing on the expected tracking target based on the color information of the expected tracking target;
the second tracking module is used for carrying out tracking identification processing on each video frame in the video data stream according to a preset second tracking algorithm to obtain a second tracking target set; the second tracking algorithm carries out tracking identification processing on the expected tracking target based on the size information and the position information of the expected tracking target;
the tracking processing module is used for obtaining a tracking result target set according to the first tracking target set and the second tracking target set; wherein the tracking result target set is matched with the expected tracking target;
and the control processing module is used for obtaining control information according to the position information, in the video frames, of the tracking result targets in the tracking result target set, and sending the control information to the pan-tilt camera device, so that the pan-tilt camera device adjusts working posture information according to the control information to track and shoot the expected tracking target.
7. An intelligent target tracking shooting system, which is characterized by comprising:
the user end is used for sending the expected tracking target; wherein a positioning device is arranged on the expected tracking target;
the differential positioning base station is used for receiving the preliminary positioning information sent by the positioning device, and generating and sending differential positioning information according to the preliminary positioning information;
the server of claim 6;
the pan-tilt camera device is used for receiving the differential positioning information and adjusting working posture information according to the differential positioning information;
the pan-tilt camera device is also used for sending the video data stream of the expected tracking target to the server, and adjusting working posture information according to the control information output by the server, so as to realize tracking shooting of the expected tracking target.
8. The system of claim 7, wherein the pan-tilt camera device comprises:
the pan-tilt camera is used for tracking and shooting the expected tracking target;
the inertial sensor is used for detecting and outputting initial posture information of the pan-tilt camera;
the locator is used for detecting and outputting initial positioning information of the pan-tilt camera;
and the posture controller is used for adjusting the working posture information of the pan-tilt camera according to the control information output by the server.
9. An electronic device, comprising:
at least one memory;
at least one processor;
at least one program;
the program is stored in the memory, and the processor executes the at least one program to implement:
a method as claimed in any one of claims 1 to 5.
10. A computer-readable storage medium storing computer-executable instructions for causing a computer to perform:
a method as claimed in any one of claims 1 to 5.
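Read together, claims 1 to 5 describe a per-window server loop. The sketch below ties them to the helper functions drafted in the description above; the tracker objects, the gimbal interface, and the fixed window length are all assumptions rather than elements of the claims.

```python
def serve_tracking(video_stream, gimbal, color_tracker, size_pos_tracker,
                   window=30):
    """Sketch of the flow of claim 1: track each frame with both algorithms,
    fuse the two target sets per claims 2-5, and emit control information."""
    first_set, second_set = [], []
    for frame in video_stream:                           # receive video frames
        first_set.append(color_tracker.track(frame))     # color-based tracking
        second_set.append(size_pos_tracker.track(frame)) # size/position tracking
        if len(first_set) >= window:                     # preset time threshold
            result_set = select_result_set(first_set, second_set)
            pan, tilt = control_from_position(result_set[-1])
            gimbal.send(pan, tilt)                       # adjust working posture
            first_set, second_set = [], []               # start the next window
```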
CN202210699461.7A 2022-06-20 2022-06-20 Intelligent target tracking shooting method, server, shooting system, equipment and medium Active CN115225815B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210699461.7A CN115225815B (en) 2022-06-20 2022-06-20 Intelligent target tracking shooting method, server, shooting system, equipment and medium

Publications (2)

Publication Number Publication Date
CN115225815A CN115225815A (en) 2022-10-21
CN115225815B (en) 2023-07-25

Family

ID=83608005

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210699461.7A Active CN115225815B (en) 2022-06-20 2022-06-20 Intelligent target tracking shooting method, server, shooting system, equipment and medium

Country Status (1)

Country Link
CN (1) CN115225815B (en)

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116095462B (en) * 2022-12-30 2024-03-01 深圳市浩瀚卓越科技有限公司 Visual field tracking point position determining method, device, equipment, medium and product
CN117241133B (en) * 2023-11-13 2024-02-06 武汉益模科技股份有限公司 Visual work reporting method and system for multi-task simultaneous operation based on non-fixed position

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2016034059A1 (en) * 2014-09-04 2016-03-10 成都理想境界科技有限公司 Target object tracking method based on color-structure features
CN105631446A (en) * 2015-12-17 2016-06-01 天脉聚源(北京)科技有限公司 Method and device for determining interactive corner mark prompt
CN109597431A (en) * 2018-11-05 2019-04-09 视联动力信息技术股份有限公司 A kind of method and device of target following
CN113785558A (en) * 2019-05-07 2021-12-10 掌中加有限公司 Wearable device for detecting events using camera module and wireless communication device

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109460740B (en) * 2018-11-15 2020-08-11 上海埃威航空电子有限公司 Ship identity recognition method based on AIS and video data fusion

Also Published As

Publication number Publication date
CN115225815A (en) 2022-10-21

Similar Documents

Publication Publication Date Title
US11860923B2 (en) Providing a thumbnail image that follows a main image
CN115225815B (en) Intelligent target tracking shooting method, server, shooting system, equipment and medium
US8797353B2 (en) Augmented media message
US9875579B2 (en) Techniques for enhanced accurate pose estimation
CN104335649B (en) Based on the determination smart mobile phone position of images match and the method and system of posture
US6292215B1 (en) Apparatus for referencing and sorting images in a three-dimensional system
EP3273318B1 (en) Autonomous system for collecting moving images by a drone with target tracking and improved target positioning
CN108022302B (en) Stereo display device of Inside-Out space orientation's AR
KR20150013709A (en) A system for mixing or compositing in real-time, computer generated 3d objects and a video feed from a film camera
US9756260B1 (en) Synthetic camera lenses
CN108416285A (en) Rifle ball linkage surveillance method, apparatus and computer readable storage medium
CN110533719B (en) Augmented reality positioning method and device based on environment visual feature point identification technology
WO2022077296A1 (en) Three-dimensional reconstruction method, gimbal load, removable platform and computer-readable storage medium
US11082607B2 (en) Systems and methods for generating composite depth images based on signals from an inertial sensor
US20210217210A1 (en) Augmented reality system and method of displaying an augmented reality image
CN109040525B (en) Image processing method, image processing device, computer readable medium and electronic equipment
Koschorrek et al. A multi-sensor traffic scene dataset with omnidirectional video
CN111083633A (en) Mobile terminal positioning system, establishment method thereof and positioning method of mobile terminal
US20170069133A1 (en) Methods and Systems for Light Field Augmented Reality/Virtual Reality on Mobile Devices
CN110741625B (en) Motion estimation method and photographic equipment
WO2014087166A1 (en) Terrain-topography motion capture system, apparatus and method
CN114185073A (en) Pose display method, device and system
KR101601726B1 (en) Method and system for determining position and attitude of mobile terminal including multiple image acquisition devices
WO2019127320A1 (en) Information processing method and apparatus, cloud processing device, and computer program product
CN108322698A (en) The system and method merged based on multiple-camera and Inertial Measurement Unit

Legal Events

Code Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant