CN115225815A - Target intelligent tracking shooting method, server, shooting system, equipment and medium - Google Patents


Info

Publication number
CN115225815A
CN115225815A
Authority
CN
China
Prior art keywords
tracking
target
information
target set
tracked
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202210699461.7A
Other languages
Chinese (zh)
Other versions
CN115225815B (en)
Inventor
张巍
庞博
陈家鹏
付震
尚阳星
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Southwest University of Science and Technology
Original Assignee
Southwest University of Science and Technology
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Southwest University of Science and Technology
Priority to CN202210699461.7A
Publication of CN115225815A
Application granted
Publication of CN115225815B
Legal status: Active (current)
Anticipated expiration: (not listed)

Classifications

    • G — PHYSICS
    • G06 — COMPUTING; CALCULATING OR COUNTING
    • G06V — IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 — Arrangements for image or video recognition or understanding
    • G06V10/70 — Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/74 — Image or video pattern matching; Proximity measures in feature spaces
    • G06V10/75 — Organisation of the matching processes, e.g. simultaneous or sequential comparisons of image or video features; Coarse-fine approaches, e.g. multi-scale approaches; using context analysis; Selection of dictionaries
    • G — PHYSICS
    • G06 — COMPUTING; CALCULATING OR COUNTING
    • G06V — IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00 — Scenes; Scene-specific elements
    • G06V20/40 — Scenes; Scene-specific elements in video content
    • G06V20/41 — Higher-level, semantic clustering, classification or understanding of video scenes, e.g. detection, labelling or Markovian modelling of sport events or news items

Abstract

The application discloses a target intelligent tracking shooting method, a server, a shooting system, a device, and a medium. The method comprises the following steps: receiving a video data stream of a target to be tracked sent by a pan-tilt camera device; performing tracking recognition on each video frame in the video data stream according to a preset first tracking algorithm to obtain a first tracking target set; performing tracking recognition on each video frame in the video data stream according to a preset second tracking algorithm to obtain a second tracking target set; obtaining a tracking result target set from the first tracking target set and the second tracking target set; and obtaining control information according to the position information, within the video frame, of each tracking result target in the tracking result target set, and sending the control information to the pan-tilt camera device so that it tracks and shoots the desired tracking target. With this target intelligent tracking shooting method, a moving target can be tracked automatically, so that the pan-tilt camera device can shoot the moving target clearly.

Description

Target intelligent tracking shooting method, server, shooting system, equipment and medium
Technical Field
The application relates to the technical field of artificial intelligence, in particular to an intelligent target tracking shooting method, a server, a shooting system, equipment and a medium.
Background
As living standards continue to rise, people's leisure activities have become increasingly diverse, and sports make up a large part of them. People often like to use cameras to record memorable moments in sports. However, recording with a camera is troublesome, and continuously filming usually requires considerable manpower.
In the related art, an automatic tracking camera is used to shoot a moving target automatically. However, existing automatic tracking cameras perform only rough positioning based on GPS and then track the moving target with a wide-angle camera; the motion video shot in this way is blurred, and the complete motion process of the moving target cannot be recorded.
Disclosure of Invention
The present application is directed to solving at least one of the problems in the prior art. To that end, the target intelligent tracking shooting method provided herein automatically tracks a moving target, so that the pan-tilt camera device can shoot the moving target clearly.
The application also provides a server.
The application also provides an intelligent target tracking and shooting system.
The application also provides an electronic device.
The present application also provides a computer-readable storage medium.
According to the first aspect of the application, the target intelligent tracking shooting method is applied to a server, the server is connected with a pan-tilt camera device, and the method comprises the following steps:
receiving a video data stream of a target to be tracked, which is sent by a pan-tilt camera device; wherein the video data stream comprises a plurality of video frames;
tracking and identifying each video frame in the video data stream according to a preset first tracking algorithm to obtain a first tracking target set; the first tracking algorithm carries out tracking identification processing on the target to be tracked based on the color information of the target to be tracked;
tracking and identifying each video frame in the video data stream according to a preset second tracking algorithm to obtain a second tracking target set; the second tracking algorithm tracks and identifies the expected tracking target based on the size information and the position information of the expected tracking target;
obtaining a tracking result target set according to the first tracking target set and the second tracking target set; wherein the tracking result target set is matched with the expected tracking target;
and obtaining control information according to the position information, within the video frame, of each tracking result target in the tracking result target set, and sending the control information to the pan-tilt camera device, so that the pan-tilt camera device adjusts its working attitude information according to the control information to track and shoot the desired tracking target.
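The application does not fix how the control information is derived from the target's in-frame position. A minimal sketch of one common approach — mapping the bounding-box centre's offset from the frame centre to pan/tilt angle corrections via the camera's field of view — might look as follows; the function name and the field-of-view values are illustrative assumptions, not taken from the application:

```python
def pixel_offset_to_pan_tilt(cx, cy, frame_w, frame_h,
                             hfov_deg=60.0, vfov_deg=34.0):
    """Map the tracked target's pixel centre (cx, cy) to pan/tilt
    angle corrections that would re-centre it in the frame."""
    # Normalised offset of the target centre from the frame centre.
    dx = (cx - frame_w / 2) / frame_w
    dy = (cy - frame_h / 2) / frame_h
    # Small-angle approximation: scale the offset by the field of view.
    pan_deg = dx * hfov_deg    # positive: rotate the camera to the right
    tilt_deg = -dy * vfov_deg  # positive: tilt up (image y grows downward)
    return pan_deg, tilt_deg
```

A target exactly at the frame centre yields zero corrections; a target right of centre yields a positive pan command. A real attitude controller would additionally clamp and smooth these commands.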
The target intelligent tracking shooting method provided by the embodiments of the application has at least the following beneficial effects: the video data stream of the desired tracking target shot by the pan-tilt camera device is tracked by a first tracking algorithm and a second tracking algorithm to obtain a first tracking target set and a second tracking target set respectively, and a tracking result target set matched with the desired tracking target is then determined from the two sets; control information is then obtained according to the position information, within the video frame, of each tracking result target in the tracking result target set, and sent to the pan-tilt camera device, which adjusts its working attitude information accordingly to track and shoot the desired tracking target. With this arrangement, intelligent tracking of the desired tracking target is achieved, the pan-tilt camera device shoots it clearly, and the user experience is improved.
According to some embodiments of the present application, obtaining a tracking result target set according to a first tracking target set and a second tracking target set comprises:
calculating the difference value between the position information of each tracking target in the first tracking target set and the position information of each tracking target in the second tracking target set at the same moment within a preset time threshold value to obtain a plurality of position difference value information;
and if the position difference information is greater than or equal to a preset position threshold value, taking the first tracking target set as a tracking result target set.
According to some embodiments of the present application, obtaining a tracking result target set according to the first tracking target set and the second tracking target set further includes:
if the position difference information is smaller than a preset position threshold value, calculating the difference value between the size information of the tracked target in the first tracked target set and the size information of the tracked target in the second tracked target set to obtain size difference value information;
and obtaining a tracking result target set according to the size difference information, a preset size threshold, the first tracking target set and the second tracking target set.
According to some embodiments of the present application, obtaining a tracking result target set according to the size difference information, the preset size threshold, the first tracking target set, and the second tracking target set includes:
and if the size difference information is larger than a preset size threshold, taking the second tracking target set as a tracking result target set.
According to some embodiments of the present application, obtaining a tracking result target set according to the size difference information, the preset size threshold, the first tracking target set, and the second tracking target set, further includes:
and if the size difference information is less than or equal to the size threshold, taking the first tracking target set as a tracking result target set.
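The selection rules in the preceding embodiments can be sketched in code. The sketch below is one illustrative reading of those rules, with a hypothetical `Track` data type and threshold values the application does not specify:

```python
from dataclasses import dataclass

@dataclass
class Track:
    x: float  # bounding-box centre x in the video frame
    y: float  # bounding-box centre y
    w: float  # box width
    h: float  # box height

def fuse_tracks(first: Track, second: Track,
                pos_threshold: float = 50.0,
                size_threshold: float = 0.4) -> Track:
    """Choose between the colour-based tracker's result (first set)
    and the size/position-based tracker's result (second set)."""
    # Difference between the two trackers' positions at the same moment.
    pos_diff = ((first.x - second.x) ** 2 + (first.y - second.y) ** 2) ** 0.5
    if pos_diff >= pos_threshold:
        # Large disagreement: keep the first tracking target set.
        return first
    # Positions agree: compare box sizes (relative area difference).
    size_diff = abs(first.w * first.h - second.w * second.h) / (second.w * second.h)
    if size_diff > size_threshold:
        # The colour tracker mis-adapted to scale: use the second set.
        return second
    return first
```

In practice this fusion runs per frame within the preset time window, and the position and size metrics (Euclidean distance, relative area) are assumptions; the claims only require "difference values" against preset thresholds.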
According to the server of the embodiment of the second aspect of the present application, the server is connected with the pan-tilt camera device, and the server includes:
the receiving module, used for receiving a video data stream of the desired tracking target sent by the pan-tilt camera device; wherein the video data stream comprises a plurality of video frames;
the first tracking module is used for tracking and identifying each video frame in the video data stream according to a preset first tracking algorithm to obtain a first tracking target set; the first tracking algorithm carries out tracking identification processing on the expected tracking target based on the color information of the expected tracking target;
the second tracking module is used for performing tracking identification processing on each video frame in the video data stream according to a preset second tracking algorithm to obtain a second tracking target set; the second tracking algorithm is used for tracking and identifying the expected tracking target based on the size information and the position information of the expected tracking target;
the tracking processing module is used for obtaining a tracking result target set according to the first tracking target set and the second tracking target set; wherein, the tracking result target set is matched with the expected tracking target;
and the control processing module, used for obtaining control information according to the position information, within the video frame, of each tracking result target in the tracking result target set, and sending the control information to the pan-tilt camera device, so that the pan-tilt camera device adjusts its working attitude information according to the control information to track and shoot the desired tracking target.
The server according to the embodiments of the application has at least the following beneficial effects: the video data stream of the desired tracking target shot by the pan-tilt camera device is tracked by a first tracking algorithm and a second tracking algorithm to obtain a first tracking target set and a second tracking target set respectively, and a tracking result target set matched with the desired tracking target is then determined from the two sets; control information is then obtained according to the position information, within the video frame, of each tracking result target in the tracking result target set, and sent to the pan-tilt camera device, which adjusts its working attitude information accordingly to track and shoot the desired tracking target. With this arrangement, intelligent tracking of the desired tracking target is achieved, the pan-tilt camera device shoots it clearly, and the user experience is improved.
The target intelligent tracking shooting system according to the embodiment of the third aspect of the application comprises:
the client is used for sending the expected tracking target; wherein, a positioning device is arranged on the expected tracking target;
the differential positioning base station is used for receiving the primary positioning information sent by the positioning device, and generating and sending differential positioning information according to the primary positioning information;
a server as in an embodiment of the first aspect;
the pan-tilt camera device, used for receiving the differential positioning information and adjusting its working attitude information accordingly;
the pan-tilt camera device is also used for sending a video data stream of the desired tracking target to the server, and for adjusting its working attitude information according to the control information output by the server, so as to track and shoot the desired tracking target.
According to the embodiments of the application, the target intelligent tracking shooting system has at least the following beneficial effects: a user sends the desired tracking target to the server through the user terminal; the server tracks the video data stream of the desired tracking target shot by the pan-tilt camera device with a first tracking algorithm and a second tracking algorithm to obtain a first tracking target set and a second tracking target set respectively, and then determines a tracking result target set matched with the desired tracking target from the two sets; control information is then obtained according to the position information, within the video frame, of each tracking result target in the tracking result target set, and sent to the pan-tilt camera device, which adjusts its working attitude information accordingly to track and shoot the desired tracking target. With this arrangement, intelligent tracking of the desired tracking target is achieved, the pan-tilt camera device shoots it clearly, and the user experience is improved.
According to some embodiments of the present application, a pan-tilt camera apparatus comprises:
the pan-tilt camera is used for carrying out tracking shooting on the target to be tracked;
the inertial sensor is used for detecting and outputting initial attitude information of the pan-tilt camera;
the positioner is used for detecting and outputting initial positioning information of the pan-tilt camera;
and the attitude controller is used for adjusting the working attitude information of the pan-tilt camera according to the control information output by the server.
An electronic device according to an embodiment of a fourth aspect of the present application includes:
at least one memory;
at least one processor;
at least one program;
the program is stored in the memory, and the processor executes at least one program to implement:
a method as in the embodiment of the first aspect.
According to a fifth aspect of the present application, a computer-readable storage medium stores computer-executable instructions for causing a computer to perform:
a method as in the embodiment of the first aspect.
Additional aspects and advantages of the present application will be set forth in part in the description which follows and, in part, will be obvious from the description, or may be learned by practice of the present application.
Drawings
The present application is further described with reference to the following figures and examples, in which:
fig. 1 is a block diagram of a target intelligent tracking shooting system according to an embodiment of the present disclosure;
FIG. 2 is a schematic diagram illustrating a transformation of an inertial coordinate system and a body coordinate system provided in an embodiment of the present application;
fig. 3 is a flowchart of a target intelligent tracking shooting method according to an embodiment of the present application;
FIG. 4 is a flowchart illustrating a specific method of step S800 in FIG. 3;
FIG. 5 is another flowchart illustrating a specific method of step S800 in FIG. 3;
FIG. 6 is a flowchart illustrating a specific method of step S840 in FIG. 5;
FIG. 7 is another flowchart illustrating a specific method of step S840 in FIG. 5;
FIG. 8 is a block diagram of the server shown in FIG. 1;
fig. 9 is a schematic diagram of a hardware structure of an electronic device according to an embodiment of the present application.
Detailed Description
Reference will now be made in detail to the embodiments of the present application, examples of which are illustrated in the accompanying drawings, wherein like reference numerals refer to the same or similar elements or elements having the same or similar functions throughout. The embodiments described below with reference to the drawings are exemplary only for the purpose of explaining the present application and are not to be construed as limiting the present application.
In the description of the present application, it should be understood that the positional descriptions referred to, for example, the directions or positional relationships indicated by upper, lower, front, rear, left, right, etc., are based on the directions or positional relationships shown in the drawings, and are only for convenience of description and simplification of the description, but do not indicate or imply that the device or element referred to must have a specific direction, be constructed and operated in a specific direction, and thus, should not be construed as limiting the present application.
In the description of the present application, "several" means one or more and "a plurality" means two or more; terms such as above, below, and exceeding are understood as excluding the stated number, while terms such as at least and within are understood as including it. Where "first" and "second" are used only to distinguish technical features, they should not be understood as indicating or implying relative importance, the number of the features indicated, or their precedence.
In the description of the present application, unless otherwise specifically limited, terms such as set, installed, connected and the like should be understood broadly, and those skilled in the art can reasonably determine the specific meanings of the above terms in the present application in combination with the specific contents of the technical solutions.
In the description of the present application, reference to the description of "one embodiment", "some embodiments", "illustrative embodiments", "examples", "specific examples", or "some examples", etc., means that a particular feature, structure, material, or characteristic described in connection with the embodiment or example is included in at least one embodiment or example of the present application. In this specification, the schematic representations of the terms used above do not necessarily refer to the same embodiment or example. Furthermore, the particular features, structures, materials, or characteristics described may be combined in any suitable manner in any one or more embodiments or examples.
First, several terms used in the embodiments of the present application are explained:
artificial Intelligence (AI): is a new technical science for researching and developing theories, methods, technologies and application systems for simulating, extending and expanding human intelligence; artificial intelligence is a branch of computer science that attempts to understand the essence of intelligence and produces a new intelligent machine that can react in a manner similar to human intelligence, and research in this field includes robotics, language recognition, image recognition, natural language processing, and expert systems, among others. The artificial intelligence can simulate the information process of human consciousness and thinking. Artificial intelligence is also a theory, method, technique and application system that uses a digital computer or a machine controlled by a digital computer to simulate, extend and expand human intelligence, perceive the environment, acquire knowledge and use the knowledge to obtain the best results.
Real-time kinematic (RTK): RTK real-time differential positioning is a measurement method that can obtain centimetre-level positioning accuracy in real time in the field, improving field-work efficiency. It is a real-time differential GPS technique based on carrier-phase observation and consists of three parts: a base-station receiver, a data link, and a rover receiver. A receiver installed at the base station serves as the reference station and continuously observes the satellites, sending its satellite observation data and station information to the rover in real time over a radio link; the rover's GPS receiver receives both the GPS satellite signals and the data transmitted by the base station, and then computes the rover's three-dimensional coordinates and their precision in real time according to the principle of relative positioning.
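The differential principle that RTK builds on can be shown in a deliberately simplified one-dimensional form: the base station knows its true position, measures an apparent position from the satellites, and the difference is a correction the rover applies to its own measurement. This is only an illustration of the error-cancellation idea; real RTK operates on carrier-phase observations, not on positions:

```python
def differential_correction(base_true, base_measured, rover_measured):
    """Toy 1-D differential positioning: error sources common to both
    receivers (atmospheric delay, clock drift) cancel when the base
    station's known measurement error is subtracted from the rover's
    own measurement."""
    common_error = base_measured - base_true
    return rover_measured - common_error
```

If the base station at true position 100.0 measures 102.5, the shared error is +2.5, and a rover measuring 250.5 is corrected to 248.0.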
Inertial sensor: a sensor that detects and measures acceleration, tilt, shock, vibration, rotation, and multi-degree-of-freedom motion; it is a key component in navigation, orientation, and the control of moving carriers.
Accelerometer: an inertial sensor capable of measuring the acceleration forces acting on an object.
Angular velocity sensor: also known as a gyroscope, it measures the angular velocity of an object. Angular velocity is commonly used for position and attitude control of moving objects, and in other situations where accurate angle measurements are required.
YOLO algorithm (You Only Look Once): an object recognition algorithm whose name means that you only need to look once to identify the objects in a picture. YOLO can quickly identify the class and position of objects of particular types in a picture, but it cannot establish the correspondence of each object between two consecutive pictures and therefore cannot, on its own, continuously track a specific object in a video stream. Changes in target size or in the scene have little influence on YOLO's recognition ability.
Scale-adaptive mean-shift algorithm (ASMS): a visual target tracking algorithm that tracks a target according to its colour features and can adapt to changes in target size.
However, the ASMS algorithm has the following disadvantages:
first, the adaptation effect on the size of the target is not strong, and when the target moves away from or approaches the camera at a fast speed, the size of the target in the video changes rapidly, and at this time, the problem of tracking error occurs only by using the ASMS algorithm.
Second, it is not suitable for long-term tracking: when the background of the target changes substantially, the algorithm easily loses the target.
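To make the colour-feature matching at the heart of mean-shift trackers such as ASMS concrete, the sketch below computes the Bhattacharyya coefficient between two normalised colour histograms — the similarity measure such trackers commonly maximise around the previous target position. This is an illustrative aside, not code from the application:

```python
import math

def bhattacharyya(p, q):
    """Similarity between two normalised colour histograms p and q:
    1.0 for identical distributions, 0.0 for no overlap."""
    return sum(math.sqrt(pi * qi) for pi, qi in zip(p, q))

# A mean-shift style tracker slides a candidate window around the
# previous target position and keeps the window whose histogram
# maximises this coefficient against the stored target model.
```

This also shows why ASMS degrades over long sequences: once the background shifts, the stored colour model no longer matches any nearby window well, and the maximum drifts off the target.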
Sports make up a large part of people's leisure life, and people often like to use cameras to record memorable moments in sports. However, recording with a camera is troublesome, and continuously filming usually requires considerable manpower.
In the related art, an automatic tracking camera is used to shoot a moving target automatically. For example, the Huacheng auto-tracking camera can visually position and shoot indoor scenes to record scenes of daily life; however, it cannot be applied to high-speed motion, and because its vision works well only in well-lit scenes, its performance drops sharply outdoors. The SoloShot3 auto-tracking camera developed abroad is suited to high-speed motion scenes: it performs rough positioning based on GPS and then tracks the moving target with a wide-angle camera. But because GPS positioning is coarse, shooting can only be done at a wide angle, so the moving target is blurred in the video and the user's motion process cannot be recorded completely.
Based on this, embodiments of the present application provide a target intelligent tracking shooting method, server, shooting system, device, and medium that can automatically track a moving target, so that the pan-tilt camera device shoots the moving target clearly and the user experience is improved.
The embodiments of the application provide a target intelligent tracking shooting method, relating to the technical field of artificial intelligence. The method can be applied to a terminal, to a server, or to software running in a terminal or server. In some embodiments, the terminal may be a smartphone, tablet, laptop, desktop computer, smart watch, or the like; the server may be an independent physical server, a server cluster or distributed system composed of multiple physical servers, or a cloud server providing basic cloud computing services such as cloud services, cloud databases, cloud computing, cloud functions, cloud storage, network services, cloud communication, middleware services, domain-name services, security services, content delivery networks (CDN), and big-data and artificial-intelligence platforms; the software may be an application implementing the target intelligent tracking shooting method, but is not limited to the above forms.
Embodiments of the disclosure are operational with numerous general-purpose or special-purpose computing system environments or configurations, for example: personal computers, server computers, hand-held or portable devices, tablet devices, multiprocessor systems, microprocessor-based systems, set-top boxes, programmable consumer electronics, network PCs, minicomputers, mainframe computers, and distributed computing environments that include any of the above systems or devices. The application may be described in the general context of computer-executable instructions, such as program modules, being executed by a computer. Generally, program modules include routines, programs, objects, components, data structures, and so on that perform particular tasks or implement particular abstract data types. The application may also be practiced in distributed computing environments where tasks are performed by remote processing devices linked through a communications network; in such environments, program modules may be located in both local and remote computer storage media, including memory storage devices.
The embodiments of the present application will be further explained with reference to the drawings.
Referring to fig. 1, some embodiments of the present application provide an intelligent target tracking and shooting system, which includes a user terminal 100, a differential positioning base station 200, a server 300, and a pan-tilt camera 400.
The user terminal 100 is used to send the desired tracking target; the desired tracking target is specified by the user through the user terminal, and a positioning device is arranged on it.
In some embodiments, the user terminal 100 includes a display and can show the motion trail and motion picture of the desired tracking target according to the differential positioning information sent by the differential positioning base station 200 and the footage collected by the pan-tilt camera device 400. The user terminal 100 may be a mobile-phone app or other terminal equipment, such as a PC. The desired tracking target carries a positioning device and a data transmission device: the positioning device positions the desired tracking target, and the data transmission device connects to the differential positioning base station 200 for information exchange with it.
It should be noted that the positioning device and the data transmission device may be integrated, for example both arranged on a wearable device worn by the desired tracking target; other arrangements are also possible. The present application is not particularly limited in this respect.
And the differential positioning base station 200 is configured to receive the preliminary positioning information sent by the positioning apparatus, and generate and send differential positioning information according to the preliminary positioning information.
In some embodiments, the differential positioning base station 200 includes a router, a data transmission device, an image transmission device, and an RTK-GPS base station. The router connects to the user terminal 100, so that the user can freely read the captured video and GPS data or view the desired tracking target. The data transmission device connects to the data transmission device of the desired tracking target to exchange information between the target and the differential positioning base station 200. The image transmission device is communicatively connected with the server 300 and pushes the video data stream received from the pan-tilt camera device 400 to the server 300. The RTK-GPS base station generates differential positioning information from the preliminary positioning information sent by the positioning device of the desired tracking target and sends it back to the target. RTK-GPS (or an RTK-BeiDou system) can achieve centimetre-level positioning, whereas ordinary GPS is accurate only to roughly 10 m; using RTK-GPS equipment therefore allows the pan-tilt camera device 400 to track the target accurately.
The server 300 is configured to perform tracking recognition on the target to be tracked, and output control information according to the tracking result to control the pan-tilt camera 400 to perform tracking shooting on the target to be tracked.
The pan-tilt camera device 400 is configured to receive the differential positioning information and adjust its working attitude information accordingly; to send the video data stream of the desired tracking target to the server 300; and to adjust its working attitude information according to the control information output by the server 300, thereby achieving tracking shooting of the desired tracking target.
In the target intelligent tracking shooting system of the embodiment of the application, a user sends a desired tracking target to the server 300 through the user terminal 100; the server 300 processes the video data stream of the desired tracking target captured by the pan-tilt camera device 400 with a first tracking algorithm and a second tracking algorithm to obtain a first tracking target set and a second tracking target set respectively, and then determines a tracking result target set matching the desired tracking target from the two sets; control information is then derived from the position information of the tracking result targets within the video frames and sent to the pan-tilt camera device 400, which adjusts its working attitude information accordingly to track and shoot the desired tracking target. With this arrangement, intelligent tracking of the desired tracking target is achieved, the pan-tilt camera device 400 captures it clearly, and the user experience is improved.
In some embodiments, pan-tilt camera device 400 includes a pan-tilt camera, an inertial sensor, a positioner, and an attitude controller.
The pan-tilt camera is used for tracking and shooting the desired tracking target.
The inertial sensor is used for detecting and outputting the initial attitude information of the pan-tilt camera.
The positioner is used for detecting and outputting the initial positioning information of the pan-tilt camera.
The attitude controller is used for adjusting the working attitude information of the pan-tilt camera according to the control information output by the server 300.
Referring to fig. 2, in some embodiments, the inertial sensors include an accelerometer and a gyroscope. A is the desired tracking target, I is the inertial coordinate system of the earth, and B is the body coordinate system of the pan-tilt camera device. The rotation matrix R between the inertial coordinate system I and the body coordinate system B can be computed from the gyroscope and accelerometer, and the coordinate P_B of the desired tracking target A in the body coordinate system B is then obtained according to formula (1):
P_B = R·p    (1)
In formula (1), R denotes the rotation matrix between the inertial coordinate system I and the body coordinate system B, computed from the gyroscope and accelerometer measurements; p is the coordinate of the desired tracking target A in the inertial coordinate system I, obtained through GPS positioning.
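As a minimal numerical sketch of formula (1) — assuming the rotation matrix R has already been obtained from the gyroscope/accelerometer fusion described above; the `world_to_body` helper name is an assumption of this sketch:

```python
import numpy as np

def world_to_body(R, p):
    """Apply formula (1): P_B = R . p.

    R : 3x3 rotation matrix from the inertial frame I to the body
        frame B (in the system above it would come from fusing
        gyroscope and accelerometer readings).
    p : coordinate of target A in the inertial frame I (e.g. a
        GPS fix converted to a local metric frame).
    """
    R = np.asarray(R, dtype=float)
    p = np.asarray(p, dtype=float)
    return R @ p

# Example: a 90-degree rotation about the z-axis maps the inertial
# x-axis onto the body y-axis.
Rz = np.array([[0.0, -1.0, 0.0],
               [1.0,  0.0, 0.0],
               [0.0,  0.0, 1.0]])
p_I = np.array([1.0, 0.0, 0.0])
p_B = world_to_body(Rz, p_I)   # -> [0., 1., 0.]
```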
Tracking shooting of a moving target requires video calibration. In the related art, for lack of initial attitude information of the pan-tilt camera device, the installation position and initial attitude of the device must first be computed from the relative position of the moving target before tracking can begin. By introducing the inertial sensor and the positioner, the initial positioning information and initial attitude information of the pan-tilt camera device can be obtained quickly, so that initialization is simplified or eliminated and tracking shooting of a moving target can be carried out directly.
Based on the target intelligent tracking shooting system shown in fig. 1, referring to fig. 3, some embodiments of the present application provide a target intelligent tracking shooting method, which is applied to a server, and the server is connected with a pan-tilt camera, and the method includes step S500, step S600, step S700, step S800, and step S900. It should be understood that the target intelligent tracking shooting method of the embodiment of the present application includes, but is not limited to, these five steps, which are described in detail below.
Step S500, receiving a video data stream of a target to be tracked, which is sent by a pan-tilt camera device; wherein the video data stream comprises a plurality of video frames.
In step S500, the desired tracking target is a target designated by the user; it may be a moving object, an athlete, or another target, and the application is not particularly limited as long as the target carries a positioning device capable of outputting preliminary positioning information. The embodiments of the present application describe the target intelligent tracking shooting method in detail with a skier as the desired tracking target.
The video data stream includes a plurality of video frames, each of which is an image. The video data stream is shot by the pan-tilt camera device in real time for the desired tracking target; if the desired tracking target is a skier, the video data stream is footage of the skier in motion.
Step S600, tracking and identifying each video frame in a video data stream according to a preset first tracking algorithm to obtain a first tracking target set; the first tracking algorithm performs tracking identification processing on the expected tracking target based on the color information of the expected tracking target.
Step S700, tracking and identifying each video frame in the video data stream according to a preset second tracking algorithm to obtain a second tracking target set; and the second tracking algorithm carries out tracking and identification processing on the expected tracking target based on the size information and the position information of the expected tracking target.
Step S800, obtaining a tracking result target set according to the first tracking target set and the second tracking target set; wherein the tracking result target set matches the desired tracking target.
In steps S600 to S800, the first tracking algorithm may adopt the ASMS algorithm and the second tracking algorithm may adopt the YOLO algorithm; the first tracking algorithm may also adopt other visual target tracking algorithms, and the second tracking algorithm may also adopt other object recognition algorithms. The target intelligent tracking shooting method of the present application is described below taking the ASMS algorithm and the YOLO algorithm as examples.
The ASMS algorithm tracks a visual target based on its color characteristics, but it adapts poorly to changes in target size: when the target moves away from or toward the pan-tilt camera at high speed, its size in the video changes rapidly, and tracking with the ASMS algorithm alone then easily produces tracking errors. Moreover, the ASMS algorithm is not suited to long-term tracking; when the background of the target changes substantially, the ASMS algorithm easily loses the target, so it is not suited to scenes where the target moves from far to near. In the application, the desired tracking target is a skier; it can be understood that in a skiing scene, because of the optical zoom of the pan-tilt camera and changes in light intensity, the color of an object in the video readily changes, so the ASMS algorithm can be used to track the skier. Used alone, however, the ASMS algorithm is likely to lose track of the skier.
The YOLO algorithm can accurately identify the size and position of an object, and size or scene changes have little influence on these estimates. However, the YOLO algorithm cannot determine the correspondence between objects in two consecutive frames and therefore cannot continuously track a specific object in a video stream. In the present application the desired tracking target is a skier, so for the YOLO algorithm only human bodies need to be identified to support the tracking task.
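The YOLO-side step described above — keeping only human detections from each frame — can be sketched as follows. The detection tuple layout, the `person_detections` helper, and the COCO convention that class id 0 is "person" are assumptions of this sketch, not taken from the patent:

```python
# Each detection: (class_id, confidence, (x, y, w, h)) in pixels.
# In COCO-trained YOLO models class id 0 is "person"; both the tuple
# layout and that id are assumptions for this sketch.
PERSON_CLASS_ID = 0

def person_detections(detections, min_conf=0.5):
    """Keep only confident person-class boxes, as the tracking task
    above requires (only skiers/humans need to be found)."""
    return [d for d in detections
            if d[0] == PERSON_CLASS_ID and d[1] >= min_conf]

frame_dets = [
    (0, 0.91, (420, 180, 60, 140)),   # skier
    (0, 0.32, (700, 300, 40, 90)),    # low-confidence person: dropped
    (30, 0.88, (100, 500, 120, 30)),  # skis (class 30 in COCO): dropped
]
kept = person_detections(frame_dets)  # -> only the skier box remains
```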
Based on the advantages and disadvantages of the YOLO algorithm and the ASMS algorithm, the intelligent tracking of the skiers is realized by combining the two algorithms, and the specific process is as follows:
Each video frame in the video data stream is tracked and identified with the ASMS algorithm to obtain a first tracking target set; each video frame is tracked and identified with the YOLO algorithm to obtain a second tracking target set; a tracking result target set is then obtained from the first and second tracking target sets. With this arrangement, the desired tracking target can be located accurately and quickly, intelligent tracking of the desired tracking target is achieved, and target loss is avoided.
Step S900, obtaining control information according to the position information of the tracking result targets in the tracking result target set within the video frames, and sending the control information to the pan-tilt camera device, so that the pan-tilt camera device adjusts its working attitude information according to the control information to track and shoot the desired tracking target.
In step S900, the main subject generally needs to be kept in the middle of the field of view during shooting. By evaluating the position information of the tracking result targets within the video frames, control information is output in real time to adjust the working attitude of the pan-tilt camera, thereby achieving tracking shooting of the desired tracking target. For example, in skiing, if the skier drifts too far toward the left edge of the field of view, the output control information directs the pan-tilt camera to rotate to the left so that the skier returns to the center of the view, avoiding target loss; this also produces better-framed video and improves the user experience.
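The re-centering logic described above can be sketched as a simple proportional rule; the dead zone, the [-1, 1] command range, and the `control_from_position` helper are illustrative choices, not the patent's actual control law:

```python
def control_from_position(cx, cy, frame_w, frame_h, dead_zone=0.1):
    """Turn the tracked target's pixel position into pan/tilt
    commands that re-center it.

    Returns (pan, tilt) in [-1, 1]; 0 means the target is inside the
    central dead zone, negative pan means rotate left. The dead zone
    and the proportional mapping are illustrative assumptions.
    """
    # Normalized offset from the frame center, each in [-1, 1].
    ex = (cx - frame_w / 2) / (frame_w / 2)
    ey = (cy - frame_h / 2) / (frame_h / 2)
    pan = 0.0 if abs(ex) < dead_zone else ex
    tilt = 0.0 if abs(ey) < dead_zone else ey
    return pan, tilt

# Skier near the left edge of a 1920x1080 frame: rotate left, no tilt.
pan, tilt = control_from_position(200, 540, 1920, 1080)
```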
According to the target intelligent tracking shooting method, the video data stream of the desired tracking target shot by the pan-tilt camera device is processed with the first tracking algorithm and the second tracking algorithm to obtain a first tracking target set and a second tracking target set respectively, and a tracking result target set matching the desired tracking target is then determined from the two sets; control information is then derived from the position information of the tracking result targets within the video frames and sent to the pan-tilt camera device, which adjusts its working attitude information accordingly to track and shoot the desired tracking target. With this arrangement, intelligent tracking of the desired tracking target is achieved, the pan-tilt camera device captures it clearly, and the user experience is improved.
Referring to fig. 4, in some embodiments of the present application, step S800 includes step S810 and step S820. It is understood that step S800 includes, but is not limited to, step S810 and step S820, which are described in detail below in conjunction with fig. 4.
Step S810, calculating a difference between the position information of each tracked target in the first tracked target set and the position information of each tracked target in the second tracked target set at the same time within a preset time threshold, to obtain a plurality of position difference information.
In step S820, if the position difference information is greater than or equal to the preset position threshold, the first tracking target set is used as the tracking result target set.
In steps S810 and S820, the difference between the position information of each tracked target in the first tracked target set and that of each tracked target in the second tracked target set at the same instants over a period of time is computed, to determine whether the target tracked by the ASMS algorithm and the target tracked by the YOLO algorithm are close to each other. If the position difference information is greater than or equal to the preset position threshold, the two are not close; in this case, the first tracking target set obtained by the ASMS algorithm serves as the tracking result target set for that period.
Referring to fig. 5, in some embodiments of the present application, step S800 further includes step S830 and step S840. It is understood that step S800 includes, but is not limited to, step S830 and step S840. These two steps are described in detail below with reference to fig. 5.
In step S830, if the position difference information is smaller than the preset position threshold, a difference between the size information of the tracked target in the first tracked target set and the size information of the tracked target in the second tracked target set is calculated to obtain size difference information.
Step 840, a tracking result target set is obtained according to the size difference information, the preset size threshold, the first tracking target set and the second tracking target set.
In steps S830 and S840, in a period of time, when the position difference information is smaller than the preset position threshold, it indicates that the target obtained by the ASMS algorithm and the target obtained by the YOLO algorithm are closer to each other, and the target tracks identified by the two are synchronous, at this time, the difference between the size information of the tracked target in the first tracked target set and the size information of the tracked target in the second tracked target set is calculated to obtain size difference information, and the tracking result target set is obtained according to the size difference information, the preset size threshold, the first tracked target set and the second tracked target set.
Referring to fig. 6, in some embodiments, step S840 includes step S841, it being understood that step S840 includes, but is not limited to, step S841, which is described in detail below in connection with fig. 6.
In step S841, if the size difference information is greater than the preset size threshold, the second tracking target set is used as the tracking result target set.
Over a period of time, when every piece of position difference information is smaller than the preset position threshold, the target tracked by the ASMS algorithm and the target tracked by the YOLO algorithm are close to each other and their identified tracks are synchronous. The object identified by the YOLO algorithm is then taken to be the target currently tracked by the ASMS algorithm. Since the YOLO algorithm identifies the size and position of an object more accurately, if the size difference information between the object identified by the YOLO algorithm and the target tracked by the ASMS algorithm is greater than the preset size threshold, the ASMS tracking result is updated with the YOLO result; that is, the second tracking target set serves as the tracking result target set for that period.
Referring to fig. 7, in some embodiments of the present application, step S840 further includes step S842, it should be understood that step S840 includes, but is not limited to, step S842, which is described in detail below in conjunction with fig. 7.
In step S842, if the size difference information is less than or equal to the size threshold, the first tracking target set is used as the tracking result target set.
Over a period of time, when every piece of position difference information is smaller than the preset position threshold, the target tracked by the ASMS algorithm and the target tracked by the YOLO algorithm are close to each other and their identified tracks are synchronous. The object identified by the YOLO algorithm is then taken to be the target currently tracked by the ASMS algorithm; if the size difference information is less than or equal to the size threshold, the two results are essentially the same and can be regarded as consistent, so the first tracking target set can serve as the tracking result target set for that period. Of course, since the target identified by the ASMS algorithm is consistent with that identified by the YOLO algorithm, the second tracking target set may equally serve as the tracking result target set.
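The decision rule of steps S810 through S842 can be collected into one function. The box layout (time-aligned `(x, y, w, h)` tuples) and the max-based difference measures below are assumptions of this sketch, not the claimed implementation:

```python
def fuse_tracks(asms_track, yolo_track, pos_thresh, size_thresh):
    """Sketch of steps S810-S842: compare the per-frame positions of
    the ASMS and YOLO tracks over a time window; if any frame
    disagrees beyond pos_thresh, keep the ASMS result; otherwise
    compare box sizes and switch to YOLO only when sizes diverge.

    Each track is a list of (x, y, w, h) boxes, one per frame and
    time-aligned; the tuple layout is an assumption of this sketch.
    """
    # S810: position difference at the same instants.
    for (x1, y1, w1, h1), (x2, y2, w2, h2) in zip(asms_track, yolo_track):
        if max(abs(x1 - x2), abs(y1 - y2)) >= pos_thresh:
            # S820: tracks not close -> keep the ASMS result.
            return asms_track
    # S830: positions agree; compare sizes.
    for (x1, y1, w1, h1), (x2, y2, w2, h2) in zip(asms_track, yolo_track):
        if max(abs(w1 - w2), abs(h1 - h2)) > size_thresh:
            # S841: sizes diverge -> adopt the YOLO result, since
            # YOLO estimates object size more reliably.
            return yolo_track
    # S842: results essentially agree -> keep the ASMS result.
    return asms_track

asms = [(100, 100, 40, 80), (105, 102, 40, 80)]
yolo = [(102, 101, 60, 120), (107, 103, 60, 120)]
# Positions agree but the boxes differ in size -> YOLO result wins.
result = fuse_tracks(asms, yolo, pos_thresh=20, size_thresh=10)
```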
Referring to fig. 8, some embodiments of the present application further provide a server 300, where the server 300 is connected to the pan-tilt camera, and the server 300 includes a receiving module 310, a first tracking module 320, a second tracking module 330, a tracking processing module 340, and a control processing module 350.
A receiving module 310, configured to receive a video data stream of an expected tracking target sent by a pan-tilt camera; wherein the video data stream comprises a plurality of video frames.
The first tracking module 320 is configured to perform tracking identification processing on each video frame in the video data stream according to a preset first tracking algorithm to obtain a first tracking target set; and the first tracking algorithm carries out tracking identification processing on the target to be tracked based on the color information of the target to be tracked.
The second tracking module 330 is configured to perform tracking identification processing on each video frame in the video data stream according to a preset second tracking algorithm to obtain a second tracking target set; and the second tracking algorithm carries out tracking identification processing on the expected tracking target based on the size information and the position information of the expected tracking target.
A tracking processing module 340, configured to obtain a tracking result target set according to the first tracking target set and the second tracking target set; wherein the tracking result target set matches the desired tracking target.
And the control processing module 350 is configured to obtain control information according to the position information of the tracking result target in the video frame, and send the control information to the pan-tilt camera device, so that the pan-tilt camera device adjusts the working posture information according to the control information to perform tracking shooting on the desired tracking target.
The server 300 of the embodiment of the application processes the video data stream of the desired tracking target shot by the pan-tilt camera with the first tracking algorithm and the second tracking algorithm to obtain a first tracking target set and a second tracking target set respectively, and then determines a tracking result target set matching the desired tracking target from the two sets; control information is then derived from the position information of the tracking result targets within the video frames and sent to the pan-tilt camera device, which adjusts its working attitude information accordingly to track and shoot the desired tracking target. With this arrangement, intelligent tracking of the desired tracking target is achieved, the pan-tilt camera device captures it clearly, and the user experience is improved.
It should be noted that the server of the present embodiment corresponds to the foregoing target intelligent tracking shooting method, and for a specific tracking method, reference is made to the foregoing target intelligent tracking shooting method, which is not described herein again.
An embodiment of the present disclosure further provides a computer device, including:
at least one memory;
at least one processor;
at least one program;
The programs are stored in the memory, and the processor executes at least one of them to implement the above-described target intelligent tracking shooting method. The computer device can be any intelligent terminal, including a mobile phone, a tablet computer, a Personal Digital Assistant (PDA), a vehicle-mounted computer, and the like.
The computer device according to the embodiment of the present application is described in detail below with reference to fig. 9.
Referring to fig. 9, fig. 9 illustrates a hardware configuration of a computer apparatus according to another embodiment, the computer apparatus including:
the processor 1000 may be implemented by a general Central Processing Unit (CPU), a microprocessor, an Application Specific Integrated Circuit (ASIC), or one or more Integrated circuits, and is configured to execute related programs to implement the technical solution provided by the embodiments of the present disclosure;
the Memory 1100 may be implemented in the form of a Read Only Memory (ROM), a static storage device, a dynamic storage device, or a Random Access Memory (RAM). The memory 1100 may store an operating system and other application programs, and when the technical solution provided by the embodiments of the present disclosure is implemented by software or firmware, related program codes are stored in the memory 1100, and the processor 1000 calls the target intelligent tracking shooting method of the embodiments of the present disclosure;
an input/output interface 1200 for implementing information input and output;
the communication interface 1300 is configured to implement communication interaction between the device and other devices, and may implement communication in a wired manner (e.g., USB, network cable, etc.) or in a wireless manner (e.g., mobile network, WIFI, bluetooth, etc.);
a bus 1400 that transfers information between various components of the device (e.g., the processor 1000, the memory 1100, the input/output interface 1200, and the communication interface 1300);
wherein the processor 1000, the memory 1100, the input/output interface 1200 and the communication interface 1300 are communicatively connected to each other within the device via a bus 1400.
The embodiment of the present disclosure also provides a storage medium, which is a computer-readable storage medium, and the computer-readable storage medium stores computer-executable instructions, which are used to enable a computer to execute the above target intelligent tracking shooting method.
The memory, which is a non-transitory computer readable storage medium, may be used to store non-transitory software programs as well as non-transitory computer executable programs. Further, the memory may include high speed random access memory, and may also include non-transitory memory, such as at least one disk storage device, flash memory device, or other non-transitory solid state storage device. In some embodiments, the memory optionally includes memory located remotely from the processor, and these remote memories may be connected to the processor through a network. Examples of such networks include, but are not limited to, the internet, intranets, local area networks, mobile communication networks, and combinations thereof.
The embodiments described in the embodiments of the present disclosure are for more clearly illustrating the technical solutions of the embodiments of the present disclosure, and do not constitute a limitation to the technical solutions provided in the embodiments of the present disclosure, and it is obvious to those skilled in the art that the technical solutions provided in the embodiments of the present disclosure are also applicable to similar technical problems with the evolution of technology and the emergence of new application scenarios.
Those skilled in the art will appreciate that the solutions shown in the figures are not meant to limit embodiments of the present disclosure, and may include more or fewer steps than those shown, or some of the steps may be combined, or different steps.
The above-described embodiments of the apparatus are merely illustrative, wherein the units illustrated as separate components may or may not be physically separate, i.e. may be located in one place, or may also be distributed over a plurality of network elements. Some or all of the modules may be selected according to actual needs to achieve the purpose of the solution of the present embodiment.
It will be understood by those of ordinary skill in the art that all or some of the steps of the methods, systems, and functional modules/units in the devices disclosed above may be implemented as software, firmware, hardware, and suitable combinations thereof.
The terms "first," "second," "third," "fourth," and the like in the description of the application and the above-described figures, if any, are used for distinguishing between similar elements and not necessarily for describing a particular sequential or chronological order. It is to be understood that the data so used is interchangeable under appropriate circumstances such that the embodiments of the application described herein are capable of operation in sequences other than those illustrated or described herein. Moreover, the terms "comprises," "comprising," and "having," and any variations thereof, are intended to cover a non-exclusive inclusion, such that a process, method, system, article, or apparatus that comprises a list of steps or elements is not necessarily limited to those steps or elements expressly listed, but may include other steps or elements not expressly listed or inherent to such process, method, article, or apparatus.
It should be understood that, in this application, "at least one" means one or more, "a plurality" means two or more. "and/or" is used to describe the association relationship of the associated object, indicating that there may be three relationships, for example, "a and/or B" may indicate: only A, only B and both A and B are present, wherein A and B may be singular or plural. The character "/" generally indicates that the former and latter associated objects are in an "or" relationship. "at least one of the following" or similar expressions refer to any combination of these items, including any combination of single item(s) or plural items. For example, at least one (one) of a, b, or c, may represent: a, b, c, "a and b", "a and c", "b and c", or "a and b and c", wherein a, b and c may be single or plural.
In the several embodiments provided in the present application, it should be understood that the disclosed apparatus and method may be implemented in other ways. For example, the above-described apparatus embodiments are merely illustrative, and for example, a division of a unit is merely a logical division, and an actual implementation may have another division, for example, a plurality of units or components may be combined or integrated into another system, or some features may be omitted, or not executed. In addition, the shown or discussed mutual coupling or direct coupling or communication connection may be an indirect coupling or communication connection through some interfaces, devices or units, and may be in an electrical, mechanical or other form.
The units described as separate parts may or may not be physically separate, and parts displayed as units may or may not be physical units, may be located in one position, or may be distributed on a plurality of network units. Some or all of the units can be selected according to actual needs to achieve the purpose of the solution of the embodiment.
In addition, functional units in the embodiments of the present application may be integrated into one processing unit, or each unit may exist alone physically, or two or more units are integrated into one unit. The integrated unit can be realized in a form of hardware, and can also be realized in a form of a software functional unit.
The integrated unit, if implemented in the form of a software functional unit and sold or used as a stand-alone product, may be stored in a computer readable storage medium. Based on such understanding, the technical solution of the present application may be substantially implemented or contributed to by the prior art, or all or part of the technical solution may be embodied in a software product, which is stored in a storage medium and includes multiple instructions for causing a computer device (which may be a personal computer, a server, or a network device) to execute all or part of the steps of the method of the embodiments of the present application. And the aforementioned storage medium includes: various media capable of storing programs, such as a usb disk, a removable hard disk, a Read-Only Memory (ROM), a Random Access Memory (RAM), a magnetic disk, or an optical disk.
The preferred embodiments of the disclosed embodiments have been described above with reference to the accompanying drawings, which are not intended to limit the scope of the embodiments of the disclosure. Any modifications, equivalents and improvements within the scope and spirit of the embodiments of the present disclosure should be considered within the scope of the claims of the embodiments of the present disclosure by those skilled in the art.

Claims (10)

1. An intelligent target tracking shooting method, applied to a server, the server being connected with a pan-tilt camera device, the method comprising the following steps:
receiving a video data stream of a target to be tracked, which is sent by the pan-tilt camera device; wherein the video data stream comprises a plurality of video frames;
tracking and identifying each video frame in the video data stream according to a preset first tracking algorithm to obtain a first tracking target set; wherein the first tracking algorithm performs tracking identification processing on the expected tracking target based on the color information of the expected tracking target;
tracking and identifying each video frame in the video data stream according to a preset second tracking algorithm to obtain a second tracking target set; the second tracking algorithm carries out tracking and identification processing on the expected tracking target based on the size information and the position information of the expected tracking target;
obtaining a tracking result target set according to the first tracking target set and the second tracking target set; wherein the tracking result target set matches the desired tracking target;
and obtaining control information according to the position information of the tracking result target in the tracking result target set in the video frame, and sending the control information to the pan-tilt camera device, so that the pan-tilt camera device adjusts the working attitude information according to the control information, and the desired tracking target is tracked and shot.
2. The method according to claim 1, wherein the obtaining a tracking result target set according to the first tracking target set and the second tracking target set comprises:
calculating, within a preset time threshold, differences between the position information of each tracking target in the first tracking target set and the position information of the corresponding tracking target in the second tracking target set at the same moment, to obtain a plurality of pieces of position difference information;
if the position difference information is greater than or equal to a preset position threshold, taking the first tracking target set as the tracking result target set.
3. The method according to claim 2, wherein the obtaining a tracking result target set according to the first tracking target set and the second tracking target set further comprises:
if the position difference information is smaller than the preset position threshold, calculating differences between the size information of the tracking targets in the first tracking target set and the size information of the tracking targets in the second tracking target set to obtain size difference information;
obtaining the tracking result target set according to the size difference information, a preset size threshold, the first tracking target set and the second tracking target set.
4. The method according to claim 3, wherein obtaining a tracking result target set according to the size difference information, a preset size threshold, the first tracking target set and the second tracking target set comprises:
if the size difference information is greater than the size threshold, taking the second tracking target set as the tracking result target set.
5. The method according to claim 3 or 4, wherein the obtaining a tracking result target set according to the size difference information, a preset size threshold, the first tracking target set and the second tracking target set further comprises:
if the size difference information is less than or equal to the size threshold, taking the first tracking target set as the tracking result target set.
6. A server, wherein the server is connected with a pan-tilt camera device, and the server comprises:
a receiving module, configured to receive a video data stream of an expected tracking target sent by the pan-tilt camera device; wherein the video data stream comprises a plurality of video frames;
a first tracking module, configured to perform tracking identification processing on each video frame in the video data stream according to a preset first tracking algorithm to obtain a first tracking target set; wherein the first tracking algorithm tracks and identifies the expected tracking target based on color information of the expected tracking target;
a second tracking module, configured to perform tracking identification processing on each video frame in the video data stream according to a preset second tracking algorithm to obtain a second tracking target set; wherein the second tracking algorithm tracks and identifies the expected tracking target based on size information and position information of the expected tracking target;
a tracking processing module, configured to obtain a tracking result target set according to the first tracking target set and the second tracking target set; wherein the tracking result target set matches the expected tracking target;
a control processing module, configured to obtain control information according to position information, in the video frames, of each tracking result target in the tracking result target set, and to send the control information to the pan-tilt camera device, so that the pan-tilt camera device adjusts its working attitude information according to the control information to track and shoot the expected tracking target.
7. An intelligent target tracking shooting system, comprising:
a client, configured to send an expected tracking target; wherein a positioning device is arranged on the expected tracking target;
a differential positioning base station, configured to receive initial positioning information sent by the positioning device, and to generate and send differential positioning information according to the initial positioning information;
the server of claim 6; and
a pan-tilt camera device, configured to receive the differential positioning information and adjust its working attitude information accordingly;
wherein the pan-tilt camera device is further configured to send the video data stream of the expected tracking target to the server, and to adjust its working attitude information according to the control information output by the server, so as to track and shoot the expected tracking target.
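Claim 7's differential positioning step can be illustrated with a minimal sketch: a base station at a surveyed position observes the error in its own fix and applies the same correction to the rover (target) fix. All class, method and field names here are assumptions, not from the patent, and real RTK/DGNSS corrections are considerably more involved:

```python
class DifferentialBaseStation:
    """Toy model of the claimed differential positioning base station."""

    def __init__(self, known_position):
        self.known_position = known_position  # surveyed base-station coordinates

    def correction(self, raw_fix):
        # The error observed at the known point is applied to the
        # rover's (target-mounted positioning device's) raw fix.
        dx = self.known_position[0] - raw_fix["base_measured"][0]
        dy = self.known_position[1] - raw_fix["base_measured"][1]
        return (raw_fix["rover"][0] + dx, raw_fix["rover"][1] + dy)
```

The corrected coordinates are what the pan-tilt camera device would use, alongside the server's control information, to point at the target.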
8. The system of claim 7, wherein the pan-tilt camera device comprises:
a pan-tilt camera, configured to track and shoot the expected tracking target;
an inertial sensor, configured to detect and output initial attitude information of the pan-tilt camera;
a positioner, configured to detect and output initial positioning information of the pan-tilt camera;
an attitude controller, configured to adjust the working attitude information of the pan-tilt camera according to the control information output by the server.
9. An electronic device, comprising:
at least one memory;
at least one processor;
at least one program;
wherein the at least one program is stored in the at least one memory, and the at least one processor executes the at least one program to implement:
the method of any one of claims 1 to 5.
10. A computer-readable storage medium having computer-executable instructions stored thereon for causing a computer to perform:
the method of any one of claims 1 to 5.
CN202210699461.7A 2022-06-20 2022-06-20 Intelligent target tracking shooting method, server, shooting system, equipment and medium Active CN115225815B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210699461.7A CN115225815B (en) 2022-06-20 2022-06-20 Intelligent target tracking shooting method, server, shooting system, equipment and medium


Publications (2)

Publication Number Publication Date
CN115225815A 2022-10-21
CN115225815B 2023-07-25

Family

ID=83608005

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210699461.7A Active CN115225815B (en) 2022-06-20 2022-06-20 Intelligent target tracking shooting method, server, shooting system, equipment and medium

Country Status (1)

Country Link
CN (1) CN115225815B (en)


Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2016034059A1 (en) * 2014-09-04 2016-03-10 成都理想境界科技有限公司 Target object tracking method based on color-structure features
CN105631446A (en) * 2015-12-17 2016-06-01 天脉聚源(北京)科技有限公司 Method and device for determining interactive corner mark prompt
CN109597431A (en) * 2018-11-05 2019-04-09 视联动力信息技术股份有限公司 A kind of method and device of target following
US20210073573A1 (en) * 2018-11-15 2021-03-11 Shanghai Advanced Avionics Co., Ltd. Ship identity recognition method based on fusion of ais data and video data
CN113785558A (en) * 2019-05-07 2021-12-10 掌中加有限公司 Wearable device for detecting events using camera module and wireless communication device


Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116095462A (en) * 2022-12-30 2023-05-09 深圳市浩瀚卓越科技有限公司 Visual field tracking point position determining method, device, equipment, medium and product
CN116095462B (en) * 2022-12-30 2024-03-01 深圳市浩瀚卓越科技有限公司 Visual field tracking point position determining method, device, equipment, medium and product
CN117241133A (en) * 2023-11-13 2023-12-15 武汉益模科技股份有限公司 Visual work reporting method and system for multi-task simultaneous operation based on non-fixed position
CN117241133B (en) * 2023-11-13 2024-02-06 武汉益模科技股份有限公司 Visual work reporting method and system for multi-task simultaneous operation based on non-fixed position



Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant