CN117979168A - Intelligent camera management system for aerobics competition video shooting

Info

Publication number: CN117979168A
Authority: CN (China)
Prior art keywords: space, shooting, camera, image, performer
Legal status: Granted; Active
Application number: CN202410384085.1A
Other languages: Chinese (zh)
Other versions: CN117979168B
Inventor: 马秀玉
Original and current assignee: Jiamusi University
Application filed by Jiamusi University; priority to CN202410384085.1A; granted and published as CN117979168B

Classifications

    • Section H (Electricity); class H04 (Electric communication technique); subclass H04N (Pictorial communication, e.g. television)
    • H04N 23/695: Control of camera direction for changing a field of view, e.g. pan, tilt or based on tracking of objects (under H04N 23/00 Cameras or camera modules comprising electronic image sensors; control thereof, and H04N 23/60 Control of cameras or camera modules)
    • H04N 7/181: Closed-circuit television [CCTV] systems, i.e. systems in which the video signal is not broadcast, for receiving images from a plurality of remote sources (under H04N 7/00 Television systems and H04N 7/18)
    • H04N 7/185: Closed-circuit television [CCTV] systems for receiving images from a single remote source, from a mobile camera, e.g. for remote control (under H04N 7/00 Television systems, H04N 7/18 and H04N 7/183)

Abstract

The invention belongs to the technical field of video shooting equipment and discloses an intelligent camera management system for aerobics competition video shooting, comprising: a first shooting end arranged on the ground, which has a field of view and acquires a first image; a second shooting end suspended in a first space, which acquires a second image; a central control end, which receives and displays the first image and the second image, acquires the boundary of the stage, and defines, according to that boundary, a second space around the stage in which the second shooting end may move; and a guide end worn by the performer, which guides the first shooting end to follow the performer within the second space. The system has the following advantages: the performer's movements are tracked in real time through the guide end, and the angles of the unmanned aerial vehicle and the ground camera are adjusted automatically, so that highlight moments are captured efficiently and safely. The system reduces manual operation, prevents spatial conflicts through a bionic model, improves video quality, adapts to a variety of performances, and is extensible.

Description

Intelligent camera management system for aerobics competition video shooting
Technical Field
The invention relates to the technical field of video shooting, and in particular to an intelligent camera management system for aerobics competition video shooting.
Background
With the continued advancement of digital media technology, video capture has become one of the important means of recording and disseminating sporting events. In aerobics competitions especially, video is used not only for live broadcast but also for training analysis, technical improvement, and post-competition review. Traditionally, video capture of aerobics competitions has relied primarily on fixed and handheld cameras operated by a ground camera team, which can capture the highlights of the competition from multiple angles and positions. Although this conventional approach provides stable picture quality and covers several viewing angles, it is limited in viewing-angle coverage, shooting mobility, and flexibility.
In recent years, the introduction of drone photography has brought new dimensions and new challenges to recording aerobics competitions. Drone photography provides an unprecedented aerial perspective and greatly enriches the content and appeal of the video. By moving freely in the air, a drone can capture perspectives that are difficult to achieve with conventional ground cameras, such as an athlete's aerial skills, changes in team formation, and a full view of the competition venue. This overhead perspective offers the audience a new visual experience and a more complete understanding of the competition.
However, drone photography also introduces a new set of challenges. The most significant problem is that, if the drone's flight path is not properly planned, it may fly directly into the line of sight of a ground camera, causing a moving object to appear suddenly in the picture; this not only distracts the audience from the competition but may also degrade the quality of the live broadcast. In addition, the buzzing noise produced by a drone flying above the venue may disturb athletes and spectators.
Therefore, an intelligent camera management system for aerobics competition video shooting is provided to solve the above problems.
Disclosure of Invention
The invention aims to provide an intelligent camera management system for aerobics competition video shooting, so as to solve, or at least improve, at least one of the above technical problems.
In view of the foregoing, a first aspect of the present invention is to provide an intelligent camera management system for aerobics competition video shooting.
The first aspect of the present invention provides an intelligent camera management system for aerobics competition video shooting, comprising: a first shooting end, movably arranged on the ground away from the stage, the first shooting end having a field of view directed toward the performer on the stage so as to acquire a first image; a second shooting end, suspended in a first space around the performer and outside the field of view, the second shooting end being used to acquire a second image of the performer that differs from the first image; a central control end, used to receive and display the first image and the second image, the central control end also being used to acquire the boundary of the stage and, according to the boundary, to define a second space around the stage in which the second shooting end may move; and a guide end, arranged on the performer, the guide end being used to guide the first shooting end to move along with the performer in the second space; wherein the first space moves along with the guide end, and the first space is located inside the second space.
In any of the above technical solutions, the first shooting end includes a moving mechanism and a first camera installed on the moving mechanism; the first camera acquires the first image in the field of view; the first camera is further used for acquiring a third image around the stage, and the central control end generates the boundary through the third image; when the first camera acquires the third image, the second shooting end is positioned in the field of view.
In any of the above solutions, the central control terminal includes a computing device, where the computing device constructs a bionic model around the stage according to the third image, and updates the second space in real time according to the second image; the computing device is capable of interacting with the second shooting end to control the second shooting end to move in the first space through the bionic model.
In any of the above technical solutions, the second shooting end includes a rotor unmanned aerial vehicle and a second camera fixed on the rotor unmanned aerial vehicle; the rotor unmanned aerial vehicle is provided with a third space which wraps the rotor unmanned aerial vehicle and the second camera, and the third space is located inside the first space.
In any of the above solutions, the biomimetic model includes the first space, the third space, and the field of view; the central control end further comprises a display screen for displaying the first image and the second image; and the display screen is also provided with a twin interface for updating the bionic model in real time.
In any of the above solutions, the third space moves along with the rotor unmanned aerial vehicle, and the twin interface includes a control interface for controlling the rotor unmanned aerial vehicle; when a performer wearing the guide end moves, the following cases apply: in the first case, the first space moves with the guide end in the twin interface, and the computing device continuously judges whether the first space and the third space intersect; in the second case, when the first space intersects the third space, the rotor unmanned aerial vehicle follows the movement collected by the guide end and approaches the center of the first space, and the twin interface locks the control interface; in the third case, when the first space does not intersect the second space, or does not intersect the third space, or intersects neither of them, the computing device controls the rotor unmanned aerial vehicle to move so that the center of the third space continuously approaches the center of the first space, and the twin interface unlocks the control interface.
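The intersection judgement in the cases above reduces to elementary sphere geometry once the first space and third space are modeled as spheres, as the detailed description later states. The following sketch illustrates one possible implementation; the names and the two returned modes are illustrative assumptions, not part of the patent text.
```python
import math
from dataclasses import dataclass

@dataclass
class Sphere:
    center: tuple   # (x, y, z) in metres
    radius: float

def spheres_intersect(a: Sphere, b: Sphere) -> bool:
    """Two spheres overlap when the distance between their centers
    is no greater than the sum of their radii."""
    return math.dist(a.center, b.center) <= a.radius + b.radius

def arbitrate(first_space: Sphere, third_space: Sphere) -> str:
    """Cases two and three above: if the drone's safety sphere (third space)
    overlaps the performer-centered first space, the guide end drives the
    drone and the manual control interface is locked; otherwise the computing
    device steers the third-space center toward the first-space center and
    manual control stays unlocked."""
    if spheres_intersect(first_space, third_space):
        return "follow_guide_end"       # twin interface locks the control interface
    return "approach_first_space"       # move the third-space center toward the first-space center
```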
In any of the above solutions, the distance between the guide end and the first space is set according to the field of view; when the field of view intersects the first space, the following cases also apply: in the fourth case, when the first space moves, the line connecting the center of the first space and the guide end is obtained, and the computing device controls the first space to move so as to increase the angle between this line and the horizontal; in the fifth case, when the field of view moves, the computing device controls the first space to move so as to increase the distance between the guide end and the first space.
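Case four steepens the line from the center of the first space to the guide end relative to the horizontal. A small illustrative helper for that angle is sketched below, assuming a coordinate convention in which z is vertical (an assumption for the example only).
```python
import math

def elevation_angle_deg(first_space_center, guide_end_pos) -> float:
    """Angle between the horizontal plane and the line joining the guide end
    to the center of the first space; case four moves the first space so that
    this angle increases."""
    dx = first_space_center[0] - guide_end_pos[0]
    dy = first_space_center[1] - guide_end_pos[1]
    dz = first_space_center[2] - guide_end_pos[2]
    return math.degrees(math.atan2(dz, math.hypot(dx, dy)))
```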
In any of the above solutions, the position of the guide end on the performer is set according to the performer's performance type, and the movement parameters collected by the guide end include the performer's linear acceleration, angular velocity, and direction of movement.
In any of the above solutions, the rotor unmanned aerial vehicle is a point in the bionic model, and the computing device updates the bionic model corresponding to the field of view according to the boundary and the position parameter of the first shooting end.
In any of the above solutions, in the controlling of the movement of the rotorcraft, the leading end has a higher priority than the computing device.
Compared with the prior art, the invention has the following beneficial effects:
With the guide device worn by the performer, the system can accurately track the performer's movements in real time, including linear acceleration, angular velocity, and direction of movement. This ensures that the rotor unmanned aerial vehicle (second shooting end) and the ground camera (first shooting end) accurately capture every highlight of the performance, however the performer moves or switches performance types.
By analyzing the collected data in real time, the system automatically adjusts the drone's flight path and the ground camera's shooting angle to adapt to the performer's dynamic changes. This automatic adjustment reduces the need for manual operation and improves shooting efficiency and results.
By constructing a bionic model and updating the digital twin platform in real time, the system can effectively monitor the relative positions of the first space (a virtual space around the performer), the second space (the drone's flight area), and the third space (the drone's safety buffer), prevent spatial conflicts, and ensure the safety of performers and equipment.
A local following strategy, in which the guide end leads the rotor unmanned aerial vehicle, is used preferentially; this reduces the computational load on the central control end and improves the system's response speed and reliability.
The intelligent management not only ensures an efficient and safe shooting process but also markedly improves the quality of the captured content by optimizing shooting angles and flight paths. Post-production optimization and data feedback further ensure the quality of the video output.
The design of the intelligent management system adapts flexibly to various performance types and environmental conditions and is readily extensible; new functions and improvements can be introduced through software updates and hardware upgrades to meet future shooting requirements.
Additional aspects and advantages of embodiments according to the invention will be apparent from the description which follows, or may be learned by practice of embodiments according to the invention.
Drawings
The foregoing and/or additional aspects and advantages of the invention will become apparent and may be better understood from the following description of embodiments taken in conjunction with the accompanying drawings in which:
FIG. 1 is a schematic diagram of a system of the present invention;
FIG. 2 is a schematic diagram of a second photographing end structure of the present invention;
FIG. 3 is a flow chart of the method of the present invention;
fig. 4 is a schematic structural diagram of an electronic device according to the present invention.
The correspondence between the reference numerals and the component names in FIGS. 1 to 4 is as follows:
1 first shooting end, 101 first camera, 102 moving mechanism, 2 second shooting end, 201 rotor unmanned aerial vehicle, 202 second camera, 3 central control end, 301 computing device, 302 display screen, 4 guide end, 5 field of view, 6 first space, 7 second space, 8 third space.
Detailed Description
In order that the above-recited objects, features and advantages of the present application will be more clearly understood, a more particular description of the application will be rendered by reference to the appended drawings and appended detailed description. It should be noted that, without conflict, the embodiments of the present application and features in the embodiments may be combined with each other.
In the following description, numerous specific details are set forth in order to provide a thorough understanding of the present invention, but the present invention may be practiced in other ways than those described herein, and therefore the scope of the present invention is not limited to the specific embodiments disclosed below.
Referring to fig. 1-4, an intelligent camera management system for capturing video of an aerobics competition according to some embodiments of the present invention is described below.
The embodiment of the first aspect of the invention provides an intelligent camera management system for aerobics competition video shooting. In some embodiments of the present invention, as shown in fig. 1-2, the intelligent camera management system for aerobics competition video shooting includes:
A first shooting end 1, movably disposed on the ground away from the stage; the first shooting end 1 has a field of view 5 directed toward the performer on the stage to acquire first image data. The first shooting end 1 performs the conventional ground-camera task, and the three-dimensional field of view 5 of the first shooting end 1 is taken into account when acquiring the first image data so that it can be used in subsequent optimization.
A second shooting end 2, suspended in a first space 6 around the performer and outside the field of view 5; the second shooting end 2 is used to acquire second image data of the performer that differs from the first image data. The second shooting end 2 undertakes the aerial shooting task, with a shooting direction and focus different from those of the first shooting end 1.
A central control end 3, used to receive and display the first image data and the second image data; the central control end 3 is further configured to acquire boundary data of the stage and, according to the boundary data, to define a second space 7 around the stage in which the second shooting end 2 may move. The boundary data determines the limits within which the second shooting end 2 can fly, and by displaying the first image data and the second image data, moving pictures of the performer taken from different angles can be viewed simultaneously on one device.
A guide end 4, provided on the performer; the guide end 4 is used to guide the first shooting end 1 to move along with the performer in the second space 7. The guide end 4 is a small wearable device, so that the second shooting end 2 can continuously follow and shoot the performer, while the cooperation of the first space 6, the second space 7, and the field of view 5 keeps the drone from appearing in the images shown in the first image data.
Wherein the first space 6 moves along with the guide end 4, and the first space 6 is located inside the second space 7. Because the first space 6 moves directly with the guide end 4 while the second shooting end 2 inside it moves only indirectly, the second shooting end 2 is prevented from following the performer at high frequency, which improves the stability of the picture during continuous playback.
According to the intelligent shooting management system for aerobics competition video shooting, the first shooting end 1 is provided with the high-resolution camera, and each detail on the stage, including the action, the expression and the fine part of the stage setting of a performer, can be captured. By means of the movable arrangement, the first shooting end 1 can adjust its position and orientation to maintain an optimal shooting angle for the performer, no matter how they move on the stage. The first shooting end 1 optimizes the three-dimensional view field 5 of the camera by adjusting the focal length and the angle of the camera, so that important scenes and performers on a stage are ensured to be effectively captured.
The first shooting end 1 is provided with an adjustable mechanical device, which allows the camera to dynamically adjust its shooting angle and position according to the position of the performer. This may be achieved by a preset trajectory motion or by relying on real-time tracking techniques. The camera determines its three-dimensional field of view 5 by its internal optics, depending on the focal length and the sensor size. This determines the range and depth of scenes that the camera can capture and how to map these scenes onto a two-dimensional image. In order to adapt to the change of stage lighting and the movement of performers, the first shooting end 1 can automatically adjust focusing and exposure, so that the image data is ensured to keep high quality. The captured image data is transmitted in real time to the central control terminal 3 or the storage device for later processing and analysis. This may involve compression and encoding processes to optimize data transmission and storage efficiency. By analysing the captured image data, the first filming end 1 can further adjust its field of view 5, ensuring that important performance elements are captured within the optimal viewing angle and range. Such optimization may be done automatically based on preset rules or using image recognition techniques.
The second photographing end 2 is capable of capturing images of a performer and a stage from a plurality of angles and heights by hovering and moving in the air, providing a completely different viewing angle from the first photographing end 1. Similar to the first shooting end 1, the second shooting end 2 also has the capability of dynamic tracking, and can automatically adjust the position and shooting angle according to the movement of the performer, so that the key performance moment is ensured not to be missed. The second photographing terminal 2 is equipped with a high-resolution camera, and can capture high-quality images under different illumination conditions, and can maintain image stability even in a complex air environment. The second shooting end 2 can quickly switch shooting points in and out of the field of view 5, and the flexibility enables the second shooting end to better capture the full view and details of the performance.
The second shooting end 2 is usually realized through an unmanned aerial vehicle, and the unmanned aerial vehicle carries a high-resolution camera and can freely move in the air to realize highly flexible shooting. The second camera end 2 is connected to an operator or an automatic control system via a real-time control and communication system, which allows for immediate adjustment of the flight path and camera settings in response to changes in performance. In order to ensure the stabilization of the image when shooting in the air, the second shooting end 2 is equipped with an advanced stabilization system, such as a three-axis stabilization cradle head, to counteract the effects of wind blowing and movement. The second photographing terminal 2 has auto-focusing and auto-exposure adjusting functions, ensuring that a clear image can be captured even in the case of rapid movement or light change. The captured image data can be sent back to the central control end 3 or stored in a built-in memory on the unmanned aerial vehicle in real time through a wireless transmission technology for later processing and analysis.
The central control terminal 3 receives image data from the first photographing terminal 1 and the second photographing terminal 2 through a wireless communication technology, and displays the data on a control interface in real time. This allows the operator to monitor performance pictures from different angles simultaneously. The central control end 3 is responsible for acquiring accurate boundary data of the stage. This can be obtained by a preset map input, GPS positioning or by analysis of images captured by the first camera 1 and the second camera 2. Based on the boundary data of the stage, the central control end 3 calculates and demarcates the movable second space 7 range of the second shooting end 2, ensures that the unmanned aerial vehicle flies in a safe and proper range, and avoids entering a spectator area or other no-fly areas. The central control terminal 3 is also responsible for managing and storing the received image data, including storage, playback, post-processing, etc. of the image.
The central control end 3 is in communication connection with the first shooting end 1 and the second shooting end 2 through a wireless network (such as Wi-Fi or 4G/5G), and receives real-time image data and flight parameters. The received image data undergoes preliminary processing (such as decoding and scaling) and is then displayed in real time on a monitor of the central control end 3, providing instant visual feedback to operators. The central control end 3 uses image processing algorithms to analyze the images received from the first shooting end 1 and the second shooting end 2, automatically identifies the stage boundary, and calculates the range of the second space 7 in combination with other input information (such as a preset map and GPS data). According to the boundary data thus obtained, the central control end 3 sends flight-area and path instructions to the second shooting end 2 (the drone) through a software interface, so that the drone flies safely within the specified second space 7. The central control end 3 is also responsible for classifying, storing, and managing the received and recorded image data and supports later playback and editing.
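One straightforward way for the central control end to keep the drone inside the second space 7 is a two-dimensional point-in-polygon test against the stage boundary. The sketch below uses the standard ray-casting method and assumes the boundary is stored as (x, y) vertices in stage coordinates; this representation is an assumption made for the example, not a detail given in the patent.
```python
def inside_second_space(x: float, y: float, boundary) -> bool:
    """Ray-casting test: a horizontal ray from (x, y) crosses the polygon's
    edges an odd number of times exactly when the point lies inside the
    boundary polygon that defines the second space."""
    inside = False
    n = len(boundary)
    for i in range(n):
        x1, y1 = boundary[i]
        x2, y2 = boundary[(i + 1) % n]
        if (y1 > y) != (y2 > y):
            x_cross = x1 + (y - y1) * (x2 - x1) / (y2 - y1)
            if x < x_cross:
                inside = not inside
    return inside

# e.g. reject a planned waypoint that leaves the permitted flight area
waypoint_ok = inside_second_space(3.2, 5.1, [(0, 0), (10, 0), (10, 8), (0, 8)])
```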
The guiding end 4 provides accurate position information of the performer through built-in positioning technology (such as GPS, IMU or other sensors), so that the second shooting end 2 can update its flight path in real time to follow the performer. The guiding end 4 not only provides static position information, but also can adjust in real time according to the action change of the performer, so that the second shooting end 2 can capture the key moment and dynamic scene of the performance. The guiding end 4 continuously sends position and motion information to the second shooting end 2 through wireless signals, so that the unmanned aerial vehicle can receive dynamic changes of performers in real time. The guiding end 4 is also responsible for coordinating the working areas of the first shooting end 1 and the second shooting end 2, and the unmanned aerial vehicle is prevented from entering the field of view 5 of the first shooting end 1 through a preset space boundary, so that the shooting of the first image data is prevented from being interfered.
The guide end 4 captures the position and motion status of the performer using high-precision positioning technology (GPS or an indoor positioning system) and an inertial measurement unit (IMU). These sensors provide continuous position, velocity, and direction data. The guide end 4 transmits the data to the second shooting end 2 in real time through a wireless communication module (such as Wi-Fi, Bluetooth, or a dedicated radio band); the communication protocol and data format are optimized to ensure low latency and high reliability. The second shooting end 2 runs an intelligent flight control algorithm and can adjust its flight path and camera orientation in real time according to the data received from the guide end 4; this includes path planning, obstacle avoidance, and speed and direction adjustments that maintain the optimal shooting angle and distance to the performer. The guide end 4 also communicates with the central control end 3 and receives information about the boundaries of the first space 6 and the second space 7; this information is used to guide the flight of the second shooting end 2 and to ensure that it does not enter areas that might cause interference with the cameras.
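As an illustration of the data the guide end 4 might stream, the sketch below packages the position and motion quantities named above into a small record sent over UDP; the field names, transport, and address are assumptions made for the example, not details given in the patent.
```python
import json, socket, time
from dataclasses import dataclass, asdict

@dataclass
class GuideTelemetry:
    t: float            # timestamp (s)
    pos: tuple          # (x, y, z) position in stage coordinates (m)
    lin_acc: tuple      # linear acceleration from the IMU (m/s^2)
    ang_vel: tuple      # angular velocity from the IMU (rad/s)
    heading_deg: float  # direction of movement

def send_telemetry(sample: GuideTelemetry, addr=("192.168.1.50", 9000)):
    """Push one sample to the second shooting end / central control end over UDP."""
    payload = json.dumps(asdict(sample)).encode()
    with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as sock:
        sock.sendto(payload, addr)

# example usage (illustrative values)
send_telemetry(GuideTelemetry(time.time(), (1.2, 0.4, 1.0), (0.1, 0.0, 9.8), (0.0, 0.0, 0.3), 87.0))
```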
The first space 6 is a virtual space that moves with the leading end 4, and its boundary dynamically changes with the performer (the position of the leading end 4). A safe and optimal flight area is defined in the first space 6 for guiding the second shooting end 2 to take a picture. The first space 6 is designed to be located inside the second space 7, wherein the second space 7 defines the maximum boundary at which the second camera end 2 can fly. This nesting design ensures that the second camera end 2 always operates within a safe and predetermined range. By controlling the movement of the first space 6, the second photographing end 2 is indirectly guided to follow the performer. The indirect control mechanism avoids the need of directly tracking the performer at high frequency by the second shooting end 2, thereby reducing the picture jitter and improving the picture stability. In the first space 6, the flight path of the second photographing end 2 is optimized to maintain an optimal photographing angle and distance to the performer while considering avoiding too fast or too frequent movements to ensure video continuity and picture quality.
The lead end 4 is equipped with a high-precision positioning sensor capable of monitoring the position change of the performer in real time and transmitting these data to the center control end 3 in real time. The central control end 3 dynamically adjusts the position and boundary of the first space 6 according to the received position information of the guide end 4. This process involves calculating the optimal position and size of the first space 6 relative to the current position of the performer to ensure that the second camera 2 is able to capture the key picture. Based on the position and boundary of the first space 6, the central control end 3 sends a flight control instruction to the second shooting end 2 to guide the second shooting end to optimally move in the first space 6. These instructions consider the need to avoid picture jitter and to improve stability. The second shooting end 2 is equipped with a stabilization technique, such as a triaxial stabilization cradle head, to ensure that the camera is kept stable even while moving, further improving the picture quality.
In summary, the first shooting end 1 is provided with a high-resolution camera, so that each detail on the stage, whether the action, the expression or the stage setting of the performer, can be captured; the second photographing end 2 provides a completely different viewing angle from the first photographing end 1 through hovering and moving, so that the layering and the ornamental value of pictures are enriched. The two shooting ends have dynamic tracking capability, and can automatically adjust the position and shooting angle according to the movement of a performer to capture key moments; the central control terminal 3 receives and displays the image data shot at the two ends in real time, so that operators can adjust shooting strategies in time, and the picture effect is optimized. The central control end 3 dynamically defines a second space 7 according to stage boundary data, so that the second shooting end 2 is ensured to fly in a safe range, and the audience area is prevented from entering or the first shooting end 1 is prevented from being interfered; the design of the guide end 4 enables the second shooting end 2 to indirectly follow the performer, reduces high-frequency movement and improves picture stability. The first shooting end 1 accurately calibrates the three-dimensional view field 5 before working, and ensures that key scenes are effectively captured.
In any of the above embodiments, the first photographing end 1 includes a moving mechanism 102 and a first camera 101 mounted on the moving mechanism 102; the first camera 101 acquires first image data in the field of view 5. The three-dimensional field of view 5 is determined by the focal length of the first camera 101 and the sensor size, and the three-dimensional field of view 5 is calibrated before operation.
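The dependence of the three-dimensional field of view 5 on focal length and sensor size follows standard pinhole-camera geometry; a brief illustrative calculation (the lens and sensor values are examples only):
```python
import math

def field_of_view_deg(focal_length_mm: float, sensor_w_mm: float, sensor_h_mm: float):
    """Horizontal and vertical angular field of view of a pinhole camera:
    fov = 2 * atan(sensor_dimension / (2 * focal_length))."""
    h = 2 * math.degrees(math.atan(sensor_w_mm / (2 * focal_length_mm)))
    v = 2 * math.degrees(math.atan(sensor_h_mm / (2 * focal_length_mm)))
    return h, v

# e.g. a full-frame (36 x 24 mm) sensor behind a 35 mm lens
print(field_of_view_deg(35.0, 36.0, 24.0))   # roughly (54.4, 37.8) degrees
```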
The first camera 101 is further configured to acquire third image data around the stage, and the central control terminal 3 generates boundary data from the third image data. The third image data is used for a specific preset calibration to generate boundary data.
Wherein the second shooting end 2 is located within the field of view 5 when the first camera 101 acquires the third image data. Because the second shooting end 2 appears in the corresponding picture when the third image data is generated, its outer dimensions can be measured and adapted to.
In this embodiment, the moving mechanism 102 provides high mobility for the first shooting end 1 on the ground. This mechanism may include wheels, rails, or other types of movement means that allow the camera to move smoothly between different positions as desired. The first camera 101, mounted on the moving mechanism 102, is the primary device for capturing image data of the performance. The camera is equipped with a high-resolution image sensor capable of capturing sharp images under different light conditions. Before shooting, the three-dimensional field of view 5 of the first camera 101 must be precisely calibrated; it is determined by the camera's focal length and sensor size. Calibrating the extent of the field of view 5 ensures that the camera can cover the critical areas of the performer and the stage.
The moving mechanism 102 of the first photographing terminal 1 is controlled by the central control terminal 3 or an operator, and can be switched in a predetermined trajectory or free movement according to the progress and demand of performance. This mobility allows the camera to capture performance from optimal angles and positions. The first camera 101 captures image data of a scene using a built-in image sensor. The settings of the camera, such as focal length, aperture and exposure time, are adjusted according to ambient light and shooting requirements to ensure image quality. By a combination of focal length and sensor size, the field of view 5 range of the first camera 101 is precisely calibrated before operation. This calibration process determines the range of scenes that the camera can capture, ensuring that important performance parts are contained. The captured image data is transmitted in real time to the central control terminal 3 for real-time monitoring, storage or further processing. The real-time performance is not only important for live performance, but also convenient for later editing and analysis.
The first camera 101 acquires third image data for describing the stage and the boundary of the surrounding space by capturing the environment around the stage. This may include the space around, above the stage and specific ground markings or features. The third image data is used for a specific pre-set calibration procedure that determines the physical boundaries of the stage by analyzing specific features or markers in the image. Based on the information extracted from the third image data, the central control terminal 3 generates a set of accurate boundary data defining the safe flight area and the shooting range of the second shooting terminal 2 (unmanned aerial vehicle).
In a particular preset process, the first camera 101 takes a series of photographs or videos around the stage, capturing enough environmental details for subsequent analysis. The captured third image data is transmitted in real time to the central control terminal 3, where special image processing software will process the data, identifying stage boundaries and important reference points. The boundary lines and the marked features in the image are identified by the software through image identification technologies such as edge detection, feature point matching and the like, and are used for determining the actual physical boundary of the stage. Based on the identified features and boundaries, the central control terminal 3 calculates a set of boundary data, which are stored in the form of a digital map or a set of coordinates, which clearly define the operating space of the second photographing terminal 2. And the central control terminal 3 plans the flight area of the second shooting terminal 2 (unmanned aerial vehicle) by utilizing the generated boundary data, so that shooting activities are ensured to be carried out in the safe second space 7, and the situation that the second shooting terminal flies out of a preset area or enters a no-fly zone is avoided.
By including the second photographing end 2 in the third image data, the system is able to capture the exact position and size of the second photographing end 2 with respect to the stage and performer. This enables the system to automatically adjust the flying height and distance of the second photographing end 2 to accommodate different photographing requirements and scene layouts. The second shooting end 2 is included in the third image data, so that the central control end 3 can consider the actual occupied space of the second shooting end 2 when analyzing the image, thereby more accurately defining the flight boundary and avoiding collision and interference. This procedure also helps to evaluate the coverage of the field of view of the second capturing end 2, ensuring that important performance elements can be captured effectively, and at the same time evaluate whether there are dead angles or occlusion problems in the field of view.
During a particular calibration procedure, the first camera 101 captures a panoramic image or video sequence including the second capturing end 2, ensuring that the second capturing end 2 is visible within the field of view 5. The third image data received by the central control end 3 is processed and analyzed, and the outline, the position and the size of the second shooting end 2 and the positions of the stage and the performer are identified by using an image identification algorithm. Based on the analysis result, the central control end 3 calculates an optimal flight path and boundary adapted to the external dimension of the second photographing end 2. This involves calculating a safe distance, a flying height, and a photographing angle to ensure a photographing effect while maintaining safe flying of the second photographing terminal 2. The adaptation result is sent to the second shooting end 2 in the form of a flight control instruction to guide the second shooting end to adjust the position or the flight path so as to match the preset shooting requirement and the safety specification. In the shooting process, the first camera 101 and the central control terminal 3 continuously monitor the position and the flight state of the second shooting terminal 2, and adjust the flight boundary and shooting parameters in real time if necessary to cope with dynamic changes and emergency situations of performance.
Specifically, the moving mechanism 102 may be at least one of the following: a tripod, which provides stable support for the first camera 101, avoids picture shake, is suitable for shooting at a fixed angle, and is convenient for quick setup and adjustment during long shoots from one angle; a handheld stabilizer, which keeps the first camera 101 stable during mobile shooting, reduces the influence of vibration, and provides greater flexibility and mobility for follow shots or moving scenes; a jib (rocker arm), which can lift the first camera 101 to a high position for high-angle shooting and smooth downward movement from above, suitable for large scenes or adding visual impact; a slide rail, which moves the first camera 101 steadily on a horizontal or inclined plane for smooth tracking or lateral shots, suitable for stable forward, backward, or sideways movement that adds depth and dynamism to the picture; a rail car, which moves the first camera 101 on a preset track for long, smooth moves, commonly used in large events or film production; and an electrically controlled pan-tilt head, which fixes the first camera 101 in position and, via remote or automatic control of its horizontal rotation and vertical tilt, adds variety to shooting angles and supports automatic tracking or remote shooting.
Further, generating boundary data from the third image data by:
The deployment and configuration of the first camera 101, wherein the first camera 101 is installed in a preset area, so that the first camera 101 can cover the field range needing to be acquired; the camera's movement mechanism 102 is provided to enable the camera to move along a particular path to capture omnidirectional image data.
Camera calibration: the internal orientation elements of the camera are determined, including focal length, principal point coordinates, and distortion coefficients; the external orientation elements (position and orientation) are calculated from images taken by the camera of known three-dimensional points.
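This kind of calibration is conventionally done with a checkerboard target, as the later summary also notes. The OpenCV sketch below shows one common way to recover the internal orientation elements and then a camera pose from known three-dimensional points; it is an assumed implementation rather than the patent's prescribed procedure, and the file names and point coordinates are placeholders.
```python
import cv2
import numpy as np

# Internal orientation elements from a checkerboard target (25 mm squares assumed).
pattern = (9, 6)                                   # inner corners per row and column
objp = np.zeros((pattern[0] * pattern[1], 3), np.float32)
objp[:, :2] = np.mgrid[0:pattern[0], 0:pattern[1]].T.reshape(-1, 2) * 0.025

obj_points, img_points, image_size = [], [], None
for fname in ["calib_01.jpg", "calib_02.jpg"]:     # illustrative file names
    gray = cv2.cvtColor(cv2.imread(fname), cv2.COLOR_BGR2GRAY)
    image_size = gray.shape[::-1]
    found, corners = cv2.findChessboardCorners(gray, pattern)
    if found:
        obj_points.append(objp)
        img_points.append(corners)

ret, K, dist, rvecs, tvecs = cv2.calibrateCamera(obj_points, img_points, image_size, None, None)
# K contains the focal lengths and principal point; dist holds the distortion coefficients.

# External orientation elements: camera pose from known 3-D points and their image positions.
stage_pts = np.array([[0, 0, 0], [5, 0, 0], [5, 8, 0], [0, 8, 0]], dtype=np.float32)
image_pts = np.array([[102, 640], [1180, 655], [1010, 210], [240, 200]], dtype=np.float32)
ok, rvec, tvec = cv2.solvePnP(stage_pts, image_pts, K, dist)
```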
The first camera 101 acquires third image data of the field in real time according to the movement of the preset path, and transmits the acquired third image data to the central control terminal 3 in real time for further processing.
Image data processing and boundary recognition: the central control end 3 receives the third image data and preprocesses it (denoising, contrast adjustment, and so on) to improve image quality; stable feature points are extracted from the images using an image processing algorithm such as SIFT. Feature point matching and three-dimensional reconstruction: the same feature points are matched across different images, and a three-dimensional model of the venue is reconstructed using multi-view geometry. Based on the reconstructed three-dimensional model, the natural boundaries of the venue are identified and boundary data is generated.
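A minimal sketch of the SIFT extraction and matching step described above, using OpenCV; the file names and ratio threshold are illustrative assumptions.
```python
import cv2

img1 = cv2.imread("view_a.jpg", cv2.IMREAD_GRAYSCALE)   # illustrative file names
img2 = cv2.imread("view_b.jpg", cv2.IMREAD_GRAYSCALE)

sift = cv2.SIFT_create()
kp1, des1 = sift.detectAndCompute(img1, None)
kp2, des2 = sift.detectAndCompute(img2, None)

# Match descriptors between the two views and keep distinctive matches (Lowe's ratio test).
matcher = cv2.BFMatcher(cv2.NORM_L2)
matches = matcher.knnMatch(des1, des2, k=2)
good = [m for m, n in matches if m.distance < 0.75 * n.distance]

# The matched point pairs feed the multi-view reconstruction that yields the stage boundary.
pts1 = [kp1[m.queryIdx].pt for m in good]
pts2 = [kp2[m.trainIdx].pt for m in good]
```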
Generating and optimizing boundary data, and generating preliminary boundary data by the central control terminal 3 according to the identified natural boundary; and carrying out optimization processing on the generated boundary data, including smoothing processing and accurate adjustment, so as to improve the accuracy of the boundary data.
As can be seen from the above, the deployment and configuration of the first camera 101 ensures that the field area to be acquired is covered entirely, capturing enough image data to support the generation of boundary data. By providing a movable camera mechanism, the camera can be moved along a specific path, capturing an omnidirectional view of the field. The mobility not only improves the coverage of the image data, but also increases the diversity of the data, and provides rich input for subsequent image processing and analysis. The camera is calibrated, and the position, orientation and optical characteristics of the camera are accurately calculated so as to facilitate correct interpretation of image data. By placing three-dimensional marker points at known locations and capturing images of these points, camera calibration techniques (e.g., using checkerboard calibration plates) are used to calculate the internal and external orientation elements, ensuring that the image data can be accurately mapped into the actual spatial coordinates. And acquiring image data, namely acquiring third image data of the field in real time, and providing an original material for generating boundary data. The first camera 101 moves according to a predetermined path, capturing successive images of the field, which are then transmitted in real time to the central control terminal 3. The acquired image data should contain key features and boundary information of the field. And (3) processing image data and identifying boundaries, extracting key feature points in the image, reconstructing a three-dimensional model of a field, and identifying natural boundaries. And denoising and contrast adjustment are carried out on the received image, so that the image quality is improved. And extracting stable characteristic points from the images by using SIFT algorithm and the like, and matching the same characteristic points in different images. And reconstructing a three-dimensional model of the field by utilizing the characteristic point matching result through a multi-view geometric principle. Based on the three-dimensional model, natural boundaries of the site are identified. And generating and optimizing boundary data, generating accurate boundary data, and performing optimization processing to improve accuracy. Based on the site boundary identified from the three-dimensional model, the central control terminal 3 generates preliminary boundary data. And smoothing and accurately adjusting the generated boundary data to ensure the accuracy and reliability of the boundary data.
In any of the above embodiments, the central control end 3 includes a computing device 301; the computing device 301 constructs a bionic model around the stage from the third image data and updates the second space 7 in real time according to the second image data. The computing device 301 performs the step of generating boundary data from the third image data and builds a digital twin platform of the second shooting end 2 and the bionic model. Because of the movement of curtains and lights, as well as changes in microphone positions and the flow of people, the boundary data, and in turn the second space 7, must be updated on the basis of the second image data captured in real time.
The computing device 301 can exchange data with the second shooting end 2 and, through the bionic model, control the movement of the second shooting end 2 within the first space 6. The actual control of the second shooting end 2 and the real-time presentation of the local picture are carried out through the digital twin platform.
In this embodiment, using the third image data, the computing device 301 builds a biomimetic model of the stage and its surroundings. This model includes the physical layout of the stage, the position of the curtains, the arrangement of the lighting devices, and other key elements. Based on the bionic model and the real-time position information of the second shooting end 2, a digital twin platform is established. This platform allows to simulate and predict the behaviour of the second camera 2 in the real environment to optimize its flight path and the camera angle. The computing device 301 performs the step of generating initial boundary data based on the third image data. Then, according to the second image data shot in real time, monitoring the change of the stage environment (such as curtain shake, light change, microphone position change and personnel flow), and updating the boundary data in real time to reflect the current environment state.
The computing device 301 adjusts and updates the operation space of the second capturing end 2, i.e. the second space 7, according to the updated boundary data, to ensure that the unmanned aerial vehicle flies within a safe range while capturing the best capturing picture.
Image processing and model construction, the computing device 301 analyzes the third image data through an advanced image processing algorithm, and extracts key features of the stage and the surrounding environment; by utilizing the characteristics to construct a three-dimensional bionic model of the stage, the model can accurately reflect the layout of the stage and the positions of key elements. The realization of the digital twin platform is realized, and the digital twin platform is created by integrating the real-time position and state information of the second shooting end 2 into a bionic model; the platform uses the model to simulate and analyze, optimizing the flight strategy and shooting plan of the second shooting end 2. Generating and updating boundary data in real time, and generating initial boundary data by the computing device 301 based on the constructed bionic model and the third image data; subsequently, computing device 301 continues to receive second image data, monitors for environmental changes using image recognition and tracking techniques, and updates the boundary data in real-time to reflect the current environmental state. The adjustment of the second space 7, according to the boundary data updated in real time, the computing device 301 adjusts the definition of the second space 7, so as to ensure that the second photographing end 2 can still perform photographing safely and effectively in a changed environment.
The computing device 301 is capable of bi-directional data exchange with the second capturing end 2, including flight parameters such as position, speed, direction, etc., as well as real-time image data. This enables the computing device 301 to adjust the flight strategy of the second camera 2 according to real-time conditions. By establishing a digital twin model of the second shooting end 2 and the stage environment, the computing device 301 is able to simulate the flight and shooting behavior of the second shooting end 2 in the virtual environment, enabling a more accurate and safe manipulation. By using the bionic model around the stage, the computing device 301 can accurately calculate the optimal moving path of the second shooting end 2 in the first space 6, avoid collision and enter the no-fly zone, and simultaneously ensure optimization of shooting angles and distances. The computing device 301 receives the real-time image data returned from the second capturing end 2, and displays the real-time image data locally, so as to provide instant picture feedback for the operator to make rapid adjustment and decision.
The computing device 301 establishes a stable bidirectional communication link with the second capturing end 2 through a wireless network, sends a control instruction in real time, and receives status information and image data of the second capturing end 2. Digital twin model construction and application on the computing device 301, a three-dimensional digital twin model comprising the stage environment and the second shooting end 2 is constructed based on the third image data and the real-time second image data. With this model, the computing device 301 may simulate the flight path of the second capturing end 2 in the virtual environment, performing analysis such as collision detection, path optimization, and the like. The biomimetic model-based path planning, the computing device 301 analyzes the biomimetic model and current environmental data, identifying safe flight areas and potential obstacles. According to the analysis result, the optimal flight path of the second shooting end 2 in the first space 6 is calculated, and a specific flight control instruction is issued. Real-time image feedback and processing, the second shooting end 2 transmits the captured images back to the computing device 301 in real time, and the computing device 301 processes the images and displays the images to an operator in real time. An operator can quickly adjust according to the real-time picture, such as changing a flight path, adjusting a shooting angle and the like, so as to adapt to actual shooting requirements.
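The collision detection and path optimization mentioned here can be sketched as sampling points along a candidate straight segment and testing them against obstacle spheres taken from the bionic model; the following is an illustrative simplification, not the patented planner.
```python
import math

def segment_is_clear(start, goal, obstacles, clearance: float, steps: int = 50) -> bool:
    """Sample points along the straight segment start->goal and reject the
    path if any sample comes within `clearance` of an obstacle sphere.
    `obstacles` is a list of (center, radius) pairs from the bionic model."""
    for i in range(steps + 1):
        t = i / steps
        p = tuple(s + t * (g - s) for s, g in zip(start, goal))
        for center, radius in obstacles:
            if math.dist(p, center) < radius + clearance:
                return False
    return True
```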
In any of the above embodiments, the second shooting end 2 includes a rotor unmanned aerial vehicle 201 and a second camera 202 fixed to the rotor unmanned aerial vehicle 201. The rotor unmanned aerial vehicle 201 is configured to carry the second camera 202 and a data transceiver module.
The rotor unmanned aerial vehicle 201 is provided with a third space 8 that wraps around the rotor unmanned aerial vehicle 201 and the second camera 202, and the third space 8 is located inside the first space 6. The third space 8 delimits the maximum outer dimensions of the drone and prevents the rotating propellers from striking external objects.
In this embodiment, the second photographing terminal 2 photographs in the air by the second camera 202 mounted on the rotary-wing unmanned aerial vehicle 201, captures a completely different view angle from the ground photographing, and provides a multi-dimensional picture representation. With the high mobility of the unmanned aerial vehicle, the second photographing terminal 2 can follow the action of the performer, and even a rapid movement or a complex performing action can be accurately captured. The rotorcraft 201 is equipped with a data transceiver module that not only can receive control commands from the central control terminal 3, but also can transmit captured image data in real time back to the central control terminal 3 or other receiving devices.
Flight control of the rotorcraft 201, the rotorcraft 201 providing lift and driving force through its multiple rotors, allowing vertical take-off, hover, and flights in all directions; the unmanned aerial vehicle flight control system receives the instruction from the central control end 3, changes the flight direction, the flight height and the flight speed by adjusting the rotation speed of each rotor wing, and realizes accurate position positioning and stable flight situation. The shooting function of the second camera 202, the second camera 202 is fixed on the unmanned aerial vehicle, the stability in the shooting process is ensured through a stabilizing device (such as a cradle head), the picture shake is reduced, and the image quality is improved; the camera shoots according to the instruction of the central control end 3 or a preset automatic script, including automatic adjustment of parameters such as focal length, exposure and the like, so as to ensure capturing of high-quality images. The data transceiver module is in charge of communicating with the central control end 3, receiving flight control instructions and shooting commands, and simultaneously transmitting captured image data back to the central control end 3 in real time; the module uses a high-frequency wireless communication technology, so that the speed and stability of data transmission are ensured, and operators can monitor shooting conditions in real time and make adjustments.
By defining the third space 8, the system is able to calibrate the maximum profile size of the drone, including the dynamic range of the onboard second camera 202 and the rotating propeller. The third space 8 is used as a safety buffer area to help the unmanned aerial vehicle avoid physical contact with stage scenery, performers or other suspended objects during flight and shooting, and reduce collision risk. By utilizing the size data of the third space 8, the flight control system of the central control end 3 or the unmanned aerial vehicle can plan the flight path of the unmanned aerial vehicle more accurately, so that the safety of objects and personnel in the field can be ensured while the excellent shooting angle is maintained.
Three-dimensional space modeling: before the drone's flight and shooting are planned, the maximum physical space occupied by the drone and its propellers in motion is modeled in software, thereby defining the range of the third space 8. Boundary detection and avoidance: the drone's flight control system performs boundary detection using the data of the third space 8 and, when it predicts that the drone is approaching an obstacle, automatically adjusts the flight path to avoid collisions between the propellers or other parts and the obstacle. Flight strategy optimization: the drone's flight height, speed, and steering strategy are optimized according to the size of the third space 8, so that the required picture can be captured during shooting without interfering with or endangering the surroundings. While the drone executes a flight task, the central control end 3 monitors in real time the drone's position and the relative position of the third space 8 with respect to surrounding objects, adjusting the flight when necessary to ensure the safety of the shooting activity.
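As an illustration of the three-dimensional modeling step, the radius of the spherical third space 8 could be derived from the drone's outermost extents; the helper and the numbers below are assumptions for the sketch only.
```python
import math

def third_space_radius(body_half_extents, prop_tip_reach, camera_drop, margin=0.1):
    """Radius of the spherical third space: the farthest point of the airframe,
    the spinning propeller tips, or the suspended camera from the drone's
    geometric center, plus a small margin (all distances in metres)."""
    candidates = [math.sqrt(sum(e * e for e in body_half_extents)),
                  prop_tip_reach,
                  camera_drop]
    return max(candidates) + margin

# e.g. a 0.30 x 0.30 x 0.12 m body, propeller tips 0.45 m out, camera hanging 0.25 m below
print(third_space_radius((0.15, 0.15, 0.06), 0.45, 0.25))   # about 0.55 m
```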
Specifically, the third space 8 and the first space 6 are both spherical, and the field of view 5 is conical.
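Given a spherical third space and a conical field of view, keeping the drone out of the first shooting end's picture becomes a sphere-versus-cone test. The sketch below uses a slightly conservative angular approximation and is illustrative rather than the patent's prescribed method.
```python
import math

def sphere_in_view_cone(apex, axis, half_angle_rad, center, radius) -> bool:
    """Approximate test for whether a sphere (the third space) could fall inside
    a cone (the field of view 5) whose apex sits at the first camera.
    The sphere's angular radius as seen from the apex is added to the cone's
    half-angle, giving a slightly conservative result."""
    v = [c - a for c, a in zip(center, apex)]
    d = math.sqrt(sum(x * x for x in v))
    if d <= radius:                        # camera lies inside the sphere
        return True
    ax_len = math.sqrt(sum(x * x for x in axis))
    cos_angle = sum(x * y for x, y in zip(v, axis)) / (d * ax_len)
    angle = math.acos(max(-1.0, min(1.0, cos_angle)))
    return angle <= half_angle_rad + math.asin(radius / d)

# The planner would keep the drone where this returns False, so the rotor
# unmanned aerial vehicle 201 never shows up in the first image data.
```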
In any of the above embodiments, the third space 8 is spherical, the center of the sphere of the third space 8 is located at the geometric center of the unmanned aerial vehicle, and the third space 8 also encloses the second camera 202 and the data transceiver module; the bionic model comprises the first space 6, the third space 8 and the field of view 5, so that the various virtual planning spaces can be clearly and practically displayed on the display screen 302 and controlled in a targeted manner; and the central control terminal 3 further comprises a display screen 302 for displaying the first image data and the second image data.
The display screen 302 also displays a twin interface for updating the bionic model in real time. The display of the first image data and the second image data can be synchronized through the twin interface, so that the shooting condition of the whole field can be grasped without watching the stage directly, and the central control end 3 therefore does not need to be installed within the visible range around the stage.
In this embodiment, the third space 8 is designed as a sphere, the centre of which is located in the geometrical centre of the drone. This design is intended to provide omnidirectional protection for the drone and its accessories (including the second camera 202 and the data transceiver module), ensuring its safe distance at any angle of flight. The bionic model comprises a first space 6, a third space 8 and a view field 5, and can clearly show various virtual planning spaces and the relation between the virtual planning spaces and the actual environment, so that targeted control and flight planning are facilitated. The central control terminal 3 is equipped with a display screen 302 for displaying the first image data and the second image data in real time. This function allows the operator to monitor images from both the ground camera (first filming end 1) and the drone camera (second filming end 2) simultaneously, optimizing filming angles and strategies.
The safety buffering function of the third space 8: through software calculation and modeling, the third space 8 is defined as a virtual spherical area surrounding the unmanned aerial vehicle and its onboard equipment, whose radius is large enough to cover the outermost rotors; in the flight control system of the unmanned aerial vehicle, this virtual spherical area is used for flight path planning and boundary detection, so that the unmanned aerial vehicle does not collide with external objects. Application of the bionic model: by integrating the information of the first space 6, the third space 8 and the field of view 5, a comprehensive virtual environment is provided for simulating the flight and shooting behavior of the unmanned aerial vehicle in the actual field; using this model, an operator can carry out detailed flight previewing and strategy planning to ensure that the shooting activity of the unmanned aerial vehicle is efficient and safe. Monitoring and display of the central control terminal 3: the display system of the central control terminal 3 displays the image data transmitted back by the first shooting terminal 1 and the second shooting terminal 2 in real time, allowing an operator to know and control the current shooting condition in real time. An operator can adjust the flight path of the unmanned aerial vehicle or the shooting parameters of the camera according to the real-time image so as to achieve the optimal shooting effect.
The twin interface displayed on the display screen 302 of the central control terminal 3 contains a bionic model updated in real time, and the correspondence between the image data of the first shooting terminal 1 and the second shooting terminal 2 and the actual scene is synchronously reflected by the model. Through the twin interface, an operator can obtain comprehensive information from the virtual model, such as the field layout, the shooting angles and the positions of the unmanned aerial vehicle and the ground camera, without directly watching the stage, and so can accurately grasp the full-field shooting condition. Since the twin interface provides an all-round view, the central control end 3 need not be limited to being installed around the stage or within any particular visual range. This feature greatly enhances the flexibility of device placement and the applicability of the system.
Data synchronization and model updating: by receiving image data from the first shooting end 1 and the second shooting end 2 in real time and synchronizing these data with the existing bionic model, the model reflects the latest scene state at any time. When an object in the scene moves or the field layout changes, the twin interface is immediately updated and displays the latest shooting angle and position information. Fusion of virtual and actual scenes: the real-time image data is fused into the virtual bionic model through advanced graphic processing technology, creating a dynamic and interactive three-dimensional scene view; this allows the operator to see intuitively, through the operation interface, how each movement of the unmanned aerial vehicle and the ground camera affects the shooting effect, and to make adjustments as needed. Highly customized view display: the twin interface supports multi-angle and multi-scale view switching, so that operators can observe the stage layout and the configuration of shooting equipment from different viewing angles. The interface can also display additional information, such as the flight safety area, the no-fly area and the shooting angle range, according to user requirements.
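As a purely illustrative sketch of the data-synchronization idea, the following Python fragment keeps a minimal twin state that is refreshed whenever a new pose message arrives; the field names and the message format are assumptions made only for this example, not the system's data model.

from dataclasses import dataclass, field
import time

@dataclass
class TwinState:
    drone_pose: tuple = (0.0, 0.0, 0.0)
    camera_pose: tuple = (0.0, 0.0, 0.0)
    performer_pose: tuple = (0.0, 0.0, 0.0)
    last_update: float = field(default_factory=time.time)

    def apply(self, message: dict) -> None:
        # Merge one incoming update (from either shooting end or the guide end)
        # so the displayed model always reflects the latest scene state.
        for key in ("drone_pose", "camera_pose", "performer_pose"):
            if key in message:
                setattr(self, key, tuple(message[key]))
        self.last_update = time.time()

twin = TwinState()
twin.apply({"performer_pose": (2.0, 1.5, 1.0)})  # e.g. the guide end reports a new position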
In any of the above embodiments, the third space 8 moves following the rotary-wing drone 201, and the twin interface includes a control interface for controlling the rotary-wing drone 201; when the performer wearing the leading end 4 moves, the following cases apply.
In case one, the first space 6 moves following the leading end 4 within the twinning interface, the computing device 301 continues to determine whether there is an intersection of the first space 6 and the third space 8.
In the second case, when the first space 6 and the third space 8 intersect, the rotorcraft 201 receives the movement data collected by the guiding end 4 and approaches the center of the first space 6; the twin interface locks the control interface. If the first space 6 also intersects the second space 7, this indicates that the rotary-wing drone is about to deviate from the set movable control range.
In case three, when the first space 6 does not intersect the second space 7, or when the first space 6 does not intersect the third space 8, or when the first space 6 intersects neither the second space 7 nor the third space 8, the computing device 301 controls the rotary-wing drone 201 to move so that the center of the third space 8 continuously approaches the center of the first space 6; the twin interface unlocks the control interface. When the performer moves and the first space 6 does not intersect the second space 7 and/or the third space 8, the drone can be moved toward the center of the first space 6, leaving redundant space for the performer's rapid movement.
In this embodiment, the third space 8, which is a virtual safety buffer for the rotorcraft 201 and its attached equipment, dynamically updates its position as the drone moves. The twin interface not only demonstrates the real-time position and status of the first space 6 and the third space 8, but also integrates the control interface of the rotorcraft unmanned aerial vehicle 201, allowing operators to directly control and command the unmanned aerial vehicle from within the twin interface. Case one involves that when a performer with a leading end 4 moves, the first space 6 (virtual space around the performer) also follows the movement accordingly. The computing device 301 monitors the positional relationship between the first space 6 and the third space 8 (the safety buffer of the unmanned aerial vehicle) in real time to determine whether there is an intersection or potential collision risk between the two.
Following mechanism of the third space 8: the position and range of the third space 8 are dynamically calculated by the computing device 301 from the real-time position data of the unmanned aerial vehicle, ensuring that the unmanned aerial vehicle is enclosed by the safety buffer at every moment. Real-time feedback and control of the twin interface: the twin interface continuously updates the bionic model by receiving real-time data from the unmanned aerial vehicle and the guide end 4, ensuring that the displayed information is current and accurate; the unmanned aerial vehicle control interface embedded in the twin interface allows operators to make quick decisions and issue control instructions based on real-time information. Spatial interaction detection: the computing device 301 processes the position data from the leading end 4 and the unmanned aerial vehicle in real time, and determines the relative position relationship between the first space 6 and the third space 8 by means of an algorithm. When an intersection of the two spaces is detected, the system automatically alerts the operator and provides adjustment advice, such as adjusting the flight path of the unmanned aerial vehicle, so as to avoid potential collisions or disturbances.
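Since the first space 6 and the third space 8 are both spherical, the intersection test described above reduces to comparing the distance between the sphere centers with the sum of the radii. A minimal Python sketch with hypothetical numeric values is given below; it illustrates the geometric test only and is not the system's actual implementation.

import numpy as np

def spheres_intersect(center_a, radius_a, center_b, radius_b):
    # Two spheres intersect when the distance between their centers is no
    # greater than the sum of their radii.
    return np.linalg.norm(np.asarray(center_a) - np.asarray(center_b)) <= radius_a + radius_b

# Example with assumed values: performer space centered at the guide end, drone buffer at the drone.
performer_center, R = np.array([2.0, 1.5, 1.0]), 1.2
drone_center, r = np.array([3.0, 2.0, 2.5]), 0.8
if spheres_intersect(performer_center, R, drone_center, r):
    print("first space and third space intersect: lock the control interface")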
When it is detected that the first space 6 and the third space 8 intersect, the rotor unmanned aerial vehicle 201 automatically receives the movement data collected by the guide end 4, adjusts its flight path to be as close to the center of the first space 6 as possible, and reduces the intersecting range. When spatial intersection is detected, the twin interface locks the control interface and restricts further flight control instructions, preventing an operator from inadvertently causing the unmanned aerial vehicle to intrude further into the first space 6 or to depart from the preset second space 7 (the set movable control range). When the first space 6 intersects the third space 8, the system gives an early warning prompting the operator that the unmanned aerial vehicle is about to leave the second space 7, that is, the movable control range, and that immediate action needs to be taken to adjust the flight path.
Space monitoring and data processing: the computing device 301 monitors in real time the relative position relationship between the first space 6 and the third space 8, and calculates by algorithm whether they intersect. When an intersection is detected, the computing device 301 analyzes the movement data of the leading end 4 and determines how the drone should adjust its flight path to reduce the extent of the intersection. After receiving the adjustment instruction, the flight control system of the rotary-wing unmanned aerial vehicle 201 automatically calculates a new flight path so that the unmanned aerial vehicle approaches the center of the first space 6 rather than moving away from it, reducing the intersection range of the two spaces. Meanwhile, the control system adjusts the flying speed and height of the unmanned aerial vehicle, ensuring maximum safety and stability during the adjustment. User interface and early-warning mechanism: the twin interface is automatically locked when spatial intersection is detected, preventing an operator from sending a control instruction that could aggravate the intersection. The system gives the operator an early warning through visual and audio signals, clearly indicating the intersecting state and the risks requiring attention, and prompting the operator to take appropriate countermeasures.
When the first space 6 remains disjoint from the second space 7 and/or the third space 8 at a safe distance, the computing device 301 adjusts the flight path of the drone so that the center of the third space 8 (the safety buffer of the drone) gradually approaches the center of the first space 6. In this case, the twin interface unlocks the control interface, so that an operator can manually adjust the unmanned aerial vehicle or execute a preset flight mode according to actual needs. The system adjusts the position of the unmanned aerial vehicle in this way to reserve more space for the possible rapid movement of the performer, reducing interference with the performance and safety risks to the greatest extent while still capturing the picture.
Real-time spatial analysis: the computing device 301 monitors in real time the position relationship between the first space 6 and the second space 7 and the third space 8, ensuring that a safe distance is kept between the three spaces; once it is determined that there is no intersection between the spaces, the computing device 301 immediately calculates a new flight path so that the drone can optimize its position relative to the performer. Dynamic flight adjustment: according to the calculated optimal path, the flight control system of the unmanned aerial vehicle automatically adjusts the speed, height and direction of the unmanned aerial vehicle so that its center point approaches the center point of the performer. During the flight adjustment, the system continuously monitors the spatial relationship, so that spatial intersection does not occur at any time. Operator intervention: when the twin interface unlocks the control interface, the operator can perform manual control according to actual shooting requirements and site conditions, such as adjusting the shooting angle or the flying speed. The twin interface provides real-time flight and image data to assist the operator in making more accurate decisions. Reserving redundant space: by precisely controlling the position of the unmanned aerial vehicle, the system reserves enough space for the rapid movement of the performer, ensuring the consistency and integrity of the shooting effect as well as the naturalness and fluency of the performance.
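The case-three behaviour of moving the center of the third space 8 toward the center of the first space 6 while keeping the two spheres disjoint can be sketched as follows; the gain, clearance value and function name are assumptions introduced only for illustration.

import numpy as np

def recenter_step(drone_center, performer_center, R, r, gain=0.3, clearance=0.1):
    # Move the drone (center of the third space, radius r) toward the performer
    # (center of the first space, radius R), stopping short of the distance at
    # which the two spheres would touch.
    offset = performer_center - drone_center
    dist = np.linalg.norm(offset)
    min_dist = R + r + clearance  # closest approach that keeps the spheres disjoint
    if dist <= min_dist:
        return drone_center       # already as close as allowed
    step = min(gain * dist, dist - min_dist)
    return drone_center + step * offset / dist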
In any of the above embodiments, the distance between the guide end 4 and the first space 6 is set according to the field of view 5, and if the field of view 5 is too large, the distance between the guide end 4 and the first space 6 needs to be increased; and when the field of view 5 intersects the first space 6, there is also the following:
In case four, when the first space 6 moves, indicating that the performer moves, the computing device 301 obtains the line connecting the center of the first space 6 and the leading end 4 and controls the first space 6 to move so as to increase the included angle between that line and the horizontal line, so that the rotorcraft 201 flies above the stage and avoids being within the field of view 5 of the first filming end 1.
In case five, when the field of view 5 moves, indicating a change in the shooting direction and/or position of the first shooting end 1, the computing device 301 controls the first space 6 to move so as to increase the distance of the leading end 4 from the first space 6, so that the rotorcraft 201 flies laterally around the stage and avoids being within the field of view 5 of the first filming end 1.
In this embodiment, as the performer moves, i.e., as the first space 6 moves, the computing device 301 automatically obtains the line connecting the center of the first space 6 and the leading end 4, and adjusts the position and direction of the first space 6 to increase the angle between this line and the horizontal. By adjusting the first space 6, the system instructs the rotorcraft 201 to fly above the stage, so that even while the performer is moving the unmanned aerial vehicle stays out of the field of view 5 of the first shooting end 1 and does not interfere with shooting. The system continuously monitors the relative position of the field of view 5 of the first shooting end 1 and the first space 6, ensuring that at no time does the flight area of the drone create an undesired intersection with the field of view 5 of the first shooting end 1.
Dynamic spatial analysis: the computing device 301 monitors the position change of the leading end 4 (performer) in real time, thereby determining the movement path and speed of the first space 6. Through real-time data analysis, the line connecting the center of the first space 6 and the leading end 4, and its angle with the horizontal, are calculated, and the position and direction of the first space 6 are dynamically adjusted according to this angle. Unmanned aerial vehicle flight adjustment: according to the adjustment of the first space 6, the computing device 301 sends instructions to the rotorcraft 201, instructing it to rise vertically or adjust its flight path, ensuring that its trajectory does not interfere with the field of view 5 of the first shooting end 1. The flight control system of the unmanned aerial vehicle calculates a new flight path from the received instructions and adjusts the flight height and position in real time so as to maintain a safe distance from the first space 6. Field-of-view interference prevention: by monitoring the field of view 5 of the first shooting end 1 and the real-time position of the unmanned aerial vehicle, the system can anticipate potential interference with the field of view 5 and take avoidance measures in advance; if the unmanned aerial vehicle does enter the field of view 5 of the first shooting end 1, the system takes emergency avoidance measures, such as quickly changing the flight height or path, so that the unmanned aerial vehicle does not appear in the picture.
When the change in shooting direction or position of the first shooting end 1 causes the field of view 5 thereof to move, the computing device 301 automatically adjusts the position of the first space 6 to increase the distance of the leading end 4 from the first space 6. To avoid the drone intruding into the new field of view 5 of the first filming end 1, the computing device 301 controls the drone to perform a transversal flight, moving around the stage, maintaining the continuity of the filming activity and reducing the interference. The system continuously monitors the changes in the field of view 5 of the first camera 1 and the effect of these changes on the first space 6 and the unmanned aerial vehicle flight area, ensuring timely response and making appropriate adjustments.
Detection of changes in the field of view 5: the computing device 301 monitors changes in the field of view 5 in real time by receiving the image data and/or position information transmitted by the first shooting end 1; upon detecting such changes, the system immediately analyzes their effect on the first space 6 and the possible disturbance to the flight of the unmanned aerial vehicle. Dynamic space and flight path adjustment: according to the analysis of the change in the field of view 5, the computing device 301 automatically adjusts the position and size of the first space 6 to suit the new field-of-view setting of the first shooting end 1, and increases the distance between the guide end 4 and the first space 6; at the same time, the computing device 301 generates a new flight path for the unmanned aerial vehicle, guiding it to fly laterally and avoid entering the adjusted field of view 5 of the first shooting end 1. Flight control and real-time feedback: the flight control system of the rotary-wing unmanned aerial vehicle 201 adjusts the speed, height and direction of the unmanned aerial vehicle according to the new flight instruction provided by the computing device 301 and executes the lateral flight; the twin interface displays in real time the flight state and position of the unmanned aerial vehicle and the change of the field of view 5 of the first shooting end 1, so that an operator can clearly understand the current shooting condition and make quick adjustments.
Further, the field of view is determined in the following manner:
To construct a mathematical expression for any point on the field of view, we need to consider the spatial coordinates of the camera, the focal length, and the camera's orientation angles. Assume that the camera is located at a given point in three-dimensional space, that its orientation is defined by the pitch angle θ and the yaw angle ϕ, and that the focal length of the camera is f; the field of view can then be approximated as a cone whose apex is at the camera's location and whose axis points along the camera's orientation.
The premise assumptions are as follows:
Focal length f: the distance of the camera lens from the sensor plane affects the "depth" of the field of view.
Angle of view α: the range of viewing angles that a camera is able to capture is typically given by the camera specification.
Camera position: The coordinate position of the camera in three-dimensional space.
Orientation angle: the orientation of the camera is determined by the pitch angle θ and the yaw angle ϕ.
Expression for any point on the field of view: under this setting, any point P(x, y, z) on the field of view can be determined by the following steps:
Determining a field of view centerline:
The centerline direction of the field of view may be defined by the camera's orientation angle θ (pitch angle) and ϕ (yaw angle).
Calculating a field boundary:
At the focal distance f from the camera, the width W and height H of the field of view can be calculated from the field angle α. In practice, however, the boundaries of the field of view at different distances must be determined from the angle of view and the focal length.
Express any point on the field of view:
Assuming that the camera faces straight ahead without rotation (i.e., θ = ϕ = 0), a point on the field of view at a distance D from the camera has coordinates in the camera coordinate system that can be determined by the field angle and the relative position of the point within the field of view.
Considering rotation, the point P can be converted into the world coordinate system by a rotation matrix.
Mathematical expression simplification
In a simplified case, if we consider only a two-dimensional plane (assuming the y-axis points up and the z-axis goes into the screen), and the camera faces straight ahead, then at focal length f the coordinates of any point P(x, y) on the field of view can be approximated as x ≈ D·tanϕ and y ≈ D·tanθ;
where D is the distance from the camera to point P, ϕ and θ are the yaw and pitch angles, respectively, of point P relative to the camera position.
Further, the minimum distance d between the unmanned aerial vehicle and the boundary of the field of view is obtained by the following formula: d = L·sinα, where α is the included angle between the center line of the field of view of the camera and the line connecting the camera to the unmanned aerial vehicle, and L is the straight-line distance from the unmanned aerial vehicle to the camera. The radius of the first space is R and the radius of the third space is r; r is set according to the size of the unmanned aerial vehicle, and R is set according to the stage space and the camera parameters at the time of use.
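The following Python sketch, under the notation assumed above, tests whether the unmanned aerial vehicle lies inside the conical field of view and evaluates the quantity L·sinα; it is an illustrative geometry check rather than the patented method itself, and the parameter names are assumptions of this example.

import numpy as np

def view_axis(pitch, yaw):
    # Unit vector of the field-of-view centerline from pitch/yaw angles (radians).
    return np.array([np.cos(pitch) * np.cos(yaw),
                     np.cos(pitch) * np.sin(yaw),
                     np.sin(pitch)])

def fov_clearance(camera_pos, pitch, yaw, half_angle, drone_pos):
    # Returns (inside_cone, L*sin(alpha)) for the drone relative to the view cone,
    # where alpha is the angle between the view axis and the camera-to-drone line.
    v = np.asarray(drone_pos, dtype=float) - np.asarray(camera_pos, dtype=float)
    L = np.linalg.norm(v)
    axis = view_axis(pitch, yaw)
    alpha = np.arccos(np.clip(np.dot(v / L, axis), -1.0, 1.0))
    return alpha <= half_angle, L * np.sin(alpha)

For example, fov_clearance((0, 0, 1.5), 0.1, 0.0, np.radians(30), (4.0, 1.0, 3.0)) indicates whether the drone is inside a 60-degree cone and how far it sits from the view axis.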
In any of the above embodiments, the position of the leading end 4 on the performer is set according to the type of performance of the performer, and the movement data includes the linear acceleration, angular velocity, and movement direction of the performer.
In this embodiment, the system presets the optimal position of the leading end 4 on the performer, depending on the type of performance (e.g., dance, gymnastics or other action performance) of the performer, to ensure that it can accurately capture the dynamic information of the performer. The leading end 4 is equipped with a sensor, can gather data such as the linear acceleration of performer, angular velocity and direction of movement in real time, provides accurate tracking information for unmanned aerial vehicle and ground camera. Based on the movement data collected from the lead 4, the computing device 301 can adjust the flight path of the unmanned aerial vehicle and the shooting angle of the ground camera in real time to capture performance content in an optimal manner.
Before the performance starts, according to the specific type of the performance and the action range of the performer, the system determines the optimal position of the guide end 4 on the performer through analysis, and the optimal position can reduce data errors to the greatest extent and improve the data quality; the position selection of the lead 4 aims at capturing critical actions and overall movement characteristics of the performer, without impeding the activity of the performer while ensuring accurate transmission of data. The high-precision mobile data acquisition, wherein a sensor arranged in the guide end 4 monitors and records the linear acceleration, the angular velocity and the moving direction of the performer in real time, and the data reflect the speed change, the rotation and the direction conversion of the performer; this data is transmitted wirelessly in real time to the computing device 301 for analysis of the dynamic changes of the performer and prediction of its future movement trajectory. The calculation device 301 calculates the optimal shooting path and angle of the unmanned aerial vehicle and the ground camera according to the received movement data by using advanced algorithm. The control systems of the unmanned aerial vehicle and the ground camera are adjusted according to the calculation results, so that the action of the performer can be tracked in real time, and the wonderful moment is captured.
Specifically, the step of guiding the second photographing terminal 2 by the guiding terminal 4 includes:
Step 1, judging the states of the guide end 4 and the performer: angular velocity and acceleration are measured using the guide end 4, which carries MEMS gyroscopes and MEMS accelerometers, worn by the performer. The average of the measured values over a certain time is calculated and compared with a preset static-state threshold. If the average value is within the threshold range, a static state is determined and the process goes to step 2; if the threshold is exceeded, a motion state is determined and the process goes to step 5.
Step 2, static-state data processing: in the static state, the average values of the angular velocity and acceleration measured by the gyroscope and the accelerometer are calculated. Using these mean values as references, the gyroscope zero bias and the accelerometer scale are calculated.
Step 3, attitude calculation in the static state: the current attitude angle, quaternion and attitude matrix are calculated from the static-state average values obtained in step 2.
Step 4, static correction of position and speed: the current positions of the guide end 4 and the performer are kept unchanged and the velocity is reset to zero, eliminating the velocity error caused by accumulated navigation errors. The static-state processing ends, and the process returns to step 1.
Step 5, motion state data processing: in the motion state, the angular velocity and the acceleration are calculated directly by using the current measured values of the gyroscope and the accelerometer as input data.
Step 6, calculating the attitude and velocity in the motion state: the quaternion, attitude matrix and attitude angle are updated using the angular velocity obtained in step 5. The acceleration is projected into the navigation coordinate system through the attitude matrix, and the influence of gravity is removed to obtain the actual motion acceleration. The actual motion acceleration is integrated to obtain the current velocity and position.
Step 7, error compensation and navigation update of the rotor unmanned aerial vehicle 201: the projected mean value of the accelerometer in the navigation frame is compared with the gravitational acceleration using an improved complementary filtering method to perform error compensation. The error-compensated position and velocity information is sent to the drone to direct its movement, ensuring that the rotorcraft 201 follows the performer when the third space 8 intersects the first space 6.
The system can accurately capture the dynamic state of the performer in real time by measuring the angular speed and the acceleration of the performer by using the MEMS gyroscope and the MEMS accelerometer. This high-precision state capture provides accurate guidance information for the rotorcraft 201, ensuring that the drone can closely follow the movements of the performer, capturing every wonderful moment. By comparing the average value of the measured values with a preset threshold value, the system can intelligently judge whether the performer is in a static state or a moving state, and adjust the data processing mode accordingly. The intelligent processing not only improves the efficiency of data processing, but also enhances the adaptability of the system to the dynamic change of the performer. In both stationary and moving states, the system can calculate the exact attitude angle, quaternion and attitude matrix, as well as the current speed and position obtained by integration. These calculations provide accurate flight guidance for the drone, particularly when static corrections of position and velocity and error compensation in motion are performed, ensuring the accuracy and stability of the drone flight. And the position and the speed are subjected to error compensation by adopting an improved complementary filtering method, so that the accuracy of the unmanned aerial vehicle navigation system is further improved. This accurate error compensation and navigation information update enables the unmanned aerial vehicle to flexibly cope with a complex performance environment, and to ensure that the unmanned aerial vehicle safely and stably follows the movements of the performer even in the case where the first space 6 and the third space 8 intersect.
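A much simplified, illustrative version of steps 1 to 7 is sketched below in Python: the guide end is classified as static or moving from the gyroscope and accelerometer magnitudes, a zero-velocity update is applied in the static state, and attitude and velocity are integrated in the motion state. The thresholds, the gravity constant and the first-order quaternion update are assumptions of this sketch, not the exact filter of the embodiment.

import numpy as np

G = 9.81  # assumed gravity constant, m/s^2

def quat_integrate(q, gyro, dt):
    # First-order quaternion update from body angular rate (rad/s); q = [w, x, y, z].
    wx, wy, wz = gyro
    omega = np.array([[0, -wx, -wy, -wz],
                      [wx,  0,  wz, -wy],
                      [wy, -wz,  0,  wx],
                      [wz,  wy, -wx,  0]])
    q = q + 0.5 * dt * omega @ q
    return q / np.linalg.norm(q)

def quat_to_matrix(q):
    w, x, y, z = q
    return np.array([
        [1 - 2*(y*y + z*z), 2*(x*y - w*z),     2*(x*z + w*y)],
        [2*(x*y + w*z),     1 - 2*(x*x + z*z), 2*(y*z - w*x)],
        [2*(x*z - w*y),     2*(y*z + w*x),     1 - 2*(x*x + y*y)]])

def step(state, gyro, accel, dt, gyro_thr=0.05, acc_thr=0.2):
    # state = (quaternion, velocity, position); gyro in rad/s, accel (specific force) in m/s^2.
    q, vel, pos = state
    static = (np.linalg.norm(gyro) < gyro_thr and
              abs(np.linalg.norm(accel) - G) < acc_thr)
    if static:
        vel = np.zeros(3)                                   # zero-velocity update (step 4)
    else:
        q = quat_integrate(q, gyro, dt)                     # attitude update (step 6)
        a_nav = quat_to_matrix(q) @ accel - np.array([0.0, 0.0, G])  # remove gravity
        vel = vel + a_nav * dt                              # integrate acceleration
        pos = pos + vel * dt                                # integrate velocity
    return (q, vel, pos), static

state = (np.array([1.0, 0.0, 0.0, 0.0]), np.zeros(3), np.zeros(3))  # initial attitude, velocity, position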
In any of the above embodiments, the rotary unmanned aerial vehicle 201 is a point in the bionic model to reduce the calculation pressure of the computing device 301, and the computing device 301 updates the bionic model corresponding to the field of view 5 according to the boundary data and the position parameter of the first capturing end 1.
In this embodiment, in the biomimetic model, the rotorcraft 201 is reduced to a point, which is aimed at reducing the computational pressure of the computing device 301 and improving the real-time response capability of the system. The computing device 301 dynamically updates the bionic model corresponding to the field of view 5 according to the boundary data obtained in real time and the position parameters of the first shooting end 1, so as to ensure the accuracy and the latest of the model. The system adjusts and optimizes the flight path of the unmanned aerial vehicle and the shooting angle of the first shooting end 1 by using the updated bionic model so as to capture the wonderful performance moment to the maximum extent.
Simplified model construction: the rotorcraft 201 is represented as a point in the bionic model; by neglecting its specific shape and size, only its position information in space is retained. This representation greatly simplifies the complexity of the model and reduces the demand for computing resources, enabling the system to quickly calculate the optimal position and flight path of the unmanned aerial vehicle. Real-time data processing and model updating: the computing device 301 continuously receives boundary data from the various sensors and cameras together with the position parameters of the first capturing end 1. Based on these real-time data, the system dynamically updates the field of view 5 in the bionic model, adjusting the model to reflect the current shooting conditions and field layout. Optimization of the flight path and the field of view 5: using the updated bionic model, the system calculates the optimal position of the unmanned aerial vehicle relative to the performer and the first shooting end 1, and how to adjust the flight path to avoid interference and collision; meanwhile, the system optimizes the setting of the field of view 5 of the first shooting end 1 according to its updated position parameters, ensuring that the optimal shooting angle and content can be captured.
In any of the embodiments described above, the leading end 4 has a higher priority than the computing device 301 in controlling the movement of the rotorcraft 201. By having the leading end 4 guide the movement of the rotorcraft 201 through a local following strategy as far as possible, computational pressure on the central control end 3 is avoided and the risk of human misoperation is reduced.
In this embodiment, the rotorcraft 201 autonomously adjusts its flight path mainly on the basis of the real-time movement data (e.g., position, speed, direction) sent by the leading end 4, rather than depending entirely on the remote control instructions of the central control end 3. By having the rotorcraft 201 respond directly to the data of the leading end 4, the need for the central control end 3 to handle complex calculations and path planning is reduced, thereby lowering the overall computational pressure of the system. Direct intervention of the central control end 3 in the control of the rotary-wing unmanned aerial vehicle 201 is reduced, the risk possibly caused by operation delay or misoperation is lowered, and the reliability and safety of the system are enhanced.
Data acquisition and transmission of the guide end 4: the guide end 4 is provided with a high-precision MEMS gyroscope and accelerometer, and monitors and records the movement state of the performer (including linear acceleration, angular velocity and direction of movement) in real time; these data are transmitted wirelessly in real time to the rotorcraft 201, where they are immediately received and parsed by an onboard processing system. Autonomous flight adjustment of the unmanned aerial vehicle: the flight control system in the rotorcraft 201 automatically calculates a new flight path or adjusts the flight state according to the received data of the guide end 4, so that the drone closely follows the performer; the system optimizes the response strategy of the unmanned aerial vehicle through advanced algorithms, ensuring that even when a performer moves or changes direction rapidly, the drone can adjust in real time and maintain the best match of shooting angle and distance. Auxiliary monitoring and adjustment by the central control end 3: although the rotorcraft 201 performs autonomous flight adjustment mainly through the local following strategy, the central control end 3 monitors the whole system to ensure flight safety and shooting effect; if necessary, the central control end 3 can intervene, for example by adjusting the flight height or the boundary of the flight area, to ensure that the unmanned aerial vehicle does not enter a no-fly zone or fly out of the shooting range.
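The priority of the leading end 4 over remote commands can be pictured with the following minimal Python sketch; the message format, freshness window and function name are assumptions introduced only for illustration, not the system's actual interface.

import time

def choose_setpoint(guide_msg, central_msg, max_age=0.2):
    # Follow the guide end whenever its data is fresh; otherwise fall back to
    # the central control end's last command; hold position if neither exists.
    now = time.time()
    if guide_msg is not None and now - guide_msg["timestamp"] <= max_age:
        return guide_msg["target_position"]      # local following (higher priority)
    if central_msg is not None:
        return central_msg["target_position"]    # remote command as fallback
    return None                                  # hold position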
Another embodiment of the first aspect of the present invention proposes a management method implemented by the above intelligent camera management system for aerobics competition video shooting; as shown in fig. 3, the method specifically includes the following steps:
S101, presetting and preparation: a leading end 4 device (including MEMS gyroscopes and accelerometers) is mounted on the performer to capture dynamic data. The initial positions and angles of the rotary-wing unmanned aerial vehicle 201 (second photographing end 2) and the ground camera (first photographing end 1) are set according to the performance type.
S102, capturing and analyzing real-time data: the guiding end 4 monitors the action of the performer in real time, and judges the state (stationary or moving) of the performer by measuring the angular velocity and acceleration data. The computing device 301 receives the data and analyzes the movement trend of the performer.
S103, dynamically adjusting the unmanned aerial vehicle and the ground camera: the computing device 301 automatically adjusts the flight path of the rotorcraft 201 and the view angle of the ground camera based on the data provided by the lead end 4, ensuring that the best view is continuously captured. Error compensation is performed using a complementary filtering method to optimize flight navigation.
S104, spatial relationship management: the bionic model comprising a first space 6 (virtual space around a performer), a second space 7 (unmanned aerial vehicle flight area) and a third space 8 (unmanned aerial vehicle safety buffer zone) is updated in real time by utilizing the digital twin platform, and space interaction is monitored to avoid intersection.
S105, twin interface control and monitoring: the display screen 302 of the central control end 3 displays a twin interface, and monitors the states and positions of the guide end 4, the rotor unmanned aerial vehicle 201 and the ground camera in real time. And the control interface is intelligently locked or unlocked according to the spatial relationship, and the shooting strategy is optimized.
S106, post-optimization and feedback: post-processing and quality optimization of the captured video content. By analyzing the data of the shooting process, improved and adjusted references are provided for future shots.
Embodiments of the second aspect of the present invention provide an electronic device. In some embodiments of the present invention, as shown in fig. 4, an electronic device is provided, which may be a desktop computer, a notebook computer, a palmtop computer, a cloud server or other electronic equipment. The electronic device 3 may include, but is not limited to, a processor 301 and a memory 302. It will be appreciated by those skilled in the art that fig. 4 is merely an example of the electronic device 3 and does not constitute a limitation of the electronic device 3, which may include more or fewer components than shown, or different components.
The processor 301 may be a central processing unit (CPU) or other general-purpose processor, a digital signal processor (DSP), an application-specific integrated circuit (ASIC), a field-programmable gate array (FPGA) or other programmable logic device, a discrete gate or transistor logic device, discrete hardware components, or the like.
The memory 302 may be an internal storage unit of the electronic device 3, for example, a hard disk or a memory of the electronic device 3. The memory 302 may also be an external storage device of the electronic device 3, for example, a plug-in hard disk provided on the electronic device 3, a smart media card (SMC), a secure digital (SD) card, a flash card, or the like. The memory 302 may also include both an internal storage unit and an external storage device of the electronic device 3. The memory 302 is used to store computer programs and other programs and data required by the electronic device.
Embodiments of the third aspect of the present invention provide a computer-readable storage medium. In some embodiments of the present invention, a computer-readable storage medium storing a computer program is provided; when the computer program is executed by the processor 301, the steps of the method described above are implemented, so that the computer-readable storage medium provided in the third aspect of the present invention has all the technical effects of those steps, which are not repeated here.
It will be apparent to those skilled in the art that, for convenience and brevity of description, only the above-described division of the functional units and modules is illustrated, and in practical application, the above-described functional distribution may be performed by different functional units and modules according to needs, i.e. the internal structure of the apparatus is divided into different functional units or modules to perform all or part of the above-described functions. The functional units and modules in the embodiment may be integrated in one processing unit, or each unit may exist alone physically, or two or more units may be integrated in one unit, where the integrated units may be implemented in a form of hardware or a form of a software functional unit. In addition, the specific names of the functional units and modules are only for distinguishing from each other, and are not used for limiting the protection scope of the present application. The specific working process of the units and modules in the above system may refer to the corresponding process in the foregoing method embodiment, which is not described herein again.
Those of ordinary skill in the art will appreciate that the various illustrative elements and algorithm steps described in connection with the embodiments disclosed herein may be implemented as electronic hardware, or combinations of computer software and electronic hardware. Whether such functionality is implemented as hardware or software depends upon the particular application and design constraints imposed on the solution. Skilled artisans may implement the described functionality in varying ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the present disclosure.
In the embodiments provided in the present disclosure, it should be understood that the disclosed apparatus/electronic device and method may be implemented in other manners. For example, the apparatus/electronic device embodiments described above are merely illustrative, e.g., the division of modules or elements is merely a logical functional division, and there may be additional divisions of actual implementations, multiple elements or components may be combined or integrated into another system, or some features may be omitted, or not performed. Alternatively, the coupling or direct coupling or communication connection shown or discussed may be an indirect coupling or communication connection via interfaces, devices or units, which may be in electrical, mechanical or other forms.
The integrated modules/units, if implemented in the form of software functional units and sold or used as stand-alone products, may be stored in a computer readable storage medium. Based on such understanding, the present disclosure may implement all or part of the flow of the method of the above-described embodiments, or may be implemented by a computer program to instruct related hardware, and the computer program may be stored in a computer readable storage medium, where the computer program, when executed by a processor, may implement the steps of the method embodiments described above. The computer program may comprise computer program code, which may be in source code form, object code form, executable file or in some intermediate form, etc. The computer readable medium may include: any entity or device capable of carrying computer program code, a recording medium, a U disk, a removable hard disk, a magnetic disk, an optical disk, a computer Memory, a Read-Only Memory (ROM), a random access Memory (Random Access Memory, RAM), an electrical carrier signal, a telecommunications signal, a software distribution medium, and so forth. It should be noted that the content of the computer readable medium can be appropriately increased or decreased according to the requirements of the jurisdiction's jurisdiction and the patent practice, for example, in some jurisdictions, the computer readable medium does not include electrical carrier signals and telecommunication signals according to the jurisdiction and the patent practice.
The above embodiments are merely for illustrating the technical solution of the present disclosure, and are not limiting thereof; although the present disclosure has been described in detail with reference to the foregoing embodiments, it should be understood by those of ordinary skill in the art that: the technical scheme described in the foregoing embodiments can be modified or some technical features thereof can be replaced by equivalents; such modifications and substitutions do not depart from the spirit and scope of the technical solutions of the embodiments of the disclosure, and are intended to be included in the scope of the present disclosure.

Claims (10)

1. An intelligent camera management system for aerobics competition video shooting, characterized by comprising:
the first shooting end is movably arranged on the ground far away from the stage; the first shooting end is provided with a view field facing to the performer on the stage so as to acquire a first image;
The second shooting end is suspended in the first space around the performer except the view field; the second shooting end is used for acquiring a second image of the performer, which is different from the first image;
the central control end is used for receiving and displaying the first image and the second image; the central control end is also used for acquiring the boundary of the stage and defining a second space for the second shooting end to move around the stage according to the boundary;
the guide end is arranged on the performer; the guiding end is used for guiding the first shooting end to move along with the performer in the second space;
Wherein the first space moves along with the leading end, and the first space is positioned inside the second space.
2. The intelligent camera management system for aerobics competition video shooting of claim 1, wherein the first shooting end comprises a moving mechanism and a first camera arranged on the moving mechanism; the first camera acquires the first image in the field of view;
the first camera is further used for acquiring a third image around the stage, and the central control end generates the boundary through the third image;
When the first camera acquires the third image, the second shooting end is positioned in the field of view.
3. The intelligent camera management system for aerobics competition video shooting according to claim 2, wherein the central control end comprises a computing device, the computing device constructs a bionic model around the stage according to the third image, and updates the second space in real time according to the second image;
The computing device is capable of interacting with the second shooting end to control the second shooting end to move in the first space through the bionic model.
4. A camera shooting intelligent management system for aerobics competition video shooting as claimed in claim 3, wherein the second shooting end comprises a rotor unmanned aerial vehicle and a second camera fixed on the rotor unmanned aerial vehicle;
the rotor unmanned aerial vehicle is provided with a third space which wraps the rotor unmanned aerial vehicle and the second camera, and the third space is located inside the first space.
5. The intelligent camera management system for aerobics competition video shooting of claim 4, wherein the bionic model comprises the first space, the third space and the field of view; the central control end further comprises a display screen for displaying the first image and the second image;
and the display screen is also provided with a twin interface for updating the bionic model in real time.
6. The intelligent camera management system for aerobics competition video shooting of claim 5, wherein the third space follows the movement of the rotorcraft, and the twinning interface includes a control interface for controlling the rotorcraft; and when a performer with the leading end moves, having the following conditions:
In a first case, the first space moves along the leading end in the twin interface, and the computing device continuously judges whether the first space and the third space are intersected or not;
Secondly, when the first space is intersected with the third space, the rotor unmanned aerial vehicle receives the moving parameters acquired by the guide end and approaches to the center of the first space; the twin interface locks the control interface;
in a third scenario, when the first space does not intersect the second space, or when the first space does not intersect the third space, or when the first space does not intersect both the second space and the third space, the computing device controls the rotorcraft to move such that a center of the third space is continuously near a center of the first space; the twinning interface unlocks the control interface.
7. The intelligent camera management system for aerobics competition video shooting of claim 6, wherein the distance between the guide end and the first space is set according to the field of view; and when the field of view intersects the first space, there is also the following:
in a fourth aspect, when the first space moves, a connection line between the center of the first space and the guiding end is obtained, and the computing device controls the first space to move so as to increase an included angle between the connection line and a horizontal line;
in case five, the computing device controls a first space to move to increase a distance of the leading end from the first space as the field of view moves.
8. The intelligent camera management system for aerobics competition video shooting of claim 6, wherein the position of the leading end on the performer is set according to the type of performance of the performer, and wherein the movement parameters include the linear acceleration, angular velocity and direction of movement of the performer.
9. The intelligent camera management system for aerobics competition video shooting of claim 5, wherein the rotary-wing unmanned aerial vehicle is a point in the bionic model, and the computing device updates the bionic model corresponding to the field of view according to the boundary and the position parameters of the first shooting end.
10. The intelligent camera management system for aerobics competition video shooting of claim 4, wherein the leading end has a higher priority than the computing device in controlling movement of the rotorcraft.
CN202410384085.1A 2024-04-01 2024-04-01 Intelligent camera management system for aerobics competition video shooting Active CN117979168B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202410384085.1A CN117979168B (en) 2024-04-01 2024-04-01 Intelligent camera management system for aerobics competition video shooting

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202410384085.1A CN117979168B (en) 2024-04-01 2024-04-01 Intelligent camera management system for aerobics competition video shooting

Publications (2)

Publication Number Publication Date
CN117979168A true CN117979168A (en) 2024-05-03
CN117979168B CN117979168B (en) 2024-06-11

Family

ID=90859968

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202410384085.1A Active CN117979168B (en) 2024-04-01 2024-04-01 Intelligent camera management system for aerobics competition video shooting

Country Status (1)

Country Link
CN (1) CN117979168B (en)


Patent Citations (14)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20190238822A1 (en) * 2018-01-31 2019-08-01 Synaptive Medical (Barbados) Inc. System for three-dimensional visualization
JP2021022783A (en) * 2019-07-25 2021-02-18 キヤノン株式会社 Control device, tracking system, control method, and program
US20230035682A1 (en) * 2019-12-17 2023-02-02 Kt Corporation Method and device for setting drone flight path
CN111405242A (en) * 2020-02-26 2020-07-10 北京大学(天津滨海)新一代信息技术研究院 Ground camera and sky moving unmanned aerial vehicle linkage analysis method and system
CN111447340A (en) * 2020-05-29 2020-07-24 深圳市瑞立视多媒体科技有限公司 Mixed reality virtual preview shooting system
WO2022061508A1 (en) * 2020-09-22 2022-03-31 深圳市大疆创新科技有限公司 Shooting control method, apparatus and system, and storage medium
CN112839214A (en) * 2021-02-08 2021-05-25 上海电力大学 Inspection system based on unmanned aerial vehicle and ground trolley multi-view field
CN113137955A (en) * 2021-05-13 2021-07-20 江苏航空职业技术学院 Unmanned aerial vehicle aerial survey virtual simulation method based on scene modeling and virtual photography
KR102378738B1 (en) * 2021-10-07 2022-03-28 주식회사 씨스토리 System for broadcasting real-time streaming sportainment baseball league in dome baseball stadium
KR102390569B1 (en) * 2021-10-14 2022-04-27 (주)지트 Geospatial information survey system using drone
CN216817246U (en) * 2022-03-04 2022-06-24 飞循智航(成都)科技有限公司 Unmanned aerial vehicle tracking system
WO2024009855A1 (en) * 2022-07-05 2024-01-11 ソニーグループ株式会社 Control method, control device, and computer program
RU2811574C1 (en) * 2023-01-13 2024-01-15 федеральное государственное автономное образовательное учреждение высшего образования "Пермский национальный исследовательский политехнический университет" Video cameras control system
CN116665490A (en) * 2023-07-28 2023-08-29 中国民航管理干部学院 Urban air traffic management data processing system based on digital twinning

Non-Patent Citations (4)

* Cited by examiner, † Cited by third party
Title
YUCONG LIN: "Sampling-Based Path Planning for UAV Collision Avoidance", IEEE TRANSACTIONS ON INTELLIGENT TRANSPORTATION SYSTEMS, 25 April 2017 (2017-04-25) *
ZHOU XIN: "Application of UAV Aerial Photography in Sports Events", Video Engineering (电视技术), 31 December 2023 (2023-12-31) *
DU YONGHAO; XING LINING; CAI ZHAOQUAN: "A Survey of Intelligent Scheduling Technology for Unmanned Aerial Vehicle Swarms", Acta Automatica Sinica (自动化学报), no. 02, 28 February 2020 (2020-02-28) *
XIAN BIN: "Multi-UAV Path Planning Based on Model Predictive Control and an Improved Artificial Potential Field Method", Control and Decision (控制与决策), 2 January 2024 (2024-01-02) *

Also Published As

Publication number Publication date
CN117979168B (en) 2024-06-11

Similar Documents

Publication Publication Date Title
US11797009B2 (en) Unmanned aerial image capture platform
US20210072745A1 (en) Systems and methods for uav flight control
US11572196B2 (en) Methods and systems for movement control of flying devices
CN105120146B (en) It is a kind of to lock filming apparatus and image pickup method automatically using unmanned plane progress moving object
CN108139204B (en) Information processing apparatus, method for estimating position and/or orientation, and recording medium
CN113038016B (en) Unmanned aerial vehicle image acquisition method and unmanned aerial vehicle
WO2018214078A1 (en) Photographing control method and device
US20180095459A1 (en) User interaction paradigms for a flying digital assistant
JP7059937B2 (en) Control device for movable image pickup device, control method and program for movable image pickup device
US11340634B2 (en) Systems and methods for height control of a movable object
CN205353774U (en) Accompany unmanned aerial vehicle system of taking photo by plane of shooing aircraft
US11295621B2 (en) Methods and associated systems for managing 3D flight paths
WO2021098453A1 (en) Target tracking method and unmanned aerial vehicle
CN107643758A (en) Shoot the autonomous system and method that include unmanned plane and earth station of mobile image
CN113031462A (en) Port machine inspection route planning system and method for unmanned aerial vehicle
US12075152B2 (en) Computer-assisted camera and control system
WO2020209167A1 (en) Information processing device, information processing method, and program
CN117979168B (en) Intelligent camera management system for aerobics competition video shooting
EP3288828B1 (en) Unmanned aerial vehicle system and method for controlling an unmanned aerial vehicle
WO2024069788A1 (en) Mobile body system, aerial photography system, aerial photography method, and aerial photography program
WO2024069789A1 (en) Aerial imaging system, aerial imaging method, and aerial imaging program
US11682175B2 (en) Previsualization devices and systems for the film industry
JP2004080580A (en) Tracking method of panhead for camera and tracking camera panhead
JP2024024367A (en) Information processor and method for processing information
JP2023114238A (en) Information processing device and information processing method

Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
SE01 Entry into force of request for substantive examination
GR01 Patent grant
GR01 Patent grant