CN113056904A - Image transmission method, movable platform and computer readable storage medium - Google Patents

Image transmission method, movable platform and computer readable storage medium

Info

Publication number
CN113056904A
CN113056904A (application CN202080005966.8A)
Authority
CN
China
Prior art keywords
image
frame rate
target
rate
height
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202080005966.8A
Other languages
Chinese (zh)
Inventor
饶雄斌 (Rao Xiongbin)
赵亮 (Zhao Liang)
陈颖 (Chen Ying)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
SZ DJI Technology Co Ltd
Original Assignee
SZ DJI Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by SZ DJI Technology Co Ltd filed Critical SZ DJI Technology Co Ltd
Publication of CN113056904A
Legal status: Pending

Classifications

    • H04N 23/57: Mechanical or electrical details of cameras or camera modules specially adapted for being embedded in other devices
    • G01S 17/894: 3D imaging with simultaneous measurement of time-of-flight at a 2D array of receiver pixels, e.g. time-of-flight cameras or flash lidar
    • G01S 17/933: Lidar systems specially adapted for anti-collision purposes of aircraft or spacecraft
    • H04N 23/66: Remote control of cameras or camera parts, e.g. by remote control devices
    • H04N 23/695: Control of camera direction for changing a field of view, e.g. pan, tilt or based on tracking of objects
    • H04N 23/80: Camera processing pipelines; components thereof

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Physics & Mathematics (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • Electromagnetism (AREA)
  • General Physics & Mathematics (AREA)
  • Radar, Positioning & Navigation (AREA)
  • Remote Sensing (AREA)
  • Aviation & Aerospace Engineering (AREA)
  • Image Analysis (AREA)
  • Studio Devices (AREA)

Abstract

The application provides an image transmission method, a movable platform, and a computer-readable storage medium. The method includes: acquiring the movement characteristics of target pixel points in an image captured by an image acquisition device, and/or acquiring motion state information of the movable platform (S101); and determining, according to the movement characteristics of the target pixel points and/or the motion state information of the movable platform, a target frame rate for image transmission from the movable platform to the control terminal (S102). This improves the timeliness and accuracy of determining the target frame rate.

Description

Image transmission method, movable platform and computer readable storage medium
Technical Field
The present application relates to the field of image transmission technologies, and in particular, to an image transmission method, a movable platform, and a computer-readable storage medium.
Background
At present, when an unmanned aerial vehicle (UAV) transmits video data to a control terminal over a wireless communication link, there is a trade-off between smoothness and sharpness. The higher the image transmission frame rate, the smoother the picture displayed by the control terminal; however, because bandwidth is limited, fewer transmission bits can be allocated to each frame, so the sharpness of a single frame decreases.
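The bandwidth trade-off described here can be made concrete with a small calculation. The link rate and frame rates below are hypothetical numbers chosen for illustration; the patent does not give specific values.

```python
# Illustrative arithmetic: with a fixed downlink bandwidth, the average
# bit budget per frame shrinks as the transmission frame rate rises,
# which is why per-frame sharpness drops at higher frame rates.
def bits_per_frame(bandwidth_bps: float, frame_rate_fps: float) -> float:
    """Average number of bits available for a single frame."""
    return bandwidth_bps / frame_rate_fps

link = 8_000_000  # assumed 8 Mbps wireless downlink
low_fps_budget = bits_per_frame(link, 30)   # budget at 30 fps
high_fps_budget = bits_per_frame(link, 60)  # budget at 60 fps
print(low_fps_budget, high_fps_budget)
```

Doubling the frame rate halves the per-frame bit budget, so a frame-rate controller is really allocating sharpness against smoothness.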
In the prior art, a user may manually select a frame rate through the control terminal, but this selection is subjective, which can lead to untimely adjustment or low accuracy.
Disclosure of Invention
The embodiment of the application provides an image transmission method, a movable platform and a computer readable storage medium, which can improve the timeliness and accuracy of determining a target frame rate.
In a first aspect, an embodiment of the present application provides an image transmission method, where the method is applied to a movable platform, where the movable platform includes an image acquisition device, and the movable platform is in communication connection with a control terminal, and the method includes:
acquiring the movement characteristics of target pixel points in the image acquired by the image acquisition device; and/or,
acquiring motion state information of the movable platform;
and determining a target frame rate for image transmission from the movable platform to the control terminal according to the movement characteristics of the target pixel points in the image and/or the motion state information of the movable platform.
In a second aspect, an embodiment of the present application further provides a movable platform, where the movable platform is connected to a control terminal in communication, and the movable platform includes:
the image acquisition device is used for acquiring images;
a memory for storing a computer program;
a processor for invoking a computer program in the memory to perform:
acquiring the movement characteristics of target pixel points in the image acquired by the image acquisition device; and/or,
acquiring motion state information of the movable platform;
and determining a target frame rate for image transmission from the movable platform to the control terminal according to the movement characteristics of the target pixel points in the image and/or the motion state information of the movable platform.
In a third aspect, an embodiment of the present application further provides a computer-readable storage medium, where the computer-readable storage medium is used to store a computer program, and the computer program is loaded by a processor to execute any one of the image transmission methods provided in the embodiment of the present application.
The method and the device can acquire the movement characteristics of the target pixel points in the image acquired by the image acquisition device; and/or acquire motion state information of the movable platform; and determine a target frame rate for image transmission from the movable platform to the control terminal according to the movement characteristics of the target pixel points in the image and/or the motion state information of the movable platform. The scheme can automatically determine the target frame rate without manual selection by the user, improving the timeliness and accuracy of determining the target frame rate.
Drawings
In order to more clearly illustrate the technical solutions of the embodiments of the present application, the drawings needed in the description of the embodiments are briefly introduced below. Obviously, the drawings in the following description show only some embodiments of the present application, and those skilled in the art can obtain other drawings based on these drawings without creative effort.
Fig. 1 is a schematic diagram of an application scenario of an image transmission method provided in an embodiment of the present application;
fig. 2 is a schematic diagram of interaction between an unmanned aerial vehicle and a control terminal provided in an embodiment of the present application;
fig. 3 is a schematic flowchart of an image transmission method provided in an embodiment of the present application;
FIG. 4 is a schematic diagram of determining a plurality of moving objects from a first image and a second image according to an embodiment of the present application;
fig. 5 is a schematic structural diagram of a movable platform provided in an embodiment of the present application.
Detailed Description
The technical solutions in the embodiments of the present application will be clearly and completely described below with reference to the drawings in the embodiments of the present application, and it is obvious that the described embodiments are some, but not all, embodiments of the present application. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present application.
The flow diagrams depicted in the figures are merely illustrative and do not necessarily include all of the elements and operations/steps, nor do they necessarily have to be performed in the order depicted. For example, some operations/steps may be decomposed, combined or partially combined, so that the actual execution sequence may be changed according to the actual situation.
Some embodiments of the present application will be described in detail below with reference to the accompanying drawings. The embodiments described below and the features of the embodiments can be combined with each other without conflict.
The embodiment of the application provides an image transmission method, a movable platform and a computer readable storage medium, which are used for determining a target frame rate of image transmission from the movable platform to a control terminal based on the movement characteristics of target pixel points in an image acquired by an image acquisition device and/or the motion state information of the movable platform, so that the timeliness and the accuracy of determining the target frame rate are improved.
The movable platform may include a gimbal, a platform body, an image acquisition device, and the like. The platform body may carry the gimbal, and the gimbal may carry one or more image acquisition devices, so that the gimbal can drive the image acquisition device to move. The types of the movable platform and the image acquisition device may be flexibly set according to actual needs and are not limited here. For example, the image acquisition device may be a camera or a visual sensor, and the movable platform may be a mobile terminal, an unmanned aerial vehicle, a robot, or a gimbal camera. The gimbal camera may include a camera, a gimbal, and the like; the gimbal may include an axis arm that drives the camera to move, for example, to a suitable position so that the desired image can be captured by the camera. The camera may be a monocular camera, and its type may be an ultra-wide-angle camera, a telephoto (i.e., zoom) camera, an infrared camera, a far-infrared camera, an ultraviolet camera, a Time-of-Flight (TOF) depth camera, or the like. The unmanned aerial vehicle may include a camera, a distance-measuring device, an obstacle sensing device, and the like, and may also include a gimbal for carrying the camera, which can drive the camera to a suitable position so that the required image can be captured. The unmanned aerial vehicle may be a rotor-type drone (e.g., a quad-rotor, hexa-rotor, or octa-rotor drone), a fixed-wing drone, or a combination of the two, without limitation.
The movable platform may further be provided with a positioning device, such as a Global Positioning System (GPS) receiver, for accurately determining the position of the movable platform as it moves. The camera and the positioning device may lie in the same plane, in which they may be collinear or form a preset included angle; of course, they may also lie in different planes.
Fig. 1 is a schematic view of a scene for implementing the image transmission method provided in this embodiment. As shown in Fig. 1, taking the movable platform as an unmanned aerial vehicle as an example, a control terminal 100 is in communication connection with a drone 200. The control terminal 100 may be configured to control the drone 200 to fly or execute corresponding actions. The drone 200 may collect corresponding motion information, which may include flight direction, flight attitude, flight altitude, flight speed, position information, and the like, and send the collected motion information to the control terminal 100, which analyzes and displays it. The control terminal 100 may also receive a control instruction input by the user and control a distance-measuring device or a camera on the drone 200 accordingly. For example, the control terminal 100 may receive a shooting instruction or a ranging instruction input by the user and send it to the drone 200; the drone 200 may then control the camera to shoot the captured picture according to the shooting instruction, or control the distance-measuring device to measure the distance to a target object according to the ranging instruction, and so on.
Specifically, the type of the control terminal 100 may be flexibly set according to actual needs and is not limited here. For example, the control terminal 100 may be a remote control device provided with a display, control keys, and the like, used to establish a communication connection with the drone 200 and control it; the display may be used to display images, videos, and so on. The control terminal 100 may also be a third-party mobile phone, tablet computer, or the like, which establishes a communication connection with the drone 200 through a preset protocol in order to control the drone 200.
In some embodiments, the obstacle sensing device of the drone 200 may collect sensing signals around the drone 200; by analyzing these signals, obstacle information can be obtained and shown on the display, so that the user can learn of the obstacles sensed by the drone 200 and control the drone 200 to avoid them. The display may be a liquid crystal display screen, a touch screen, or the like.
In some embodiments, the obstacle sensing device may comprise at least one sensor for acquiring a sensing signal from the drone 200 in at least one direction. For example, the obstacle sensing device may include a sensor for detecting an obstacle in front of the drone 200. For example, the obstacle sensing device may include two sensors for detecting obstacles in front of and behind the drone 200, respectively. For example, the obstacle sensing device may include four sensors for detecting obstacles and the like in front of, behind, to the left, and to the right of the drone 200, respectively. For example, the obstacle sensing device may include five sensors for detecting obstacles and the like in front of, behind, to the left, to the right, and above the drone 200, respectively. For example, the obstacle sensing device may include six sensors for detecting obstacles in front of, behind, to the left of, to the right of, above, and below the drone 200, respectively. The sensors in the obstacle sensing device may be implemented separately or integrally. The detection direction of the sensor can be set according to specific needs to detect obstacles in various directions or direction combinations, and is not limited to the form disclosed in the present application.
The drone 200 may have multiple rotors. The rotor may be connected to the body of the drone 200, which may include a control unit, an Inertial Measurement Unit (IMU), a processor, a battery, a power source, and/or other sensors. The rotor may be connected to the body by one or more arms or extensions that branch off from a central portion of the body. For example, one or more arms may extend radially from the central body of the drone 200 and may have rotors at or near the ends of the arms.
Fig. 2 is a schematic diagram of the interaction between the unmanned aerial vehicle and the control terminal provided in the embodiment of the present application. As shown in Fig. 2, taking the movable platform as an unmanned aerial vehicle whose image acquisition device includes a visual sensor and a camera as an example, the unmanned aerial vehicle may capture an image through the visual sensor and determine a movement characteristic (e.g., an offset rate) of a target pixel point in the image, and may acquire its own motion state information (e.g., height, translational speed, and/or angular velocity in the yaw direction) through the inertial measurement unit (IMU). The unmanned aerial vehicle can then make a frame-rate control decision, that is, determine the target frame rate for image transmission to the control terminal based on the movement characteristic of the target pixel point and/or the motion state information. The images collected by the camera can then be video-encoded at the target frame rate to generate video bitstream data, which is sent to the control terminal through the unmanned aerial vehicle's wireless communication module. The control terminal can receive the video bitstream data through its own wireless communication module, decode it to obtain the images, and display them on its display.
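The control loop in Fig. 2 can be sketched in a few lines. The patent does not specify a concrete decision rule, so the function names, thresholds, and the two-level rule below are all assumptions for illustration only.

```python
# Minimal sketch of the frame-rate control decision: pick a higher
# frame rate when the picture or the platform moves fast, trading
# per-frame sharpness for smoothness (hypothetical rule).
from dataclasses import dataclass

@dataclass
class MotionState:
    height_m: float       # flight height (from IMU/altimeter)
    speed_mps: float      # translational motion speed
    yaw_rate_dps: float   # angular velocity in the yaw direction

def decide_target_frame_rate(pixel_shift_rate: float,
                             state: MotionState,
                             low_fps: int = 24,
                             high_fps: int = 60) -> int:
    """Return a target frame rate from the pixel offset rate (px/s)
    and the platform's motion state. Thresholds are assumed values."""
    fast_picture = pixel_shift_rate > 50.0                       # px/s
    fast_platform = state.speed_mps > 5.0 or state.yaw_rate_dps > 30.0
    return high_fps if (fast_picture or fast_platform) else low_fps

print(decide_target_frame_rate(80.0, MotionState(20.0, 2.0, 5.0)))
print(decide_target_frame_rate(10.0, MotionState(20.0, 2.0, 5.0)))
```

A real implementation would smooth the decision over time (hysteresis) to avoid oscillating between frame rates; that refinement is omitted here.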
It should be noted that the structures of the apparatuses such as the unmanned aerial vehicle and the control terminal in fig. 1 and fig. 2 do not limit the application scenarios of the image transmission method.
Referring to fig. 3, fig. 3 is a schematic flowchart illustrating an image transmission method according to an embodiment of the present application. The image transmission method can be applied to a movable platform for accurately determining the target frame rate; the following detailed description takes an unmanned aerial vehicle as the movable platform.
As shown in fig. 3, the image transmission method may include steps S101 to S102, and may specifically be as follows:
s101, obtaining the movement characteristics of target pixel points in an image collected by an image collecting device; and/or acquiring motion state information of the movable platform.
One or more image acquisition devices, such as cameras or visual sensors, may be provided on the unmanned aerial vehicle in advance. During flight, the unmanned aerial vehicle can collect one or more frames of images through the image acquisition device.
After the image is acquired, the movement characteristics of the target pixel points in the image can be obtained. There may be one or more target pixel points; for example, they may be feature points, pixel points in a background region, or pixel points in the region where a moving target is located. The movement characteristic is used to represent the movement state of the target pixel point and can be flexibly set according to actual needs, the specific content not being limited here; for example, the movement characteristic may be the movement rate or acceleration of the target pixel point.
In some embodiments, obtaining the movement characteristics of the target pixel point in the image acquired by the image acquisition device may include: acquiring a first image and a second image in a plurality of frames of images acquired by an image acquisition device; acquiring pixel coordinates of a target pixel point of a first image and pixel coordinates of a target pixel point of a second image, wherein the target pixel point of the first image corresponds to the target pixel point of the second image; and determining the movement characteristic of the target pixel point according to the pixel coordinate of the target pixel point of the first image and the pixel coordinate of the target pixel point of the second image.
To improve the accuracy of determining the movement characteristics of the target pixel points, the movement characteristics can be determined based on the pixel coordinates of the target pixel points in multiple frames of images. Specifically, the unmanned aerial vehicle can capture one frame of image through the image acquisition device at every preset time interval, so that after a preset time period multiple frames have been collected; the preset interval and time period can be flexibly set as needed, and specific values are not limited here. A target pixel point is then extracted from the multi-frame images, its pixel coordinates on each frame are obtained, and its movement characteristic is determined according to those coordinates. For example, taking two frames as an example, a first image and a second image may be screened from the multiple frames captured by the image acquisition device; the first image and the second image may be adjacent or non-adjacent in acquisition time. For example, a first image and a second image adjacent in acquisition time may be selected, or two images with good image quality, or two images with high sharpness.
It should be noted that the image acquisition device may capture the multiple frames of images at an initial frame rate. When the image acquisition device starts capturing images, for example when it is turned on, the default frame rate may be used as its initial frame rate.
Then, the pixel coordinates of the target pixel point of the first image and the pixel coordinates of the target pixel point of the second image may be obtained, where the target pixel point of the first image corresponds to the target pixel point of the second image, for example, when the target pixel point is a pixel point of a moving vehicle, the pixel coordinates of a pixel point in an area where the vehicle is located in the first image may be obtained, and the pixel coordinates of a pixel point in an area where the vehicle is located in the second image may be obtained. At this time, the movement characteristic of the target pixel point can be determined according to the pixel coordinate of the target pixel point of the first image and the pixel coordinate of the target pixel point of the second image.
In some embodiments, the target pixel points may include a first target pixel point corresponding to a background and/or a second target pixel point corresponding to a moving target, where the first target pixel point of the first image corresponds to the first target pixel point of the second image, and the second target pixel point of the first image corresponds to the second target pixel point of the second image.
To improve the accuracy of subsequently determining the movement characteristics, the target pixel points may be divided into first target pixel points corresponding to a background and/or second target pixel points corresponding to a moving target. The moving target may be an object moving in the picture captured by the image acquisition device, and there may be one or more moving targets, for example a walking person, a moving vehicle, or a running dog; the background may be the area of the picture other than the moving target. For example, the background and the moving target in the first image may be identified to obtain the pixel coordinates of the first target pixel points corresponding to the background and of the second target pixel points corresponding to the moving target, and the same may be done for the second image. Of course, only the background, or only the moving target, may be identified in the first image and the second image, obtaining only the pixel coordinates of the corresponding first target pixel points or second target pixel points.
In some embodiments, the image transmission method may further include: a background and/or moving object is identified in the first image and the second image based on a pre-trained computational model.
To improve the accuracy of identifying the background and/or the moving target, they may be identified in the multi-frame images by a pre-trained computational model, which may be a deep learning model and can be flexibly chosen according to actual needs. For example, the model may be a target detection algorithm such as SSD or YOLO, or a convolutional neural network such as R-CNN or Faster R-CNN. For example, a plurality of sample images containing different types of moving targets and backgrounds may be obtained, and the model trained on these sample images to obtain the pre-trained computational model. Then, for multi-frame images such as the acquired first image and second image, the background and/or the moving target in each image can be accurately identified by the pre-trained model, so that the pixel coordinates of the pixel points in the areas where the background and/or the moving target are located can be obtained.
It should be noted that the calculation model for identifying the background may be the same as or different from the calculation model for identifying the moving object, for example, the first calculation model trained in advance is used to identify the background in the multi-frame images such as the first image and the second image, so as to obtain the position of the background area on each frame of image, and the pixel coordinates of the background area on each frame of image are determined according to the position of the background area. And identifying the moving target in the multi-frame images such as the first image and the second image through a pre-trained second calculation model to obtain the position of the moving target on each frame of image, and determining the pixel coordinate of the moving target on each frame of image according to the position of the moving target, wherein the first calculation model and the second calculation model can be the same or different.
In some embodiments, the background area or the area where the moving object is located in the image may include a plurality of pixels. Feature points can be extracted aiming at a background area and an area where a moving target is located in the image, the extracted feature points are used as target pixel points, and the corresponding relation between the target pixel points in the first image and the target pixel points in the second image is determined by utilizing a feature point matching method.
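The feature-point correspondence step above can be illustrated with a toy matcher. Real systems would use robust descriptors such as ORB or SIFT (e.g., via OpenCV); the plain nearest-descriptor matching below is a simplification, and all data values are invented for illustration.

```python
# Toy feature-point matching between two frames: pair each descriptor
# in frame 1 with its nearest descriptor in frame 2 (Euclidean distance).
def match_features(desc1, desc2):
    """Return (i, j) index pairs linking frame-1 points to frame-2 points."""
    def dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5
    matches = []
    for i, d1 in enumerate(desc1):
        j = min(range(len(desc2)), key=lambda k: dist(d1, desc2[k]))
        matches.append((i, j))
    return matches

frame1 = [(0.1, 0.9), (0.8, 0.2)]
frame2 = [(0.82, 0.18), (0.12, 0.88)]  # same points, slightly shifted
print(match_features(frame1, frame2))  # [(0, 1), (1, 0)]
```

In practice a cross-check or ratio test would be added to reject ambiguous matches before the matched coordinates are used to compute movement characteristics.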
In some embodiments, the moving target may include the one candidate moving target, or the top several candidate moving targets, with the largest corresponding image region area among a plurality of candidate moving targets; alternatively, the moving target may include the one candidate moving target, or the top several candidate moving targets, with the largest motion amplitude among the plurality of candidate moving targets.
To improve the accuracy of the subsequent determination of the movement characteristics, reliable pixel coordinates of one or more moving targets may be obtained for determining the movement characteristics; that is, there may be one or more moving targets. In one embodiment, the moving targets may be screened by image region area: when a plurality of candidate moving targets are identified in the image, the image area occupied by each candidate is obtained, and the one or top several candidates with the largest image area are selected, for example, the single candidate with the largest area or the top 3. In another embodiment, the moving targets may be screened by motion amplitude: when a plurality of candidates are identified, the motion amplitude of each candidate is obtained, and the one or top several candidates with the largest motion amplitude are selected, for example, the single candidate with the largest amplitude or the top 3.
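Both screening strategies reduce to a top-k selection over the candidates. The candidate record structure, field names, and k values below are assumptions for illustration.

```python
# Sketch of the two candidate-screening strategies: keep the top-k
# candidate moving targets by occupied image area, or by motion amplitude.
def top_k_targets(candidates, key, k=3):
    """candidates: list of dicts with numeric fields; key: field to rank by."""
    return sorted(candidates, key=lambda c: c[key], reverse=True)[:k]

candidates = [
    {"id": "car",    "area": 1200, "amplitude": 35.0},
    {"id": "person", "area": 400,  "amplitude": 12.0},
    {"id": "dog",    "area": 150,  "amplitude": 48.0},
    {"id": "bird",   "area": 60,   "amplitude": 5.0},
]
by_area = top_k_targets(candidates, "area", k=1)            # largest region
by_amplitude = top_k_targets(candidates, "amplitude", k=1)  # largest motion
print(by_area[0]["id"], by_amplitude[0]["id"])
```

Note that the two criteria can pick different targets (here the car dominates by area but the dog by amplitude), which is why the patent presents them as alternative embodiments.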
In some embodiments, determining the movement characteristic of the target pixel point according to the pixel coordinates of the target pixel point of the first image and the pixel coordinates of the target pixel point of the second image may include: determining a relative position difference according to the pixel coordinates of the target pixel point of the first image and the pixel coordinates of the target pixel point of the second image; determining the offset rate of the target pixel point according to the relative position difference and the acquisition time interval of the first image and the second image; and determining the movement characteristic of the target pixel point according to the offset rate.
In order to improve flexibility and convenience of determining the movement characteristics of the target pixel points, the movement characteristics can be determined based on the relative position difference of the target pixel points. Specifically, the relative position difference between the pixel coordinates of the target pixel points in the multi-frame images acquired by the image acquisition device and the acquisition time interval of the multi-frame images can be acquired, and the movement characteristic of the target pixel points is determined according to the relative position difference and the acquisition time interval. For example, taking two frames of images as an example, the two frames of images include a first image and a second image, and the relative position difference may be determined according to the pixel coordinates of the target pixel of the first image and the pixel coordinates of the target pixel of the second image, for example, the relative position difference Δ a may be determined by the following formula (1):
Δa = √((x_k − x_i)² + (y_k − y_i)²)    (1)
wherein (x_i, y_i) represents the pixel coordinates of the target pixel point of the first image, and (x_k, y_k) represents the pixel coordinates of the target pixel point of the second image.
The acquisition time interval T between the first image and the second image is then acquired, and the offset rate of the target pixel point is determined according to the relative position difference Δa and the time interval T, which may be specifically represented by the following formula (2):
p = Δa / T    (2)
wherein p represents the offset rate of the target pixel point, Δ a represents the relative position difference, and T represents the acquisition time interval.
At this time, the movement characteristic of the target pixel point can be determined according to the offset rate.
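The computation in formulas (1) and (2) can be sketched as follows. Treating the relative position difference as the Euclidean distance between the two pixel coordinates is an assumption (the patent's formula is given only as a figure), and the coordinates and interval below are invented:

```python
import math

# Sketch of formulas (1) and (2): relative position difference as Euclidean
# distance between the pixel's coordinates in the two frames (an assumption),
# divided by the acquisition time interval T to get the offset rate.

def offset_rate(pixel_first, pixel_second, interval_s):
    xi, yi = pixel_first     # target pixel point in the first image
    xk, yk = pixel_second    # same target pixel point in the second image
    delta_a = math.hypot(xk - xi, yk - yi)   # formula (1)
    return delta_a / interval_s              # formula (2)

# Invented example: the pixel moves (3, 4) px over 0.1 s.
p = offset_rate((100.0, 200.0), (103.0, 204.0), 0.1)  # Δa = 5 px, p = 50 px/s
```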
In some embodiments, determining the movement characteristic of the target pixel point according to the offset rate may include: performing a weighted average of the offset rate of the first target pixel point and the offset rate of the second target pixel point, so as to represent the movement characteristic of the target pixel point by the weighted-average offset rate.
When the target pixel points include a first target pixel point and a second target pixel point, for example, the target pixel points include a first target pixel point corresponding to a background and a second target pixel point corresponding to a moving target, the offset rate of the first target pixel point may be determined based on the pixel coordinates of the first target pixel point in the first image and the second image, and the offset rate of the second target pixel point may be determined based on the pixel coordinates of the second target pixel point in the first image and the second image. Then, performing weighted average on the offset rate of the first target pixel and the offset rate of the second target pixel, so as to represent the movement characteristic of the target pixel by using the offset rate after weighted average, which can be specifically shown in the following formula (3):
p̄ = (p1 + p2) / 2    (3)
wherein p̄ represents the averaged offset rate, i.e., the movement characteristic of the target pixel point, p1 represents the offset rate of the first target pixel point, and p2 represents the offset rate of the second target pixel point.
It should be noted that, when the target pixel includes a first target pixel and a second target pixel, for example, the target pixel includes a first target pixel corresponding to a background and a second target pixel corresponding to a moving target, the offset rate of the first target pixel may be determined based on the pixel coordinates of the first target pixel in the first image and the second image, and the offset rate of the second target pixel may be determined based on the pixel coordinates of the second target pixel in the first image and the second image. Then, a weighted value (for example, a weighted value of a background) of the first target pixel point and a weighted value (for example, a weighted value of a moving target) of the second target pixel point may be obtained, and a weighted sum operation is performed according to the weighted value of the first target pixel point, the offset rate of the first target pixel point, the weighted value of the second target pixel point, and the offset rate of the second target pixel point, so as to represent the movement characteristic of the target pixel point by the offset rate after the weighted sum, which may be specifically represented by the following formula (4):
p̄ = a11·p1 + a12·p2    (4)

wherein p̄ represents the weighted offset rate, i.e., the movement characteristic of the target pixel point, p1 represents the offset rate of the first target pixel point, p2 represents the offset rate of the second target pixel point, a11 represents the weight value of the first target pixel point, a12 represents the weight value of the second target pixel point, and a11 + a12 = 1.
The following gives a detailed description, taking as an example target pixel points that include a target pixel point corresponding to the background, a target pixel point corresponding to a first moving target, a target pixel point corresponding to a second moving target, and a target pixel point corresponding to a third moving target, and taking two frames, a first image and a second image, as an example.
Specifically, the pixel coordinate B of the target pixel point corresponding to the background in the first image may be obtained, and the pixel coordinate B' of the target pixel point corresponding to the background in the second image may be obtained, wherein,
B = {(x_i, y_i)}
B' = {(x_k, y_k)}

wherein (x_i, y_i) represents the pixel coordinates of the background in the first image, and (x_k, y_k) represents the pixel coordinates of the background in the second image.
Based on (x_i, y_i) and (x_k, y_k), the relative position difference of the background between the first image and the second image is acquired, namely the relative position difference ΔB between the pixel coordinates of B in the first image and B' in the second image:

ΔB = √((x_k − x_i)² + (y_k − y_i)²)
acquiring the acquisition time interval T of the first image and the second image, and determining the offset rate p corresponding to the background according to the acquisition time interval T and the relative position difference delta Bb
Figure BDA0003047054260000104
As shown in FIG. 4, moving targets S_i other than the background can be identified from the first image and the second image, and the 3 moving targets whose image regions occupy the largest area (i.e., occupy the largest number of image pixels) can be taken. The pixel coordinates of the three moving targets in the first image are {S1, S2, S3}, wherein S1 represents the pixel coordinates of the target pixel point corresponding to the first moving target in the first image, S2 represents the pixel coordinates of the target pixel point corresponding to the second moving target in the first image, and S3 represents the pixel coordinates of the target pixel point corresponding to the third moving target in the first image. The pixel coordinates of the three moving targets in the second image, {S'1, S'2, S'3}, are also acquired, wherein S'1 represents the pixel coordinates of the target pixel point corresponding to the first moving target in the second image, S'2 represents the pixel coordinates of the target pixel point corresponding to the second moving target in the second image, and S'3 represents the pixel coordinates of the target pixel point corresponding to the third moving target in the second image.
S_j = {(x_i^j, y_i^j)},  S'_j = {(x_k^j, y_k^j)},  j = 1, 2, 3

wherein (x_i^j, y_i^j) represents the pixel coordinates of the j-th moving target in the first image, and (x_k^j, y_k^j) represents the pixel coordinates of the j-th moving target in the second image; in FIG. 4, j = 1, 2, 3 correspond to the first, second, and third moving targets, respectively.
Then, the relative position differences of the 3 moving targets between the first image and the second image can be obtained, i.e., the relative position difference ΔS1 between the pixel coordinates of S1 in the first image and S'1 in the second image, the relative position difference ΔS2 between the pixel coordinates of S2 in the first image and S'2 in the second image, and the relative position difference ΔS3 between the pixel coordinates of S3 in the first image and S'3 in the second image, specifically:
ΔS_j = √((x_k^j − x_i^j)² + (y_k^j − y_i^j)²),  j = 1, 2, 3
wherein ΔS_j represents the relative position difference of the j-th of the 3 moving targets. According to the acquisition time interval T of the first image and the second image and the relative position differences ΔS_j, the offset rate p_tj corresponding to each moving target can be determined:
p_tj = ΔS_j / T,  j = 1, 2, 3
wherein p_t1 represents the offset rate corresponding to the first moving target, p_t2 represents the offset rate corresponding to the second moving target, p_t3 represents the offset rate corresponding to the third moving target, ΔS1 represents the relative position difference of the first moving target between the first image and the second image, ΔS2 represents the relative position difference of the second moving target between the first image and the second image, and ΔS3 represents the relative position difference of the third moving target between the first image and the second image.
After the offset rate p_b corresponding to the background, the offset rate p_t1 corresponding to the first moving target, the offset rate p_t2 corresponding to the second moving target, and the offset rate p_t3 corresponding to the third moving target are obtained, a synthetically calculated target offset rate p̄ may be determined:

p̄ = a0·p_b + a1·p_t1 + a2·p_t2 + a3·p_t3
wherein p̄ represents the target offset rate, p_b represents the offset rate corresponding to the background, p_t1 represents the offset rate corresponding to the first moving target, p_t2 represents the offset rate corresponding to the second moving target, p_t3 represents the offset rate corresponding to the third moving target, a0 represents the weight value of the offset rate corresponding to the background, a1 represents the weight value of the offset rate corresponding to the first moving target, a2 represents the weight value of the offset rate corresponding to the second moving target, a3 represents the weight value of the offset rate corresponding to the third moving target, and a0 + a1 + a2 + a3 = 1.
At this time, the movement characteristic of the target pixel point can be represented by using the target offset rate.
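The worked example above can be sketched end to end as follows. The coordinates, the interval T, and the weight values a0–a3 are all invented for illustration, and taking the relative position difference as the Euclidean distance between coordinates is an assumption about the figure-only formulas:

```python
import math

# Illustrative sketch: combine the background offset rate p_b and the offset
# rates p_t1..p_t3 of three moving targets, using weights a0..a3 that sum to 1,
# into the target offset rate. All numbers below are invented.

def rate(coord_first, coord_second, T):
    """Offset rate of one tracked point between the two frames."""
    dx = coord_second[0] - coord_first[0]
    dy = coord_second[1] - coord_first[1]
    return math.hypot(dx, dy) / T

T = 0.1  # acquisition time interval in seconds (assumed)
p_b = rate((10, 10), (13, 14), T)        # background:    delta = 5 px  -> 50 px/s
p_t = [rate((50, 50), (56, 58), T),      # first target:  delta = 10 px -> 100 px/s
       rate((80, 20), (80, 25), T),      # second target: delta = 5 px  -> 50 px/s
       rate((30, 90), (33, 94), T)]      # third target:  delta = 5 px  -> 50 px/s

weights = [0.4, 0.3, 0.2, 0.1]           # a0..a3, summing to 1 (assumed values)
target_offset_rate = weights[0] * p_b + sum(w * p for w, p in zip(weights[1:], p_t))
# 0.4*50 + 0.3*100 + 0.2*50 + 0.1*50 = 65 px/s
```

The resulting target offset rate is the single scalar that the frame-rate decision below consumes.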
In some embodiments, obtaining motion state information of the movable platform may include: the height of the movable platform, the translational motion speed, and/or the angular speed in the yaw direction are acquired.
The motion state information of the movable platform may include at least one of the height of the movable platform, the translational motion speed, and the angular velocity in the yaw direction. Taking the movable platform as an unmanned aerial vehicle as an example, the motion state information may include at least one of the flight height of the unmanned aerial vehicle, its translational motion speed, and its angular velocity in the yaw direction, where the height may be the height of the unmanned aerial vehicle above the ground, and the translational motion speed may be the flight speed of the unmanned aerial vehicle. In this embodiment, taking the height of the movable platform into account can further improve the accuracy of the frame rate adjustment. For example, when the unmanned aerial vehicle takes video at the same speed close to the ground and far from the ground, the user's visual perception is completely different: the perceived speed is greater near the ground than far from it. Therefore, further referring to the height, in addition to the translational motion speed and the angular velocity in the yaw direction, makes the frame rate adjustment better match the user's intuitive perception.
It should be noted that the motion state information may also include other information such as flight direction, flight attitude, or position information, and the specific content is not limited herein.
In some embodiments, acquiring the height of the movable platform, the translational motion speed, and/or the angular speed in the yaw direction may include: the height of the movable platform, the translational motion speed, and/or the angular velocity in the yaw direction are acquired by an inertial measurement unit mounted on the movable platform.
In order to improve the convenience and accuracy of acquiring the motion state information, the height, the translational motion speed, the angular speed in the yaw direction and/or other information of the movable platform can be acquired through an Inertial Measurement Unit (IMU) installed on the movable platform (such as an unmanned aerial vehicle).
S102, determining a target frame rate of image transmission from the movable platform to the control terminal according to the movement characteristics of the target pixel points in the image and/or the motion state information of the movable platform.
Taking the movable platform as an unmanned aerial vehicle as an example, after the movement characteristics of the target pixel points are obtained, the target frame rate of image transmission from the unmanned aerial vehicle to the control terminal can be determined according to the movement characteristics of the target pixel points; or after the motion state information of the unmanned aerial vehicle is obtained, the target frame rate of image transmission from the unmanned aerial vehicle to the control terminal can be determined according to the motion state information of the unmanned aerial vehicle; or after the movement characteristics of the target pixel points and the motion state information of the unmanned aerial vehicle are obtained, the target frame rate of the unmanned aerial vehicle for image transmission to the control terminal can be determined according to the movement characteristics of the target pixel points and the motion state information of the unmanned aerial vehicle. Therefore, the unmanned aerial vehicle can intelligently sense the current flying environment, the moving characteristic of visual perception of the unmanned aerial vehicle and the detected motion state information are combined, the target frame rate of image transmission is dynamically adjusted in a self-adaptive mode, the subsequent control terminal can conveniently achieve the optimal balance of smoothness and definition of the displayed image transmission picture, and the effect of image transmission of integral aerial photography is improved.
In some embodiments, the determining, according to the movement characteristic of the target pixel in the image, a target frame rate at which the movable platform transmits the image to the control terminal may include: when the offset rate is smaller than a first preset rate threshold value, setting the first frame rate as a target frame rate; or when the offset rate is greater than a second preset rate threshold, setting the second frame rate as the target frame rate; or when the offset rate is greater than or equal to a first preset rate threshold and the offset rate is less than or equal to a second preset rate threshold, setting the current frame rate as the target frame rate; the first preset speed threshold is smaller than the second preset speed threshold, and the first frame rate is smaller than the second frame rate.
In order to improve the convenience of determining the target frame rate, the target frame rate may be determined only by using the offset rate of the target pixel, specifically, after the offset rate of the target pixel is obtained, a frame rate control decision corresponding to the offset rate of the target pixel may be obtained, and the target frame rate of image transmission corresponding to the offset rate of the target pixel is determined according to the frame rate control decision, where the frame rate control decision may be a mapping relationship between a plurality of different offset rates and each frame rate, and the target frame rate of image transmission corresponding to the offset rate of the target pixel may be determined by querying the mapping relationship. Alternatively, the frame rate control decision may be a calculation conversion relationship between the offset rate and the frame rate, and the target frame rate of the corresponding image transmission may be calculated based on the offset rate of the target pixel point through the calculation conversion relationship. Of course, the frame rate control decision may also be flexibly set according to actual needs, and the specific content is not limited herein.
For example, after the offset rate of the target pixel point is obtained, whether the offset rate is smaller than a first preset rate threshold may be determined. When the offset rate is smaller than the first preset rate threshold, it indicates that the flight speed of the unmanned aerial vehicle is slow and the content of the video picture changes slowly; in order to ensure the definition of each acquired frame and improve the image quality of aerial image transmission, a lower frame rate may be adopted at this time, that is, the first frame rate is set as the target frame rate. Or, when the offset rate is greater than the second preset rate threshold, it indicates that the content of the video picture changes fast; in order to ensure the smoothness of subsequent video playing and improve the image transmission effect of aerial photography, a higher frame rate may be adopted at this time, that is, the second frame rate is set as the target frame rate. Or, when the offset rate is greater than or equal to the first preset rate threshold and less than or equal to the second preset rate threshold, it indicates that the flight state of the unmanned aerial vehicle has not changed much and the frame rate does not need to be adjusted; at this time, the current frame rate may be maintained unchanged and set as the target frame rate. Specifically, the following may be used:
if p̄ > p_u, then select frame rate K2;
else if p̄ < p_l, then select frame rate K1;
else, keep the current frame rate unchanged.

wherein p̄ represents the offset rate of the target pixel point, p_u represents the second preset rate threshold, p_l represents the first preset rate threshold, K2 represents the second frame rate, and K1 represents the first frame rate. The first preset rate threshold is smaller than the second preset rate threshold, and the first frame rate is smaller than the second frame rate; the first preset rate threshold, the second preset rate threshold, the first frame rate, and the second frame rate can be flexibly set according to actual needs, and specific values are not limited here. For example, the frame rate for the high-definition mode may be 30 fps, and the frame rate for the fluency mode may be 60 fps.
It should be noted that the frame rate adjustment is not limited to the first frame rate and the second frame rate, and the like, and may also include a plurality of different frame rates, such as a third frame rate, a fourth frame rate, a fifth frame rate, and the like, and a mapping relationship between the plurality of different frame rates and each offset rate may be established, so that based on the mapping relationship between the plurality of different frame rates and each offset rate, the frame rate corresponding to the currently detected offset rate is determined, and the target frame rate is obtained.
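A minimal sketch of the offset-rate decision above; the concrete threshold values and frame rates are placeholders (only the comparison structure comes from the text):

```python
# Placeholder constants: K1 = high-definition frame rate, K2 = fluency frame
# rate, P_L / P_U = first / second preset rate thresholds (assumed values).
K1, K2 = 30, 60          # fps
P_L, P_U = 20.0, 120.0   # px/s

def target_frame_rate(offset_rate, current_rate):
    if offset_rate > P_U:    # content changes fast -> higher frame rate
        return K2
    if offset_rate < P_L:    # content changes slowly -> lower frame rate
        return K1
    return current_rate      # otherwise keep the current frame rate

fast = target_frame_rate(150.0, 30)   # -> K2
slow = target_frame_rate(10.0, 60)    # -> K1
mid = target_frame_rate(50.0, 30)     # -> current rate kept
```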
In some embodiments, determining the target frame rate for image transmission from the movable platform to the control terminal according to the motion state information of the movable platform may include: if the height is smaller than a first height threshold value and the translational motion speed is larger than a first speed threshold value, setting a second frame rate as a target frame rate; if the height is greater than or equal to a first height threshold value, or the translational motion speed is less than or equal to a first speed threshold value, judging whether the angular speed is greater than the angular speed threshold value; if the angular velocity is larger than the angular velocity threshold, setting the second frame rate as the target frame rate; if the angular speed is less than or equal to the angular speed threshold, judging whether the height is greater than a second height threshold and whether the translational motion speed is less than a second speed threshold; if the height is larger than a second height threshold value and the translational motion speed is smaller than a second speed threshold value, setting the first frame rate as a target frame rate; if the height is smaller than or equal to a second height threshold value, or the translational motion speed is larger than or equal to a second speed threshold value, setting the current frame rate as the target frame rate; the first height threshold is smaller than the second height threshold, the first speed threshold is smaller than the second speed threshold, and the first frame rate is smaller than the second frame rate.
Taking a movable platform as an unmanned aerial vehicle as an example, in order to improve flexibility and efficiency of determining a target frame rate, the target frame rate may be determined only by using motion state information of the unmanned aerial vehicle, specifically, after obtaining motion state information such as a height of the unmanned aerial vehicle, a translational motion speed, and an angular speed in a yaw direction, a frame rate control decision corresponding to the motion state information may be obtained, and a target frame rate of image transmission corresponding to the motion state information is determined according to the frame rate control decision, where the frame rate control decision may be a mapping relationship between a plurality of different motion state information and each frame rate, and a target frame rate of image transmission corresponding to the motion state information obtained by current detection may be determined by querying the mapping relationship. Alternatively, the frame rate control decision may be a calculation conversion relationship between the motion state information and the frame rate, and the target frame rate of the corresponding image transmission may be calculated based on the motion state information through the calculation conversion relationship. Of course, the frame rate control decision may also be flexibly set according to actual needs, and the specific content is not limited herein.
For example, after obtaining the motion state information of the unmanned aerial vehicle, such as the altitude, the translational motion speed, and the angular speed in the yaw direction, it may be determined whether the altitude is smaller than a first altitude threshold, and whether the translational motion speed is greater than a first speed threshold, if the altitude is smaller than the first altitude threshold, and the translational motion speed is greater than the first speed threshold, it is indicated that the content of the video frame changes faster, in order to ensure the smoothness of subsequent video playing and improve the effect of image transmission of aerial photography, a higher frame rate may be used at this time, that is, the second frame rate is set as the target frame rate. If the height is greater than or equal to the first height threshold value, or the translational motion speed is less than or equal to the first speed threshold value, whether the angular speed is greater than the angular speed threshold value can be further judged; if the angular velocity is greater than the angular velocity threshold, it is indicated that the content of the video image changes rapidly, and in order to ensure the fluency of the subsequent video playing generated for the image and improve the image transmission effect of aerial photography, a higher frame rate may be adopted at this time, that is, the second frame rate is set as the target frame rate; if the angular velocity is less than or equal to the angular velocity threshold, whether the height is greater than a second height threshold and whether the translational motion velocity is less than a second velocity threshold can be further judged; if the height is greater than the second height threshold value and the translational motion speed is less than the second speed threshold value, it is indicated that the content change of the video image is slow, and in order to ensure the high definition of each 
frame of acquired image and improve the image quality of image transmission of aerial photography, a lower frame rate can be adopted at the moment, that is, the first frame rate is set as the target frame rate; if the altitude is less than or equal to the second altitude threshold, or the translational motion speed is greater than or equal to the second speed threshold, it indicates that the change of the flight state of the unmanned aerial vehicle is not large, and the frame rate does not need to be adjusted, and at this time, the current frame rate can be maintained unchanged, and the current frame rate is set as the target frame rate. Specifically, the following may be mentioned:
if h < H_l and v > v_l, then select frame rate K2;
else if w_y > w_T, then select frame rate K2;
else if h > H_u and v < v_u, then select frame rate K1;
else, keep the current frame rate unchanged.

wherein h represents the height, v represents the translational motion speed, w_y represents the angular velocity in the yaw direction, H_l represents the first height threshold, H_u represents the second height threshold, v_l represents the first speed threshold, v_u represents the second speed threshold, w_T represents the angular velocity threshold, K2 represents the second frame rate, and K1 represents the first frame rate. The first height threshold is less than the second height threshold, the first speed threshold is less than the second speed threshold, and the first frame rate is less than the second frame rate. The height thresholds, the speed thresholds, the angular velocity threshold, and the frame rates can be flexibly set according to actual needs, and specific values are not limited here.
It should be noted that the frame rate adjustment is not limited to the first frame rate and the second frame rate, and the like, and may also include a plurality of different frame rates, such as a third frame rate, a fourth frame rate, a fifth frame rate, and the like, and a mapping relationship between the plurality of different frame rates and each motion state information may be established, so that based on the mapping relationship between the plurality of different frame rates and each motion state information, the frame rate corresponding to the motion state information obtained by current detection is determined, and the target frame rate is obtained. Further, when the translational motion velocity and the angular velocity are vector values, it is possible to take the absolute value of the translational motion velocity to compare with a velocity threshold value, and the absolute value of the angular velocity to compare with an angular velocity threshold value, and the like.
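The motion-state decision above can be sketched the same way; all threshold values here are placeholders, not values from the patent:

```python
# Sketch of the height / speed / yaw-rate decision. Defaults are assumed:
# H_L/H_U = first/second height thresholds (m), V_L/V_U = first/second speed
# thresholds (m/s), W_T = yaw angular-velocity threshold (rad/s).

def frame_rate_from_motion(h, v, w_y, current_rate,
                           H_L=10.0, H_U=100.0,
                           V_L=2.0, V_U=10.0,
                           W_T=0.5,
                           K1=30, K2=60):
    if h < H_L and v > V_L:   # low and fast: picture changes quickly
        return K2
    if w_y > W_T:             # fast yaw: picture changes quickly
        return K2
    if h > H_U and v < V_U:   # high and slow: picture changes slowly
        return K1
    return current_rate       # flight state largely unchanged

low_fast = frame_rate_from_motion(5.0, 5.0, 0.0, 30)     # -> K2
yawing = frame_rate_from_motion(50.0, 5.0, 1.0, 30)      # -> K2
high_slow = frame_rate_from_motion(150.0, 5.0, 0.1, 60)  # -> K1
steady = frame_rate_from_motion(50.0, 5.0, 0.1, 45)      # -> current rate kept
```

As the text notes, when v and w_y are vector quantities their absolute values (magnitudes) would be compared against the thresholds.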
In some embodiments, determining the target frame rate of the image transmission from the movable platform to the control terminal according to the movement characteristics of the target pixel points in the image and the motion state information of the movable platform may include: if the offset rate is greater than a second preset rate threshold, setting the second frame rate as the target frame rate; if the offset rate is less than or equal to a second preset rate threshold, judging whether the height is less than a first height threshold and whether the translational motion speed is greater than a first speed threshold; if the height is smaller than a first height threshold value and the translational motion speed is larger than a first speed threshold value, setting a second frame rate as a target frame rate; if the height is greater than or equal to a first height threshold value, or the translational motion speed is less than or equal to a first speed threshold value, judging whether the angular speed is greater than the angular speed threshold value; if the angular velocity is larger than the angular velocity threshold, setting the second frame rate as the target frame rate; if the angular velocity is smaller than or equal to the angular velocity threshold, judging whether the offset rate is smaller than a first preset rate threshold; if the offset rate is smaller than a first preset rate threshold value, setting the first frame rate as a target frame rate; if the offset rate is greater than or equal to a first preset rate threshold, judging whether the height is greater than a second height threshold and whether the translational motion speed is less than a second speed threshold; if the height is larger than a second height threshold value and the translational motion speed is smaller than a second speed threshold value, setting the first frame rate as a target frame rate; if the height is smaller than or equal to a second height threshold value, or the 
translational motion speed is larger than or equal to a second speed threshold value, setting the current frame rate as the target frame rate; the first height threshold is smaller than the second height threshold, the first speed threshold is smaller than the second speed threshold, and the first frame rate is smaller than the second frame rate.
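Merging the offset-rate test with the motion-state tests in the order just given yields the combined rule, sketched below; all threshold values and frame rates remain placeholders:

```python
# Combined decision: offset rate p_bar plus height h, translational speed v,
# and yaw angular velocity w_y, checked in the order listed in the text.
# All default threshold values are assumptions.

def combined_frame_rate(p_bar, h, v, w_y, current_rate,
                        P_L=20.0, P_U=120.0,   # first/second preset rate thresholds
                        H_L=10.0, H_U=100.0,   # first/second height thresholds
                        V_L=2.0, V_U=10.0,     # first/second speed thresholds
                        W_T=0.5,               # angular velocity threshold
                        K1=30, K2=60):
    if p_bar > P_U:              # picture content changes fast
        return K2
    if h < H_L and v > V_L:      # low altitude at speed
        return K2
    if w_y > W_T:                # fast yaw
        return K2
    if p_bar < P_L:              # picture content changes slowly
        return K1
    if h > H_U and v < V_U:      # high altitude, slow flight
        return K1
    return current_rate          # otherwise keep the current frame rate

fast_pic = combined_frame_rate(150.0, 50.0, 5.0, 0.0, 30)   # -> K2
steady = combined_frame_rate(50.0, 50.0, 5.0, 0.1, 45)      # -> current rate kept
slow_pic = combined_frame_rate(10.0, 50.0, 5.0, 0.1, 60)    # -> K1
```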
In order to improve the accuracy of determining the target frame rate, the target frame rate may be determined by combining the offset rate of the target pixel and the motion state information of the unmanned aerial vehicle, specifically, after obtaining the offset rate of the target pixel, and the motion state information such as the height, the translational motion speed, and the angular velocity in the yaw direction of the unmanned aerial vehicle, a frame rate control decision corresponding to the offset rate and the motion state information may be obtained, and the target frame rate of image transmission corresponding to the offset rate and the motion state information is determined according to the frame rate control decision, where the frame rate control decision may be a mapping relationship between different offset rates and motion state information and respective frame rates, and the target frame rate of image transmission corresponding to the offset rate and motion state information obtained by current detection may be determined by querying the mapping relationship. Alternatively, the frame rate control decision may be a calculation conversion relationship between the offset rate and the motion state information and the frame rate, and the corresponding target frame rate of image transmission may be calculated based on the offset rate and the motion state information through the calculation conversion relationship. Of course, the frame rate control decision may also be flexibly set according to actual needs, and the specific content is not limited herein.
For example, after the offset rate of the target pixel points and the motion state information such as the height, the translational motion speed, and the angular velocity in the yaw direction of the unmanned aerial vehicle are obtained, it can first be determined whether the offset rate is greater than a second preset rate threshold. If the offset rate is greater than the second preset rate threshold, it indicates that the flight speed of the unmanned aerial vehicle is fast and the content of the video picture also changes fast; in order to ensure the fluency of the video subsequently generated from the images and improve the image transmission effect of aerial photography, a higher frame rate can be adopted, that is, the second frame rate is set as the target frame rate. If the offset rate is less than or equal to the second preset rate threshold, it is further judged whether the height is less than a first height threshold and the translational motion speed is greater than a first speed threshold. If the height is less than the first height threshold and the translational motion speed is greater than the first speed threshold, the content of the video picture changes relatively fast, so a higher frame rate can likewise be adopted, that is, the second frame rate is set as the target frame rate. If the height is greater than or equal to the first height threshold, or the translational motion speed is less than or equal to the first speed threshold, it can be further judged whether the angular velocity is greater than the angular velocity threshold. If the angular velocity is greater than the angular velocity threshold, the content of the video picture changes relatively fast, so a higher frame rate can be adopted, that is, the second frame rate is set as the target frame rate. If the angular velocity is less than or equal to the angular velocity threshold, it is further determined whether the offset rate is less than a first preset rate threshold. If the offset rate is less than the first preset rate threshold, the content of the video picture changes slowly; in order to ensure the high definition of each frame of acquired image and improve the image quality of aerial image transmission, a lower frame rate can be adopted, that is, the first frame rate is set as the target frame rate. If the offset rate is greater than or equal to the first preset rate threshold, it can be further judged whether the height is greater than a second height threshold and whether the translational motion speed is less than a second speed threshold. If the height is greater than the second height threshold and the translational motion speed is less than the second speed threshold, the content of the video picture changes slowly, so a lower frame rate can be adopted, that is, the first frame rate is set as the target frame rate. If the height is less than or equal to the second height threshold, or the translational motion speed is greater than or equal to the second speed threshold, the flight state of the unmanned aerial vehicle has not changed much and the frame rate does not need to be adjusted; the current frame rate can be maintained unchanged, that is, the current frame rate is set as the target frame rate. Specifically, the decision may be expressed as follows:
If d > d_u, select frame rate K2;
else if h < H_l and v > v_l, select frame rate K2;
else if w_y > w_T, select frame rate K2;
else if d < d_l, select frame rate K1;
else if h > H_u and v < v_u, select frame rate K1;
else, keep the current frame rate unchanged;
where d denotes the offset rate of the target pixel points, and d_l and d_u denote the first and second preset rate thresholds.
The representation of each parameter is consistent with the above description, and is not described herein again.
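The cascade above can be sketched directly as a function. The threshold and frame rate values used as defaults here are hypothetical placeholders; only the ordering constraints (d_l < d_u, H_l < H_u, v_l < v_u, K1 < K2) come from the text.

```python
def select_target_frame_rate(d, h, v, w_y, current_rate,
                             d_l=0.2, d_u=0.8, H_l=10.0, H_u=100.0,
                             v_l=5.0, v_u=15.0, w_T=1.0, K1=24, K2=60):
    # d: offset rate of the target pixel points; h: height;
    # v: translational motion speed; w_y: yaw angular velocity.
    if d > d_u:              # picture content changes fast
        return K2
    if h < H_l and v > v_l:  # low altitude, fast translation
        return K2
    if w_y > w_T:            # fast yaw rotation
        return K2
    if d < d_l:              # picture content changes slowly
        return K1
    if h > H_u and v < v_u:  # high altitude, slow translation
        return K1
    return current_rate      # flight state roughly unchanged
```

Note the order of the branches matters: all high-rate conditions are tested before any low-rate condition, matching the cascade in the text.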
It should be noted that the target frame rate may also be determined based on only the offset rate and part of the motion state information. In addition, the order in which the offset rate and each type of motion state information are judged may be flexibly adjusted according to actual needs, and the specific content is not limited herein.
In some embodiments, after determining the target frame rate for image transmission from the movable platform to the control terminal, the image transmission method may further include: generating video code stream data based on the target frame rate; and sending the video code stream data to the control terminal.
In order to improve the efficiency and the security of image transmission, after the target frame rate is determined, the images acquired by the image acquisition device may be encoded by the encoding module based on the target frame rate to generate video code stream data. The encoding mode may be flexibly set according to actual needs; for example, it may include H.264 or H.265 encoding. After the video code stream data is generated, it can be sent to the control terminal connected to the unmanned aerial vehicle through the wireless communication module.
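One simple way to encode "based on the target frame rate" when the camera captures faster than the target rate is to decimate the captured frames before they reach the encoder. The patent leaves this step unspecified, so the accumulator-based selection below is purely an assumed illustration.

```python
def frames_to_encode(capture_fps, target_fps, n_frames):
    # Pick which captured frame indices to hand to the encoder so the
    # output stream approximates target_fps. Simple decimation sketch;
    # the patent does not specify how the encoder realizes the rate.
    assert 0 < target_fps <= capture_fps
    picked, acc = [], 0.0
    step = target_fps / capture_fps  # fraction of frames to keep
    for i in range(n_frames):
        acc += step
        if acc >= 1.0:   # enough "credit" accumulated: keep this frame
            acc -= 1.0
            picked.append(i)
    return picked
```

When the target frame rate changes mid-flight, only `step` changes; the accumulator keeps the output spacing even across the transition.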
After receiving the video code stream data, the control terminal may decode it through the video decoding module to obtain images. The images may include multiple frames, and the multiple frames of images may form video data, where the video data may be a YUV video. YUV video has three components: "Y" represents brightness (Luminance or Luma), that is, the gray value; "U" and "V" represent chrominance (Chroma), which describes the color and saturation of the image at a given pixel. At this time, the control terminal may display the decoded images through the display. Because the control terminal decodes each frame of video code stream data as it is received, no frame rate information is needed; the unmanned aerial vehicle does not need to inform the control terminal of the currently adopted frame rate, which ensures the reliability of implementing the dynamic self-adaptive frame rate.
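The Y/U/V split described above can be illustrated with the analog BT.601 conversion. The passage only says the decoded video may be YUV; the specific coefficient matrix below is a standard choice, not something the text specifies.

```python
def rgb_to_yuv(r, g, b):
    # Analog BT.601 RGB -> YUV (illustrative assumption).
    # Y carries the brightness (luma, i.e. the gray value);
    # U and V carry the chrominance (color and saturation).
    y = 0.299 * r + 0.587 * g + 0.114 * b
    u = 0.492 * (b - y)   # blue-difference chroma
    v = 0.877 * (r - y)   # red-difference chroma
    return y, u, v
```

For a neutral gray, U and V vanish and Y equals the gray value, which is why grayscale display needs only the Y plane.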
In some embodiments, the image capturing device is a first image capturing device, the movable platform further includes a second image capturing device, and the generating the video bitstream data based on the target frame rate may include: and based on the target frame rate, encoding the image acquired by the second image acquisition device to generate video code stream data.
The image acquisition device that acquires the images used for determining the target frame rate may be the same as, or different from, the image acquisition device that acquires the images to be transmitted to the control terminal. In order to improve the flight safety of the unmanned aerial vehicle, improve the image transmission efficiency, and save computing resources, the image acquisition device used for determining the target frame rate and the image acquisition device used for generating the video code stream data may be provided separately. That is, the target frame rate is determined based on the images acquired by the first image acquisition device, while in the process of generating the video code stream data, images may be acquired by the second image acquisition device; the images acquired by the second image acquisition device are encoded based on the target frame rate to generate the video code stream data, and the video code stream data is transmitted to the control terminal.
In one embodiment, the first image acquisition device is a lower-resolution image acquisition device, and the second image acquisition device is a higher-resolution image acquisition device. For example, the first image acquisition device may be a binocular camera installed on the unmanned aerial vehicle, and the second image acquisition device may be a main camera mounted on a gimbal of the unmanned aerial vehicle. In other embodiments, the image acquisition device that acquires the images used for determining the target frame rate may be the same as the image acquisition device that acquires the images to be transmitted to the control terminal; for example, both may be the main camera mounted on the gimbal of the unmanned aerial vehicle.
In the embodiments of the present application, the movement characteristics of the target pixel points in the image acquired by the image acquisition device can be obtained, and/or the motion state information of the movable platform can be obtained; the target frame rate for image transmission from the movable platform to the control terminal is then determined according to the movement characteristics of the target pixel points in the image and/or the motion state information of the movable platform. This scheme can automatically determine the target frame rate without manual selection by a user, improving the timeliness and accuracy of determining the target frame rate.
Referring to fig. 5, fig. 5 is a schematic block diagram of a movable platform according to an embodiment of the present application. The movable platform 11 may include a processor 111 and a memory 112, the processor 111 and the memory 112 being connected by a bus, such as an I2C (Inter-integrated Circuit) bus.
Specifically, the Processor 111 may be a Micro-controller Unit (MCU), a Central Processing Unit (CPU), a Digital Signal Processor (DSP), or the like.
Specifically, the Memory 112 may be a Flash chip, a Read-Only Memory (ROM), a magnetic disk, an optical disk, a USB disk, or a removable hard disk, and may be used to store a computer program.
The movable platform 11 may further include an image acquisition device 113 for acquiring images, and may further include a gimbal for carrying the image acquisition device 113; the gimbal may drive the image acquisition device 113 to move to a suitable position to accurately acquire the required images. The type of the movable platform 11 can be flexibly set according to actual needs; for example, the movable platform 11 can be a mobile terminal, an unmanned aerial vehicle, a robot, a pan-tilt camera, or the like.
For example, the pan-tilt camera may include a camera and a gimbal, where the camera is used for acquiring images and the gimbal is used for carrying the camera and driving it to a suitable position to accurately acquire the required images; the pan-tilt camera may be mounted on an unmanned aerial vehicle. In this case, the pan-tilt camera may acquire images, obtain the movement characteristics of the target pixel points in the acquired images, and/or send an information acquisition request to the unmanned aerial vehicle and receive the motion state information returned by the unmanned aerial vehicle based on the information acquisition request; the pan-tilt camera then determines, according to the movement characteristics of the target pixel points in the images and/or the motion state information of the unmanned aerial vehicle, a target frame rate for image transmission from the unmanned aerial vehicle to the control terminal, and sends the target frame rate to the unmanned aerial vehicle. At this time, the unmanned aerial vehicle can encode the images based on the target frame rate to generate video code stream data, and send the video code stream data to the control terminal.
The processor 111 is configured to call a computer program stored in the memory 112, and implement the image transmission method provided in the embodiment of the present application when executing the computer program, for example, the following steps may be executed:
acquiring the movement characteristics of target pixel points in an image acquired by an image acquisition device; and/or acquiring motion state information of the movable platform; and determining a target frame rate for image transmission from the movable platform to the control terminal according to the movement characteristics of the target pixel points in the image and/or the motion state information of the movable platform.
In some embodiments, after determining the target frame rate for image transmission from the movable platform to the control terminal, the processor 111 further performs: generating video code stream data based on the target frame rate; and sending the video code stream data to the control terminal.
In some embodiments, the image capturing device is a first image capturing device, the movable platform further comprises a second image capturing device, and when generating the video bitstream data based on the target frame rate, the processor 111 further performs: and based on the target frame rate, encoding the image acquired by the second image acquisition device to generate video code stream data.
In some embodiments, in obtaining the movement characteristic of the target pixel point in the image captured by the image capturing device, the processor 111 further performs: acquiring a first image and a second image in a plurality of frames of images acquired by an image acquisition device; acquiring pixel coordinates of a target pixel point of a first image and pixel coordinates of a target pixel point of a second image, wherein the target pixel point of the first image corresponds to the target pixel point of the second image; and determining the movement characteristic of the target pixel point according to the pixel coordinate of the target pixel point of the first image and the pixel coordinate of the target pixel point of the second image.
In some embodiments, when determining the movement characteristic of the target pixel point according to the pixel coordinate of the target pixel point of the first image and the pixel coordinate of the target pixel point of the second image, the processor 111 further performs: determining a relative position difference according to the pixel coordinates of the target pixel point of the first image and the pixel coordinates of the target pixel point of the second image; determining the offset rate of the target pixel point according to the relative position difference and the acquisition time interval of the first image and the second image; and determining the movement characteristic of the target pixel point according to the offset rate.
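The offset-rate computation in the step above can be sketched as follows. Using the Euclidean distance as the "relative position difference" is a plausible reading, not something the text fixes.

```python
import math

def offset_rate(p1, p2, dt):
    # Offset rate of a target pixel point: the relative position
    # difference between its pixel coordinates p1 in the first image
    # and p2 in the second image, divided by the acquisition time
    # interval dt between the two images (pixels per second).
    dx = p2[0] - p1[0]
    dy = p2[1] - p1[1]
    return math.hypot(dx, dy) / dt
```

In practice p1 and p2 would come from matching the same target pixel point across the two frames, e.g. by feature tracking.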
In some embodiments, the target pixel points include a first target pixel point corresponding to a background and/or a second target pixel point corresponding to a moving target, the first target pixel point of the first image corresponds to the first target pixel point of the second image, and the second target pixel point of the first image corresponds to the second target pixel point of the second image.
In some embodiments, when determining the movement characteristic of the target pixel point according to the offset rate, the processor 111 further performs: and carrying out weighted average on the offset rate of the first target pixel point and the offset rate of the second target pixel point, so as to represent the movement characteristic of the target pixel point by using the offset rate after weighted average.
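The weighted average of the background and moving-target offset rates can be written as a one-liner. The equal default weights are an assumption; the patent does not prescribe their values.

```python
def combined_offset_rate(d_bg, d_obj, w_bg=0.5, w_obj=0.5):
    # Weighted average of the offset rate d_bg of the first (background)
    # target pixel points and d_obj of the second (moving-target) pixel
    # points; the result characterizes the overall movement.
    return (w_bg * d_bg + w_obj * d_obj) / (w_bg + w_obj)
```

Weighting the moving target more heavily would bias the frame rate decision toward keeping fast-moving subjects smooth even when the background is nearly static.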
In some embodiments, processor 111 further performs: a background and/or moving object is identified in the first image and the second image based on a pre-trained computational model.
In some embodiments, the moving target includes the one or more candidate moving targets whose corresponding image region areas are the largest among the multiple candidate moving targets; or the moving target includes the one or more candidate moving targets whose movement amplitudes are the largest among the multiple candidate moving targets.
In some embodiments, the movement characteristic of the target pixel is represented by an offset rate of the target pixel, and when determining a target frame rate for image transmission from the movable platform to the control terminal according to the movement characteristic of the target pixel in the image, the processor 111 further performs: when the offset rate is smaller than a first preset rate threshold value, setting the first frame rate as a target frame rate; or when the offset rate is greater than a second preset rate threshold, setting the second frame rate as the target frame rate; or when the offset rate is greater than or equal to a first preset rate threshold and the offset rate is less than or equal to a second preset rate threshold, setting the current frame rate as the target frame rate; the first preset speed threshold is smaller than the second preset speed threshold, and the first frame rate is smaller than the second frame rate.
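The three-branch offset-rate-only rule above is small enough to state exactly; only the ordering constraints (first threshold below the second, first frame rate below the second) come from the text, and the numeric test values are placeholders.

```python
def frame_rate_from_offset(d, d_l, d_u, K1, K2, current_rate):
    # d: offset rate; d_l < d_u are the first and second preset rate
    # thresholds; K1 < K2 are the first and second frame rates.
    if d < d_l:
        return K1            # slow picture change: favor definition
    if d > d_u:
        return K2            # fast picture change: favor fluency
    return current_rate      # in between: keep the current frame rate
```

The middle band acts as hysteresis: small fluctuations of the offset rate around either threshold do not toggle the frame rate back and forth.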
In some embodiments, in obtaining motion state information of the movable platform, the processor 111 further performs: the height of the movable platform, the translational motion speed, and/or the angular speed in the yaw direction are acquired.
In some embodiments, in acquiring the height of the movable platform, the translational motion speed, and/or the angular speed in the yaw direction, the processor 111 further performs: the height of the movable platform, the translational motion speed, and/or the angular velocity in the yaw direction are acquired by an inertial measurement unit mounted on the movable platform.
In some embodiments, when determining the target frame rate for image transmission from the movable platform to the control terminal according to the motion state information of the movable platform, the processor 111 further performs: if the height is smaller than a first height threshold value and the translational motion speed is larger than a first speed threshold value, setting a second frame rate as a target frame rate; if the height is greater than or equal to a first height threshold value, or the translational motion speed is less than or equal to a first speed threshold value, judging whether the angular speed is greater than the angular speed threshold value; if the angular velocity is larger than the angular velocity threshold, setting the second frame rate as the target frame rate; if the angular speed is less than or equal to the angular speed threshold, judging whether the height is greater than a second height threshold and whether the translational motion speed is less than a second speed threshold; if the height is larger than a second height threshold value and the translational motion speed is smaller than a second speed threshold value, setting the first frame rate as a target frame rate; if the height is smaller than or equal to a second height threshold value, or the translational motion speed is larger than or equal to a second speed threshold value, setting the current frame rate as the target frame rate; the first height threshold is smaller than the second height threshold, the first speed threshold is smaller than the second speed threshold, and the first frame rate is smaller than the second frame rate.
In some embodiments, when determining the target frame rate for image transmission from the movable platform to the control terminal according to the movement characteristics of the target pixel points in the image and the motion state information of the movable platform, the processor 111 further performs: if the offset rate is greater than a second preset rate threshold, setting the second frame rate as the target frame rate; if the offset rate is less than or equal to a second preset rate threshold, judging whether the height is less than a first height threshold and whether the translational motion speed is greater than a first speed threshold; if the height is smaller than a first height threshold value and the translational motion speed is larger than a first speed threshold value, setting a second frame rate as a target frame rate; if the height is greater than or equal to a first height threshold value, or the translational motion speed is less than or equal to a first speed threshold value, judging whether the angular speed is greater than the angular speed threshold value; if the angular velocity is larger than the angular velocity threshold, setting the second frame rate as the target frame rate; if the angular velocity is smaller than or equal to the angular velocity threshold, judging whether the offset rate is smaller than a first preset rate threshold; if the offset rate is smaller than a first preset rate threshold value, setting the first frame rate as a target frame rate; if the offset rate is greater than or equal to a first preset rate threshold, judging whether the height is greater than a second height threshold and whether the translational motion speed is less than a second speed threshold; if the height is larger than a second height threshold value and the translational motion speed is smaller than a second speed threshold value, setting the first frame rate as a target frame rate; if the height is smaller than or equal to a second height 
threshold value, or the translational motion speed is larger than or equal to a second speed threshold value, setting the current frame rate as the target frame rate; the first height threshold is smaller than the second height threshold, the first speed threshold is smaller than the second speed threshold, and the first frame rate is smaller than the second frame rate.
In the above embodiments, the descriptions of the embodiments have respective emphasis, and parts that are not described in detail in a certain embodiment may refer to the above detailed description of the image transmission method, and are not described herein again.
The embodiment of the present application further provides a computer program, where the computer program includes program instructions, and a processor executes the program instructions to implement the image transmission method provided in the embodiment of the present application.
The embodiment of the present application further provides a computer-readable storage medium, where a computer program is stored in the computer-readable storage medium, the computer program includes program instructions, and a processor executes the program instructions to implement the image transmission method provided in the embodiments of the present application.
The computer readable storage medium may be an internal storage unit of the removable platform, such as a hard disk or a memory of the removable platform, according to any of the foregoing embodiments. The computer readable storage medium may also be an external storage device of the removable platform, such as a plug-in hard drive provided on the removable platform, a Smart Media Card (SMC), a Secure Digital (SD) Card, a Flash memory Card (Flash Card), and the like.
Since the computer program stored in the computer-readable storage medium can execute any image transmission method provided in the embodiments of the present application, beneficial effects that can be achieved by any image transmission method provided in the embodiments of the present application can be achieved, which are detailed in the foregoing embodiments and will not be described herein again.
It is to be understood that the terminology used in the description of the present application herein is for the purpose of describing particular embodiments only and is not intended to be limiting of the application. As used in the specification of the present application and the appended claims, the singular forms "a," "an," and "the" are intended to include the plural forms as well, unless the context clearly indicates otherwise.
It should also be understood that the term "and/or" as used in this specification and the appended claims refers to and includes any and all possible combinations of one or more of the associated listed items. It should be noted that, in this document, the terms "comprises," "comprising," or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or system that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or system. Without further limitation, an element defined by the phrase "comprising an … …" does not exclude the presence of other like elements in a process, method, article, or system that comprises the element.
While the invention has been described with reference to specific embodiments, the scope of the invention is not limited thereto, and those skilled in the art can easily conceive various equivalent modifications or substitutions within the technical scope of the invention. Therefore, the protection scope of the present application shall be subject to the protection scope of the claims.

Claims (30)

1. An image transmission method is applied to a movable platform, the movable platform comprises an image acquisition device, and the movable platform is in communication connection with a control terminal, and the method comprises the following steps:
acquiring the movement characteristics of target pixel points in the image acquired by the image acquisition device; and/or,
acquiring motion state information of the movable platform;
and determining a target frame rate for image transmission from the movable platform to the control terminal according to the movement characteristics of the target pixel points in the image and/or the motion state information of the movable platform.
2. The image transmission method according to claim 1, wherein after determining a target frame rate for image transmission from the movable platform to the control terminal, the method further comprises:
generating video code stream data based on the target frame rate;
and sending the video code stream data to the control terminal.
3. The image transmission method according to claim 2, wherein the image capturing device is a first image capturing device, the movable platform further includes a second image capturing device, and the generating video bitstream data based on the target frame rate includes:
and based on the target frame rate, encoding the image acquired by the second image acquisition device to generate the video code stream data.
4. The image transmission method according to claim 1, wherein the obtaining of the movement characteristics of the target pixel points in the image acquired by the image acquisition device comprises:
acquiring a first image and a second image in a plurality of frames of images acquired by the image acquisition device;
acquiring pixel coordinates of target pixel points of the first image and pixel coordinates of target pixel points of the second image, wherein the target pixel points of the first image correspond to the target pixel points of the second image;
and determining the movement characteristic of the target pixel point according to the pixel coordinate of the target pixel point of the first image and the pixel coordinate of the target pixel point of the second image.
5. The image transmission method according to claim 4, wherein the determining the movement characteristic of the target pixel point according to the pixel coordinates of the target pixel point of the first image and the pixel coordinates of the target pixel point of the second image comprises:
determining a relative position difference according to the pixel coordinates of the target pixel point of the first image and the pixel coordinates of the target pixel point of the second image;
determining the offset rate of the target pixel point according to the relative position difference and the acquisition time interval of the first image and the second image;
and determining the movement characteristic of the target pixel point according to the offset rate.
6. The image transmission method according to claim 5, wherein the target pixels include a first target pixel corresponding to a background and/or a second target pixel corresponding to a moving target, the first target pixel of the first image corresponds to the first target pixel of the second image, and the second target pixel of the first image corresponds to the second target pixel of the second image.
7. The image transmission method according to claim 6, wherein the determining the movement characteristic of the target pixel point according to the offset rate comprises:
and carrying out weighted average on the offset rate of the first target pixel point and the offset rate of the second target pixel point, so as to represent the movement characteristic of the target pixel point by using the offset rate after weighted average.
8. The image transmission method according to claim 6, characterized in that the image transmission method further comprises:
identifying a background and/or a moving object in the first image and the second image based on a pre-trained computational model.
9. The image transmission method according to claim 6, wherein the moving target includes the one or more candidate moving targets whose corresponding image region areas are the largest among a plurality of candidate moving targets; or the moving target includes the one or more candidate moving targets whose movement amplitudes are the largest among the plurality of candidate moving targets.
10. The image transmission method according to any one of claims 1 to 9, wherein the movement characteristic of the target pixel is characterized by an offset rate of the target pixel, and the determining the target frame rate for the image transmission from the movable platform to the control terminal according to the movement characteristic of the target pixel in the image includes:
when the offset rate is smaller than a first preset rate threshold value, setting a first frame rate as the target frame rate; or,
when the offset rate is larger than a second preset rate threshold value, setting a second frame rate as the target frame rate; or,
when the offset rate is greater than or equal to the first preset rate threshold and the offset rate is less than or equal to the second preset rate threshold, setting the current frame rate as the target frame rate;
the first preset speed threshold is smaller than the second preset speed threshold, and the first frame rate is smaller than the second frame rate.
11. The image transmission method according to any one of claims 1 to 9, wherein the acquiring motion state information of the movable platform includes:
acquiring a height, translational motion speed, and/or angular speed in a yaw direction of the movable platform.
12. The image transmission method according to claim 11, wherein the acquiring the height, translational motion speed, and/or angular speed in a yaw direction of the movable platform comprises:
acquiring, by an inertial measurement unit mounted to the movable platform, a height of the movable platform, a translational motion speed, and/or an angular speed in a yaw direction.
13. The image transmission method according to claim 11, wherein the determining a target frame rate for image transmission from the movable platform to the control terminal according to the motion state information of the movable platform comprises:
if the height is less than a first height threshold and the translational motion speed is greater than a first speed threshold, setting a second frame rate as the target frame rate;
if the height is greater than or equal to the first height threshold, or the translational motion speed is less than or equal to the first speed threshold, determining whether the angular velocity is greater than an angular velocity threshold;
if the angular velocity is greater than the angular velocity threshold, setting the second frame rate as the target frame rate;
if the angular velocity is less than or equal to the angular velocity threshold, determining whether the height is greater than a second height threshold and whether the translational motion speed is less than a second speed threshold;
if the height is greater than the second height threshold and the translational motion speed is less than the second speed threshold, setting a first frame rate as the target frame rate;
if the height is less than or equal to the second height threshold, or the translational motion speed is greater than or equal to the second speed threshold, setting the current frame rate as the target frame rate;
wherein the first height threshold is less than the second height threshold, the first speed threshold is less than the second speed threshold, and the first frame rate is less than the second frame rate.
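The decision cascade of claim 13 can be expressed as one short function. This is a sketch under assumed parameter values: the two height thresholds, two speed thresholds, angular-velocity threshold, and frame rates below are placeholders, not values from the disclosure.

```python
def frame_rate_from_motion_state(height, speed, yaw_rate, current_fps,
                                 h1=5.0, h2=50.0,   # first/second height thresholds (m), h1 < h2
                                 v1=2.0, v2=10.0,   # first/second speed thresholds (m/s), v1 < v2
                                 w_max=0.5,         # angular-velocity threshold (rad/s)
                                 low_fps=15, high_fps=60):
    """Claim-13 cascade: fast low-altitude flight or fast yaw rotation selects
    the high frame rate; slow high-altitude flight selects the low frame rate;
    otherwise the current frame rate is kept."""
    if height < h1 and speed > v1:   # close to the ground and moving fast
        return high_fps
    if yaw_rate > w_max:             # rotating quickly in the yaw direction
        return high_fps
    if height > h2 and speed < v2:   # high up and moving slowly
        return low_fps
    return current_fps               # no condition met: keep the current rate
```

Ordering matters: the yaw check only runs when the low-altitude/high-speed branch did not fire, mirroring the "if ... or ..., determining whether the angular velocity ..." structure of the claim.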
14. The image transmission method according to claim 11, wherein the determining a target frame rate for image transmission from the movable platform to the control terminal according to the movement characteristics of the target pixel points in the image and the motion state information of the movable platform comprises:
if the offset rate is greater than a second preset rate threshold, setting a second frame rate as the target frame rate;
if the offset rate is less than or equal to the second preset rate threshold, determining whether the height is less than a first height threshold and whether the translational motion speed is greater than a first speed threshold;
if the height is less than the first height threshold and the translational motion speed is greater than the first speed threshold, setting the second frame rate as the target frame rate;
if the height is greater than or equal to the first height threshold, or the translational motion speed is less than or equal to the first speed threshold, determining whether the angular velocity is greater than an angular velocity threshold;
if the angular velocity is greater than the angular velocity threshold, setting the second frame rate as the target frame rate;
if the angular velocity is less than or equal to the angular velocity threshold, determining whether the offset rate is less than a first preset rate threshold;
if the offset rate is less than the first preset rate threshold, setting a first frame rate as the target frame rate;
if the offset rate is greater than or equal to the first preset rate threshold, determining whether the height is greater than a second height threshold and whether the translational motion speed is less than a second speed threshold;
if the height is greater than the second height threshold and the translational motion speed is less than the second speed threshold, setting the first frame rate as the target frame rate;
if the height is less than or equal to the second height threshold, or the translational motion speed is greater than or equal to the second speed threshold, setting the current frame rate as the target frame rate;
wherein the first height threshold is less than the second height threshold, the first speed threshold is less than the second speed threshold, and the first frame rate is less than the second frame rate.
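Claim 14 interleaves the pixel-based and motion-state checks into one cascade. The sketch below preserves the exact check order from the claim; all threshold and frame-rate values are assumptions.

```python
def combined_target_frame_rate(offset_rate, height, speed, yaw_rate, current_fps,
                               r1=2.0, r2=10.0,   # first/second preset rate thresholds, r1 < r2
                               h1=5.0, h2=50.0,   # first/second height thresholds (m)
                               v1=2.0, v2=10.0,   # first/second speed thresholds (m/s)
                               w_max=0.5,         # angular-velocity threshold (rad/s)
                               low_fps=15, high_fps=60):
    """Claim-14 cascade combining the pixel offset rate with motion state.
    The checks run in the same order as the claim recites them."""
    if offset_rate > r2:             # image content changing fast
        return high_fps
    if height < h1 and speed > v1:   # fast low-altitude flight
        return high_fps
    if yaw_rate > w_max:             # fast yaw rotation
        return high_fps
    if offset_rate < r1:             # image content nearly static
        return low_fps
    if height > h2 and speed < v2:   # slow high-altitude flight
        return low_fps
    return current_fps               # otherwise keep the current rate
```

The "raise" conditions are all tested before any "lower" condition, so a fast-changing scene can never be assigned the low frame rate even when the platform is high and slow.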
15. A movable platform, wherein the movable platform is communicatively coupled to a control terminal, the movable platform comprising:
an image acquisition device for acquiring images;
a memory for storing a computer program;
a processor for invoking a computer program in the memory to perform:
acquiring movement characteristics of target pixel points in the image acquired by the image acquisition device; and/or,
acquiring motion state information of the movable platform;
and determining a target frame rate for image transmission from the movable platform to the control terminal according to the movement characteristics of the target pixel points in the image and/or the motion state information of the movable platform.
16. The movable platform of claim 15, wherein the processor further performs:
generating video code stream data based on the target frame rate;
and sending the video code stream data to the control terminal.
17. The movable platform of claim 16, wherein the processor further performs:
and encoding, based on the target frame rate, the image acquired by the image acquisition device to generate the video code stream data.
18. The movable platform of claim 15, wherein the processor further performs:
acquiring a first image and a second image in a plurality of frames of images acquired by the image acquisition device;
acquiring pixel coordinates of target pixel points of the first image and pixel coordinates of target pixel points of the second image, wherein the target pixel points of the first image correspond to the target pixel points of the second image;
and determining the movement characteristic of the target pixel point according to the pixel coordinate of the target pixel point of the first image and the pixel coordinate of the target pixel point of the second image.
19. The movable platform of claim 18, wherein the processor further performs:
determining a relative position difference according to the pixel coordinates of the target pixel point of the first image and the pixel coordinates of the target pixel point of the second image;
determining the offset rate of the target pixel point according to the relative position difference and the acquisition time interval of the first image and the second image;
and determining the movement characteristic of the target pixel point according to the offset rate.
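Claims 18–19 derive an offset rate from the coordinates of a matched target pixel point in two frames. A minimal sketch, assuming the "relative position difference" is the Euclidean distance between the two pixel coordinates (the claim does not fix the metric):

```python
import math

def pixel_offset_rate(p_first, p_second, capture_interval_s):
    """Offset rate of a target pixel point: relative position difference
    between its coordinates in the first and second images, divided by the
    acquisition time interval (seconds) between those images."""
    dx = p_second[0] - p_first[0]
    dy = p_second[1] - p_first[1]
    return math.hypot(dx, dy) / capture_interval_s  # pixels per second
```

In practice the matched coordinates would come from feature tracking (e.g. sparse optical flow) between the two frames; that matching step is outside what this claim specifies.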
20. The movable platform of claim 19, wherein the target pixels comprise first target pixels corresponding to a background and/or second target pixels corresponding to a moving target, wherein the first target pixels of the first image correspond to the first target pixels of the second image, and wherein the second target pixels of the first image correspond to the second target pixels of the second image.
21. The movable platform of claim 20, wherein the processor further performs:
and performing a weighted average on the offset rate of the first target pixel points and the offset rate of the second target pixel points, so as to represent the movement characteristic of the target pixel points by the weighted-average offset rate.
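The weighted average in claim 21 might look like the following; the weight values are illustrative assumptions, since the claim leaves them open.

```python
def weighted_offset_rate(background_rate, target_rate,
                         w_background=0.3, w_target=0.7):
    """Blend the offset rate of background pixels (first target pixel points)
    with that of moving-target pixels (second target pixel points).
    The weights here are placeholders and should sum to 1."""
    assert abs(w_background + w_target - 1.0) < 1e-9, "weights should sum to 1"
    return w_background * background_rate + w_target * target_rate
```

Weighting the moving target more heavily than the background biases the frame-rate decision toward the region the viewer is most likely watching.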
22. The movable platform of claim 20, wherein the processor further performs:
identifying a background and/or a moving object in the first image and the second image based on a pre-trained computational model.
23. The movable platform of claim 20, wherein the moving target comprises the top one or more moving targets with the largest corresponding image area among a plurality of candidate moving targets; or the moving target comprises the top one or more moving targets with the largest movement amplitude among the plurality of candidate moving targets.
24. The movable platform of any one of claims 15-23, wherein the movement characteristic of the target pixel points is characterized by an offset rate of the target pixel points, and wherein the processor further performs:
when the offset rate is less than a first preset rate threshold, setting a first frame rate as the target frame rate; or,
when the offset rate is greater than a second preset rate threshold, setting a second frame rate as the target frame rate; or,
when the offset rate is greater than or equal to the first preset rate threshold and less than or equal to the second preset rate threshold, setting the current frame rate as the target frame rate;
wherein the first preset rate threshold is less than the second preset rate threshold, and the first frame rate is less than the second frame rate.
25. The movable platform of any one of claims 15-23, wherein the processor further performs:
acquiring a height, a translational motion speed, and/or an angular velocity in a yaw direction of the movable platform.
26. The movable platform of claim 25, wherein the processor further performs:
acquiring, by an inertial measurement unit mounted on the movable platform, the height, the translational motion speed, and/or the angular velocity in the yaw direction of the movable platform.
27. The movable platform of claim 25, wherein the processor further performs:
if the height is less than a first height threshold and the translational motion speed is greater than a first speed threshold, setting a second frame rate as the target frame rate;
if the height is greater than or equal to the first height threshold, or the translational motion speed is less than or equal to the first speed threshold, determining whether the angular velocity is greater than an angular velocity threshold;
if the angular velocity is greater than the angular velocity threshold, setting the second frame rate as the target frame rate;
if the angular velocity is less than or equal to the angular velocity threshold, determining whether the height is greater than a second height threshold and whether the translational motion speed is less than a second speed threshold;
if the height is greater than the second height threshold and the translational motion speed is less than the second speed threshold, setting a first frame rate as the target frame rate;
if the height is less than or equal to the second height threshold, or the translational motion speed is greater than or equal to the second speed threshold, setting the current frame rate as the target frame rate;
wherein the first height threshold is less than the second height threshold, the first speed threshold is less than the second speed threshold, and the first frame rate is less than the second frame rate.
28. The movable platform of claim 25, wherein the processor further performs:
if the offset rate is greater than a second preset rate threshold, setting a second frame rate as the target frame rate;
if the offset rate is less than or equal to the second preset rate threshold, determining whether the height is less than a first height threshold and whether the translational motion speed is greater than a first speed threshold;
if the height is less than the first height threshold and the translational motion speed is greater than the first speed threshold, setting the second frame rate as the target frame rate;
if the height is greater than or equal to the first height threshold, or the translational motion speed is less than or equal to the first speed threshold, determining whether the angular velocity is greater than an angular velocity threshold;
if the angular velocity is greater than the angular velocity threshold, setting the second frame rate as the target frame rate;
if the angular velocity is less than or equal to the angular velocity threshold, determining whether the offset rate is less than a first preset rate threshold;
if the offset rate is less than the first preset rate threshold, setting a first frame rate as the target frame rate;
if the offset rate is greater than or equal to the first preset rate threshold, determining whether the height is greater than a second height threshold and whether the translational motion speed is less than a second speed threshold;
if the height is greater than the second height threshold and the translational motion speed is less than the second speed threshold, setting the first frame rate as the target frame rate;
if the height is less than or equal to the second height threshold, or the translational motion speed is greater than or equal to the second speed threshold, setting the current frame rate as the target frame rate;
wherein the first height threshold is less than the second height threshold, the first speed threshold is less than the second speed threshold, and the first frame rate is less than the second frame rate.
29. The movable platform of any one of claims 15-23, wherein the movable platform is a mobile terminal, a drone, a pan-tilt camera, or a robot.
30. A computer-readable storage medium for storing a computer program which is loaded by a processor to perform the image transmission method according to any one of claims 1 to 14.
CN202080005966.8A 2020-05-28 2020-05-28 Image transmission method, movable platform and computer readable storage medium Pending CN113056904A (en)

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
PCT/CN2020/093035 WO2021237616A1 (en) 2020-05-28 2020-05-28 Image transmission method, mobile platform, and computer readable storage medium

Publications (1)

Publication Number Publication Date
CN113056904A (en) 2021-06-29

Family

ID=76509772

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202080005966.8A Pending CN113056904A (en) 2020-05-28 2020-05-28 Image transmission method, movable platform and computer readable storage medium

Country Status (2)

Country Link
CN (1) CN113056904A (en)
WO (1) WO2021237616A1 (en)


Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2023225791A1 (en) * 2022-05-23 2023-11-30 广东逸动科技有限公司 Camera frame rate adjustment method and apparatus, electronic device and storage medium
CN117478929B (en) * 2023-12-28 2024-03-08 昆明中经网络有限公司 Novel media exquisite image processing system based on AI large model


Family Cites Families (3)

Publication number Priority date Publication date Assignee Title
WO2019051649A1 (en) * 2017-09-12 2019-03-21 深圳市大疆创新科技有限公司 Method and device for image transmission, movable platform, monitoring device, and system
WO2019174044A1 (en) * 2018-03-16 2019-09-19 深圳市大疆创新科技有限公司 Image processing method, device and system, and storage medium
CN109600579A (en) * 2018-10-29 2019-04-09 歌尔股份有限公司 Video wireless transmission method, apparatus, system and equipment

Patent Citations (5)

Publication number Priority date Publication date Assignee Title
CN102377730A (en) * 2010-08-11 2012-03-14 中国电信股份有限公司 Audio/video signal processing method and mobile terminal
CN107079135A (en) * 2016-01-29 2017-08-18 深圳市大疆创新科技有限公司 Method of transmitting video data, system, equipment and filming apparatus
US20180359413A1 (en) * 2016-01-29 2018-12-13 SZ DJI Technology Co., Ltd. Method, system, device for video data transmission and photographing apparatus
CN110012267A (en) * 2019-04-02 2019-07-12 深圳市即构科技有限公司 Unmanned aerial vehicle (UAV) control method and audio/video data transmission method
CN110807392A (en) * 2019-10-25 2020-02-18 浙江大华技术股份有限公司 Encoding control method and related device

Cited By (6)

Publication number Priority date Publication date Assignee Title
CN114268742A (en) * 2022-03-01 2022-04-01 北京瞭望神州科技有限公司 Sky eye chip processing apparatus
CN114913471A (en) * 2022-07-18 2022-08-16 深圳比特微电子科技有限公司 Image processing method and device and readable storage medium
CN114913471B (en) * 2022-07-18 2023-09-12 深圳比特微电子科技有限公司 Image processing method, device and readable storage medium
CN116804882A (en) * 2023-06-14 2023-09-26 黑龙江大学 Intelligent unmanned aerial vehicle control system based on stream data processing and unmanned aerial vehicle thereof
CN116804882B (en) * 2023-06-14 2023-12-29 黑龙江大学 Intelligent unmanned aerial vehicle control system based on stream data processing and unmanned aerial vehicle thereof
CN117808324A (en) * 2024-02-27 2024-04-02 西安麦莎科技有限公司 Building progress assessment method for unmanned aerial vehicle vision coordination

Also Published As

Publication number Publication date
WO2021237616A1 (en) 2021-12-02

Similar Documents

Publication Publication Date Title
CN113056904A (en) Image transmission method, movable platform and computer readable storage medium
US20220174252A1 (en) Selective culling of multi-dimensional data sets
US20190246104A1 (en) Panoramic video processing method, device and system
US9875579B2 (en) Techniques for enhanced accurate pose estimation
EP3008695B1 (en) Robust tracking using point and line features
WO2018214078A1 (en) Photographing control method and device
US20170357873A1 (en) Method for determining the position of a portable device
CN111935393A (en) Shooting method, shooting device, electronic equipment and storage medium
US11258949B1 (en) Electronic image stabilization to improve video analytics accuracy
CN109453517B (en) Virtual character control method and device, storage medium and mobile terminal
KR20160003233A (en) Methods for facilitating computer vision application initialization
CN106303448B (en) Aerial image processing method, unmanned aerial vehicle, head-mounted display device and system
CN110009675B (en) Method, apparatus, medium, and device for generating disparity map
US10447926B1 (en) Motion estimation based video compression and encoding
US20220074743A1 (en) Aerial survey method, aircraft, and storage medium
CN110634138A (en) Bridge deformation monitoring method, device and equipment based on visual perception
CN112313596A (en) Inspection method, equipment and storage medium based on aircraft
CN112640419B (en) Following method, movable platform, device and storage medium
JP2014222825A (en) Video processing apparatus and video processing method
CN111880711B (en) Display control method, display control device, electronic equipment and storage medium
CN109949381B (en) Image processing method and device, image processing chip, camera shooting assembly and aircraft
WO2019061466A1 (en) Flight control method, remote control device, and remote control system
JP2016206989A (en) Transmitter, augmented reality system, transmission control method, and program
CN114449151B (en) Image processing method and related device
JP2023502552A (en) WEARABLE DEVICE, INTELLIGENT GUIDE METHOD AND APPARATUS, GUIDE SYSTEM, STORAGE MEDIUM

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
WD01 Invention patent application deemed withdrawn after publication

Application publication date: 20210629