WO2021237616A1 - Procédé de transmission d'image, plateforme mobile et support de stockage lisible par ordinateur - Google Patents


Info

Publication number
WO2021237616A1
Authority
WO
WIPO (PCT)
Prior art keywords
image
threshold
frame rate
target
rate
Prior art date
Application number
PCT/CN2020/093035
Other languages
English (en)
Chinese (zh)
Inventor
饶雄斌
赵亮
陈颖
Original Assignee
深圳市大疆创新科技有限公司 (SZ DJI Technology Co., Ltd.)
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 深圳市大疆创新科技有限公司 (SZ DJI Technology Co., Ltd.)
Priority to PCT/CN2020/093035 priority Critical patent/WO2021237616A1/fr
Priority to CN202080005966.8A priority patent/CN113056904A/zh
Publication of WO2021237616A1 publication Critical patent/WO2021237616A1/fr


Classifications

    • H - ELECTRICITY
    • H04 - ELECTRIC COMMUNICATION TECHNIQUE
    • H04N - PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00 - Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/57 - Mechanical or electrical details of cameras or camera modules specially adapted for being embedded in other devices
    • G - PHYSICS
    • G01 - MEASURING; TESTING
    • G01S - RADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES
    • G01S17/00 - Systems using the reflection or reradiation of electromagnetic waves other than radio waves, e.g. lidar systems
    • G01S17/88 - Lidar systems specially adapted for specific applications
    • G01S17/89 - Lidar systems specially adapted for specific applications for mapping or imaging
    • G01S17/894 - 3D imaging with simultaneous measurement of time-of-flight at a 2D array of receiver pixels, e.g. time-of-flight cameras or flash lidar
    • G - PHYSICS
    • G01 - MEASURING; TESTING
    • G01S - RADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES
    • G01S17/00 - Systems using the reflection or reradiation of electromagnetic waves other than radio waves, e.g. lidar systems
    • G01S17/88 - Lidar systems specially adapted for specific applications
    • G01S17/93 - Lidar systems specially adapted for specific applications for anti-collision purposes
    • G01S17/933 - Lidar systems specially adapted for specific applications for anti-collision purposes of aircraft or spacecraft
    • H - ELECTRICITY
    • H04 - ELECTRIC COMMUNICATION TECHNIQUE
    • H04N - PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00 - Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/60 - Control of cameras or camera modules
    • H04N23/66 - Remote control of cameras or camera parts, e.g. by remote control devices
    • H - ELECTRICITY
    • H04 - ELECTRIC COMMUNICATION TECHNIQUE
    • H04N - PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00 - Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/60 - Control of cameras or camera modules
    • H04N23/695 - Control of camera direction for changing a field of view, e.g. pan, tilt or based on tracking of objects
    • H - ELECTRICITY
    • H04 - ELECTRIC COMMUNICATION TECHNIQUE
    • H04N - PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00 - Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/80 - Camera processing pipelines; Components thereof

Definitions

  • This application relates to the field of image transmission technology, and in particular to an image transmission method, a movable platform, and a computer-readable storage medium.
  • In some scenarios, the user can manually select the frame rate through the control terminal.
  • However, the user's subjective selection of the frame rate may lead to delayed adjustment or low accuracy.
  • the embodiments of the present application provide an image transmission method, a movable platform, and a computer-readable storage medium, which can improve the timeliness and accuracy of target frame rate determination.
  • In a first aspect, an embodiment of the present application provides an image transmission method. The method is applied to a movable platform, the movable platform includes an image acquisition device, and the movable platform is communicatively connected with a control terminal. The method includes:
  • a target frame rate for image transmission by the movable platform to the control terminal is determined.
  • an embodiment of the present application also provides a movable platform, the movable platform is in communication connection with a control terminal, and the movable platform includes:
  • Image acquisition device for acquiring images
  • Memory used to store computer programs
  • the processor is used to call the computer program in the memory to execute:
  • a target frame rate for image transmission by the movable platform to the control terminal is determined.
  • The embodiments of the present application also provide a computer-readable storage medium, where the computer-readable storage medium is used to store a computer program, and the computer program is loaded by a processor to execute any image transmission method provided by the embodiments of the present application.
  • The embodiments of the application can obtain the movement characteristics of the target pixel in the image collected by the image acquisition device and/or obtain the movement state information of the movable platform, and determine, according to the movement characteristics of the target pixel and/or the movement state information of the movable platform, the target frame rate for image transmission from the movable platform to the control terminal.
  • This solution can automatically determine the target frame rate without the user's manual selection, which improves the timeliness and accuracy of determining the target frame rate.
  • Fig. 1 is a schematic diagram of an application scenario of an image transmission method provided by an embodiment of the present application
  • Figure 2 is a schematic diagram of interaction between a drone and a control terminal provided by an embodiment of the present application
  • FIG. 3 is a schematic flowchart of an image transmission method provided by an embodiment of the present application.
  • Fig. 4 is a schematic diagram of determining multiple moving targets from a first image and a second image provided by an embodiment of the present application
  • Fig. 5 is a schematic structural diagram of a movable platform provided by an embodiment of the present application.
  • The embodiments of the present application provide an image transmission method, a movable platform, and a computer-readable storage medium, which determine the target frame rate for image transmission from the movable platform to the control terminal based on the movement characteristics of the target pixel in the image collected by the image acquisition device and/or the movement state information of the movable platform, thereby improving the timeliness and accuracy of target frame rate determination.
  • the movable platform may include a pan/tilt, a platform body, an image acquisition device, etc.
  • the platform body may be used to carry a pan/tilt, and the pan/tilt may carry an image acquisition device, so that the pan/tilt can drive the image acquisition device to move.
  • There may be one or more image capture devices.
  • the types of the movable platform and the image acquisition device can be flexibly set according to actual needs, and the specific content is not limited here.
  • the image acquisition device may be a camera or a vision sensor, etc.
  • the movable platform may be a mobile terminal, a drone, a robot, or a pan-tilt camera, etc.
  • the pan-tilt camera may include a camera, a pan-tilt, etc.
  • the pan-tilt may include a pivot arm, etc., and the pivot arm can drive the camera to move, for example, the pivot arm can control the camera to move to a suitable position so that the camera can collect a desired image.
  • The camera may be a monocular camera, and the type of the camera may be an ultra-wide-angle camera, a wide-angle camera, a telephoto camera (that is, a zoom camera), an infrared camera, a far-infrared camera, an ultraviolet camera, a time-of-flight (TOF) depth camera, etc.
  • the drone may include a camera, a distance measuring device, an obstacle sensing device, and so on.
  • the unmanned aerial vehicle may also include a pan-tilt for carrying a camera, and the pan-tilt can drive the camera to a suitable position so as to collect the required images through the camera.
  • The drone may be a rotary-wing drone (such as a quad-rotor, hexa-rotor, or octo-rotor drone), a fixed-wing drone, or a combination of rotary-wing and fixed-wing, which is not limited here.
  • The movable platform may also be provided with a positioning device, such as a Global Positioning System (GPS) receiver, for accurate positioning of the movable platform's position.
  • The camera and the positioning device can be located on the same plane; within this plane, they can be on the same straight line or form a preset angle. Of course, the camera and the positioning device can also be located on different planes.
  • FIG. 1 is a schematic diagram of a scene for implementing the image transmission method provided by the embodiment of the present application.
  • The control terminal 100 is in communication connection with a drone 200. The control terminal 100 can be used to control the drone 200 to fly or perform corresponding actions, and to obtain corresponding motion information from the drone 200.
  • the motion information may include flight direction, flight attitude, flight height, flight speed and position information, etc.
  • The acquired motion information is sent to the control terminal 100, and the control terminal 100 analyzes and displays it.
  • the control terminal 100 can also receive control instructions input by the user, and control the distance measuring device or camera on the UAV 200 accordingly based on the control instructions.
  • For example, the control terminal 100 may receive a shooting instruction or a distance measurement instruction input by a user and send it to the drone 200; the drone 200 can control the camera to capture an image according to the shooting instruction, or control the distance measuring device to measure the distance of a target according to the distance measurement instruction.
  • the type of the control terminal 100 can be flexibly set according to actual needs, and the specific content is not limited here.
  • the control terminal 100 may be a remote control device provided with a display, control buttons, etc., for establishing a communication connection with the drone 200 and controlling the drone 200, and the display may be used for displaying images or videos.
  • the control terminal 100 may also be a third-party mobile phone or tablet computer, etc., and establish a communication connection with the drone 200 through a preset protocol, and control the drone 200.
  • For example, the obstacle sensing device of the drone 200 can obtain sensing signals around the drone 200 and, by analyzing the sensing signals, obtain obstacle information, which is then shown on the display, so that the user can learn which obstacles the drone 200 perceives and conveniently control the drone 200 to avoid them.
  • the display may be a liquid crystal display, or a touch screen, etc.
  • the obstacle sensing device may include at least one sensor for acquiring sensing signals from the drone 200 in at least one direction.
  • the obstacle sensing device may include a sensor for detecting obstacles in front of the drone 200.
  • the obstacle sensing device may include two sensors for detecting obstacles in front of and behind the drone 200, respectively.
  • the obstacle sensing device may include four sensors for detecting obstacles in the front, rear, left, and right of the drone 200, respectively.
  • the obstacle sensing device may include five sensors, which are used to detect obstacles in the front, rear, left, right, and above of the drone 200, respectively.
  • the obstacle sensing device may include six sensors for detecting obstacles in front, rear, left, right, above, and below the drone 200, respectively.
  • the sensors in the obstacle sensing device can be implemented separately or integrated.
  • the detection direction of the sensor can be set according to specific needs to detect obstacles in various directions or combinations of directions, and is not limited to the above-mentioned forms disclosed in this application.
  • the drone 200 may have multiple rotors.
  • the rotor may be connected to the body of the drone 200, and the body may include a control unit, an inertial measurement unit (IMU), a processor, a battery, a power supply, and/or other sensors.
  • the rotor can be connected to the body by one or more arms or extensions branching from the central part of the body.
  • one or more arms may extend radially from the central body of the drone 200, and may have a rotor at or near the end of the arm.
  • Figure 2 is a schematic diagram of the interaction between the drone and the control terminal provided by the embodiment of the application.
  • The drone can collect images through its vision sensor and determine the movement characteristics of the target pixels in the images (such as the offset rate), and can obtain its own motion state information (such as the drone's height, translational movement speed, and/or angular velocity in the yaw direction) through the inertial measurement unit (IMU). The drone can then make a frame rate control decision to determine the target frame rate; that is, the drone can determine the target frame rate for image transmission from the drone to the control terminal based on the movement characteristics of the target pixels in the image and/or the drone's own motion state information.
  • the image collected by the camera can be video-encoded based on the target frame rate to generate video stream data, and the video stream data is sent to the control terminal through the wireless communication module of the drone.
  • the control terminal can receive the video stream data sent by the drone through its own wireless communication module, decode the video stream data to obtain an image, and display the image on the display.
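  • The frame rate control decision described above can be sketched as follows. This is an illustrative sketch only: the threshold values, the candidate frame rates, and the way the pixel offset rate is combined with the platform's motion state are assumptions, not taken from this application.

```python
# Hypothetical frame-rate control decision: pick a higher target frame
# rate when the apparent pixel motion or the platform speed is large.
# Thresholds (200/50 px/s, 10/3 m/s) and rates (30/60/120 fps) are
# illustrative assumptions.

def decide_target_frame_rate(offset_rate, platform_speed,
                             low=30, mid=60, high=120):
    """offset_rate: pixel offset rate in px/s; platform_speed: m/s."""
    if offset_rate > 200 or platform_speed > 10:
        return high
    if offset_rate > 50 or platform_speed > 3:
        return mid
    return low

print(decide_target_frame_rate(10, 1))    # slow scene
print(decide_target_frame_rate(120, 5))   # moderate motion
print(decide_target_frame_rate(300, 2))   # fast pixel motion
```

For the three example calls, the sketch returns 30, 60, and 120 respectively; the encoder would then use the chosen rate for the video stream sent to the control terminal.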
  • the device structures such as the drone and the control terminal in FIGS. 1 and 2 do not constitute a limitation on the application scenario of the image transmission method.
  • FIG. 3 is a schematic flowchart of an image transmission method according to an embodiment of the present application.
  • the image transmission method can be applied to a movable platform to accurately determine the target frame rate.
  • The following takes a drone as the movable platform for detailed description.
  • the image transmission method may include step S101 to step S102, etc., which may be specifically as follows:
  • one or more image acquisition devices may be preset on the drone, and the image acquisition devices may be cameras or vision sensors. During the flight of the drone, one or more frames of images can be collected through the image acquisition device.
  • There may be one or more target pixels.
  • For example, a target pixel can be a feature point, a pixel in the background area, or a pixel in the area where a moving target is located, and so on.
  • the movement characteristic can be used to characterize the movement state of the target pixel.
  • the movement characteristic can be flexibly set according to actual needs. The specific content is not limited here.
  • the movement characteristic can be the movement rate, acceleration, etc. of the target pixel.
  • In some embodiments, acquiring the movement characteristics of the target pixel in the image acquired by the image acquisition device may include: acquiring a first image and a second image from the multi-frame images acquired by the image acquisition device; acquiring the pixel coordinates of the target pixel of the first image and the pixel coordinates of the target pixel of the second image, where the target pixel of the first image corresponds to the target pixel of the second image; and determining the movement characteristics of the target pixel according to the pixel coordinates of the target pixel of the first image and the pixel coordinates of the target pixel of the second image.
  • the movement characteristic may be determined based on the pixel coordinates of the target pixel in the multi-frame image.
  • The UAV can collect one frame of image at every preset time interval through the image acquisition device, so that after a preset time period, multiple frames of images have been collected.
  • The preset time interval and the preset time period can be flexibly set according to actual requirements; the specific values are not limited here.
  • the target pixel is extracted from multiple frames of images, and the corresponding pixel coordinates of the target pixel on each frame of image are obtained, and the movement characteristics of the target pixel are determined according to the pixel coordinates of the target pixel in each frame of image.
  • The first image and the second image can be selected from the multi-frame images collected by the image collection device.
  • The first image and the second image can be images that are adjacent in collection time, or images that are not adjacent in collection time.
  • For example, the first image and the second image that are adjacent in collection time can be selected from the multi-frame images collected by the image acquisition device; or the first image and the second image with good image quality can be selected; or the first image and the second image with high image definition can be selected.
  • the image acquisition device can acquire multiple frames of images based on the initial frame rate.
  • the default frame rate can be set as the initial frame rate of the image capture device.
  • The pixel coordinates of the target pixel of the first image and the pixel coordinates of the target pixel of the second image can be obtained, where the target pixel of the first image corresponds to the target pixel of the second image. For example, when the target pixel is a pixel of a moving vehicle, the pixel coordinates of the pixels in the area where the vehicle is located can be obtained in the first image, and likewise in the second image.
  • the movement characteristics of the target pixel can be determined according to the pixel coordinates of the target pixel of the first image and the pixel coordinate of the target pixel of the second image.
  • The target pixel may include a first target pixel corresponding to the background and/or a second target pixel corresponding to a moving target, where the first target pixel of the first image corresponds to the first target pixel of the second image, and the second target pixel of the first image corresponds to the second target pixel of the second image.
  • The target pixel can be divided into the first target pixel corresponding to the background and/or the second target pixel corresponding to the moving target, where the moving target is a moving object captured by the image acquisition device.
  • There may be one or more moving targets.
  • For example, the moving target may be a walking person, a moving vehicle, or a running puppy, etc.
  • The background may be the area of the image captured by the image capture device other than the moving target.
  • In one embodiment, the background and the moving target in the first image can be identified to obtain the pixel coordinates of the first target pixel corresponding to the background and the pixel coordinates of the second target pixel corresponding to the moving target in the first image; similarly, the background and the moving target in the second image can be identified to obtain the pixel coordinates of the first target pixel corresponding to the background and the pixel coordinates of the second target pixel corresponding to the moving target in the second image.
  • In another embodiment, only the background or only the moving target may be identified: the background or the moving target in the first image can be identified to obtain the pixel coordinates of the corresponding first target pixel or second target pixel, and likewise the background or the moving target in the second image can be identified.
  • the image transmission method may further include: recognizing the background and/or moving target in the first image and the second image based on a pre-trained calculation model.
  • the background and/or the moving target in the multi-frame image can be recognized through a pre-trained calculation model.
  • The pre-trained calculation model may be a deep learning model.
  • the calculation model can be flexibly set according to actual needs.
  • the calculation model can be a target detection algorithm SSD or YOLO, and the calculation model can also be a convolutional neural network R-CNN or Faster R-CNN.
  • The background and/or the moving target in the first image can be accurately identified through the pre-trained calculation model, and likewise the background and/or the moving target in the second image, in order to obtain the pixel coordinates of the pixels in the background area and/or the area where the moving target is located.
  • the calculation model for background recognition and the calculation model for moving target recognition can be the same or different.
  • For example, the background in the first image and the second image can be recognized through a first calculation model to obtain the location of the background area in each frame of image, and the pixel coordinates of the background area in each frame are determined according to that location.
  • The moving target in the first image and the second image can be recognized through a second calculation model to obtain the location of the moving target in each frame of image, and the pixel coordinates of the moving target in each frame are determined according to that location. The first calculation model and the second calculation model may be the same or different.
  • the background area in the image or the area where the moving target is located may include multiple pixels.
  • Feature points can be extracted from the background area and from the area where the moving target is located, the extracted feature points can be used as target pixels, and a feature point matching method can be used to determine the correspondence between the target pixels in the first image and the target pixels in the second image.
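  • The feature point matching step can be illustrated with a minimal brute-force matcher. The application does not name a specific matching method, so the nearest-descriptor scheme and the descriptor values below are assumptions for illustration only.

```python
# Match each feature descriptor in the first image to its nearest
# descriptor in the second image by Euclidean distance (brute force).
# Descriptors here are plain coordinate tuples for simplicity.

def match_features(desc1, desc2):
    def dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5
    matches = []
    for i, d in enumerate(desc1):
        # index of the closest descriptor in the second image
        j = min(range(len(desc2)), key=lambda k: dist(d, desc2[k]))
        matches.append((i, j))
    return matches

d1 = [(0.0, 1.0), (5.0, 5.0)]   # descriptors from the first image
d2 = [(5.1, 4.9), (0.1, 0.9)]   # descriptors from the second image
print(match_features(d1, d2))   # [(0, 1), (1, 0)]
```

The returned index pairs give the correspondence needed to compute, for each matched target pixel, its pixel coordinates in both frames.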
  • The moving target may include the one, or the first several, of the multiple candidate moving targets whose corresponding image area is the largest; or, the moving target may include the one, or the first several, of the multiple candidate moving targets whose motion amplitude is the largest.
  • In one embodiment, the moving targets can be filtered based on the area of the image region they occupy: when multiple candidate moving targets are identified in the image, the area occupied by each candidate can be obtained, and the one or several candidates with the largest image area are selected. For example, the single moving target with the largest image area, or the top 3 moving targets by image area, can be selected from the candidates.
  • In another embodiment, the moving targets can be filtered based on motion amplitude: the motion amplitude of each candidate moving target can be obtained, and the one or several candidates with the largest motion amplitude are selected. For example, the single moving target with the largest motion amplitude, or the top 3 moving targets by motion amplitude, can be selected from the candidates.
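  • The filtering described above amounts to a simple top-n selection. The sketch below is illustrative; the candidate field names ("area", "amplitude") and values are hypothetical.

```python
# Keep the n candidate moving targets with the largest occupied image
# area, or the largest motion amplitude.

def top_targets(candidates, key, n=3):
    return sorted(candidates, key=lambda c: c[key], reverse=True)[:n]

cands = [
    {"id": "car",    "area": 900, "amplitude": 4.0},
    {"id": "dog",    "area": 150, "amplitude": 9.0},
    {"id": "person", "area": 400, "amplitude": 2.0},
    {"id": "bird",   "area": 50,  "amplitude": 12.0},
]
print([c["id"] for c in top_targets(cands, "area")])       # ['car', 'person', 'dog']
print([c["id"] for c in top_targets(cands, "amplitude")])  # ['bird', 'dog', 'car']
```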
  • In some embodiments, determining the movement characteristics of the target pixel according to the pixel coordinates of the target pixel of the first image and the pixel coordinates of the target pixel of the second image may include: determining a relative position difference according to the pixel coordinates of the target pixel of the first image and the pixel coordinates of the target pixel of the second image; determining the offset rate of the target pixel according to the relative position difference and the collection time interval of the first image and the second image; and determining the movement characteristics of the target pixel according to the offset rate.
  • the movement characteristic may be determined based on the relative position difference of the target pixel.
  • Specifically, the relative position difference between the pixel coordinates of the target pixel in the multi-frame images collected by the image acquisition device can be obtained, together with the collection time interval of the multi-frame images, and the movement characteristics of the target pixel are determined according to the relative position difference and the collection time interval.
  • Taking the case where the two frames of images are a first image and a second image, the relative position difference can be determined according to the pixel coordinates of the target pixel of the first image and the pixel coordinates of the target pixel of the second image.
  • The relative position difference ΔA can be determined by the following formula (1): ΔA = √((x_i - x_k)² + (y_i - y_k)²)
  • (x_i, y_i) represents the pixel coordinates of the target pixel of the first image
  • (x_k, y_k) represents the pixel coordinates of the target pixel of the second image.
  • Then, the acquisition time interval T between the first image and the second image can be acquired, and the offset rate of the target pixel can be determined according to the relative position difference ΔA and the acquisition time interval T, as shown in the following formula (2): p = ΔA / T
  • p represents the offset rate of the target pixel
  • ⁇ A represents the relative position difference
  • T represents the collection time interval
  • the movement characteristics of the target pixel can be determined according to the offset rate.
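  • Formulas (1) and (2) can be computed directly. The coordinate values below are illustrative, not taken from the application.

```python
# Offset rate of a target pixel: the Euclidean relative position
# difference between the two frames (formula (1)) divided by the
# acquisition time interval T (formula (2)).
import math

def offset_rate(coord1, coord2, T):
    """coord1 = (x_i, y_i) in the first image, coord2 = (x_k, y_k) in
    the second image, T = acquisition time interval in seconds."""
    delta_a = math.hypot(coord1[0] - coord2[0], coord1[1] - coord2[1])
    return delta_a / T

# A pixel that moves 3 px right and 4 px down between frames T = 0.1 s apart
# has a relative position difference of 5 px:
print(offset_rate((100, 200), (103, 204), 0.1))
```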
  • In some embodiments, when the target pixel includes the first target pixel corresponding to the background and the second target pixel corresponding to the moving target, determining the movement characteristics of the target pixel according to the offset rate may include: averaging the offset rate of the first target pixel and the offset rate of the second target pixel, and using the averaged offset rate to characterize the movement characteristics of the target pixel, which can be expressed as the following formula (3): p = (p_1 + p_2) / 2
  • p_1 represents the offset rate of the first target pixel
  • p_2 represents the offset rate of the second target pixel
  • In other embodiments, when the target pixel includes the first target pixel corresponding to the background and the second target pixel corresponding to the moving target, the offset rate of the first target pixel can be determined based on the pixel coordinates of the first target pixel in the first image and the second image, and the offset rate of the second target pixel can be determined based on the pixel coordinates of the second target pixel in the first image and the second image.
  • Then, the weight value of the first target pixel (such as the weight value of the background) and the weight value of the second target pixel (such as the weight value of the moving target) can be obtained; the offset rates of the first and second target pixels are weighted and summed with their respective weight values, and the weighted sum is used to characterize the movement characteristics of the target pixel, as shown in the following formula (4): p = a_11 · p_1 + a_12 · p_2
  • p_1 represents the offset rate of the first target pixel
  • p_2 represents the offset rate of the second target pixel
  • a_11 represents the weight value of the first target pixel
  • a_12 represents the weight value of the second target pixel
  • a_11 + a_12 = 1.
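  • Formula (4) can be written as a one-line computation; the weight values below (0.3 for the background, 0.7 for the moving target) are hypothetical.

```python
# Weighted sum of the background offset rate p1 and the moving-target
# offset rate p2 (formula (4)); the weights a11 and a12 must sum to 1.

def weighted_offset_rate(p1, p2, a11, a12):
    assert abs(a11 + a12 - 1.0) < 1e-9, "weights must sum to 1"
    return a11 * p1 + a12 * p2

# E.g. weight the moving target more heavily than the background:
print(weighted_offset_rate(20.0, 80.0, 0.3, 0.7))  # 0.3*20 + 0.7*80 ≈ 62.0
```

Weighting the moving target more heavily makes the frame rate track the part of the scene the viewer is most likely watching.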
  • The following takes the case where the target pixel includes the target pixel corresponding to the background, the target pixel corresponding to a first moving target, the target pixel corresponding to a second moving target, and the target pixel corresponding to a third moving target, in a first image and a second image, as an example for detailed description.
  • The pixel coordinates B of the target pixel corresponding to the background in the first image can be obtained, and the pixel coordinates B' of the target pixel corresponding to the background in the second image can be obtained; the relative position difference of the background between the two images is ΔB.
  • The acquisition time interval T of the first image and the second image is acquired, and the offset rate p_b corresponding to the background is determined according to the acquisition time interval T and the relative position difference ΔB: p_b = ΔB / T
  • The moving targets {S_i} other than the background can be identified from the first image and the second image, and the three of them occupying the largest image area (that is, occupying the largest number of image pixels) can be taken.
  • The pixel coordinates of the moving targets in the first image are {S_1, S_2, S_3}, where S_1 represents the pixel coordinates of the target pixel corresponding to the first moving target in the first image, S_2 those of the second moving target, and S_3 those of the third moving target.
  • Correspondingly, S_1' represents the pixel coordinates of the target pixel corresponding to the first moving target in the second image, S_2' those of the second moving target, and S_3' those of the third moving target.
  • Indicates the pixel coordinates of the moving target in the first image Indicates the pixel coordinates of the moving target in the second image.
  • Indicates the pixel coordinates of the first moving target in the first image Indicates the pixel coordinates of the first moving target in the second image, Represents the pixel coordinates of the second moving target in the first image, Indicates the pixel coordinates of the second moving target in the second image, Indicates the pixel coordinates of the third moving target in the first image, Represents the pixel coordinates of the third moving target in the second image.
  • the relative position differences of the three moving targets between the first and second images may then be acquired, that is, the relative position difference ΔS 1 between the pixel coordinates of S 1 on the first image and S′ 1 on the second image, the relative position difference ΔS 2 between the pixel coordinates of S 2 on the first image and S′ 2 on the second image, and the relative position difference ΔS 3 between the pixel coordinates of S 3 on the first image and S′ 3 on the second image.
  • the details can be as follows: ΔS j = S′ j − S j , where ΔS j (j = 1, 2, 3) represents the relative position difference of each of the three moving targets.
  • the offset rate of each moving target can be determined according to the acquisition time interval T of the first image and the second image and the relative position differences ΔS j of the three moving targets: p tj = ΔS j / T.
  • p t1 represents the offset rate corresponding to the first moving target, p t2 represents the offset rate corresponding to the second moving target, and p t3 represents the offset rate corresponding to the third moving target.
  • ΔS 1 represents the relative position difference of the first moving target between the first image and the second image, ΔS 2 represents that of the second moving target, and ΔS 3 represents that of the third moving target.
  • according to the offset rate p b corresponding to the background, the offset rate p t1 corresponding to the first moving target, the offset rate p t2 corresponding to the second moving target, and the offset rate p t3 corresponding to the third moving target, the comprehensive target offset rate p can be determined: p = a 0 · p b + a 1 · p t1 + a 2 · p t2 + a 3 · p t3 .
  • a 0 represents the weight value of the offset rate corresponding to the background, a 1 represents the weight value of the offset rate corresponding to the first moving target, a 2 represents the weight value of the offset rate corresponding to the second moving target, a 3 represents the weight value of the offset rate corresponding to the third moving target, and a 0 + a 1 + a 2 + a 3 = 1.
  • the target offset rate can be used to characterize the movement characteristics of the target pixel.
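To make the computation above concrete, the following Python sketch works the background-plus-three-targets example end to end. All coordinates, the interval T, and the weights are illustrative values chosen for the sketch, not taken from the specification, and the Euclidean norm of the position difference is one reasonable reading of "relative position difference":

```python
import math

T = 0.1  # acquisition interval between the first and second image, in seconds (illustrative)

# pixel coordinates (x, y) in the first / second image (illustrative values)
B, B2 = (320.0, 240.0), (322.0, 241.0)                 # background point B, B'
S  = [(100.0, 80.0), (200.0, 150.0), (50.0, 60.0)]     # S1, S2, S3
Sp = [(104.0, 82.0), (203.0, 149.0), (55.0, 66.0)]     # S'1, S'2, S'3

def rate(a, b, t):
    """Offset rate = Euclidean relative position difference / time interval."""
    return math.dist(a, b) / t

p_b = rate(B, B2, T)                               # background offset rate p_b = |dB| / T
p_t = [rate(s, sp, T) for s, sp in zip(S, Sp)]     # p_t1..p_t3 = |dS_j| / T

a = [0.4, 0.3, 0.2, 0.1]                           # weights a0..a3, summing to 1 (illustrative)
p = a[0] * p_b + sum(w * r for w, r in zip(a[1:], p_t))  # comprehensive target offset rate
```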
  • acquiring the movement state information of the movable platform may include: acquiring the height, translational movement speed, and/or angular velocity in the yaw direction of the movable platform.
  • the movement state information of the movable platform may include at least one of the height of the movable platform, the translational movement speed, and the angular velocity in the yaw direction.
  • the movement state information may include at least one of the flying height of the drone, the translational speed, and the angular velocity in the yaw direction; the height may be the height of the drone from the ground, and the translational speed may be the flying speed of the drone.
  • the accuracy of frame rate adjustment can be further improved.
  • when the drone shoots video at the same speed at a height close to the ground and at a height far from the ground, the user's visual perception is completely different.
  • the apparent speed perceived by the user when the drone is close to the ground is greater than when it is far from the ground. Therefore, further referencing the altitude, in addition to considering the translational motion speed and the angular velocity in the yaw direction, makes the adjustment of the frame rate more consistent with the user's intuitive experience.
  • motion state information may also include other information such as flight direction, flight attitude, or position information, and the specific content is not limited here.
  • obtaining the height, translational speed, and/or angular velocity in the yaw direction of the movable platform may include: obtaining the height, translational speed, and/or angular velocity in the yaw direction of the movable platform through an inertial measurement unit installed on the movable platform.
  • the inertial measurement unit (IMU) installed on the movable platform can be used to obtain information such as the height, translational movement speed, and/or angular velocity in the yaw direction of the movable platform.
  • S102 Determine a target frame rate for image transmission by the movable platform to the control terminal according to the movement characteristics of the target pixel in the image and/or the movement state information of the movable platform.
  • after obtaining the movement characteristics of the target pixel, the target frame rate for image transmission from the drone to the control terminal can be determined according to the movement characteristics of the target pixel; or, after obtaining the movement state information of the drone, the target frame rate can be determined according to the movement state information of the drone; or, after obtaining both the movement characteristics of the target pixel and the movement state information of the drone, the target frame rate can be determined according to both together.
  • the drone can intelligently perceive the current flight environment and, combining the visually perceived movement characteristics with the detected motion state information, adaptively and dynamically adjust the target frame rate of image transmission, so that the image subsequently displayed by the control terminal balances smoothness and clarity.
  • the movement characteristic of the target pixel is characterized by the offset rate of the target pixel.
  • determining the target frame rate of image transmission from the movable platform to the control terminal may include: when the offset rate is less than the first preset rate threshold, setting the first frame rate as the target frame rate; or, when the offset rate is greater than the second preset rate threshold, setting the second frame rate as the target frame rate; or, when the offset rate is greater than or equal to the first preset rate threshold and less than or equal to the second preset rate threshold, setting the current frame rate as the target frame rate; where the first preset rate threshold is less than the second preset rate threshold, and the first frame rate is less than the second frame rate.
  • the target frame rate can be determined by the offset rate of the target pixel alone. Specifically, after the offset rate of the target pixel is obtained, the frame rate control decision corresponding to the offset rate of the target pixel can be obtained, and the target frame rate of image transmission corresponding to the offset rate is determined according to the frame rate control decision.
  • the frame rate control decision can be a mapping relationship between a number of different offset rates and frame rates; by querying the mapping relationship, the target frame rate of image transmission corresponding to the offset rate of the target pixel can be determined.
  • the frame rate control decision may be a calculated conversion relationship between the offset rate and the frame rate, and the corresponding target frame rate of image transmission may be calculated based on the offset rate of the target pixel through the calculation conversion relationship.
  • the frame rate control decision can also be flexibly set according to actual needs, and the specific content is not limited here.
  • after obtaining the offset rate of the target pixel, it can be determined whether the offset rate is less than the first preset rate threshold.
  • if the offset rate is less than the first preset rate threshold, it indicates that the flying speed of the drone is slow and the content of the video picture changes slowly; to ensure the clarity of each captured frame and improve the image quality of aerial image transmission, a lower frame rate can be used at this time, that is, the first frame rate is set as the target frame rate.
  • if the offset rate is greater than the second preset rate threshold, it indicates that the content of the video picture changes quickly; to ensure the smoothness of subsequent video playback, a higher frame rate can be used at this time, that is, the second frame rate is set as the target frame rate.
  • if the offset rate is greater than or equal to the first preset rate threshold and less than or equal to the second preset rate threshold, it indicates that the flight state of the drone has not changed much and the frame rate does not need to be adjusted; the current frame rate can be maintained, that is, the current frame rate is set as the target frame rate.
  • the first preset rate threshold is less than the second preset rate threshold
  • the first frame rate is less than the second frame rate
  • the first preset rate threshold, the second preset rate threshold, the first frame rate, and the second frame rate can be flexibly set according to actual needs, and the specific values are not limited here.
  • the frame rate corresponding to the high-definition mode may be 30fps
  • the frame rate corresponding to the smooth mode may be 60fps, and so on.
  • the frame rate adjustment is not limited to the first frame rate and the second frame rate; it can also include multiple different frame rates such as a third, fourth, and fifth frame rate, and a mapping relationship between the multiple different frame rates and each offset rate can be established, so that the frame rate corresponding to the currently detected offset rate is determined based on this mapping relationship to obtain the target frame rate.
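The three-way decision described above can be sketched in Python as follows. The threshold and frame-rate values are illustrative assumptions (the specification deliberately leaves them to be set flexibly), as is the function name:

```python
def fps_from_offset_rate(p, current_fps,
                         p_low=0.5, p_high=2.0,  # first / second preset rate thresholds (illustrative)
                         k1=30, k2=60):          # first / second frame rates, k1 < k2 (illustrative)
    """Pick the target frame rate from the target-pixel offset rate alone."""
    if p < p_low:        # picture changes slowly: favor per-frame clarity
        return k1
    if p > p_high:       # picture changes quickly: favor playback smoothness
        return k2
    return current_fps   # p_low <= p <= p_high: keep the current frame rate
```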
  • determining the target frame rate of image transmission from the movable platform to the control terminal according to the movement state information of the movable platform may include: if the height is less than the first height threshold and the translational movement speed is greater than the first speed threshold, setting the second frame rate as the target frame rate; if the height is greater than or equal to the first height threshold, or the translational motion speed is less than or equal to the first speed threshold, determining whether the angular velocity is greater than the angular velocity threshold; if the angular velocity is greater than the angular velocity threshold, setting the second frame rate as the target frame rate; if the angular velocity is less than or equal to the angular velocity threshold, determining whether the height is greater than the second height threshold and whether the translational motion speed is less than the second speed threshold; if the height is greater than the second height threshold and the translational motion speed is less than the second speed threshold, setting the first frame rate as the target frame rate; if the height is less than or equal to the second height threshold, or the translational motion speed is greater than or equal to the second speed threshold, setting the current frame rate as the target frame rate.
  • the target frame rate can be determined only by using the motion state information of the drone.
  • the frame rate control decision corresponding to the movement state information can be obtained, and the target frame rate of the image transmission corresponding to the movement state information can be determined according to the frame rate control decision.
  • the frame rate control decision may be a mapping relationship between a plurality of different motion state information and each frame rate, and the target frame rate of image transmission corresponding to the currently detected motion state information can be determined by querying the mapping relationship.
  • the frame rate control decision may be a calculated conversion relationship between the motion state information and the frame rate, and the corresponding target frame rate of image transmission may be calculated based on the motion state information through the calculation conversion relationship.
  • the frame rate control decision can also be flexibly set according to actual needs, and the specific content is not limited here.
  • after obtaining the motion state information, it can be determined whether the altitude is less than the first height threshold and whether the translational movement speed is greater than the first speed threshold. If the altitude is less than the first height threshold and the translational movement speed is greater than the first speed threshold, it means that the content of the video picture changes quickly; to ensure the smoothness of subsequent video playback, a higher frame rate can be used at this time, that is, the second frame rate is set as the target frame rate.
  • if the height is greater than or equal to the first height threshold, or the translational motion speed is less than or equal to the first speed threshold, it can be further judged whether the angular velocity is greater than the angular velocity threshold. If the angular velocity is greater than the angular velocity threshold, the content of the video picture also changes quickly; to ensure the smoothness of subsequent video playback and improve the effect of aerial image transmission, a higher frame rate can be used, that is, the second frame rate is set as the target frame rate.
  • if the angular velocity is less than or equal to the angular velocity threshold, it can be further judged whether the height is greater than the second height threshold and whether the translational motion speed is less than the second speed threshold. If the height is greater than the second height threshold and the translational motion speed is less than the second speed threshold, the content of the video picture changes slowly; to ensure the clarity of each captured frame and improve the image quality of aerial image transmission, a lower frame rate can be used, that is, the first frame rate is set as the target frame rate.
  • if the height is less than or equal to the second height threshold, or the translational motion speed is greater than or equal to the second speed threshold, the flight state of the drone has not changed much and the frame rate does not need to be adjusted; the current frame rate can be maintained, that is, the current frame rate is set as the target frame rate. The notation used is as follows:
  • h represents the altitude, v the translational movement speed, and w y the angular velocity in the yaw direction.
  • H l represents the first height threshold, H u the second height threshold, v l the first speed threshold, and v u the second speed threshold.
  • K1 represents the first frame rate, and K2 represents the second frame rate.
  • the first height threshold is less than the second height threshold
  • the first speed threshold is less than the second speed threshold
  • the first frame rate is less than the second frame rate.
  • the first height threshold, the second height threshold, the first speed threshold, the second speed threshold, the angular velocity threshold, the first frame rate, and the second frame rate can be flexibly set according to actual needs, and the specific values are not limited here.
  • the adjustment of the frame rate is not limited to the first frame rate and the second frame rate; it can also include multiple different frame rates such as a third, fourth, and fifth frame rate, and a mapping relationship between the multiple different frame rates and each piece of motion state information can be established, so that the frame rate corresponding to the currently detected motion state information is determined based on this mapping relationship to obtain the target frame rate.
  • the absolute value of the translational speed can be compared with the speed threshold, and the absolute value of the angular speed can be compared with the angular speed threshold.
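The motion-state decision chain above can be sketched as follows, using absolute values for the speed and angular-velocity comparisons as just noted. All threshold values (and the function name) are illustrative assumptions; the ordering of the checks follows the text:

```python
def fps_from_motion_state(h, v, w_y, current_fps,
                          h_l=10.0, h_u=50.0,  # first / second height thresholds, m (illustrative)
                          v_l=2.0, v_u=5.0,    # first / second speed thresholds, m/s (illustrative)
                          w_th=0.5,            # yaw angular-velocity threshold, rad/s (illustrative)
                          k1=30, k2=60):       # first / second frame rates, k1 < k2
    """Pick the target frame rate from altitude, translational speed, and yaw rate."""
    if h < h_l and abs(v) > v_l:   # low and fast: picture changes quickly
        return k2
    if abs(w_y) > w_th:            # fast yaw: picture changes quickly
        return k2
    if h > h_u and abs(v) < v_u:   # high and slow: picture changes slowly
        return k1
    return current_fps             # otherwise keep the current frame rate
```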
  • determining the target frame rate of image transmission from the movable platform to the control terminal may include: if the offset rate is greater than the second preset rate threshold, setting the second frame rate as the target frame rate; if the offset rate is less than or equal to the second preset rate threshold, determining whether the height is less than the first height threshold and whether the translational motion speed is greater than the first speed threshold; if the height is less than the first height threshold and the translational movement speed is greater than the first speed threshold, setting the second frame rate as the target frame rate; if the height is greater than or equal to the first height threshold, or the translational movement speed is less than or equal to the first speed threshold, judging whether the angular velocity is greater than the angular velocity threshold; if the angular velocity is greater than the angular velocity threshold, setting the second frame rate as the target frame rate; if the angular velocity is less than or equal to the angular velocity threshold, determining whether the offset rate is less than the first preset rate threshold; if the offset rate is less than the first preset rate threshold, setting the first frame rate as the target frame rate; if the offset rate is greater than or equal to the first preset rate threshold, determining whether the height is greater than the second height threshold and whether the translational motion speed is less than the second speed threshold; if the height is greater than the second height threshold and the translational motion speed is less than the second speed threshold, setting the first frame rate as the target frame rate; if the height is less than or equal to the second height threshold, or the translational motion speed is greater than or equal to the second speed threshold, setting the current frame rate as the target frame rate.
  • the target frame rate can be determined by combining the offset rate of the target pixel and the motion state information of the drone. Specifically, after obtaining the offset rate of the target pixel and motion state information such as the drone's altitude, translational motion speed, and angular velocity in the yaw direction, the frame rate control decision corresponding to the offset rate and motion state information can be obtained, and the target frame rate of image transmission corresponding to the offset rate and motion state information is determined according to the frame rate control decision.
  • the frame rate control decision can be a mapping relationship between different combinations of offset rate and motion state information on the one hand and each frame rate on the other; by querying the mapping relationship, the target frame rate of image transmission corresponding to the current offset rate and motion state information can be determined.
  • alternatively, the frame rate control decision may be a calculated conversion relationship between the offset rate and motion state information and the frame rate, and the corresponding target frame rate of image transmission may be calculated from the offset rate and motion state information through this conversion relationship.
  • the frame rate control decision can also be flexibly set according to actual needs, and the specific content is not limited here.
  • after obtaining the offset rate of the target pixel and the motion state information of the drone's altitude, translational motion speed, and angular velocity in the yaw direction, it can be determined whether the offset rate is greater than the second preset rate threshold. If so, the drone is flying fast and the content of the video picture changes quickly; to ensure the smoothness of subsequent video playback and improve the effect of aerial image transmission, a higher frame rate can be used, that is, the second frame rate is set as the target frame rate.
  • if the offset rate is less than or equal to the second preset rate threshold, it can be further determined whether the height is less than the first height threshold and whether the translational movement speed is greater than the first speed threshold; if both hold, the content of the video picture changes quickly, and the second frame rate is set as the target frame rate.
  • if the height is greater than or equal to the first height threshold, or the translational motion speed is less than or equal to the first speed threshold, it can be further judged whether the angular velocity is greater than the angular velocity threshold; if so, the content of the video picture also changes quickly, and the second frame rate is set as the target frame rate.
  • if the angular velocity is less than or equal to the angular velocity threshold, it can be further determined whether the offset rate is less than the first preset rate threshold; if so, the content of the video picture changes slowly, and to ensure the clarity of each captured frame and improve the image quality of aerial image transmission, the first frame rate is set as the target frame rate.
  • if the offset rate is greater than or equal to the first preset rate threshold, it can be further judged whether the height is greater than the second height threshold and whether the translational motion speed is less than the second speed threshold; if both hold, the content of the video picture changes slowly, and the first frame rate is set as the target frame rate.
  • if the height is less than or equal to the second height threshold, or the translational movement speed is greater than or equal to the second speed threshold, the flight state of the drone has not changed much and the frame rate does not need to be adjusted; the current frame rate can be maintained, that is, the current frame rate is set as the target frame rate.
  • in other embodiments, the target frame rate can be determined based only on the offset rate and part of the motion state information, and the judgment order of the offset rate and the various pieces of motion state information can be flexibly adjusted according to actual needs; the specific content is not limited here.
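Combining both signals, the six-step decision described above can be sketched as follows. Thresholds, frame rates, and the function name are again illustrative assumptions; the check order mirrors the text:

```python
def fps_combined(p, h, v, w_y, current_fps,
                 p_low=0.5, p_high=2.0,   # first / second preset rate thresholds (illustrative)
                 h_l=10.0, h_u=50.0,      # first / second height thresholds, m (illustrative)
                 v_l=2.0, v_u=5.0,        # first / second speed thresholds, m/s (illustrative)
                 w_th=0.5,                # yaw angular-velocity threshold, rad/s (illustrative)
                 k1=30, k2=60):           # first / second frame rates, k1 < k2
    """Pick the target frame rate from the offset rate plus the motion state."""
    if p > p_high:                  # fast picture change by the visual measure
        return k2
    if h < h_l and abs(v) > v_l:    # low altitude and fast translation
        return k2
    if abs(w_y) > w_th:             # fast yaw
        return k2
    if p < p_low:                   # slow picture change by the visual measure
        return k1
    if h > h_u and abs(v) < v_u:    # high altitude and slow translation
        return k1
    return current_fps              # flight state roughly unchanged: keep current
```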
  • the image transmission method may further include: generating video code stream data based on the target frame rate; and sending the video code stream data to the control terminal.
  • the encoding module can be used to encode the image collected by the image capture device based on the target frame rate to generate video code stream data; the encoding method can be flexibly set according to actual needs.
  • the encoding method can include encoding such as H.264 or H.265.
  • the video stream data can be sent to the control terminal connected to the drone through the wireless communication module.
  • after the control terminal receives the video code stream data, it can decode the video code stream data through the video decoding module to obtain an image.
  • the image can include multiple frames.
  • the multiple frame images can generate video data.
  • the video data can be YUV video.
  • YUV is divided into three components: "Y" represents luminance (Luminance or Luma), that is, the gray value; "U" and "V" represent chrominance (Chrominance or Chroma), which describe the color and saturation of the image and specify the color of a pixel.
  • the control terminal can display the decoded image through the display. Since the control terminal decodes one frame each time it receives one frame of video code stream data, no frame rate information is required, so the drone does not need to notify the control terminal of the frame rate currently in use; this ensures the reliability of the dynamic adaptive frame rate implementation.
  • the image capture device is a first image capture device
  • the movable platform further includes a second image capture device.
  • generating video code stream data may include: encoding, based on the target frame rate, the image collected by the second image capture device to generate the video code stream data.
  • the image acquisition device that collects the image used to determine the target frame rate may be the same or different from the image acquisition device that collects the image for transmission to the control terminal.
  • for example, the image used to determine the target frame rate may be captured by the first image capture device, while in the process of generating video code stream data, the image can be captured by the second image capture device; the image captured by the second image capture device is encoded based on the target frame rate to generate video code stream data, and the video code stream data is sent to the control terminal.
  • the first image acquisition device is an image acquisition device with a lower resolution
  • the second image acquisition device is an image acquisition device with a higher resolution.
  • the first image acquisition device may be, for example, a binocular camera installed on the drone
  • the second image acquisition device may be, for example, a main camera mounted on the pan/tilt of the drone.
  • the image capture device that captures the image used to determine the target frame rate may also be the same as the image capture device that captures the image to be transmitted to the control terminal; for example, both may be the main camera mounted on the drone's gimbal.
  • the embodiment of the application can obtain the movement characteristics of the target pixel in the image collected by the image capture device and/or the movement state information of the movable platform, and determine, according to the movement characteristics of the target pixel in the image and/or the movement state information of the movable platform, the target frame rate for image transmission from the movable platform to the control terminal.
  • This solution can automatically determine the target frame rate without the user's manual selection, which improves the timeliness and accuracy of determining the target frame rate.
  • FIG. 5 is a schematic block diagram of a movable platform provided by an embodiment of the present application.
  • the mobile platform 11 may include a processor 111 and a memory 112, and the processor 111 and the memory 112 are connected by a bus, such as an I2C (Inter-integrated Circuit) bus.
  • the processor 111 may be a micro-controller unit (MCU), a central processing unit (Central Processing Unit, CPU), a digital signal processor (Digital Signal Processor, DSP), or the like.
  • the memory 112 may be a Flash chip, a read-only memory (ROM, Read-Only Memory) disk, an optical disk, a U disk, or a mobile hard disk, etc., and may be used to store computer programs.
  • the movable platform 11 may also include an image capture device 113, etc.
  • the image capture device 113 is used to capture images
  • the movable platform 11 may also include a pan/tilt for carrying the image capture device 113, which can drive the image capture device 113 to an appropriate position to accurately collect the required images.
  • the type of the movable platform 11 can be flexibly set according to actual needs.
  • the movable platform 11 can be a mobile terminal, a drone, a robot, or a pan-tilt camera.
  • the pan-tilt camera can include a camera and a pan-tilt, etc.
  • the camera is used to collect images
  • the pan-tilt is used to carry the camera to drive the camera to a suitable position and accurately collect the required images.
  • the pan-tilt camera can be mounted on the drone.
  • the pan/tilt camera can collect images and obtain the movement characteristics of the target pixels in the collected images, and/or send an information acquisition request to the drone and receive the movement state information returned by the drone based on the information acquisition request.
  • the target frame rate for image transmission from the drone to the control terminal can be determined based on the movement characteristics of the target pixel in the image and/or the movement status information of the drone, and the target frame rate can be sent to the drone .
  • the drone can encode the image based on the target frame rate to generate video stream data, and send the video stream data to the control terminal.
  • the processor 111 is configured to call a computer program stored in the memory 112 and, when the computer program is executed, implement the image transmission method provided in the embodiments of the present application.
  • the processor 111 after determining the target frame rate for image transmission of the movable platform to the control terminal, the processor 111 further executes: generating video stream data based on the target frame rate; and sending the video stream data to the control terminal.
  • the image capture device is a first image capture device
  • the movable platform further includes a second image capture device.
  • when generating video code stream data, the processor 111 executes: encoding, based on the target frame rate, the image collected by the second image capture device to generate video code stream data.
  • when acquiring the movement characteristics of the target pixel in the image acquired by the image capture device, the processor 111 further executes: acquiring a first image and a second image from the multiple frames of images acquired by the image capture device; acquiring the pixel coordinates of the target pixel of the first image and the pixel coordinates of the target pixel of the second image, where the target pixel of the first image corresponds to the target pixel of the second image; and determining the movement characteristics of the target pixel according to the pixel coordinates of the target pixel of the first image and the pixel coordinates of the target pixel of the second image.
  • when determining the movement characteristics of the target pixel according to the pixel coordinates of the target pixel of the first image and the pixel coordinates of the target pixel of the second image, the processor 111 further executes: determining the relative position difference between the pixel coordinates of the target pixel of the first image and the pixel coordinates of the target pixel of the second image; determining the offset rate of the target pixel according to the relative position difference and the acquisition time interval of the first image and the second image; and determining the movement characteristics of the target pixel according to the offset rate.
  • the target pixel includes a first target pixel corresponding to the background and/or a second target pixel corresponding to a moving target; the first target pixel of the first image corresponds to the first target pixel of the second image, and the second target pixel of the first image corresponds to the second target pixel of the second image.
  • when determining the movement characteristics of the target pixel according to the offset rate, the processor 111 further executes: computing a weighted average of the offset rate of the first target pixel and the offset rate of the second target pixel, and using the weighted-average offset rate to characterize the movement characteristics of the target pixel.
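The weighted averaging of the background and moving-target offset rates can be sketched as below. The equal 0.5/0.5 default weighting is an assumption for illustration; the embodiment does not fix particular weights.

```python
def combined_offset_rate(background_rate, target_rate, background_weight=0.5):
    """Weighted average of the background (first target pixel) offset rate and
    the moving-target (second target pixel) offset rate. The default equal
    weighting is assumed, not specified by the embodiment."""
    target_weight = 1.0 - background_weight
    return background_rate * background_weight + target_rate * target_weight

# Example: background drifting at 10 px/s, moving target at 30 px/s.
rate = combined_offset_rate(10.0, 30.0)
```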
  • the processor 111 further executes: recognizing the background and/or moving targets in the first image and the second image based on a pre-trained computation model.
  • the moving target includes the one or more candidate moving targets with the largest corresponding image area among the multiple candidate moving targets; or, the moving target includes the one or more candidate moving targets with the largest movement amplitude among the multiple candidate moving targets.
  • the movement characteristics of the target pixel are characterized by the offset rate of the target pixel.
  • the processor 111 further executes: when the offset rate is less than a first preset rate threshold, setting the first frame rate as the target frame rate; or, when the offset rate is greater than a second preset rate threshold, setting the second frame rate as the target frame rate; or, when the offset rate is greater than or equal to the first preset rate threshold and less than or equal to the second preset rate threshold, setting the current frame rate as the target frame rate; where the first preset rate threshold is less than the second preset rate threshold, and the first frame rate is less than the second frame rate.
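The offset-rate-only decision above amounts to a three-way threshold test. The sketch below uses assumed numeric thresholds and frame rates; the embodiment only constrains their ordering (first threshold < second threshold, first frame rate < second frame rate).

```python
def frame_rate_from_offset(offset_rate, current_frame_rate,
                           first_rate_threshold=5.0,    # assumed, px/s
                           second_rate_threshold=20.0,  # assumed, px/s
                           first_frame_rate=15,         # assumed, fps
                           second_frame_rate=60):       # assumed, fps
    """Pick the target frame rate from the offset rate alone."""
    if offset_rate < first_rate_threshold:
        return first_frame_rate   # slow image motion: lower frame rate suffices
    if offset_rate > second_rate_threshold:
        return second_frame_rate  # fast image motion: raise the frame rate
    return current_frame_rate     # in between: keep the current frame rate
```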
  • when acquiring the movement state information of the movable platform, the processor 111 further executes: acquiring the height, translational movement speed, and/or angular velocity in the yaw direction of the movable platform.
  • when acquiring the height, translational movement speed, and/or angular velocity in the yaw direction of the movable platform, the processor 111 further executes: obtaining the height, translational movement speed, and/or angular velocity in the yaw direction of the movable platform through an inertial measurement unit installed on the movable platform.
  • when determining the target frame rate for image transmission from the movable platform to the control terminal according to the movement state information of the movable platform, the processor 111 further executes: if the height is less than the first height threshold and the translational movement speed is greater than the first speed threshold, setting the second frame rate as the target frame rate; if the height is greater than or equal to the first height threshold, or the translational movement speed is less than or equal to the first speed threshold, judging whether the angular velocity is greater than the angular velocity threshold; if the angular velocity is greater than the angular velocity threshold, setting the second frame rate as the target frame rate; if the angular velocity is less than or equal to the angular velocity threshold, judging whether the height is greater than the second height threshold and the translational movement speed is less than the second speed threshold; if the height is greater than the second height threshold and the translational movement speed is less than the second speed threshold, setting the first frame rate as the target frame rate.
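The motion-state-only decision above can be sketched as a cascade of checks in the stated order. All numeric threshold and frame-rate values below are assumptions, as is the final fallback of keeping the current frame rate when no branch fires (the text above does not spell out that remaining case).

```python
def frame_rate_from_motion_state(height, speed, yaw_rate, current_frame_rate,
                                 first_height=5.0,    # assumed, m
                                 second_height=50.0,  # assumed, m
                                 first_speed=2.0,     # assumed, m/s
                                 second_speed=10.0,   # assumed, m/s
                                 yaw_threshold=0.5,   # assumed, rad/s
                                 first_frame_rate=15, second_frame_rate=60):
    """Pick the target frame rate from the platform's motion state, in the
    order described by the embodiment: low-and-fast, then yaw, then high-and-slow."""
    if height < first_height and speed > first_speed:
        return second_frame_rate  # low and fast: scene content changes quickly
    if yaw_rate > yaw_threshold:
        return second_frame_rate  # fast yaw rotation also changes the view quickly
    if height > second_height and speed < second_speed:
        return first_frame_rate   # high and slow: scene changes little
    return current_frame_rate     # assumed fallback: keep the current frame rate
```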
  • when determining the target frame rate for image transmission from the movable platform to the control terminal according to the movement characteristics of the target pixel in the image and the movement state information of the movable platform, the processor 111 further executes: if the offset rate is greater than the second preset rate threshold, setting the second frame rate as the target frame rate; if the offset rate is less than or equal to the second preset rate threshold, judging whether the height is less than the first height threshold and whether the translational movement speed is greater than the first speed threshold; if the height is less than the first height threshold and the translational movement speed is greater than the first speed threshold, setting the second frame rate as the target frame rate; if the height is greater than or equal to the first height threshold, or the translational movement speed is less than or equal to the first speed threshold, judging whether the angular velocity is greater than the angular velocity threshold; if the angular velocity is greater than the angular velocity threshold, setting the second frame rate as the target frame rate; if the angular velocity is less than or equal to the angular velocity threshold, judging whether the offset rate is less than the first preset rate threshold; if the offset rate is less than the first preset rate threshold, setting the first frame rate as the target frame rate; if the offset rate is greater than or equal to the first preset rate threshold, judging whether the height is greater than the second height threshold and whether the translational movement speed is less than the second speed threshold; if the height is greater than the second height threshold and the translational movement speed is less than the second speed threshold, setting the first frame rate as the target frame rate; if the height is less than or equal to the second height threshold, or the translational movement speed is greater than or equal to the second speed threshold, setting the current frame rate as the target frame rate; where the first height threshold is less than the second height threshold, the first speed threshold is less than the second speed threshold, and the first frame rate is less than the second frame rate.
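The combined decision, which consults both the image offset rate and the platform motion state in the stated order, can be sketched as below. All numeric threshold and frame-rate values are assumed for illustration; the embodiment only requires the stated orderings between the paired thresholds and frame rates.

```python
def target_frame_rate(offset_rate, height, speed, yaw_rate, current_frame_rate,
                      first_rate_threshold=5.0,    # assumed, px/s
                      second_rate_threshold=20.0,  # assumed, px/s
                      first_height=5.0,            # assumed, m
                      second_height=50.0,          # assumed, m
                      first_speed=2.0,             # assumed, m/s
                      second_speed=10.0,           # assumed, m/s
                      yaw_threshold=0.5,           # assumed, rad/s
                      first_frame_rate=15, second_frame_rate=60):
    """Combined frame-rate decision: fast-motion checks first (raise the rate),
    then slow-motion checks (lower the rate), else keep the current rate."""
    if offset_rate > second_rate_threshold:
        return second_frame_rate  # image content already moving fast
    if height < first_height and speed > first_speed:
        return second_frame_rate  # low and fast flight
    if yaw_rate > yaw_threshold:
        return second_frame_rate  # fast yaw rotation
    if offset_rate < first_rate_threshold:
        return first_frame_rate   # image content nearly static
    if height > second_height and speed < second_speed:
        return first_frame_rate   # high and slow flight
    return current_frame_rate     # no strong signal either way
```

Raising the frame rate keeps fast-changing scenes smooth at the control terminal, while lowering it for static scenes saves transmission bandwidth.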
  • the embodiment of the present application also provides a computer program.
  • the computer program includes program instructions, and the processor executes the program instructions to implement the image transmission method provided in the embodiments of the present application.
  • the embodiments of the present application also provide a computer-readable storage medium; the computer-readable storage medium stores a computer program, the computer program includes program instructions, and a processor executes the program instructions to implement the image transmission method provided in the embodiments of the present application.
  • the computer-readable storage medium may be an internal storage unit of the movable platform described in any of the foregoing embodiments, such as a hard disk or memory of the movable platform.
  • the computer-readable storage medium may also be an external storage device of the movable platform, such as a plug-in hard disk equipped on the movable platform, a Smart Media Card (SMC), a Secure Digital (SD) card, or a Flash Card.
  • since the computer program stored in the computer-readable storage medium can execute any of the image transmission methods provided in the embodiments of this application, it can achieve the beneficial effects of any of those methods; for details, refer to the previous embodiments, which will not be repeated here.

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Physics & Mathematics (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • Electromagnetism (AREA)
  • General Physics & Mathematics (AREA)
  • Radar, Positioning & Navigation (AREA)
  • Remote Sensing (AREA)
  • Aviation & Aerospace Engineering (AREA)
  • Image Analysis (AREA)
  • Studio Devices (AREA)

Abstract

The present invention relates to an image transmission method, a movable platform, and a computer-readable storage medium. The method comprises the steps of: acquiring movement characteristics of a target pixel in an image captured by an image capture device, and/or acquiring movement state information of a movable platform (S101); and determining, according to the movement characteristics of the target pixel in the image and/or the movement state information of the movable platform, a target frame rate for image transmission from the movable platform to a control terminal (S102). The timeliness and accuracy of determining the target frame rate are improved.
PCT/CN2020/093035 2020-05-28 2020-05-28 Procédé de transmission d'image, plateforme mobile et support de stockage lisible par ordinateur WO2021237616A1 (fr)

Priority Applications (2)

Application Number Priority Date Filing Date Title
PCT/CN2020/093035 WO2021237616A1 (fr) 2020-05-28 2020-05-28 Procédé de transmission d'image, plateforme mobile et support de stockage lisible par ordinateur
CN202080005966.8A CN113056904A (zh) 2020-05-28 2020-05-28 图像传输方法、可移动平台及计算机可读存储介质

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
PCT/CN2020/093035 WO2021237616A1 (fr) 2020-05-28 2020-05-28 Procédé de transmission d'image, plateforme mobile et support de stockage lisible par ordinateur

Publications (1)

Publication Number Publication Date
WO2021237616A1 true WO2021237616A1 (fr) 2021-12-02

Family

ID=76509772

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2020/093035 WO2021237616A1 (fr) 2020-05-28 2020-05-28 Procédé de transmission d'image, plateforme mobile et support de stockage lisible par ordinateur

Country Status (2)

Country Link
CN (1) CN113056904A (fr)
WO (1) WO2021237616A1 (fr)

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115152200A (zh) * 2022-05-23 2022-10-04 广东逸动科技有限公司 摄像头的帧率调节方法、装置、电子设备及存储介质
CN117478929A (zh) * 2023-12-28 2024-01-30 昆明中经网络有限公司 一种基于ai大模型的新媒体精品影像处理系统
CN118069894A (zh) * 2024-04-12 2024-05-24 乾健科技有限公司 一种大数据存储管理方法及系统

Families Citing this family (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114268742B (zh) * 2022-03-01 2022-05-24 北京瞭望神州科技有限公司 一种天眼芯片处理装置
CN114913471B (zh) * 2022-07-18 2023-09-12 深圳比特微电子科技有限公司 一种图像处理方法、装置和可读存储介质
CN116112475A (zh) * 2022-11-18 2023-05-12 深圳元戎启行科技有限公司 一种用于自动驾驶远程接管的图像传输方法及车载终端
CN116804882B (zh) * 2023-06-14 2023-12-29 黑龙江大学 一种基于流数据处理的智能无人机控制系统及其无人机
CN117808324B (zh) * 2024-02-27 2024-06-04 西安麦莎科技有限公司 一种无人机视觉协同的建筑进度评估方法

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20180359413A1 (en) * 2016-01-29 2018-12-13 SZ DJI Technology Co., Ltd. Method, system, device for video data transmission and photographing apparatus
WO2019051649A1 (fr) * 2017-09-12 2019-03-21 深圳市大疆创新科技有限公司 Procédé et dispositif de transmission d'image, plate-forme mobile, dispositif de surveillance, et système
CN109600579A (zh) * 2018-10-29 2019-04-09 歌尔股份有限公司 视频无线传输方法、装置、系统和设备
CN110012267A (zh) * 2019-04-02 2019-07-12 深圳市即构科技有限公司 无人机控制方法及音视频数据传输方法
CN110291774A (zh) * 2018-03-16 2019-09-27 深圳市大疆创新科技有限公司 一种图像处理方法、设备、系统及存储介质

Family Cites Families (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102377730A (zh) * 2010-08-11 2012-03-14 中国电信股份有限公司 音视频信号的处理方法及移动终端
CN110807392B (zh) * 2019-10-25 2022-09-06 浙江大华技术股份有限公司 编码控制方法以及相关装置


Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115152200A (zh) * 2022-05-23 2022-10-04 广东逸动科技有限公司 摄像头的帧率调节方法、装置、电子设备及存储介质
CN117478929A (zh) * 2023-12-28 2024-01-30 昆明中经网络有限公司 一种基于ai大模型的新媒体精品影像处理系统
CN117478929B (zh) * 2023-12-28 2024-03-08 昆明中经网络有限公司 一种基于ai大模型的新媒体精品影像处理系统
CN118069894A (zh) * 2024-04-12 2024-05-24 乾健科技有限公司 一种大数据存储管理方法及系统

Also Published As

Publication number Publication date
CN113056904A (zh) 2021-06-29

Similar Documents

Publication Publication Date Title
WO2021237616A1 (fr) Procédé de transmission d'image, plateforme mobile et support de stockage lisible par ordinateur
CN108139799B (zh) 基于用户的兴趣区(roi)处理图像数据的系统和方法
US20190246104A1 (en) Panoramic video processing method, device and system
US11258949B1 (en) Electronic image stabilization to improve video analytics accuracy
WO2018214078A1 (fr) Procédé et dispositif de commande de photographie
WO2022141418A1 (fr) Procédé et dispositif de traitement d'image
CN108363946B (zh) 基于无人机的人脸跟踪系统及方法
WO2022141376A1 (fr) Procédé d'estimation de posture et appareil associé
CN107509031B (zh) 图像处理方法、装置、移动终端及计算机可读存储介质
WO2018133589A1 (fr) Dispositif, procédé de photographie aérienne, et véhicule aérien sans pilote
WO2022141445A1 (fr) Procédé et dispositif de traitement d'image
EP2590396A1 (fr) Système de traitement d'informations, dispositif de traitement d'informations et procédé de traitement d'informations
WO2017045326A1 (fr) Procédé de traitement de photographie pour un véhicule aérien sans équipage
WO2020057609A1 (fr) Procédé et appareil de transmission d'image, terminal d'envoi d'image et système de transmission d'image d'aéronef
WO2022141333A1 (fr) Procédé et appareil de traitement d'images
WO2022141351A1 (fr) Puce de capteur de vision, procédé de fonctionnement de puce de capteur de vision, et dispositif
CN109685709A (zh) 一种智能机器人的照明控制方法及装置
JP2023502552A (ja) ウェアラブルデバイス、インテリジェントガイド方法及び装置、ガイドシステム、記憶媒体
CN111880711B (zh) 显示控制方法、装置、电子设备及存储介质
CN112640419B (zh) 跟随方法、可移动平台、设备和存储介质
US20120092519A1 (en) Gesture recognition using chroma-keying
CN109949381B (zh) 图像处理方法、装置、图像处理芯片、摄像组件及飞行器
WO2022089341A1 (fr) Procédé de traitement d'images et appareil associé
WO2020019130A1 (fr) Procédé d'estimation de mouvement et dispositif électronique
WO2022082440A1 (fr) Procédé, appareil et système pour déterminer une stratégie de suivi de cible et dispositif et support de stockage

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 20938074

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 20938074

Country of ref document: EP

Kind code of ref document: A1