WO2020047691A1 - Method and device for generating dynamic image, mobile platform, and storage medium - Google Patents

Method and device for generating dynamic image, mobile platform, and storage medium Download PDF

Info

Publication number
WO2020047691A1
Authority
WO
WIPO (PCT)
Prior art keywords
video data
dynamic image
frames
picture
still pictures
Prior art date
Application number
PCT/CN2018/103737
Other languages
English (en)
French (fr)
Inventor
汪滔
Original Assignee
深圳市大疆创新科技有限公司
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 深圳市大疆创新科技有限公司 filed Critical 深圳市大疆创新科技有限公司
Priority to CN201880040364.9A priority Critical patent/CN111034187A/zh
Priority to PCT/CN2018/103737 priority patent/WO2020047691A1/zh
Publication of WO2020047691A1 publication Critical patent/WO2020047691A1/zh
Priority to US17/190,364 priority patent/US20210195134A1/en

Links

Images

Classifications

    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N19/00Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
    • H04N19/10Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding
    • H04N19/102Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the element, parameter or selection affected or controlled by the adaptive coding
    • H04N19/132Sampling, masking or truncation of coding units, e.g. adaptive resampling, frame skipping, frame interpolation or high-frequency transform coefficient masking
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N5/00Details of television systems
    • H04N5/44Receiver circuitry for the reception of television signals according to analogue transmission standards
    • H04N5/4448Receiver circuitry for the reception of television signals according to analogue transmission standards for frame-grabbing
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T3/00Geometric image transformation in the plane of the image
    • G06T3/40Scaling the whole image or part thereof
    • GPHYSICS
    • G11INFORMATION STORAGE
    • G11BINFORMATION STORAGE BASED ON RELATIVE MOVEMENT BETWEEN RECORD CARRIER AND TRANSDUCER
    • G11B27/00Editing; Indexing; Addressing; Timing or synchronising; Monitoring; Measuring tape travel
    • G11B27/02Editing, e.g. varying the order of information signals recorded on, or reproduced from, record carriers
    • G11B27/031Electronic editing of digitised analogue information signals, e.g. audio or video signals
    • GPHYSICS
    • G11INFORMATION STORAGE
    • G11BINFORMATION STORAGE BASED ON RELATIVE MOVEMENT BETWEEN RECORD CARRIER AND TRANSDUCER
    • G11B27/00Editing; Indexing; Addressing; Timing or synchronising; Monitoring; Measuring tape travel
    • G11B27/02Editing, e.g. varying the order of information signals recorded on, or reproduced from, record carriers
    • G11B27/031Electronic editing of digitised analogue information signals, e.g. audio or video signals
    • G11B27/034Electronic editing of digitised analogue information signals, e.g. audio or video signals on discs
    • GPHYSICS
    • G11INFORMATION STORAGE
    • G11BINFORMATION STORAGE BASED ON RELATIVE MOVEMENT BETWEEN RECORD CARRIER AND TRANSDUCER
    • G11B27/00Editing; Indexing; Addressing; Timing or synchronising; Monitoring; Measuring tape travel
    • G11B27/10Indexing; Addressing; Timing or synchronising; Measuring tape travel
    • G11B27/34Indicating arrangements 
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N19/00Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
    • H04N19/10Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding
    • H04N19/134Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the element, parameter or criterion affecting or controlling the adaptive coding
    • H04N19/162User input
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N19/00Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
    • H04N19/10Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding
    • H04N19/169Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the coding unit, i.e. the structural portion or semantic portion of the video signal being the object or the subject of the adaptive coding
    • H04N19/17Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the coding unit, i.e. the structural portion or semantic portion of the video signal being the object or the subject of the adaptive coding the unit being an image region, e.g. an object
    • H04N19/172Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the coding unit, i.e. the structural portion or semantic portion of the video signal being the object or the subject of the adaptive coding the unit being an image region, e.g. an object the region being a picture, frame or field
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N19/00Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
    • H04N19/10Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding
    • H04N19/169Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the coding unit, i.e. the structural portion or semantic portion of the video signal being the object or the subject of the adaptive coding
    • H04N19/179Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the coding unit, i.e. the structural portion or semantic portion of the video signal being the object or the subject of the adaptive coding the unit being a scene or a shot
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N5/00Details of television systems
    • H04N5/76Television signal recording
    • H04N5/765Interface circuits between an apparatus for recording and another apparatus
    • H04N5/77Interface circuits between an apparatus for recording and another apparatus between a recording apparatus and a television camera
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N5/00Details of television systems
    • H04N5/76Television signal recording
    • H04N5/765Interface circuits between an apparatus for recording and another apparatus
    • H04N5/77Interface circuits between an apparatus for recording and another apparatus between a recording apparatus and a television camera
    • H04N5/772Interface circuits between an apparatus for recording and another apparatus between a recording apparatus and a television camera the recording apparatus and the television camera being placed in the same enclosure
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N5/00Details of television systems
    • H04N5/76Television signal recording
    • H04N5/91Television signal processing therefor
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N7/00Television systems
    • H04N7/18Closed-circuit television [CCTV] systems, i.e. systems in which the video signal is not broadcast
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N7/00Television systems
    • H04N7/01Conversion of standards, e.g. involving analogue television standards or digital television standards processed at pixel level
    • H04N7/0102Conversion of standards, e.g. involving analogue television standards or digital television standards processed at pixel level involving the resampling of the incoming video signal

Definitions

  • The invention relates to the technical field of image processing, and in particular to a method, a device, a movable platform, and a storage medium for generating a dynamic image.
  • The Graphics Interchange Format (GIF) is a bitmap graphics file format that reproduces true-color images with 8-bit color (that is, 256 colors). A GIF is essentially a two-dimensional, silent, pixel dot-matrix animation in which frames are played in sequence; it offers a high compression ratio, cannot store images with more than 256 colors, and is one of the image transmission formats most widely used on the World Wide Web.
  • The invention provides a method, a device, a movable platform, and a storage medium for generating a dynamic image, which are used to solve the problems in the prior art that video image data is subject to many restrictions when used on the Internet, has poor compatibility, and has limited reach, all of which reduce the convenience and flexibility with which users can use image data.
  • A first aspect of the present invention provides a method for generating a dynamic image, including:
  • acquiring video data, the video data being output by at least one photographing device configured on a movable platform; and
  • performing image conversion processing on the video data to generate a dynamic image corresponding to at least a part of the video data.
  • A second aspect of the present invention provides a device for generating a dynamic image, including:
  • a memory for storing a computer program; and
  • a processor for running the computer program stored in the memory to implement: acquiring video data, the video data being output by at least one photographing device configured on a movable platform; and performing image conversion processing on the video data to generate a dynamic image corresponding to at least a part of the video data.
  • A third aspect of the present invention provides a device for generating a dynamic image, including:
  • an acquisition module configured to acquire video data output by at least one photographing device configured on a movable platform; and
  • a generating module configured to perform image conversion processing on the video data to generate a dynamic image corresponding to at least a part of the video data.
  • A fourth aspect of the present invention provides a movable platform, including:
  • a photographing device configured to output video data; and
  • the generating device of the second aspect, the generating device being configured to receive the video data output by the photographing device.
  • A fifth aspect of the present invention provides a computer-readable storage medium. The computer-readable storage medium stores program instructions, and the program instructions are used to implement the method for generating a dynamic image described in the first aspect.
  • The method, device, movable platform, and storage medium provided by the present invention generate a dynamic image corresponding to at least a part of the video data by performing image conversion processing on the video data. This solves the problem in the prior art that the use of video image data on the Internet is subject to many restrictions, improves the compatibility of image data, and expands its reach, thereby ensuring the convenience and flexibility with which users can use image data, effectively improving the practicability of the method, and facilitating market promotion and application.
  • FIG. 1 is a schematic flowchart of a method for generating a dynamic image according to an embodiment of the present invention;
  • FIG. 2 is a schematic flowchart of performing image conversion processing on the video data to generate a dynamic image corresponding to at least a part of the video data according to an embodiment of the present invention;
  • FIG. 3 is a schematic flowchart of acquiring at least two frames of still pictures in the still picture group according to an embodiment of the present invention;
  • FIG. 4 is a first schematic flowchart of encoding the at least two frames of still pictures to generate the dynamic image according to an embodiment of the present invention;
  • FIG. 5 is a schematic flowchart of encoding the at least two frames of still pictures according to the picture size and the target size to generate the dynamic image according to an embodiment of the present invention;
  • FIG. 6 is a second schematic flowchart of encoding the at least two frames of still pictures to generate the dynamic image according to an embodiment of the present invention;
  • FIG. 7 is a schematic flowchart of another method for generating a dynamic image according to an embodiment of the present invention;
  • FIG. 8 is a schematic flowchart of interception processing of the video data according to an embodiment of the present invention;
  • FIG. 9 is a first schematic structural diagram of a device for generating a dynamic image according to an embodiment of the present invention;
  • FIG. 10 is a second schematic structural diagram of a device for generating a dynamic image according to an embodiment of the present invention;
  • FIG. 11 is a schematic structural diagram of a movable platform according to an embodiment of the present invention.
  • FIG. 1 is a schematic flowchart of a method for generating a dynamic image according to an embodiment of the present invention. Referring to FIG. 1, this embodiment provides a method for generating a dynamic image, including:
  • S1: Acquire video data, the video data being output by at least one photographing device configured on a movable platform.
  • The video data may be video data in AVI, wma, MP4, flash, or other formats generated by compressing multiple image frames, and the movable platform may include at least one of the following: an unmanned aerial vehicle, an unmanned ship, and an unmanned vehicle. In some cases, the movable platform may also include a device that is moved by an external force, such as a handheld device, for example a handheld gimbal.
  • One or more photographing devices are configured on the movable platform. When the movable platform carries multiple photographing devices, their shooting directions may differ, so that each photographing device outputs video data covering its own viewing-angle range; alternatively, the shooting directions of the multiple photographing devices may be the same, so that they output video data of the same viewing-angle range.
  • S2: Perform image conversion processing on the video data to generate a dynamic image corresponding to at least a part of the video data.
  • The dynamic image may be a GIF image. After the video data is obtained, image conversion processing may be performed on it so that the video data is converted into a corresponding dynamic image. When the stored video data is large, image conversion processing may be performed on only part of the video data in order to improve the quality and efficiency of dynamic image acquisition, yielding a dynamic image corresponding to that part of the video data; when the stored video data is small, and provided the user's needs can still be met, the entire video data may be converted to obtain a dynamic image corresponding to the whole of the video data.
  • The method for generating a dynamic image provided by this embodiment generates a dynamic image corresponding to at least a part of the video data by performing image conversion processing on the video data. It solves the problem in the prior art that the use of video image data on the Internet is subject to many restrictions, improves the compatibility of image data, and expands its reach, thereby ensuring the convenience and flexibility with which users can use image data, effectively improving the practicability of the method, and facilitating market promotion and application.
  • FIG. 2 is a schematic flowchart of performing image conversion processing on the video data to generate a dynamic image corresponding to at least a part of the video data according to an embodiment of the present invention; FIG. 3 is a schematic flowchart of acquiring at least two frames of still pictures in the still picture group according to an embodiment of the present invention. On the basis of the foregoing embodiment, and with continued reference to FIGS. 2-3, in this embodiment, performing image conversion processing on the video data to generate a dynamic image corresponding to at least a part of the video data may include:
  • S21: Convert the video data into a still picture group, the still picture group including multiple frames of still pictures corresponding to the video data.
  • Since video data is essentially a sequence of still pictures played continuously, the video data can be converted into a corresponding still picture group. In the specific conversion process, because the video data carries a sound attribute while the still picture group does not, the sound attribute may first be removed from the video data, after which the video data with the sound attribute removed is converted into the corresponding still picture group.
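The paragraph above describes decoding the video into a still picture group with the sound attribute dropped. A minimal illustrative sketch follows, using OpenCV in Python; the patent does not name any particular library, and the file name "input.mp4" is a placeholder. Reading frames this way inherently discards the audio track.

    import cv2

    def video_to_still_pictures(path):
        """Decode every frame of the video at `path` into an RGB still picture."""
        cap = cv2.VideoCapture(path)
        pictures = []
        while True:
            ok, frame = cap.read()            # frame is BGR, or invalid when the stream ends
            if not ok:
                break
            pictures.append(cv2.cvtColor(frame, cv2.COLOR_BGR2RGB))
        cap.release()
        return pictures

    still_picture_group = video_to_still_pictures("input.mp4")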
  • S22: Acquire at least two frames of still pictures in the still picture group.
  • Specifically, acquiring at least two frames of still pictures in the still picture group may include:
  • S221: Detect a picture selection operation input by a user;
  • S222: Acquire at least two frames of still pictures in the still picture group according to the picture selection operation.
  • In one implementable manner, the picture selection operation input by the user may be the frame numbers of still pictures entered directly by the user. For example, the picture selection operation may be: selecting frames 100-110, or selecting frame 100, frame 105, frame 120, and so on. In this case the user only needs to enter the frame numbers, and the corresponding at least two frames of still pictures can be determined in the still picture group from the entered frame numbers.
  • In another implementable manner, the user can select at least two frames of still pictures directly through a touch operation. For example, the user may browse all the pictures in the still picture group; when it is detected that the user has stayed on a certain picture longer than a preset time threshold, or that the user has performed a tap-to-select or press-to-select operation on a certain picture, it can be determined that the user has selected that frame. In this case, the picture selection operation input by the user is an operation performed manually by the user.
  • In yet another implementable manner, the picture selection operation input by the user may be time-period information. For example, the user enters the time period 50 s-55 s; at least two frames of still pictures corresponding to this time period can then be acquired from the still picture group, namely all the still pictures within 50 s-55 s. In this case, the picture selection operation input by the user is an operation of entering time-period information.
  • Similarly, the picture selection operation input by the user may also be an operation of entering time-point information. For example, the user enters the time points 30 s, 35 s, and 40 s; at least two frames of still pictures corresponding to these time points can then be acquired from the still picture group, namely the three frames of still pictures corresponding to 30 s, 35 s, and 40 s.
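As one way to make the picture selection operation concrete, the sketch below (an assumption-laden illustration, not the patent's implementation) maps frame numbers or time points onto the still picture group produced by the earlier sketch; the frame rate `fps` is assumed to be known from the video data, and the 1-based frame numbers follow the "frame 100, frame 105" convention used above.

    def select_by_frame_numbers(pictures, frame_numbers):
        # e.g. frame_numbers = range(100, 111) or [100, 105, 120]
        return [pictures[n - 1] for n in frame_numbers if 0 < n <= len(pictures)]

    def select_by_time_points(pictures, time_points_s, fps):
        # e.g. time_points_s = [30, 35, 40] picks the frames shown at 30 s, 35 s and 40 s
        indices = [int(t * fps) for t in time_points_s]
        return [pictures[i] for i in indices if i < len(pictures)]

    selected = select_by_frame_numbers(still_picture_group, [100, 105, 120])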
  • S23: Perform encoding processing on the at least two frames of still pictures to generate the dynamic image.
  • After the at least two frames of still pictures are obtained, they can be encoded to generate a dynamic image composed of the at least two frames of still pictures.
  • FIG. 4 is a first schematic flowchart of encoding the at least two frames of still pictures to generate the dynamic image according to an embodiment of the present invention; FIG. 5 is a schematic flowchart of encoding the at least two frames of still pictures according to the picture size and the target size to generate the dynamic image according to an embodiment of the present invention. On the basis of the foregoing embodiments, and with continued reference to FIGS. 4-5, for the specific implementation of generating the dynamic image, one implementable manner is that encoding the at least two frames of still pictures to generate the dynamic image may include:
  • S231: Acquire the picture size of the at least two frames of still pictures and the target size of the dynamic image input by the user.
  • The picture size of a still picture can be determined from the video data, while the target size of the dynamic image can be entered and set by the user; the target size may be the same as or different from the picture size.
  • S232: Encode the at least two frames of still pictures according to the picture size and the target size to generate the dynamic image.
  • One implementable manner of encoding the at least two frames of still pictures according to the picture size and the target size to generate the dynamic image is:
  • S2321: When the picture size is consistent with the target size, perform encoding and synthesis processing on the at least two frames of still pictures to generate the dynamic image.
  • After the picture size and the target size are obtained, they can be analyzed and compared. When the comparison result is that the picture size is consistent with the target size, the picture size already meets the user's needs, and a preset encoding algorithm can directly encode and synthesize the at least two frames of still pictures to generate the dynamic image.
  • Another implementable manner is that encoding the at least two frames of still pictures according to the picture size and the target size to generate the dynamic image may include:
  • S2322: When the picture size and the target size are inconsistent, scale the at least two frames of still pictures according to the target size.
  • When the comparison result is that the picture size is inconsistent with the target size, the picture size does not meet the user's needs, and the picture size of the still pictures can be changed and adjusted according to the target size. Specifically, when the picture size is larger than the target size, the still pictures are too large, so the target size can be taken as the standard size and the still pictures reduced to obtain still pictures of the target size; when the picture size is smaller than the target size, the still pictures are too small, so the target size can be taken as the standard size and the still pictures enlarged to obtain still pictures of the target size.
  • S2323: Perform encoding and synthesis processing on the at least two frames of still pictures after the scaling processing to generate the dynamic image.
  • The scaled still pictures can meet the user's size requirements for the dynamic image, so a preset encoding algorithm can be used to encode and synthesize the at least two frames of still pictures into the dynamic image.
  • Generating the dynamic image through the above process effectively ensures that the size of the dynamic image meets the user's needs, thereby improving the stability and reliability of the method.
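A hedged sketch of S2321-S2323 follows: scale each still picture to the user's target size only when the sizes differ, then encode the frames into a GIF. Pillow's GIF writer stands in for the unspecified "preset encoding algorithm"; the 100 ms frame duration and the output name "clip.gif" are arbitrary placeholder choices.

    from PIL import Image

    def encode_gif(pictures, target_size, out_path="clip.gif", frame_ms=100):
        frames = []
        for arr in pictures:                  # each picture is an RGB array
            img = Image.fromarray(arr)
            if img.size != target_size:       # enlarge or reduce only when needed
                img = img.resize(target_size)
            frames.append(img)
        frames[0].save(out_path, save_all=True, append_images=frames[1:],
                       duration=frame_ms, loop=0)

    encode_gif(selected, target_size=(640, 360))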
  • FIG. 6 is a second schematic flowchart of encoding the at least two frames of still pictures to generate the dynamic image according to an embodiment of the present invention. On the basis of the foregoing embodiments, and with continued reference to FIG. 6, another implementable manner of encoding the at least two frames of still pictures to generate the dynamic image may include:
  • S233: Acquire the picture display order of the at least two frames of still pictures in the video data.
  • Specifically, each frame of still picture obtained from the video data can correspond to a piece of time information, and the picture display order of the still pictures in the video data can be obtained from that time information. For example, the video data includes a first still picture, a second still picture, and a third still picture whose corresponding time information is 1 minute 20 seconds, 5 minutes 40 seconds, and 3 minutes 15 seconds, respectively; the picture display order can then be determined from the chronological order of the time information, namely first still picture - third still picture - second still picture. Of course, those skilled in the art can also obtain the picture display order in other ways, as long as the accuracy and reliability of the obtained order is ensured; details are not repeated here.
  • S234: Determine the target display order of the dynamic image according to the picture display order.
  • The picture display order may be the same as or different from the target display order. For example, when the picture display order is first still picture - third still picture - second still picture, the target display order of the dynamic image may be first still picture - third still picture - second still picture; alternatively, the target display order may be second still picture - third still picture - first still picture, in which case the target display order and the picture display order are the reverse of each other.
  • S235: Perform encoding and synthesis processing on the at least two frames of still pictures according to the target display order to generate the dynamic image.
  • After the target display order is obtained, the at least two frames of still pictures can be encoded and synthesized in that order to generate the dynamic image.
  • Generating the dynamic image through the above process effectively ensures that the display order of the dynamic image meets the user's needs, further improving the flexibility and reliability of the method.
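The display-order embodiment can be illustrated as follows; this sketch (an illustration only) sorts the selected pictures by their time information to recover the picture display order, optionally reverses it to obtain the target display order, and reuses the `encode_gif` helper from the previous sketch.

    def order_and_encode(pictures_with_time, target_size, reverse=False, out_path="ordered.gif"):
        # pictures_with_time: list of (timestamp_in_seconds, rgb_array) pairs
        ordered = [arr for _, arr in sorted(pictures_with_time, key=lambda p: p[0])]
        if reverse:                            # target order is the reverse of the picture order
            ordered = ordered[::-1]
        encode_gif(ordered, target_size=target_size, out_path=out_path)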
  • FIG. 7 is a schematic flowchart of another method for generating a dynamic image according to an embodiment of the present invention; FIG. 8 is a schematic flowchart of interception processing of the video data according to an embodiment of the present invention. On the basis of the foregoing embodiments, and with continued reference to FIGS. 7-8, in order to improve the practicability of the method, before image conversion processing is performed on the video data, the method further includes:
  • S001: Acquire the playback duration of the video data;
  • S002: When the playback duration exceeds a preset threshold duration, perform interception processing on the video data.
  • In general, the longer the playback duration of the video data, the more still pictures the video data contains. Therefore, when the video data is converted into a dynamic image, the playback duration of the video data can be obtained and compared with the threshold duration. When the playback duration exceeds the threshold duration, the video data corresponds to a large number of still pictures (in general, 1 s of video data corresponds to at least 18 frames of still pictures), so in order to ensure the efficiency and quality of the conversion the video data can be intercepted. Specifically, intercepting the video data may include:
  • S0021: Acquire a video interception operation input by the user, the video interception operation including at least one of the following: an interception time, an intercepted first-frame still picture, an intercepted last-frame still picture, and a number of intercepted still pictures.
  • S0022: Perform interception processing on the video data according to the video interception operation, and determine the video data after the interception processing.
  • For example, when the video interception operation input by the user is an interception time, say 3 minutes 50 seconds to 4 minutes, the video data can be intercepted over that interval, and after the interception the video data between 3 minutes 50 seconds and 4 minutes is obtained. When the video interception operation input by the user is an intercepted first-frame still picture and an intercepted last-frame still picture, say the 101st frame and the 120th frame, the video data containing the 101st through the 120th frames is obtained after the interception. When the video interception operation input by the user is a number of intercepted still pictures, say 50 pictures, the video data can be intercepted at random according to that number, so that video data containing 50 still pictures is obtained.
  • After the video data has been intercepted, dynamic image conversion processing can be performed on the intercepted video data, which effectively improves the efficiency and quality of dynamic image generation and further ensures the stability and reliability of the method.
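The duration check and interception step might look like the following sketch, again using OpenCV as an assumed decoder; the 10-second threshold is an arbitrary example value, since the patent only requires "a preset threshold duration", and the 3 min 50 s to 4 min interval mirrors the example above.

    import cv2

    def intercept_by_time(path, start_s, end_s, threshold_s=10.0):
        cap = cv2.VideoCapture(path)
        fps = cap.get(cv2.CAP_PROP_FPS)
        total_frames = cap.get(cv2.CAP_PROP_FRAME_COUNT)
        duration_s = total_frames / fps if fps else 0.0
        if duration_s <= threshold_s:          # short enough: no interception needed
            start_s, end_s = 0.0, duration_s
        cap.set(cv2.CAP_PROP_POS_FRAMES, int(start_s * fps))
        pictures = []
        while cap.get(cv2.CAP_PROP_POS_FRAMES) < int(end_s * fps):
            ok, frame = cap.read()
            if not ok:
                break
            pictures.append(cv2.cvtColor(frame, cv2.COLOR_BGR2RGB))
        cap.release()
        return pictures

    clip = intercept_by_time("input.mp4", start_s=230.0, end_s=240.0)   # 3 min 50 s to 4 min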
  • In a specific application, the movable platform may be an unmanned aerial vehicle. After the photographing device on the unmanned aerial vehicle records video, it can generate and output the corresponding video data, which can then be acquired over a wireless or wired communication link connected to the photographing device. After the video data is acquired, it can be converted into a GIF dynamic image of a selectable size.
  • In the process of converting the video data, the necessary adjustment operations, such as scaling, compression, and color approximation to 8-bit color, can be performed on the video data according to the target size of the selected GIF dynamic image. Further, the photographing device can also be controlled directly to shoot and output a dynamic image, in place of the conversion processing of the video data.
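The "color approximation to 8-bit color" adjustment mentioned above can be sketched with Pillow's quantizer; this is one possible realization, not the patent's own algorithm, and the visual quality depends on the palette method the library chooses.

    from PIL import Image

    def quantize_frames(rgb_arrays):
        # quantize() returns palette ("P" mode) images with at most 256 colours,
        # which is what the GIF container ultimately stores
        return [Image.fromarray(arr).quantize(colors=256) for arr in rgb_arrays]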
  • FIG. 9 is a first schematic structural diagram of a device for generating a dynamic image according to an embodiment of the present invention. Referring to FIG. 9, this embodiment provides a device for generating a dynamic image that can execute the method for generating a dynamic image described above. Specifically, the device may include:
  • a memory 301 configured to store a computer program; and
  • a processor 302 configured to run the computer program stored in the memory 301 to implement: acquiring video data, the video data being output by at least one photographing device configured on a movable platform; and performing image conversion processing on the video data to generate a dynamic image corresponding to at least a part of the video data.
  • The dynamic image may be a GIF image. The movable platform includes at least one of the following: an unmanned aerial vehicle, an unmanned ship, and an unmanned vehicle.
  • On the basis of the foregoing embodiment, and with continued reference to FIG. 9, when the processor 302 performs image conversion processing on the video data to generate a dynamic image corresponding to at least a part of the video data, the processor 302 is configured to: convert the video data into a still picture group, the still picture group including multiple frames of still pictures corresponding to the video data; acquire at least two frames of still pictures in the still picture group; and encode the at least two frames of still pictures to generate the dynamic image.
  • When the processor 302 acquires the at least two frames of still pictures in the still picture group, the processor 302 is configured to: detect a picture selection operation input by a user; and acquire at least two frames of still pictures in the still picture group according to the picture selection operation.
  • In addition, when the processor 302 encodes the at least two frames of still pictures to generate the dynamic image, one implementable manner is that the processor 302 is configured to: acquire the picture size of the at least two frames of still pictures and the target size of the dynamic image input by the user; and encode the at least two frames of still pictures according to the picture size and the target size to generate the dynamic image.
  • When the processor 302 encodes the at least two frames of still pictures according to the picture size and the target size to generate the dynamic image, one implementable manner is that the processor 302 is configured to: when the picture size is consistent with the target size, perform encoding and synthesis processing on the at least two frames of still pictures to generate the dynamic image.
  • Another implementable manner is that the processor 302 is configured to: when the picture size and the target size are inconsistent, scale the at least two frames of still pictures according to the target size; and perform encoding and synthesis processing on the scaled at least two frames of still pictures to generate the dynamic image.
  • Optionally, when the processor 302 encodes the at least two frames of still pictures to generate the dynamic image, another implementable manner is that the processor 302 is configured to: acquire the picture display order of the at least two frames of still pictures in the video data; determine the target display order of the dynamic image according to the picture display order; and perform encoding and synthesis processing on the at least two frames of still pictures according to the target display order to generate the dynamic image. The picture display order is the same as or different from the target display order.
  • Further, the processor 302 is also configured to: before performing image conversion processing on the video data, acquire the playback duration of the video data; and when the playback duration exceeds a preset threshold duration, perform interception processing on the video data.
  • When the processor 302 performs interception processing on the video data, the processor 302 is configured to: acquire a video interception operation input by the user; and perform interception processing on the video data according to the video interception operation and determine the video data after the interception processing. The video interception operation includes at least one of the following: an interception time, an intercepted first-frame still picture, an intercepted last-frame still picture, and a number of intercepted still pictures.
  • The device for generating a dynamic image provided in this embodiment can be used to execute the methods corresponding to the embodiments in FIGS. 1 to 8; its specific implementation and beneficial effects are similar and are not repeated here.
  • FIG. 10 is a second schematic structural diagram of a device for generating a dynamic image according to an embodiment of the present invention. Referring to FIG. 10, this embodiment provides another device for generating a dynamic image that can execute the method for generating a dynamic image described above. Specifically, the device may include:
  • an acquisition module 101 configured to acquire video data, the video data being output by at least one photographing device configured on a movable platform; and
  • a generating module 102 configured to perform image conversion processing on the video data to generate a dynamic image corresponding to at least a part of the video data.
  • The acquisition module 101 and the generating module 102 in the device for generating a dynamic image provided in this embodiment can be used to execute the methods corresponding to the embodiments in FIGS. 1 to 8; their specific implementation and beneficial effects are similar and are not repeated here.
  • FIG. 11 is a schematic structural diagram of a movable platform according to an embodiment of the present invention. Referring to FIG. 11, another aspect of this embodiment provides a movable platform, the movable platform being at least one of the following: an unmanned aerial vehicle, an unmanned ship, and an unmanned vehicle. Specifically, the movable platform 201 includes:
  • at least one photographing device configured to output video data; and
  • the device for generating a dynamic image corresponding to the embodiment of FIG. 9, the generating device 203 being configured to receive the video data output by the photographing device.
  • For example, the at least one photographing device may include a photographing device 2021, a photographing device 2022, and a photographing device 2023. The generating device 203 receives the video data output by the photographing devices 2021, 2022, and 2023 and can convert the video data into the corresponding dynamic image.
  • The specific implementation principle and effects of the movable platform provided in this embodiment are consistent with those of the device for generating a dynamic image corresponding to FIG. 9; reference may be made to the statements above, which are not repeated here.
  • A further aspect of this embodiment provides a computer-readable storage medium. The computer-readable storage medium stores program instructions, and the program instructions are used to implement the method for generating a dynamic image in the embodiments corresponding to FIGS. 1 to 8.
  • In the several embodiments provided by the present invention, it should be understood that the disclosed related remote-control device and method may be implemented in other ways. For example, the embodiments of the remote-control device described above are only schematic; the division into modules or units is only a division by logical function, and there may be other divisions in actual implementation, for example multiple units or components may be combined or integrated into another system, or some features may be omitted or not executed. In addition, the mutual coupling or direct coupling or communication connection shown or discussed may be an indirect coupling or communication connection of remote-control devices or units through some interfaces, and may be electrical, mechanical, or in other forms.
  • The units described as separate components may or may not be physically separate, and the components shown as units may or may not be physical units; they may be located in one place or distributed over multiple network units. Some or all of the units may be selected according to actual needs to achieve the objective of the solution of this embodiment.
  • In addition, the functional units in the embodiments of the present invention may be integrated into one processing unit, or each unit may exist physically on its own, or two or more units may be integrated into one unit.
  • The integrated unit described above may be implemented in the form of hardware or in the form of a software functional unit.
  • When the integrated unit is implemented in the form of a software functional unit and sold or used as an independent product, it may be stored in a computer-readable storage medium. Based on this understanding, the technical solution of the present invention, in essence, or the part that contributes to the prior art, or all or part of the technical solution, can be embodied in the form of a software product, which is stored in a storage medium and includes a number of instructions to cause a computer processor 101 (processor) to perform all or part of the steps of the methods described in the various embodiments of the present invention.
  • The foregoing storage medium includes media that can store program code, such as a USB flash drive, a removable hard disk, a read-only memory (ROM), a random access memory (RAM), a magnetic disk, or an optical disc.

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Compression Or Coding Systems Of Tv Signals (AREA)
  • Television Signal Processing For Recording (AREA)

Abstract

The present invention discloses a method and a device for generating a dynamic image, a movable platform, and a storage medium. The method includes: acquiring video data, the video data being output by at least one photographing device configured on a movable platform; and performing image conversion processing on the video data to generate a dynamic image corresponding to at least a part of the video data. The method, device, movable platform, and storage medium for generating a dynamic image provided by the present invention generate a dynamic image corresponding to at least a part of the video data by performing image conversion processing on the video data, which solves the problem in the prior art that the use of video image data on the Internet is subject to many restrictions, improves the compatibility of image data, expands its reach, and thereby ensures the convenience and flexibility with which users can use image data, effectively improving the practicability of the method.

Description

Method and device for generating dynamic image, mobile platform, and storage medium

Technical Field

The present invention relates to the technical field of image processing, and in particular to a method and a device for generating a dynamic image, a movable platform, and a storage medium.

Background

The Graphics Interchange Format (GIF) is a bitmap graphics file format that reproduces true-color images with 8-bit color (that is, 256 colors). A GIF is essentially a two-dimensional, silent, pixel dot-matrix animation in which frames are played in sequence; it offers a high compression ratio, cannot store images with more than 256 colors, and is one of the image transmission formats most widely used on the World Wide Web.

Most of the image data captured by existing unmanned aerial vehicles is video image data. Compared with a GIF image, video has sound and an almost unrestricted color experience. However, when the image data is put to use, video image data is subject to many restrictions on the Internet, has poor compatibility, and has limited reach, which reduces the convenience and flexibility with which users can use the image data.

Summary of the Invention

The present invention provides a method and a device for generating a dynamic image, a movable platform, and a storage medium, which are used to solve the problems in the prior art that video image data is subject to many restrictions when used on the Internet, has poor compatibility, and has limited reach, which reduces the convenience and flexibility with which users can use image data.
A first aspect of the present invention provides a method for generating a dynamic image, including:
acquiring video data, the video data being output by at least one photographing device configured on a movable platform; and
performing image conversion processing on the video data to generate a dynamic image corresponding to at least a part of the video data.
A second aspect of the present invention provides a device for generating a dynamic image, including:
a memory for storing a computer program; and
a processor for running the computer program stored in the memory to implement: acquiring video data, the video data being output by at least one photographing device configured on a movable platform; and performing image conversion processing on the video data to generate a dynamic image corresponding to at least a part of the video data.
A third aspect of the present invention provides a device for generating a dynamic image, including:
an acquisition module configured to acquire video data, the video data being output by at least one photographing device configured on a movable platform; and
a generating module configured to perform image conversion processing on the video data to generate a dynamic image corresponding to at least a part of the video data.
A fourth aspect of the present invention provides a movable platform, including:
a photographing device configured to output video data; and
the device for generating a dynamic image according to the second aspect, the generating device being configured to receive the video data output by the photographing device.
A fifth aspect of the present invention provides a computer-readable storage medium. The storage medium is a computer-readable storage medium in which program instructions are stored, and the program instructions are used for the method for generating a dynamic image according to the first aspect.
The method and device for generating a dynamic image, the movable platform, and the storage medium provided by the present invention generate a dynamic image corresponding to at least a part of the video data by performing image conversion processing on the video data. This solves the problem in the prior art that the use of video image data on the Internet is subject to many restrictions, improves the compatibility of image data, and expands its reach, thereby ensuring the convenience and flexibility with which users can use image data, effectively improving the practicability of the method, and facilitating market promotion and application.
Brief Description of the Drawings

FIG. 1 is a schematic flowchart of a method for generating a dynamic image according to an embodiment of the present invention;
FIG. 2 is a schematic flowchart of performing image conversion processing on the video data to generate a dynamic image corresponding to at least a part of the video data according to an embodiment of the present invention;
FIG. 3 is a schematic flowchart of acquiring at least two frames of still pictures in the still picture group according to an embodiment of the present invention;
FIG. 4 is a first schematic flowchart of encoding the at least two frames of still pictures to generate the dynamic image according to an embodiment of the present invention;
FIG. 5 is a schematic flowchart of encoding the at least two frames of still pictures according to the picture size and the target size to generate the dynamic image according to an embodiment of the present invention;
FIG. 6 is a second schematic flowchart of encoding the at least two frames of still pictures to generate the dynamic image according to an embodiment of the present invention;
FIG. 7 is a schematic flowchart of another method for generating a dynamic image according to an embodiment of the present invention;
FIG. 8 is a schematic flowchart of interception processing of the video data according to an embodiment of the present invention;
FIG. 9 is a first schematic structural diagram of a device for generating a dynamic image according to an embodiment of the present invention;
FIG. 10 is a second schematic structural diagram of a device for generating a dynamic image according to an embodiment of the present invention;
FIG. 11 is a schematic structural diagram of a movable platform according to an embodiment of the present invention.
Detailed Description

To make the objectives, technical solutions, and advantages of the embodiments of the present invention clearer, the technical solutions in the embodiments of the present invention are described clearly and completely below with reference to the accompanying drawings. Obviously, the described embodiments are only some, rather than all, of the embodiments of the present invention. All other embodiments obtained by a person of ordinary skill in the art based on the embodiments of the present invention without creative effort fall within the scope of protection of the present invention.
Unless otherwise defined, all technical and scientific terms used herein have the same meanings as commonly understood by those skilled in the technical field to which the present invention belongs. The terms used in the specification of the present invention are only for the purpose of describing specific embodiments and are not intended to limit the present invention.
Some embodiments of the present invention are described in detail below with reference to the accompanying drawings. Provided there is no conflict, the following embodiments and the features in the embodiments may be combined with one another.
FIG. 1 is a schematic flowchart of a method for generating a dynamic image according to an embodiment of the present invention. Referring to FIG. 1, this embodiment provides a method for generating a dynamic image, including:
S1: Acquire video data, the video data being output by at least one photographing device configured on a movable platform.
The video data may be video data in AVI, wma, MP4, flash, or other formats generated by compressing multiple image frames, and the movable platform may include at least one of the following: an unmanned aerial vehicle, an unmanned ship, and an unmanned vehicle. In some cases, the movable platform may also include a device that is moved by an external force, such as a handheld device, for example a handheld gimbal. One or more photographing devices are configured on the movable platform. When the movable platform carries multiple photographing devices, their shooting directions may differ, so that each photographing device outputs video data covering its own viewing-angle range; alternatively, the shooting directions of the multiple photographing devices may be the same, so that they output video data of the same viewing-angle range.
S2: Perform image conversion processing on the video data to generate a dynamic image corresponding to at least a part of the video data.
The dynamic image may be a GIF image. After the video data is obtained, image conversion processing may be performed on it so that the video data is converted into a corresponding dynamic image. When the stored video data is large, image conversion processing may be performed on only part of the video data in order to improve the quality and efficiency of dynamic image acquisition, yielding a dynamic image corresponding to that part of the video data; when the stored video data is small, and provided the user's needs can still be met, the entire video data may be converted to obtain a dynamic image corresponding to the whole of the video data.
The method for generating a dynamic image provided by this embodiment generates a dynamic image corresponding to at least a part of the video data by performing image conversion processing on the video data. It solves the problem in the prior art that the use of video image data on the Internet is subject to many restrictions, improves the compatibility of image data, and expands its reach, thereby ensuring the convenience and flexibility with which users can use image data, effectively improving the practicability of the method, and facilitating market promotion and application.
FIG. 2 is a schematic flowchart of performing image conversion processing on the video data to generate a dynamic image corresponding to at least a part of the video data according to an embodiment of the present invention; FIG. 3 is a schematic flowchart of acquiring at least two frames of still pictures in the still picture group according to an embodiment of the present invention. On the basis of the foregoing embodiment, and with continued reference to FIGS. 2-3, in this embodiment, performing image conversion processing on the video data to generate a dynamic image corresponding to at least a part of the video data may include:
S21: Convert the video data into a still picture group, the still picture group including multiple frames of still pictures corresponding to the video data.
Since video data is essentially a sequence of still pictures played continuously, the video data can be converted into a corresponding still picture group. In the specific conversion process, because the video data carries a sound attribute while the still picture group does not, the sound attribute may first be removed from the video data, after which the video data with the sound attribute removed is converted into the corresponding still picture group.
S22: Acquire at least two frames of still pictures in the still picture group.
Specifically, acquiring at least two frames of still pictures in the still picture group may include:
S221: Detect a picture selection operation input by a user;
S222: Acquire at least two frames of still pictures in the still picture group according to the picture selection operation.
Specifically, in one implementable manner, the picture selection operation input by the user may be the frame numbers of still pictures entered directly by the user, for example: selecting frames 100-110, or selecting frame 100, frame 105, frame 120, and so on. In this case the user only needs to enter the frame numbers, and the corresponding at least two frames of still pictures can be determined in the still picture group from the entered frame numbers.
In another implementable manner, the user can select at least two frames of still pictures directly through a touch operation. For example, the user may browse all the pictures in the still picture group; when it is detected that the user has stayed on a certain picture longer than a preset time threshold, or that the user has performed a tap-to-select or press-to-select operation on a certain picture, it can be determined that the user has selected that frame. In this case, the picture selection operation input by the user is an operation performed manually by the user.
In yet another implementable manner, the picture selection operation input by the user may be time-period information. For example, the user enters the time period 50 s-55 s; at least two frames of still pictures corresponding to this time period, namely all the still pictures within 50 s-55 s, can then be acquired from the still picture group. In this case the picture selection operation input by the user is an operation of entering time-period information. Similarly, the picture selection operation input by the user may also be an operation of entering time-point information. For example, the user enters the time points 30 s, 35 s, and 40 s; the three frames of still pictures corresponding to 30 s, 35 s, and 40 s can then be acquired from the still picture group.
S23: Perform encoding processing on the at least two frames of still pictures to generate the dynamic image.
After the at least two frames of still pictures are obtained, they can be encoded to generate a dynamic image composed of the at least two frames of still pictures.
FIG. 4 is a first schematic flowchart of encoding the at least two frames of still pictures to generate the dynamic image according to an embodiment of the present invention; FIG. 5 is a schematic flowchart of encoding the at least two frames of still pictures according to the picture size and the target size to generate the dynamic image according to an embodiment of the present invention. On the basis of the foregoing embodiments, and with continued reference to FIGS. 4-5, for the specific implementation of generating the dynamic image, one implementable manner is that encoding the at least two frames of still pictures to generate the dynamic image may include:
S231: Acquire the picture size of the at least two frames of still pictures and the target size of the dynamic image input by the user.
The picture size of a still picture can be determined from the video data, while the target size of the dynamic image can be entered and set by the user; the target size may be the same as or different from the picture size.
S232: Encode the at least two frames of still pictures according to the picture size and the target size to generate the dynamic image.
One implementable manner of encoding the at least two frames of still pictures according to the picture size and the target size to generate the dynamic image is:
S2321: When the picture size is consistent with the target size, perform encoding and synthesis processing on the at least two frames of still pictures to generate the dynamic image.
After the picture size and the target size are obtained, they can be analyzed and compared. When the comparison result is that the picture size is consistent with the target size, the picture size already meets the user's needs, and a preset encoding algorithm can directly encode and synthesize the at least two frames of still pictures to generate the dynamic image.
Another implementable manner is that encoding the at least two frames of still pictures according to the picture size and the target size to generate the dynamic image may include:
S2322: When the picture size and the target size are inconsistent, scale the at least two frames of still pictures according to the target size.
When the comparison result is that the picture size is inconsistent with the target size, the picture size does not meet the user's needs, and the picture size of the still pictures can be changed and adjusted according to the target size. Specifically, when the picture size is larger than the target size, the still pictures are too large, so the target size can be taken as the standard size and the still pictures reduced to obtain still pictures of the target size; when the picture size is smaller than the target size, the still pictures are too small, so the target size can be taken as the standard size and the still pictures enlarged to obtain still pictures of the target size.
S2323: Perform encoding and synthesis processing on the at least two frames of still pictures after the scaling processing to generate the dynamic image.
The scaled still pictures can meet the user's size requirements for the dynamic image, so a preset encoding algorithm can be used to encode and synthesize the at least two frames of still pictures into the dynamic image.
Generating the dynamic image through the above process effectively ensures that the size of the dynamic image meets the user's needs, thereby improving the stability and reliability of the method.
FIG. 6 is a second schematic flowchart of encoding the at least two frames of still pictures to generate the dynamic image according to an embodiment of the present invention. On the basis of the foregoing embodiments, and with continued reference to FIG. 6, another implementable manner of encoding the at least two frames of still pictures to generate the dynamic image may include:
S233: Acquire the picture display order of the at least two frames of still pictures in the video data.
Specifically, each frame of still picture obtained from the video data can correspond to a piece of time information, and the picture display order of the still pictures in the video data can be obtained from that time information. For example, the video data includes a first still picture, a second still picture, and a third still picture whose corresponding time information is 1 minute 20 seconds, 5 minutes 40 seconds, and 3 minutes 15 seconds, respectively; the picture display order can then be determined from the chronological order of the time information, namely first still picture - third still picture - second still picture. Of course, those skilled in the art can also obtain the picture display order in other ways, as long as the accuracy and reliability of the obtained order is ensured; details are not repeated here.
S234: Determine the target display order of the dynamic image according to the picture display order.
The picture display order may be the same as or different from the target display order. For example, when the picture display order is first still picture - third still picture - second still picture, the target display order of the dynamic image may be first still picture - third still picture - second still picture; alternatively, the target display order may be second still picture - third still picture - first still picture, in which case the target display order and the picture display order are the reverse of each other.
S235: Perform encoding and synthesis processing on the at least two frames of still pictures according to the target display order to generate the dynamic image.
After the target display order is obtained, the at least two frames of still pictures can be encoded and synthesized in that order to generate the dynamic image.
Generating the dynamic image through the above process effectively ensures that the display order of the dynamic image meets the user's needs, further improving the flexibility and reliability of the method.
FIG. 7 is a schematic flowchart of another method for generating a dynamic image according to an embodiment of the present invention; FIG. 8 is a schematic flowchart of interception processing of the video data according to an embodiment of the present invention. On the basis of the foregoing embodiments, and with continued reference to FIGS. 7-8, in order to improve the practicability of the method, before image conversion processing is performed on the video data, the method further includes:
S001: Acquire the playback duration of the video data;
S002: When the playback duration exceeds a preset threshold duration, perform interception processing on the video data.
In general, the longer the playback duration of the video data, the more still pictures the video data contains. Therefore, when the video data is converted into a dynamic image, the playback duration of the video data can be obtained and compared with the threshold duration. When the playback duration exceeds the threshold duration, the video data corresponds to a large number of still pictures (in general, 1 s of video data corresponds to at least 18 frames of still pictures), so in order to ensure the efficiency and quality of the conversion the video data can be intercepted. Specifically, intercepting the video data may include:
S0021: Acquire a video interception operation input by the user, the video interception operation including at least one of the following: an interception time, an intercepted first-frame still picture, an intercepted last-frame still picture, and a number of intercepted still pictures.
S0022: Perform interception processing on the video data according to the video interception operation, and determine the video data after the interception processing.
For example, when the video interception operation input by the user is an interception time, say 3 minutes 50 seconds to 4 minutes, the video data can be intercepted over that interval, and after the interception the video data between 3 minutes 50 seconds and 4 minutes is obtained. When the video interception operation input by the user is an intercepted first-frame still picture and an intercepted last-frame still picture, say the 101st frame and the 120th frame, the video data containing the 101st through the 120th frames is obtained after the interception. When the video interception operation input by the user is a number of intercepted still pictures, say 50 pictures, the video data can be intercepted at random according to that number, so that video data containing 50 still pictures is obtained.
After the video data has been intercepted, dynamic image conversion processing can be performed on the intercepted video data, which effectively improves the efficiency and quality of dynamic image generation and further ensures the stability and reliability of the method.
In a specific application, the movable platform may be an unmanned aerial vehicle. After the photographing device on the unmanned aerial vehicle records video, it can generate and output the corresponding video data, which can then be acquired over a wireless or wired communication link connected to the photographing device. After the video data is acquired, it can be converted into a GIF dynamic image of a selectable size. In the process of converting the video data, the necessary adjustment operations, such as scaling, compression, and color approximation to 8-bit color, can be performed on the video data according to the target size of the selected GIF dynamic image. Further, the photographing device can also be controlled directly to shoot and output a dynamic image, in place of the conversion processing of the video data.
Converting the video data into a dynamic image effectively improves the compatibility of the image data and makes it easy for users to use and share directly, which effectively improves the practicability of this method for generating a dynamic image and facilitates market promotion and application.
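For orientation only, the steps above can be strung together into a single end-to-end sketch: decode a time range of the UAV video, scale each frame to a chosen size, approximate it to 256 colors, and write a GIF. The file names, the 24 fps fallback, and the 640x360 target are placeholder assumptions, and OpenCV/Pillow merely stand in for whatever decoder and encoder a real implementation would use.

    import cv2
    from PIL import Image

    def video_clip_to_gif(video_path, gif_path, start_s, end_s, target_size=(640, 360)):
        cap = cv2.VideoCapture(video_path)
        fps = cap.get(cv2.CAP_PROP_FPS) or 24.0
        cap.set(cv2.CAP_PROP_POS_FRAMES, int(start_s * fps))
        frames = []
        for _ in range(int((end_s - start_s) * fps)):
            ok, frame = cap.read()
            if not ok:
                break
            img = Image.fromarray(cv2.cvtColor(frame, cv2.COLOR_BGR2RGB))
            frames.append(img.resize(target_size).quantize(colors=256))
        cap.release()
        if frames:
            frames[0].save(gif_path, save_all=True, append_images=frames[1:],
                           duration=int(1000 / fps), loop=0)

    video_clip_to_gif("uav_video.mp4", "uav_clip.gif", start_s=230.0, end_s=240.0)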
FIG. 9 is a first schematic structural diagram of a device for generating a dynamic image according to an embodiment of the present invention. Referring to FIG. 9, this embodiment provides a device for generating a dynamic image that can execute the method for generating a dynamic image described above. Specifically, the device may include:
a memory 301 configured to store a computer program; and
a processor 302 configured to run the computer program stored in the memory 301 to implement: acquiring video data, the video data being output by at least one photographing device configured on a movable platform; and performing image conversion processing on the video data to generate a dynamic image corresponding to at least a part of the video data.
The dynamic image may be a GIF image. The movable platform includes at least one of the following: an unmanned aerial vehicle, an unmanned ship, and an unmanned vehicle.
On the basis of the foregoing embodiment, and with continued reference to FIG. 9, when the processor 302 performs image conversion processing on the video data to generate a dynamic image corresponding to at least a part of the video data, the processor 302 is configured to:
convert the video data into a still picture group, the still picture group including multiple frames of still pictures corresponding to the video data;
acquire at least two frames of still pictures in the still picture group; and
encode the at least two frames of still pictures to generate the dynamic image.
When the processor 302 acquires the at least two frames of still pictures in the still picture group, the processor 302 is configured to:
detect a picture selection operation input by a user; and
acquire at least two frames of still pictures in the still picture group according to the picture selection operation.
In addition, when the processor 302 encodes the at least two frames of still pictures to generate the dynamic image, one implementable manner is that the processor 302 is configured to:
acquire the picture size of the at least two frames of still pictures and the target size of the dynamic image input by the user; and
encode the at least two frames of still pictures according to the picture size and the target size to generate the dynamic image.
When the processor 302 encodes the at least two frames of still pictures according to the picture size and the target size to generate the dynamic image, one implementable manner is that the processor 302 is configured to:
when the picture size is consistent with the target size, perform encoding and synthesis processing on the at least two frames of still pictures to generate the dynamic image.
When the processor 302 encodes the at least two frames of still pictures according to the picture size and the target size to generate the dynamic image, another implementable manner is that the processor 302 is configured to:
when the picture size and the target size are inconsistent, scale the at least two frames of still pictures according to the target size; and
perform encoding and synthesis processing on the scaled at least two frames of still pictures to generate the dynamic image.
Optionally, when the processor 302 encodes the at least two frames of still pictures to generate the dynamic image, another implementable manner is that the processor 302 is configured to:
acquire the picture display order of the at least two frames of still pictures in the video data;
determine the target display order of the dynamic image according to the picture display order; and
perform encoding and synthesis processing on the at least two frames of still pictures according to the target display order to generate the dynamic image.
The picture display order is the same as or different from the target display order.
Further, the processor 302 is also configured to:
before performing image conversion processing on the video data, acquire the playback duration of the video data; and
when the playback duration exceeds a preset threshold duration, perform interception processing on the video data.
Specifically, when the processor 302 performs interception processing on the video data, the processor 302 is configured to:
acquire a video interception operation input by the user; and
perform interception processing on the video data according to the video interception operation, and determine the video data after the interception processing.
The video interception operation includes at least one of the following: an interception time, an intercepted first-frame still picture, an intercepted last-frame still picture, and a number of intercepted still pictures.
The device for generating a dynamic image provided in this embodiment can be used to execute the methods corresponding to the embodiments in FIGS. 1 to 8; its specific implementation and beneficial effects are similar and are not repeated here.
FIG. 10 is a second schematic structural diagram of a device for generating a dynamic image according to an embodiment of the present invention. Referring to FIG. 10, this embodiment provides another device for generating a dynamic image that can execute the method for generating a dynamic image described above. Specifically, the device may include:
an acquisition module 101 configured to acquire video data, the video data being output by at least one photographing device configured on a movable platform; and
a generating module 102 configured to perform image conversion processing on the video data to generate a dynamic image corresponding to at least a part of the video data.
The acquisition module 101 and the generating module 102 in the device for generating a dynamic image provided in this embodiment can be used to execute the methods corresponding to the embodiments in FIGS. 1 to 8; their specific implementation and beneficial effects are similar and are not repeated here.
FIG. 11 is a schematic structural diagram of a movable platform according to an embodiment of the present invention. Referring to FIG. 11, another aspect of this embodiment provides a movable platform, the movable platform being at least one of the following: an unmanned aerial vehicle, an unmanned ship, and an unmanned vehicle. Specifically, the movable platform 201 includes:
at least one photographing device configured to output video data; and
the device for generating a dynamic image corresponding to the embodiment of FIG. 9, the generating device 203 being configured to receive the video data output by the photographing device.
For example, the at least one photographing device may include a photographing device 2021, a photographing device 2022, and a photographing device 2023. The generating device 203 receives the video data output by the photographing devices 2021, 2022, and 2023 and can convert the video data into the corresponding dynamic image.
The specific implementation principle and effects of the movable platform provided in this embodiment are consistent with those of the device for generating a dynamic image corresponding to FIG. 9; reference may be made to the statements above, which are not repeated here.
A further aspect of this embodiment provides a computer-readable storage medium. The storage medium is a computer-readable storage medium in which program instructions are stored, and the program instructions are used to implement the method for generating a dynamic image in the embodiments corresponding to FIGS. 1 to 8.
The technical solutions and technical features in each of the above embodiments may, where they do not conflict with one another, be used alone or in combination; as long as they do not exceed the cognitive scope of those skilled in the art, they are equivalent embodiments falling within the scope of protection of this application.
In the several embodiments provided by the present invention, it should be understood that the disclosed related remote-control device and method may be implemented in other ways. For example, the embodiments of the remote-control device described above are only schematic; the division into modules or units is only a division by logical function, and there may be other divisions in actual implementation, for example multiple units or components may be combined or integrated into another system, or some features may be omitted or not executed. In addition, the mutual coupling or direct coupling or communication connection shown or discussed may be an indirect coupling or communication connection of remote-control devices or units through some interfaces, and may be electrical, mechanical, or in other forms.
The units described as separate components may or may not be physically separate, and the components shown as units may or may not be physical units; they may be located in one place or distributed over multiple network units. Some or all of the units may be selected according to actual needs to achieve the objective of the solution of this embodiment.
In addition, the functional units in the embodiments of the present invention may be integrated into one processing unit, or each unit may exist physically on its own, or two or more units may be integrated into one unit. The integrated unit may be implemented in the form of hardware or in the form of a software functional unit.
When the integrated unit is implemented in the form of a software functional unit and sold or used as an independent product, it may be stored in a computer-readable storage medium. Based on this understanding, the technical solution of the present invention, in essence, or the part that contributes to the prior art, or all or part of the technical solution, can be embodied in the form of a software product, which is stored in a storage medium and includes a number of instructions to cause a computer processor 101 (processor) to perform all or part of the steps of the methods described in the various embodiments of the present invention. The foregoing storage medium includes media that can store program code, such as a USB flash drive, a removable hard disk, a read-only memory (ROM), a random access memory (RAM), a magnetic disk, or an optical disc.
The above is only an embodiment of the present invention and does not thereby limit the patent scope of the present invention. Any equivalent structural or equivalent process transformation made using the contents of the specification and drawings of the present invention, or any direct or indirect application in other related technical fields, is likewise included within the scope of patent protection of the present invention.
Finally, it should be noted that the above embodiments are only intended to illustrate the technical solutions of the present invention, not to limit them. Although the present invention has been described in detail with reference to the foregoing embodiments, those of ordinary skill in the art should understand that they can still modify the technical solutions described in the foregoing embodiments or make equivalent replacements of some or all of the technical features therein, and such modifications or replacements do not cause the essence of the corresponding technical solutions to depart from the scope of the technical solutions of the embodiments of the present invention.

Claims (28)

  1. A method for generating a dynamic image, comprising:
    acquiring video data, the video data being output by at least one photographing device configured on a movable platform; and
    performing image conversion processing on the video data to generate a dynamic image corresponding to at least a part of the video data.
  2. The method according to claim 1, wherein performing image conversion processing on the video data to generate a dynamic image corresponding to at least a part of the video data comprises:
    converting the video data into a still picture group, the still picture group comprising multiple frames of still pictures corresponding to the video data;
    acquiring at least two frames of still pictures in the still picture group; and
    encoding the at least two frames of still pictures to generate the dynamic image.
  3. The method according to claim 2, wherein acquiring at least two frames of still pictures in the still picture group comprises:
    detecting a picture selection operation input by a user; and
    acquiring at least two frames of still pictures in the still picture group according to the picture selection operation.
  4. The method according to claim 2, wherein acquiring at least two frames of still pictures in the still picture group comprises:
    acquiring a picture size of the at least two frames of still pictures and a target size of the dynamic image input by a user; and
    encoding the at least two frames of still pictures according to the picture size and the target size to generate the dynamic image.
  5. The method according to claim 4, wherein encoding the at least two frames of still pictures according to the picture size and the target size to generate the dynamic image comprises:
    when the picture size is consistent with the target size, performing encoding and synthesis processing on the at least two frames of still pictures to generate the dynamic image.
  6. The method according to claim 4, wherein encoding the at least two frames of still pictures according to the picture size and the target size to generate the dynamic image comprises:
    when the picture size and the target size are inconsistent, scaling the at least two frames of still pictures according to the target size; and
    performing encoding and synthesis processing on the at least two frames of still pictures after the scaling processing to generate the dynamic image.
  7. The method according to claim 2, wherein encoding the at least two frames of still pictures to generate the dynamic image comprises:
    acquiring a picture display order of the at least two frames of still pictures in the video data;
    determining a target display order of the dynamic image according to the picture display order; and
    performing encoding and synthesis processing on the at least two frames of still pictures according to the target display order to generate the dynamic image.
  8. The method according to claim 7, wherein the picture display order is the same as or different from the target display order.
  9. The method according to claim 2, wherein before performing image conversion processing on the video data, the method further comprises:
    acquiring a playback duration of the video data; and
    when the playback duration exceeds a preset threshold duration, performing interception processing on the video data.
  10. The method according to claim 9, wherein performing interception processing on the video data comprises:
    acquiring a video interception operation input by a user; and
    performing interception processing on the video data according to the video interception operation, and determining the video data after the interception processing.
  11. The method according to claim 10, wherein the video interception operation comprises at least one of the following: an interception time, an intercepted first-frame still picture, an intercepted last-frame still picture, and a number of intercepted still pictures.
  12. The method according to any one of claims 1-11, wherein the dynamic image is a GIF image.
  13. The method according to any one of claims 1-11, wherein the movable platform comprises at least one of the following: an unmanned aerial vehicle, an unmanned ship, and an unmanned vehicle.
  14. A device for generating a dynamic image, comprising:
    a memory for storing a computer program; and
    a processor for running the computer program stored in the memory to implement: acquiring video data, the video data being output by at least one photographing device configured on a movable platform; and performing image conversion processing on the video data to generate a dynamic image corresponding to at least a part of the video data.
  15. The device according to claim 14, wherein when the processor performs image conversion processing on the video data to generate a dynamic image corresponding to at least a part of the video data, the processor is configured to:
    convert the video data into a still picture group, the still picture group comprising multiple frames of still pictures corresponding to the video data;
    acquire at least two frames of still pictures in the still picture group; and
    encode the at least two frames of still pictures to generate the dynamic image.
  16. The device according to claim 15, wherein when the processor acquires the at least two frames of still pictures in the still picture group, the processor is configured to:
    detect a picture selection operation input by a user; and
    acquire at least two frames of still pictures in the still picture group according to the picture selection operation.
  17. The device according to claim 15, wherein when the processor encodes the at least two frames of still pictures to generate the dynamic image, the processor is configured to:
    acquire a picture size of the at least two frames of still pictures and a target size of the dynamic image input by a user; and
    encode the at least two frames of still pictures according to the picture size and the target size to generate the dynamic image.
  18. The device according to claim 17, wherein when the processor encodes the at least two frames of still pictures according to the picture size and the target size to generate the dynamic image, the processor is configured to:
    when the picture size is consistent with the target size, perform encoding and synthesis processing on the at least two frames of still pictures to generate the dynamic image.
  19. The device according to claim 17, wherein when the processor encodes the at least two frames of still pictures according to the picture size and the target size to generate the dynamic image, the processor is configured to:
    when the picture size and the target size are inconsistent, scale the at least two frames of still pictures according to the target size; and
    perform encoding and synthesis processing on the at least two frames of still pictures after the scaling processing to generate the dynamic image.
  20. The device according to claim 15, wherein when the processor encodes the at least two frames of still pictures to generate the dynamic image, the processor is configured to:
    acquire a picture display order of the at least two frames of still pictures in the video data;
    determine a target display order of the dynamic image according to the picture display order; and
    perform encoding and synthesis processing on the at least two frames of still pictures according to the target display order to generate the dynamic image.
  21. The device according to claim 20, wherein the picture display order is the same as or different from the target display order.
  22. The device according to claim 15, wherein the processor is configured to:
    before performing image conversion processing on the video data, acquire a playback duration of the video data; and
    when the playback duration exceeds a preset threshold duration, perform interception processing on the video data.
  23. The device according to claim 22, wherein when the processor performs interception processing on the video data, the processor is configured to:
    acquire a video interception operation input by a user; and
    perform interception processing on the video data according to the video interception operation, and determine the video data after the interception processing.
  24. The device according to claim 23, wherein the video interception operation comprises at least one of the following: an interception time, an intercepted first-frame still picture, an intercepted last-frame still picture, and a number of intercepted still pictures.
  25. The device according to any one of claims 14-24, wherein the dynamic image is a GIF image.
  26. The device according to any one of claims 14-24, wherein the movable platform comprises at least one of the following: an unmanned aerial vehicle, an unmanned ship, and an unmanned vehicle.
  27. A movable platform, comprising:
    a photographing device for outputting video data; and
    the device for generating a dynamic image according to any one of claims 14-26,
    wherein the generating device is configured to receive the video data output by the photographing device.
  28. A computer-readable storage medium, wherein the storage medium is a computer-readable storage medium in which program instructions are stored, and the program instructions are used to implement the method for generating a dynamic image according to any one of claims 1-13.
PCT/CN2018/103737 2018-09-03 2018-09-03 Method and device for generating dynamic image, mobile platform, and storage medium WO2020047691A1 (zh)

Priority Applications (3)

Application Number Priority Date Filing Date Title
CN201880040364.9A CN111034187A (zh) 2018-09-03 2018-09-03 动态图像的生成方法、装置、可移动平台和存储介质
PCT/CN2018/103737 WO2020047691A1 (zh) 2018-09-03 2018-09-03 动态图像的生成方法、装置、可移动平台和存储介质
US17/190,364 US20210195134A1 (en) 2018-09-03 2021-03-02 Method and device for generating dynamic image, mobile platform, and storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
PCT/CN2018/103737 WO2020047691A1 (zh) 2018-09-03 2018-09-03 动态图像的生成方法、装置、可移动平台和存储介质

Related Child Applications (1)

Application Number Title Priority Date Filing Date
US17/190,364 Continuation US20210195134A1 (en) 2018-09-03 2021-03-02 Method and device for generating dynamic image, mobile platform, and storage medium

Publications (1)

Publication Number Publication Date
WO2020047691A1 (zh)

Family

ID=69721969

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2018/103737 WO2020047691A1 (zh) 2018-09-03 2018-09-03 动态图像的生成方法、装置、可移动平台和存储介质

Country Status (3)

Country Link
US (1) US20210195134A1 (zh)
CN (1) CN111034187A (zh)
WO (1) WO2020047691A1 (zh)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113316016A (zh) * 2021-05-28 2021-08-27 Tcl通讯(宁波)有限公司 视频处理方法、装置、存储介质及移动终端

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113628292B (zh) * 2021-08-16 2023-07-25 上海云轴信息科技有限公司 一种在目标终端中预览图片的方法及设备

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101510313A (zh) * 2009-03-13 2009-08-19 腾讯科技(深圳)有限公司 一种gif生成方法、系统及媒体播放器
US20110083087A1 (en) * 2009-10-05 2011-04-07 Harris Corporation Video processing system providing association between displayed video and media content and related methods
CN104954719A (zh) * 2015-06-08 2015-09-30 小米科技有限责任公司 一种视频信息处理方法及装置
CN105139341A (zh) * 2015-09-21 2015-12-09 合一网络技术(北京)有限公司 一种gif图像编辑方法及装置
CN105681746A (zh) * 2016-01-05 2016-06-15 零度智控(北京)智能科技有限公司 航拍装置及系统
CN106657836A (zh) * 2016-11-28 2017-05-10 合网络技术(北京)有限公司 图像互换格式图的制作方法和装置

Family Cites Families (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US7047305B1 (en) * 1999-12-09 2006-05-16 Vidiator Enterprises Inc. Personal broadcasting system for audio and video data using a wide area network
CN1283125C (zh) * 2003-08-05 2006-11-01 株式会社日立制作所 电话通信系统
JP2011091571A (ja) * 2009-10-21 2011-05-06 Olympus Imaging Corp 動画像作成装置及び動画像作成方法
CN102724471A (zh) * 2012-06-11 2012-10-10 宇龙计算机通信科技(深圳)有限公司 图片和视频的转换方法和装置
CN104581403A (zh) * 2013-10-12 2015-04-29 广州市千钧网络科技有限公司 用于分享视频内容的方法和装置
JP6402934B2 (ja) * 2015-05-19 2018-10-10 カシオ計算機株式会社 動画生成装置、動画生成方法、及びプログラム



Also Published As

Publication number Publication date
CN111034187A (zh) 2020-04-17
US20210195134A1 (en) 2021-06-24

Similar Documents

Publication Publication Date Title
US11887231B2 (en) Avatar animation system
CN111213183A (zh) 渲染三维内容的方法和装置
CN110232722B (zh) 一种图像处理方法及装置
WO2014194439A1 (en) Avatar-based video encoding
WO2016155382A1 (zh) 生成马赛克图像的方法和装置
CN108668168B (zh) 基于Unity 3D的安卓VR视频播放器及其设计方法
US20170186243A1 (en) Video Image Processing Method and Electronic Device Based on the Virtual Reality
US11302063B2 (en) 3D conversations in an artificial reality environment
US11310560B2 (en) Bitstream merger and extractor
WO2013134936A1 (zh) 屏幕录制方法、屏幕录制控制方法及装置
WO2020047691A1 (zh) 动态图像的生成方法、装置、可移动平台和存储介质
CN111444743A (zh) 一种视频人像替换方法及装置
EP3744088A1 (en) Techniques to capture and edit dynamic depth images
WO2017124870A1 (zh) 一种处理多媒体信息的方法及装置
CN114463470A (zh) 虚拟空间浏览方法、装置、电子设备和可读存储介质
WO2022237116A1 (zh) 图像处理方法及装置
WO2020062998A1 (zh) 图像处理方法、存储介质及电子设备
TWI420315B (zh) 顯示螢幕之記錄內容
US20180160133A1 (en) Realtime recording of gestures and/or voice to modify animations
CN116962743A (zh) 视频图像编码、抠图方法和装置及直播系统
EP4315313A1 (en) Neural networks accompaniment extraction from songs
US11954779B2 (en) Animation generation method for tracking facial expression and neural network training method thereof
US11825276B2 (en) Selector input device to transmit audio signals
US20220377309A1 (en) Hardware encoder for stereo stitching
WO2024020908A1 (en) Video processing with preview of ar effects

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 18932747

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 18932747

Country of ref document: EP

Kind code of ref document: A1