WO2020227997A1 - Photographing device, unmanned aerial vehicle, control terminal, and photographing method - Google Patents

Photographing device, unmanned aerial vehicle, control terminal, and photographing method

Info

Publication number
WO2020227997A1
Authority
WO
WIPO (PCT)
Prior art keywords
image data
area
buffer area
cache area
original image
Prior art date
Application number
PCT/CN2019/087116
Other languages
English (en)
French (fr)
Inventor
Zhao Dongxiang (赵东相)
Wang Bo (王博)
Huang Wenjian (黄文坚)
Zhu Chao (朱超)
Original Assignee
SZ DJI Technology Co., Ltd. (深圳市大疆创新科技有限公司)
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by SZ DJI Technology Co., Ltd. (深圳市大疆创新科技有限公司)
Priority to PCT/CN2019/087116 priority Critical patent/WO2020227997A1/zh
Priority to CN201980007822.3A priority patent/CN111567033A/zh
Publication of WO2020227997A1 publication Critical patent/WO2020227997A1/zh
Priority to US17/513,870 priority patent/US20220053126A1/en

Classifications

    • H04N 23/60 Control of cameras or camera modules
    • H04N 23/66 Remote control of cameras or camera parts, e.g. by remote control devices
    • H04N 23/80 Camera processing pipelines; components thereof
    • H04N 23/632 Graphical user interfaces [GUI] for displaying or modifying preview images prior to image capturing
    • H04N 23/67 Focus control based on electronic image sensor signals
    • H04N 23/70 Circuitry for compensating brightness variation in the scene
    • H04N 23/88 Processing colour signals for colour balance, e.g. white-balance circuits or colour temperature control
    • H04N 7/183 Closed-circuit television [CCTV] systems for receiving images from a single remote source
    • H04N 19/426 Video coding/decoding hardware characterised by memory arrangements using memory downsizing methods
    • B64C 39/024 Aircraft characterised by special use, of the remote controlled vehicle type, i.e. RPV
    • B64D 47/08 Arrangements of cameras
    • B64U 2101/30 UAVs specially adapted for imaging, photography or videography
    • G06T 9/00 Image coding

Definitions

  • This application relates to the technical field of photographing devices, and in particular to a photographing device, an unmanned aerial vehicle, a control terminal of an unmanned aerial vehicle, and a photographing method.
  • The photographing device has a Quick view (quick playback) function: during timed continuous shooting, the display of the photographing device continuously shows the last photograph taken until the next photograph is taken.
  • The shooting device also has a Live view (real-time preview) function: the display of the shooting device continuously shows the framing image obtained by the image sensor.
  • This application aims to solve at least one of the technical problems existing in the prior art or related technologies.
  • the first aspect of this application proposes a photographing device.
  • the second aspect of this application proposes an unmanned aerial vehicle.
  • the third aspect of this application proposes a control terminal for an unmanned aerial vehicle.
  • the fourth aspect of this application proposes a shooting method.
  • the first aspect of the present application provides a photographing device.
  • the photographing device includes an image sensor, a display screen, a running memory, a memory, a processor, and computer instructions stored in the memory and executable by the processor.
  • When the processor executes the computer instructions, the following is implemented: receiving a continuous shooting instruction, and dividing a pre-buffer area in the running memory according to the continuous shooting instruction; controlling the image sensor to obtain original image data, and storing the original image data in the pre-buffer area; and controlling the display screen to display image data generated based on the original image data.
  • Specifically, when the shooting device receives the continuous shooting instruction, before shooting starts, a pre-buffer area is first divided in the running memory (RAM, Random Access Memory) of the shooting device according to the continuous shooting instruction.
  • The size of the pre-buffer area can be determined according to the shooting interval and the number of shots corresponding to the continuous shooting instruction.
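The sizing rule described above can be sketched as a worst-case backlog calculation. This is a minimal illustration, not the application's method: the function name and the example figures (frame size, storage drain rate) are assumptions introduced here.

```python
# Sketch: sizing the pre-buffer from the burst parameters carried by the
# continuous-shooting instruction. Frames arrive every `interval_s` seconds
# while slow storage drains at `drain_bytes_per_s`; the buffer must hold the
# worst-case difference between what arrives and what drains.

def pre_buffer_size(interval_s: float, shot_count: int,
                    raw_frame_bytes: int, drain_bytes_per_s: float) -> int:
    """Bytes of RAM to reserve so the burst never overruns the buffer."""
    burst_duration = interval_s * (shot_count - 1)   # first shot at t = 0
    produced = raw_frame_bytes * shot_count          # total RAW bytes written
    drained = drain_bytes_per_s * burst_duration     # bytes flushed meanwhile
    # Always keep room for at least one full frame.
    return int(max(produced - drained, raw_frame_bytes))

# e.g. 20 shots of 40 MB RAW at 0.5 s intervals, storage draining 60 MB/s
size = pre_buffer_size(0.5, 20, 40_000_000, 60_000_000)  # worst-case bytes
```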
  • After shooting starts, the image sensor begins to obtain the original image data of the first photo (such as an original image file in RAW format), and once obtained, the original image data is stored in the pre-buffer area.
  • Because the pre-buffer area is divided from the running memory, it has a very high write speed. Compared with the current technical solution, in which the original image data is stored directly in the memory (such as an HDD (Hard Disk Drive) hard disk or an SD (Secure Digital) memory card, which have larger capacity but slower write speeds), data is written faster and the write takes less time, so the next photo can be started sooner, improving the continuous shooting speed.
  • In addition, the processor of the camera can directly read, from the pre-buffer area, the image data generated from the original image data and control the display screen to display it. This realizes the Quick view function during continuous shooting, giving the shooting device a better interactive effect.
  • the photographing device in the above technical solution provided by this application may also have the following additional technical features:
  • the process of controlling the image sensor to acquire the original image data when the processor executes the computer instruction includes: acquiring the continuous shooting interval duration according to the continuous shooting instruction, and controlling the image sensor to acquire the original image data according to the interval duration.
  • Specifically, the continuous shooting instruction includes the continuous shooting interval, that is, the time interval between shooting the Nth image and the N+1th image. The image sensor is controlled to obtain original image data at this interval and store it sequentially in the pre-buffer area, realizing continuous shooting.
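The interval-driven capture loop can be sketched as follows. `capture_raw` is a hypothetical stand-in for the sensor driver, and a deque stands in for the in-RAM pre-buffer; neither name comes from this application.

```python
# Sketch: capture one RAW frame every `interval_s` seconds, writing each
# frame to the fast in-RAM pre-buffer (not slow storage) so the next shot
# can start on schedule.
import time
from collections import deque

def burst_capture(capture_raw, pre_buffer: deque,
                  interval_s: float, shot_count: int) -> None:
    """Capture `shot_count` frames at the instruction's interval."""
    next_shot = time.monotonic()
    for _ in range(shot_count):
        frame = capture_raw()          # obtain the Nth original image
        pre_buffer.append(frame)       # fast RAM write into the pre-buffer
        next_shot += interval_s        # schedule frame N+1
        delay = next_shot - time.monotonic()
        if delay > 0:                  # sleep only if we are ahead of schedule
            time.sleep(delay)
```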
  • When the processor executes the computer instructions, the following is also implemented: generating corresponding intermediate image data from the original image data, storing the intermediate image data in the pre-buffer area, and generating image data from the intermediate image data; deleting the original image data from the pre-buffer area; and displaying the image data.
  • the corresponding intermediate image data is generated according to the original data.
  • the intermediate image data can be in YUV (a color coding format) format
  • the intermediate image data is correspondingly stored in the pre-buffer area, and corresponding image data is generated according to the intermediate image data.
  • the image data is RGB image data.
  • While the RGB image data is displayed, the corresponding original image data is deleted from the pre-buffer area to release its storage space.
  • When the processor executes the computer instructions, it also generates corresponding target image data from the intermediate image data, stores the target image data in the pre-buffer area, and deletes the intermediate image data from the pre-buffer area.
  • the corresponding target image data is generated according to the intermediate image data.
  • The target image data can be an image file in JPEG (Joint Photographic Experts Group, a common image format) format. After the target image data is generated, it is stored in the pre-buffer area, and the corresponding intermediate image data is deleted from the pre-buffer area at the same time to release storage space.
  • the target image data (i.e., JPEG-format data) is only used for storage, and there is no need to display the target image data.
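The staged pipeline described above (RAW for capture, YUV for display, JPEG for storage, each predecessor freed as soon as its successor exists) can be sketched as follows. `raw_to_yuv` and `yuv_to_jpeg` are hypothetical stand-ins for the device's real converters.

```python
# Sketch: one frame through the three-stage pre-buffer. Each conversion
# appends its result to the matching buffer area and removes the stage it
# consumed, releasing pre-buffer space as early as possible.

def process_frame(raw, buffers, raw_to_yuv, yuv_to_jpeg):
    """RAW -> YUV -> JPEG, freeing each predecessor once its successor exists."""
    buffers["raw"].append(raw)          # original image data arrives

    yuv = raw_to_yuv(raw)               # intermediate image data (for display)
    buffers["yuv"].append(yuv)
    buffers["raw"].remove(raw)          # RAW no longer needed once YUV exists

    jpeg = yuv_to_jpeg(yuv)             # target image data (for storage only)
    buffers["jpeg"].append(jpeg)
    buffers["yuv"].remove(yuv)          # YUV freed once JPEG is encoded
    return jpeg
```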
  • the photographing device further includes an encoder, and when the processor executes the computer instructions, the process of generating corresponding target image data from the intermediate image data includes: acquiring image processing information according to the continuous shooting instruction, and controlling the encoder according to the image processing information to encode the intermediate image data and generate the target image data.
  • the continuous shooting instruction includes image processing information, which may specifically include the imaging direction of the target image data (such as forward, reverse, horizontal, vertical, or mirror flip, etc.).
  • The processor of the camera encodes the intermediate image according to the image processing information, finally obtaining target image data consistent with the image processing information.
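The imaging-direction field of the image processing information might be applied as in the sketch below. The orientation names and the row-major 2-D list standing in for an image are illustrative assumptions; a real encoder would operate on sensor frames.

```python
# Sketch: re-orient an image per the instruction's imaging direction before
# (or as part of) encoding it into the target image data.

def apply_orientation(image, orientation: str):
    """Return a re-oriented copy of a row-major 2-D image."""
    if orientation == "forward":
        return image                                  # unchanged
    if orientation == "reverse":                      # 180-degree rotation
        return [row[::-1] for row in image[::-1]]
    if orientation == "mirror":                       # horizontal flip
        return [row[::-1] for row in image]
    raise ValueError(f"unknown orientation: {orientation}")
```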
  • Each target image data is stored in the memory, and the corresponding target image data is deleted from the pre-buffer area.
  • the multiple target image data in the pre-buffer area are sequentially stored in the memory according to the generation order of the multiple target image data.
  • The data writing speed of the memory is relatively slow, so as continuous shooting proceeds, target image data accumulates in the pre-buffer area. A to-be-stored queue is generated according to the order in which the target image data were generated (i.e., the shooting order), and the target images are stored in the memory sequentially according to this queue. Whenever a target image data in the queue has been successfully stored in the memory, it is deleted from the queue to release space in the pre-buffer area.
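The to-be-stored queue can be sketched as a FIFO drain into slow storage. `write_to_storage` is a hypothetical writer for the HDD/SD card; the entry is popped only after a confirmed write, matching the release rule above.

```python
# Sketch: drain the to-be-stored queue in shooting order, freeing each
# pre-buffer slot only after its write to slow storage succeeds.
from collections import deque

def drain_queue(to_store: deque, write_to_storage) -> int:
    """FIFO flush: oldest target image first; return the number written."""
    written = 0
    while to_store:
        jpeg = to_store[0]               # oldest shot, still in the pre-buffer
        if not write_to_storage(jpeg):   # storage busy or failed: retry later
            break
        to_store.popleft()               # write confirmed, release the slot
        written += 1
    return written
```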
  • the pre-cache area includes a first cache area, a second cache area, and a third cache area
  • the original image data is stored in the first cache area
  • the intermediate image data is stored in the second cache area
  • the target image data is stored in the third buffer area.
  • the pre-buffer area includes a first buffer area, which can be denoted the RAW buffer; a second buffer area, which can be denoted the YUV buffer; and a third buffer area, which can be denoted the JPEG buffer.
  • the first buffer area (RAW buffer) is used to buffer raw image data (RAW)
  • the second buffer area (YUV buffer) is used to buffer intermediate image data (YUV)
  • the third buffer area is used to buffer target image data (JPEG).
  • the third cache area may be divided in the running memory, or may be divided in an external storage space such as an HDD hard disk and/or an SD card.
  • When the processor executes the computer instructions, it can obtain the stored amount of data in each of the first buffer area, the second buffer area, and the third buffer area, and adjust the storage capacities of the first buffer area, the second buffer area, and the third buffer area according to those stored amounts.
  • The processor of the camera monitors the stored amounts of the corresponding image data in the first buffer area, the second buffer area, and the third buffer area in real time, and dynamically adjusts the storage capacities of the three areas accordingly. Specifically, if the stored amount of original image data in the first buffer area is small and the first buffer area is relatively free, its storage capacity can be reduced correspondingly; if the stored amount of target image data in the third buffer area is large and the third buffer area is nearly full, its storage capacity is correspondingly increased, ensuring the space utilization efficiency of the pre-buffer area.
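The dynamic adjustment can be sketched as moving capacity from the least-filled area to the most-filled one while keeping the total pre-buffer size constant. The fixed step and the dictionary representation are illustrative assumptions.

```python
# Sketch: rebalance capacity between the three buffer areas by fill level.
# Capacity moves from the idle area to the congested one; the total stays
# constant, and an area is never shrunk below what it already holds.

def rebalance(capacity: dict, used: dict, step: int = 1) -> dict:
    """Shift `step` units of capacity from the emptiest area to the fullest."""
    fill = {k: used[k] / capacity[k] for k in capacity}   # fill ratios
    fullest = max(fill, key=fill.get)
    emptiest = min(fill, key=fill.get)
    if fullest != emptiest and capacity[emptiest] - step >= used[emptiest]:
        capacity[emptiest] -= step   # reclaim from the relatively free area
        capacity[fullest] += step    # grant it to the nearly full area
    return capacity

cap = rebalance({"raw": 4, "yuv": 4, "jpeg": 4},
                {"raw": 0, "yuv": 2, "jpeg": 4})
# one unit moves from the idle RAW area to the full JPEG area
```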
  • the process of controlling the image sensor to obtain the original image data when the processor executes the computer instruction includes: locking the image shooting parameters of the image sensor, and obtaining the original image according to the image shooting parameters.
  • Specifically, before continuous shooting starts, the image shooting parameters of the image sensor are locked first, and the original images are obtained according to the locked parameters. This ensures that the multiple images obtained by continuous shooting have a consistent style, and avoids re-determining the image shooting parameters for each image during continuous shooting, which would waste performance and slow down the continuous shooting speed.
  • the image shooting parameters include any one or a combination of the following: image exposure parameters, image focus parameters, and image white balance parameters.
  • Image shooting parameters generally include image exposure parameters, which affect the exposure (brightness) of the image; image focus parameters, which affect the focus position of the final target image data; and image white balance parameters, which affect the overall color tone of the image.
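Locking the parameters once before the burst can be sketched with an immutable record. The dataclass fields and the `meter_scene` callback are hypothetical; real values would come from the sensor's metering pipeline.

```python
# Sketch: meter the scene once, then freeze the result so every frame of the
# burst shares one exposure / focus / white-balance setting.
from dataclasses import dataclass

@dataclass(frozen=True)          # frozen: parameters cannot change mid-burst
class ShootingParams:
    exposure: float              # affects image brightness
    focus: float                 # affects focus position
    white_balance: float         # affects overall colour tone

def lock_params(meter_scene) -> ShootingParams:
    """Determine the parameters once and return them as an immutable set."""
    exposure, focus, white_balance = meter_scene()
    return ShootingParams(exposure, focus, white_balance)
```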
  • the display screen includes a first display area and a second display area
  • when the processor executes the computer instructions, the following is implemented: controlling the image sensor to continuously obtain real-time image data; controlling the display screen to display the real-time image data in the first display area; and controlling the display screen to display the image data in the second display area.
  • the display screen of the photographing device includes a first display area and a second display area.
  • Real-time image data is displayed in the first display area, realizing the Live view function, and the image data is displayed in the second display area, realizing the Quick view function.
  • the second display area is within the first display area.
  • the second aspect of the application provides an unmanned aerial vehicle.
  • The unmanned aerial vehicle includes an image sensor, a running memory, a memory, a processor, and computer instructions stored in the memory and executable by the processor.
  • When the processor executes the computer instructions, the following is implemented: receiving a continuous shooting instruction, and dividing a pre-buffer area in the running memory according to the continuous shooting instruction; controlling the image sensor to obtain original image data and storing the original image data in the pre-buffer area; and sending image data generated based on the original image data to the control terminal.
  • Specifically, the unmanned aerial vehicle receives the continuous shooting instruction from a terminal such as the control terminal or a mobile phone.
  • A pre-buffer area is divided in the unmanned aerial vehicle's running memory (RAM, Random Access Memory), and the size of the pre-buffer area can be determined according to the shooting interval and the number of shots corresponding to the continuous shooting instruction.
  • The image sensor begins to obtain the original image data of the first photo (such as an original image file in RAW format), and once obtained, the original image data is stored in the pre-buffer area.
  • Because the pre-buffer area is divided from the running memory, it has a very high write speed. Compared with the current technical solution, in which the original image data is stored directly in the memory (such as an HDD hard disk or SD memory card, which have larger capacity but slower write speeds), data is written faster and the write takes less time, so the next photo can be started sooner, increasing the continuous shooting speed.
  • the process of controlling the image sensor to acquire the original image data when the processor executes the computer instruction includes: acquiring the continuous shooting interval duration according to the continuous shooting instruction, and controlling the image sensor to acquire the original image data according to the interval duration.
  • Specifically, the continuous shooting instruction includes the continuous shooting interval, that is, the time interval between shooting the Nth image and the N+1th image. The image sensor is controlled to obtain original image data at this interval and store it sequentially in the pre-buffer area, realizing continuous shooting.
  • When the processor executes the computer instructions, the following is also implemented: generating corresponding intermediate image data from the original image data, storing the intermediate image data in the pre-buffer area, and generating image data from the intermediate image data; deleting the original image data from the pre-buffer area; and sending the image data to the control terminal.
  • the corresponding intermediate image data is generated according to the original data.
  • the intermediate image data can be in YUV (a color coding format) format
  • the intermediate image data is correspondingly stored in the pre-cache area, and the corresponding original image data is deleted in the pre-cache area at the same time to release the storage space in the pre-cache area.
  • When the processor executes the computer instructions, it also generates corresponding target image data from the intermediate image data, stores the target image data in the pre-buffer area, and deletes the intermediate image data from the pre-buffer area.
  • the corresponding target image data is generated according to the intermediate image data.
  • The target image data can be an image file in JPEG (Joint Photographic Experts Group, a common image format) format. After the target image data is generated, it is stored in the pre-buffer area, and the corresponding intermediate image data is deleted from the pre-buffer area at the same time to release storage space.
  • the unmanned aerial vehicle further includes an encoder, and when the processor executes the computer instructions, the process of generating corresponding target image data from the intermediate image data includes: acquiring image processing information according to the continuous shooting instruction, and controlling the encoder according to the image processing information to encode the intermediate image data and generate the target image data.
  • the continuous shooting instruction includes image processing information, which can specifically include the imaging direction of the target image data (such as forward, reverse, horizontal, vertical, or mirror flip, etc.).
  • The processor of the UAV encodes the intermediate image according to the image processing information, finally obtaining target image data consistent with the image processing information.
  • the target image data (i.e., JPEG-format data) is only used for storage, and there is no need to display the target image data.
  • Each target image data is stored in the memory, and the corresponding target image data is deleted from the pre-buffer area.
  • the multiple target image data in the pre-buffer area are sequentially stored in the memory according to the generation order of the multiple target image data.
  • The data writing speed of the memory is relatively slow, so as continuous shooting proceeds, target image data accumulates in the pre-buffer area. A to-be-stored queue is generated according to the order in which the target image data were generated (i.e., the shooting order), and the target images are stored in the memory sequentially according to this queue. Whenever a target image data in the queue has been successfully stored in the memory, it is deleted from the queue to release space in the pre-buffer area.
  • the pre-cache area includes a first cache area, a second cache area, and a third cache area
  • the original image data is stored in the first cache area
  • the intermediate image data is stored in the second cache area
  • the target image data is stored in the third buffer area.
  • the pre-buffer area includes a first buffer area, which can be denoted the RAW buffer; a second buffer area, which can be denoted the YUV buffer; and a third buffer area, which can be denoted the JPEG buffer.
  • the first buffer area (RAW buffer) is used to buffer raw image data (RAW)
  • the second buffer area (YUV buffer) is used to buffer intermediate image data (YUV)
  • the third buffer area is used to buffer target image data (JPEG).
  • the third cache area may be divided in the running memory, or may be divided in an external storage space such as an HDD hard disk and/or an SD card.
  • When the processor executes the computer instructions, it can obtain the stored amount of data in each of the first buffer area, the second buffer area, and the third buffer area, and adjust the storage capacities of the first buffer area, the second buffer area, and the third buffer area according to those stored amounts.
  • The processor of the unmanned aerial vehicle monitors the stored amounts of the corresponding image data in the first buffer area, the second buffer area, and the third buffer area in real time, and dynamically adjusts the storage capacities of the three areas accordingly. Specifically, if the stored amount of original image data in the first buffer area is small and the first buffer area is relatively free, its storage capacity can be reduced correspondingly; if the stored amount of target image data in the third buffer area is large and the third buffer area is nearly full, its storage capacity is correspondingly increased, ensuring the space utilization efficiency of the pre-buffer area.
  • the process of controlling the image sensor to obtain the original image data when the processor executes the computer instruction includes: locking the image shooting parameters of the image sensor, and obtaining the original image according to the image shooting parameters.
  • Specifically, before continuous shooting starts, the image shooting parameters of the image sensor are locked first, and the original images are obtained according to the locked parameters. This ensures that the multiple images obtained by continuous shooting have a consistent style, and avoids re-determining the image shooting parameters for each image during continuous shooting, which would waste performance and slow down the continuous shooting speed.
  • the image shooting parameters include any one or a combination of the following: image exposure parameters, image focus parameters, and image white balance parameters.
  • Image shooting parameters generally include image exposure parameters, which affect the exposure (brightness) of the image; image focus parameters, which affect the focus position of the final target image data; and image white balance parameters, which affect the overall color tone of the image.
  • the third aspect of the present application provides a control terminal of an unmanned aerial vehicle.
  • The control terminal of the unmanned aerial vehicle includes a display screen, a memory, a processor, and computer instructions stored in the memory and executable by the processor.
  • When the processor executes the computer instructions, the following is implemented: sending a continuous shooting instruction to the UAV to control the image sensor provided on the UAV to obtain original image data; and receiving the image data generated based on the original image data and controlling the display screen to display the image data.
  • the control terminal of the unmanned aerial vehicle is used to control the operation of the unmanned aerial vehicle.
  • Specifically, the control terminal of the unmanned aerial vehicle sends a continuous shooting instruction to the unmanned aerial vehicle to control the image sensor provided on the unmanned aerial vehicle to obtain image data.
  • the unmanned aerial vehicle receives the continuous shooting instruction from a terminal such as the control terminal or a mobile phone.
  • A pre-buffer area is divided in the unmanned aerial vehicle's running memory (RAM, Random Access Memory), and the size of the pre-buffer area can be determined according to the shooting interval and the number of shots corresponding to the continuous shooting instruction.
  • the image sensor starts to obtain the image data of the first photo, and after obtaining the image data of the first photo, the image data is stored in the pre-buffer area.
  • The UAV's processor synchronously obtains the image files in the pre-buffer area and sends them to the control terminal of the UAV; after the control terminal of the UAV receives the image data, it displays the received image data on the screen.
  • the continuous shooting instruction includes a continuous shooting interval duration
  • the unmanned aerial vehicle controls the image sensor to obtain image data according to the interval duration
  • Specifically, the continuous shooting instruction includes the continuous shooting interval, that is, the time interval between shooting the Nth image and the N+1th image, and the unmanned aerial vehicle is controlled to continuously acquire image data at this interval, realizing continuous shooting.
  • the display screen includes a first display area and a second display area, and when the processor executes the computer instructions, the following is implemented: continuously receiving real-time image data sent by the unmanned aerial vehicle; controlling the display screen to display the real-time image data in the first display area; and controlling the display screen to display the image data in the second display area.
  • the display screen of the control terminal of the unmanned aerial vehicle includes a first display area and a second display area.
  • Real-time image data is displayed in the first display area, realizing the Live view function, and the image data is displayed in the second display area, realizing the Quick view function.
  • the second display area is within the first display area.
  • the fourth aspect of the present application provides a shooting method, including: receiving a continuous shooting instruction, and dividing a pre-buffer area in the running memory of the shooting device according to the continuous shooting instruction; acquiring original image data, and storing the original image data in the pre-buffer area; and displaying the image data generated from the original image data.
  • Specifically, the pre-buffer area is divided in the running memory (RAM, Random Access Memory) according to the continuous shooting instruction, and its size can be determined according to the shooting interval and the number of shots corresponding to the continuous shooting instruction.
  • the image sensor starts to obtain the original image data of the first photo (such as the original image file in RAW format), and after obtaining the original image data of the first photo, the original image data is stored in the pre-buffer area in.
  • because the pre-buffer area is divided from the running memory, it has a very high write speed; compared with current solutions in which the original image data is written directly to storage (such as an HDD or SD memory card, which has larger capacity but a slower write speed), data is written faster and the write completes sooner, so the next photo can be taken sooner, increasing the continuous shooting speed.
  • the processor can directly read, from the pre-buffer area, the image data generated from the original image data and control the display screen to display it, realizing the Quick view function during continuous shooting for a better interactive effect.
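The sizing rule above (the pre-buffer size is determined by the shooting interval and number of shots) can be sketched as follows. This is a minimal illustration, not the application's actual formula: the drain-rate model and all names (`pre_buffer_size`, `raw_frame_bytes`, `write_speed_bps`) are assumptions.

```python
import math

def pre_buffer_size(interval_s: float, shot_count: int,
                    raw_frame_bytes: int, write_speed_bps: float) -> int:
    """Estimate a pre-buffer size for a burst of `shot_count` RAW frames.

    Frames arrive every `interval_s` seconds but drain to slow storage at
    `write_speed_bps`; the buffer must hold the worst-case backlog.
    """
    # Fraction of a frame the slow memory can absorb per shot interval.
    drain_frames_per_shot = (write_speed_bps * interval_s) / raw_frame_bytes
    backlog = max(0.0, shot_count * (1.0 - min(1.0, drain_frames_per_shot)))
    # Always reserve at least one frame, and round up to whole frames.
    frames_needed = max(1, math.ceil(backlog))
    return frames_needed * raw_frame_bytes
```

With a fast drain, one frame of headroom suffices; with no drain at all, the whole burst must fit in RAM.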
  • the shooting method further includes: acquiring the continuous shooting interval duration according to the continuous shooting instruction, and acquiring the original image data according to the interval duration.
  • the continuous shooting instruction includes the continuous shooting interval, that is, the time interval between shooting the Nth image and the N+1th image; the image sensor is controlled to acquire original image data according to this interval and store it sequentially in the pre-buffer area to achieve continuous shooting.
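The interval-driven acquisition loop above can be sketched like this; `shoot_burst`, `sensor_read`, and the injectable `sleep`/`clock` hooks are hypothetical names used only for illustration.

```python
import time

def shoot_burst(sensor_read, pre_buffer: list, count: int, interval_s: float,
                sleep=time.sleep, clock=time.monotonic) -> None:
    """Acquire `count` frames, one per `interval_s` seconds, into the pre-buffer."""
    for _ in range(count):
        t0 = clock()
        pre_buffer.append(sensor_read())          # store RAW in the pre-buffer
        remaining = interval_s - (clock() - t0)   # wait out the rest of the slot
        if remaining > 0:
            sleep(remaining)
```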
  • displaying the image data generated from the original image data includes: generating corresponding intermediate image data from the original image data, storing the intermediate image data in the pre-buffer area, and generating image data from the intermediate image data; deleting the original image data from the pre-buffer area, and displaying the image data.
  • the corresponding intermediate image data is generated from the original image data.
  • the intermediate image data can be in YUV (a color encoding) format.
  • the intermediate image data is correspondingly stored in the pre-buffer area, and corresponding image data is generated according to the intermediate image data.
  • the image data is RGB image data.
  • the shooting method further includes: generating corresponding target image data according to the intermediate image data, and storing the target image data in the pre-cache area ; Delete the intermediate image data in the pre-cache area.
  • the corresponding target image data is generated according to the intermediate image data.
  • the target image data can be a JPEG (Joint Photographic Experts Group, a common image format) image file; after the target image data is generated, it is stored in the pre-buffer area, and the corresponding intermediate image data is deleted from the pre-buffer area at the same time to release storage space in the pre-buffer area.
  • the target image data (i.e., JPEG-format data) is only used for storage, and does not need to be displayed.
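The staged RAW → YUV → RGB/JPEG pipeline described above, with each stage's input deleted once it has been consumed, can be sketched as follows. The conversion functions are placeholders (hypothetical names) standing in for real ISP and codec steps.

```python
# Placeholder conversions; real code would demosaic, color-convert, and encode.
def demosaic_to_yuv(raw: bytes) -> bytes:      # RAW -> intermediate (YUV)
    return b"yuv:" + raw

def yuv_to_rgb(yuv: bytes) -> bytes:           # YUV -> displayable image data
    return b"rgb:" + yuv

def yuv_to_jpeg(yuv: bytes) -> bytes:          # YUV -> target (JPEG) for storage
    return b"jpeg:" + yuv

def process_frame(pre_buffer: dict, frame_id: int) -> bytes:
    """Run one frame through the pipeline, freeing each stage as it is consumed."""
    raw = pre_buffer.pop(("raw", frame_id))    # delete RAW once converted
    yuv = demosaic_to_yuv(raw)
    pre_buffer[("yuv", frame_id)] = yuv
    rgb = yuv_to_rgb(yuv)                      # displayed for Quick view
    jpeg = yuv_to_jpeg(pre_buffer.pop(("yuv", frame_id)))  # delete YUV
    pre_buffer[("jpeg", frame_id)] = jpeg      # queued for slow storage
    return rgb
```

After the frame is processed, only the JPEG remains in the pre-buffer, waiting to be flushed to slow storage.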
  • the shooting method further includes: acquiring image processing information according to the continuous shooting instruction; and encoding the intermediate image data according to the image processing information to generate target image data.
  • the continuous shooting instruction includes image processing information, which may specifically include the imaging direction of the target image data (such as forward, reverse, horizontal, vertical, or mirror flip, etc.).
  • the processor of the camera encodes the intermediate image according to the image processing information, finally obtaining target image data consistent with the image processing information.
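Applying the imaging direction from the image processing information before encoding might look like the sketch below. Only a subset of the directions mentioned above is shown, and the direction names and function name are assumptions for illustration.

```python
def apply_direction(pixels: list[list[int]], direction: str) -> list[list[int]]:
    """Reorient a row-major pixel grid according to an imaging direction."""
    if direction == "forward":
        return pixels
    if direction == "reverse":                  # 180-degree rotation
        return [row[::-1] for row in pixels[::-1]]
    if direction == "mirror":                   # horizontal (mirror) flip
        return [row[::-1] for row in pixels]
    raise ValueError(f"unknown imaging direction: {direction}")
```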
  • the shooting method further includes: storing each target image data in the memory according to the generation order of the multiple target image data, and correspondingly deleting the target image data from the pre-buffer area.
  • the multiple target image data in the pre-buffer area are sequentially stored in the memory according to the generation order of the multiple target image data.
  • the data writing speed of the memory is relatively slow, so as continuous shooting proceeds, the obtained target image data piles up in the pre-buffer area; a to-be-stored queue is generated according to the time order in which the multiple target image data were generated (i.e., the shooting order), and the multiple target images are stored in the memory sequentially according to this queue; whenever a target image data in the queue is successfully stored in the memory, it is deleted from the queue to release space in the pre-buffer area.
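The to-be-stored queue described above behaves like a FIFO that frees pre-buffer space only after each write to slow storage succeeds. A minimal sketch, with `flush_to_storage` and the `write` callback as assumed names:

```python
from collections import deque

def flush_to_storage(queue: deque, write) -> list:
    """Flush queued JPEGs to slow storage in shooting order.

    `write(frame_id, data)` returns True on success; on failure the frame
    stays at the head of the queue to be retried later.
    """
    stored = []
    while queue:
        frame_id, jpeg = queue[0]          # oldest shot first (FIFO)
        if not write(frame_id, jpeg):      # write failed: keep it, retry later
            break
        queue.popleft()                    # success: free pre-buffer space
        stored.append(frame_id)
    return stored
```

A failed write leaves the remainder of the queue intact, so shooting order is preserved across retries.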
  • the pre-cache area includes a first cache area, a second cache area, and a third cache area
  • the original image data is stored in the first cache area
  • the intermediate image data is stored in the second cache area
  • the target image data is stored in the third buffer area.
  • the pre-buffer area includes a first buffer area, which can be denoted as the RAW buffer, a second buffer area, which can be denoted as the YUV buffer, and a third buffer area, which can be denoted as the JPEG buffer.
  • the first buffer area (RAW buffer) is used to buffer raw image data (RAW)
  • the second buffer area (YUV buffer) is used to buffer intermediate image data (YUV)
  • the third buffer area is used to buffer target image data (JPEG).
  • the third cache area may be divided in the running memory, or may be divided in an external storage space such as an HDD hard disk and/or an SD card.
  • the shooting method further includes: acquiring the stored amount of data in each of the first buffer area, the second buffer area, and the third buffer area, and adjusting the storage capacities of the first buffer area, the second buffer area, and the third buffer area according to the stored amounts.
  • the stored amounts of the corresponding image data in the first, second, and third buffer areas are monitored in real time, and the storage capacities of the first, second, and third buffer areas are dynamically adjusted according to these stored amounts. Specifically, if the stored amount of original image data in the first buffer area is small and the first buffer area is relatively free, the storage capacity of the first buffer area can be reduced accordingly; if the stored amount of target image data in the third buffer area is large, the third buffer area will be nearly full, and its storage capacity is correspondingly increased to ensure the space utilization efficiency of the pre-buffer area.
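The dynamic adjustment above shifts capacity from the most free area to the most congested one. A hedged sketch; the rebalancing step size and the function name `rebalance` are illustrative assumptions:

```python
def rebalance(capacity: dict, used: dict, step: int = 1) -> dict:
    """Move `step` units of capacity from the freest area to the fullest one."""
    free = {k: capacity[k] - used[k] for k in capacity}
    donor = max(free, key=free.get)        # area with the most free space
    taker = min(free, key=free.get)        # area closest to full
    if donor != taker and free[donor] >= step:
        capacity = dict(capacity)          # leave the input mapping untouched
        capacity[donor] -= step
        capacity[taker] += step
    return capacity
```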
  • the step of acquiring the original image data specifically includes: locking the image shooting parameters, and acquiring the original image according to the image shooting parameters.
  • the image shooting parameters are first locked, and the original images are acquired according to the locked parameters, which ensures that the styles of the multiple images obtained in continuous shooting are consistent, and avoids re-determining the image shooting parameters for each image taken during continuous shooting, which would waste performance and slow down the continuous shooting speed.
  • the image shooting parameters include any one or a combination of the following: image exposure parameters, image focus parameters, and image white balance parameters.
  • image shooting parameters generally include image exposure parameters, which affect the exposure (brightness) of the image; image focus parameters, which affect the focus position in the final target image data; and image white balance parameters, which affect the overall color tone of the image.
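Locking these parameters once per burst, rather than re-metering for every frame, can be sketched as follows; `auto_meter`, `capture`, and the parameter keys mirror the list above but are otherwise assumed names.

```python
def lock_parameters(auto_meter) -> dict:
    """Evaluate auto exposure/focus/white balance once, then reuse the result."""
    return {
        "exposure": auto_meter("exposure"),
        "focus": auto_meter("focus"),
        "white_balance": auto_meter("white_balance"),
    }

def capture_burst(auto_meter, capture, count: int) -> list:
    params = lock_parameters(auto_meter)            # metered exactly once
    return [capture(params) for _ in range(count)]  # same style every frame
```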
  • the step of displaying the image data generated from the original image data further includes: continuously acquiring real-time image data; displaying the real-time image data in the first display area, and displaying the image data in the second display area.
  • the display screen includes a first display area and a second display area.
  • the real-time image data is displayed in the first display area to realize the Live view function, and the image data is displayed in the second display area to realize the Quick view function.
  • the second display area is within the first display area.
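Nesting the second display area (Quick view) inside the first (Live view), picture-in-picture style, might be laid out as below; the corner placement and size fraction are illustrative assumptions.

```python
def quick_view_rect(first_w: int, first_h: int, frac: float = 0.25):
    """Place the Quick view box in the bottom-right corner of the Live view."""
    w, h = int(first_w * frac), int(first_h * frac)
    x, y = first_w - w, first_h - h     # anchor at the bottom-right corner
    return (x, y, w, h)
```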
  • Fig. 1 shows a structural block diagram of a photographing device according to an embodiment of the present application
  • Fig. 2 shows a structural block diagram of an unmanned aerial vehicle according to an embodiment of the present application
  • Fig. 3 shows a structural block diagram of an unmanned aerial vehicle control terminal according to an embodiment of the present application
  • Fig. 4 shows a flowchart of a photographing method according to an embodiment of the present application
  • Fig. 5 shows a flowchart of a shooting method according to another embodiment of the present application.
  • Fig. 6 shows a flowchart of a shooting method according to another embodiment of the present application.
  • Fig. 7 shows a flowchart of a shooting method according to still another embodiment of the present application.
  • Fig. 8 shows a flowchart of a photographing method according to still another embodiment of the present application.
  • Fig. 9 shows a flowchart of a photographing method according to still another embodiment of the present application.
  • Fig. 10 shows a flowchart of a photographing method according to still another embodiment of the present application.
  • Fig. 11 shows a flowchart of a shooting method according to still another embodiment of the present application.
  • the following describes the photographing device, the unmanned aerial vehicle, the control terminal and the photographing method of the unmanned aerial vehicle according to some embodiments of the present application with reference to FIGS. 1 to 11.
  • a photographing device 100 is provided.
  • the photographing device includes an image sensor 102, a display screen 104, a running memory 106, a memory 108, a processor 110, and computer instructions stored on the memory and executable by the processor.
  • when the processor 110 executes the computer instructions, it implements: receiving a continuous shooting instruction, and dividing a pre-buffer area in the running memory according to the continuous shooting instruction; controlling the image sensor to acquire original image data, and storing the original image data in the pre-buffer area; and controlling the display screen to display image data generated from the original image data.
  • when the shooting device 100 receives a continuous shooting instruction, before shooting starts, a pre-buffer area is first divided in the running memory 106 (RAM, random access memory) of the shooting device 100 according to the continuous shooting instruction; the size of the pre-buffer area can be determined according to the shooting interval and number of shots corresponding to the continuous shooting instruction.
  • the image sensor 102 then begins to acquire the original image data of the first photo (such as a RAW-format original image file), and after it is acquired, the original image data is stored in the pre-buffer area.
  • because the pre-buffer area is divided from the running memory 106, it has a very high write speed; compared with current solutions in which the original image data is written directly to the memory 108 (such as an HDD or SD memory card, which has larger capacity but a slower write speed), data is written faster and the write completes sooner, so the next photo can be taken sooner, increasing the continuous shooting speed.
  • the processor 110 of the camera 100 can directly read, from the pre-buffer area, the image data generated from the original image data, and control the display screen 104 to display it, realizing the Quick view function in continuous shooting so that the shooting device 100 has a better interactive effect.
  • the process of controlling the image sensor 102 to acquire the original image data includes: acquiring the continuous shooting interval according to the continuous shooting instruction, and controlling the image sensor 102 to acquire the original image data according to the interval.
  • the continuous shooting instruction includes the continuous shooting interval, that is, the time interval between shooting the Nth image and the N+1th image; the image sensor 102 is controlled to acquire the original image data according to this interval, and the data is sequentially stored in the pre-buffer area to achieve continuous shooting.
  • when the processor 110 executes computer instructions, it implements: generating corresponding intermediate image data from the original image data, storing the intermediate image data in the pre-buffer area, and generating image data from the intermediate image data; deleting the original image data from the pre-buffer area, and displaying the image data.
  • the corresponding intermediate image data is generated from the original image data.
  • the intermediate image data can be in YUV (a color encoding) format.
  • the intermediate image data is correspondingly stored in the pre-buffer area, and corresponding image data is generated according to the intermediate image data.
  • the image data is RGB image data.
  • at the same time as the image data (the RGB image data) is displayed, the corresponding original image data is deleted from the pre-buffer area to release the storage space of the pre-buffer area.
  • when the processor 110 executes a computer instruction, it implements: generating corresponding target image data from the intermediate image data, and storing the target image data in the pre-buffer area; deleting the intermediate image data from the pre-buffer area.
  • the corresponding target image data is generated according to the intermediate image data.
  • the target image data can be a JPEG (Joint Photographic Experts Group, a common image format) image file; after the target image data is generated, it is stored in the pre-buffer area, and the corresponding intermediate image data is deleted from the pre-buffer area at the same time to release storage space in the pre-buffer area.
  • the target image data (i.e., JPEG-format data) is only used for storage, and does not need to be displayed.
  • the photographing device 100 further includes an encoder.
  • the process of generating corresponding target image data from the intermediate image data includes: acquiring image processing information according to the continuous shooting instruction; and, according to the image processing information, controlling the encoder to encode the intermediate image data to generate the target image data.
  • the continuous shooting instruction includes image processing information, which may specifically include the imaging direction of the target image data (such as forward, reverse, horizontal, vertical, or mirror flip, etc.).
  • the processor of the camera 100 encodes the intermediate image according to the image processing information, finally obtaining target image data that matches the image processing information.
  • each target image data is stored in the memory 108 according to the generation order of the multiple target image data, and the target image data is correspondingly deleted from the pre-buffer area.
  • the multiple target image data in the pre-buffer area are sequentially stored in the memory 108 in the order of generation of the multiple target image data.
  • the data writing speed of the memory 108 is relatively slow, so as continuous shooting proceeds, the obtained target image data piles up in the pre-buffer area; a to-be-stored queue is generated according to the time order in which the multiple target image data were generated (i.e., the shooting order), and the multiple target images are stored in the memory 108 sequentially according to this queue; after each target image data in the queue is successfully stored in the memory 108, it is deleted from the queue to release space in the pre-buffer area.
  • the pre-cache area includes a first cache area, a second cache area, and a third cache area
  • the original image data is stored in the first cache area
  • the intermediate image data is stored in the second buffer area, and the target image data is stored in the third buffer area.
  • the pre-buffer area includes a first buffer area, which can be denoted as the RAW buffer, a second buffer area, which can be denoted as the YUV buffer, and a third buffer area, which can be denoted as the JPEG buffer.
  • the first buffer area (RAW buffer) is used to buffer raw image data (RAW)
  • the second buffer area (YUV buffer) is used to buffer intermediate image data (YUV)
  • the third buffer area is used to buffer target image data (JPEG).
  • the third cache area may be divided in the running memory, or may be divided in an external storage space such as an HDD hard disk and/or an SD card.
  • when the processor 110 executes a computer instruction, it implements: acquiring the stored amount of data in each of the first buffer area, the second buffer area, and the third buffer area, and adjusting the storage capacities of the first buffer area, the second buffer area, and the third buffer area according to the stored amounts.
  • the processor 110 of the camera 100 monitors the stored amounts of the corresponding image data in the first, second, and third buffer areas in real time, and dynamically adjusts the storage capacities of the first, second, and third buffer areas according to these stored amounts. Specifically, if the stored amount of original image data in the first buffer area is small and the first buffer area is relatively free, the storage capacity of the first buffer area can be reduced accordingly; if the stored amount of target image data in the third buffer area is large, the third buffer area will be nearly full, and its storage capacity is correspondingly increased to ensure the space utilization efficiency of the pre-buffer area.
  • the process of controlling the image sensor 102 to acquire the original image data includes: locking the image shooting parameters of the image sensor 102, and acquiring the original image according to the image shooting parameters.
  • the image shooting parameters of the image sensor 102 are locked first, and the original images are acquired according to the locked parameters, which ensures that the styles of the multiple images obtained by continuous shooting are consistent, and avoids re-determining the image shooting parameters for each image taken during continuous shooting, which would waste performance and slow down the continuous shooting speed.
  • the image shooting parameters include any one or a combination of the following: image exposure parameters, image focus parameters, and image white balance parameters.
  • image shooting parameters generally include image exposure parameters, which affect the exposure (brightness) of the image; image focus parameters, which affect the focus position of the photographed object in the final target image data; and image white balance parameters, which affect the overall color tone of the image.
  • the display screen 104 includes a first display area and a second display area.
  • when the processor 110 executes a computer instruction, it implements: controlling the image sensor 102 to continuously acquire real-time image data; controlling the display screen 104 to display the real-time image data in the first display area, and controlling the display screen 104 to display the image data in the second display area.
  • the display screen 104 of the photographing device 100 includes a first display area and a second display area.
  • real-time image data is displayed in the first display area, realizing the Live view function, and the image data is displayed in the second display area, realizing the Quick view function.
  • the second display area is within the first display area.
  • an unmanned aerial vehicle 200 includes an image sensor 202, a running memory 204, a memory 206, a processor 208, and computer instructions stored on the memory and executable by the processor.
  • when the processor 208 executes the computer instructions, it implements: receiving a continuous shooting instruction, and dividing a pre-buffer area in the running memory according to the continuous shooting instruction; controlling the image sensor to acquire original image data, storing the original image data in the pre-buffer area, and sending image data generated from the original image data to the control terminal.
  • the unmanned aerial vehicle 200 receives a continuous shooting instruction from a control terminal, a mobile phone, or another terminal.
  • after receiving the continuous shooting instruction, a pre-buffer area is first divided in the running memory 204 (RAM, random access memory) of the unmanned aerial vehicle 200 according to the continuous shooting instruction; the size of the pre-buffer area can be determined according to the shooting interval and number of shots corresponding to the continuous shooting instruction.
  • the image sensor 202 starts to obtain the original image data of the first photo (such as the original image file in RAW format), and after obtaining the original image data of the first photo, the original image data is stored in the pre-cache Area.
  • because the pre-buffer area is divided from the running memory 204, it has a very high write speed; compared with current solutions in which the original image data is written directly to the memory 206 (such as an HDD or SD memory card, which has larger capacity but a slower write speed), data is written faster and the write completes sooner, so the next photo can be taken sooner, increasing the continuous shooting speed.
  • when the processor 208 executes a computer instruction, the process of controlling the image sensor 202 to acquire the original image data includes: acquiring the continuous shooting interval according to the continuous shooting instruction, and controlling the image sensor 202 to acquire the original image data according to the interval.
  • the continuous shooting instruction includes the continuous shooting interval, that is, the time interval between shooting the Nth image and the N+1th image; the image sensor 202 is controlled to acquire the original image data according to this interval, and the data is sequentially stored in the pre-buffer area to achieve continuous shooting.
  • when the processor 208 executes computer instructions, it implements: generating corresponding intermediate image data from the original image data, storing the intermediate image data in the pre-buffer area, and generating image data from the intermediate image data; deleting the original image data from the pre-buffer area, and sending the image data to the control terminal.
  • the corresponding intermediate image data is generated from the original image data.
  • the intermediate image data can be in YUV (a color encoding) format.
  • the intermediate image data is correspondingly stored in the pre-cache area, and the corresponding original image data is deleted in the pre-cache area at the same time to release the storage space in the pre-cache area.
  • when the processor 208 executes computer instructions, it implements: generating corresponding target image data from the intermediate image data, and storing the target image data in the pre-buffer area; deleting the intermediate image data from the pre-buffer area.
  • the corresponding target image data is generated according to the intermediate image data.
  • the target image data can be a JPEG (Joint Photographic Experts Group, a common image format) image file; after the target image data is generated, it is stored in the pre-buffer area, and the corresponding intermediate image data is deleted from the pre-buffer area at the same time to release storage space in the pre-buffer area.
  • the UAV 200 further includes an encoder.
  • the process of generating corresponding target image data from the intermediate image data includes: acquiring image processing information according to the continuous shooting instruction; and, according to the image processing information, controlling the encoder to encode the intermediate image data to generate the target image data.
  • the continuous shooting instruction includes image processing information, which may specifically include the imaging direction of the target image data (such as forward, reverse, lateral, vertical, or mirror flip, etc.).
  • the processor of the UAV 200 encodes the intermediate image according to the image processing information, finally obtaining target image data consistent with the image processing information.
  • the target image data (ie, JPEG format data) is only used for storage, and there is no need to display the target image data.
  • each target image data is stored in the memory 206 according to the generation order, and the target image data is correspondingly deleted from the pre-buffer area.
  • the multiple target image data in the pre-buffer area are sequentially stored in the memory 206 according to the generation order of the multiple target image data.
  • the data writing speed of the memory 206 is relatively slow, so as continuous shooting proceeds, the obtained target image data piles up in the pre-buffer area; a to-be-stored queue is generated according to the time order in which the multiple target image data were generated (i.e., the shooting order), and the multiple target images are stored in the memory 206 sequentially according to this queue; after each target image data in the queue is successfully stored in the memory 206, it is deleted from the queue to release space in the pre-buffer area.
  • the pre-cache area includes a first cache area, a second cache area, and a third cache area.
  • the original image data is stored in the first cache area
  • the intermediate image data is stored in the second cache area.
  • the target image data is stored in the third buffer area.
  • the pre-buffer area includes a first buffer area, which can be denoted as the RAW buffer, a second buffer area, which can be denoted as the YUV buffer, and a third buffer area, which can be denoted as the JPEG buffer.
  • the first buffer area (RAW buffer) is used to buffer raw image data (RAW)
  • the second buffer area (YUV buffer) is used to buffer intermediate image data (YUV)
  • the third buffer area is used to buffer target image data (JPEG).
  • the third cache area may be divided in the running memory, or may be divided in an external storage space such as an HDD hard disk and/or an SD card.
  • when the processor 208 executes a computer instruction, it implements: acquiring the stored amount of data in each of the first buffer area, the second buffer area, and the third buffer area, and adjusting the storage capacities of the first buffer area, the second buffer area, and the third buffer area according to the stored amounts.
  • the processor 208 of the unmanned aerial vehicle monitors the stored amounts of the corresponding image data in the first, second, and third buffer areas in real time, and dynamically adjusts the storage capacities of the first, second, and third buffer areas according to these stored amounts. Specifically, if the stored amount of original image data in the first buffer area is small and the first buffer area is relatively free, the storage capacity of the first buffer area can be reduced accordingly; if the stored amount of target image data in the third buffer area is large, the third buffer area will be nearly full, and its storage capacity is correspondingly increased to ensure the space utilization efficiency of the pre-buffer area.
  • the process of controlling the image sensor 202 to obtain original image data includes: locking the image capturing parameters of the image sensor 202, and obtaining the original image according to the image capturing parameters.
  • the image shooting parameters of the image sensor 202 are locked first, and the original images are acquired according to the locked parameters, which ensures that the styles of the multiple images obtained by continuous shooting are consistent, and avoids re-determining the image shooting parameters for each image taken during continuous shooting, which would waste performance and slow down the continuous shooting speed.
  • the image shooting parameters include any one or a combination of the following: image exposure parameters, image focus parameters, and image white balance parameters.
  • image shooting parameters generally include image exposure parameters, which affect the exposure (brightness) of the image; image focus parameters, which affect the focus position of the photographed object in the final target image data; and image white balance parameters, which affect the overall color tone of the image.
  • an unmanned aerial vehicle control terminal 300 includes a display screen 302, a memory 304, a processor 306, and computer instructions stored on the memory and executable by the processor.
  • when the processor 306 executes the computer instructions, it implements: sending a continuous shooting instruction to the UAV to control the image sensor provided on the UAV to acquire original image data; receiving image data generated from the original image data, and controlling the display screen to display the image data.
  • the control terminal 300 of the unmanned aerial vehicle is used to control the operation of the unmanned aerial vehicle.
  • the control terminal 300 of the unmanned aerial vehicle sends a continuous shooting instruction to the unmanned aerial vehicle to control the image sensor provided on the unmanned aerial vehicle to acquire image data.
  • the unmanned aerial vehicle receives the continuous shooting instruction from the control terminal 300, a mobile phone, or another terminal.
  • a pre-buffer area is divided in the unmanned aerial vehicle's running memory (RAM, Random Access Memory), and the size of the pre-buffer area can be determined according to the shooting interval and the number of shots corresponding to the continuous shooting instruction.
  • the image sensor starts to acquire the image data of the first photo, and after acquiring the image data of the first photo, the image data is stored in the pre-buffer area.
  • the processor 306 of the UAV synchronously acquires the image file in the pre-buffer area and sends it to the control terminal 300 of the UAV; after the control terminal 300 receives the image data, it displays the received image data on the display screen 302.
  • the continuous shooting instruction includes a continuous shooting interval duration
  • the unmanned aerial vehicle controls the image sensor to obtain image data according to the interval duration.
  • the continuous shooting command includes the continuous shooting interval duration, that is, the length of time between shooting the Nth image and the (N+1)th image; the UAV is controlled to continuously acquire image data at this interval to achieve continuous shooting.
  • the display screen 302 includes a first display area and a second display area.
  • when the processor 306 executes the computer instructions, it implements: continuously receiving real-time image data sent by the UAV; controlling the display screen 302 to display the real-time image data in the first display area, and controlling the display screen 302 to display the image data in the second display area.
  • the display screen 302 of the control terminal 300 of the unmanned aerial vehicle includes a first display area and a second display area.
  • the real-time image data is displayed in the first display area to realize the Live view function, and the image data is displayed in the second display area to realize the Quick view function.
  • the second display area is within the first display area.
  • a shooting method including: S402 Receive a continuous shooting instruction, and divide a pre-cache area in the running memory of the shooting device according to the continuous shooting instruction; S404 Acquire original image data, and store the original image data in the pre-cache area; S406 Display image data generated according to the original image data.
  • the pre-cache area is divided in the running memory (RAM, Random Access Memory) according to the continuous shooting command.
  • its size can be determined according to the shooting interval and the number of shots corresponding to the continuous shooting command.
  • the image sensor starts to obtain the original image data of the first photo (such as an original image file in RAW format), and after obtaining the original image data of the first photo, the original image data is stored in the pre-buffer area.
  • since the pre-cache area is divided from the running memory, it has a very high write speed. Compared with the current technical solution of storing the original image data directly in storage (such as an HDD hard disk or SD memory card, which have larger capacity but slower write speeds), the data write speed is faster and the time required for writing is shorter, so the next photo can be taken sooner, increasing the continuous shooting speed.
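The trade-off above — fast RAM absorbing frames faster than slow storage can drain them — also suggests how the pre-cache could be sized. The sketch below is only an illustration under assumptions not in the source: the function name, the idea of sizing by the backlog between sensor output rate and storage write rate, and the one-frame headroom are all hypothetical.

```python
def precache_size_bytes(interval_s, shot_count, raw_frame_bytes,
                        write_speed_bytes_per_s):
    """Rough sizing sketch: frames enter the RAM pre-cache once per
    interval while slower storage drains them, so reserve room for
    the backlog that accumulates over the whole burst."""
    produced_per_s = raw_frame_bytes / interval_s           # bytes/s entering RAM
    drained_per_s = min(write_speed_bytes_per_s, produced_per_s)
    backlog_per_s = produced_per_s - drained_per_s          # net growth in RAM
    burst_duration = interval_s * shot_count                # length of the burst
    # backlog accumulated over the burst, plus one frame of headroom
    return int(backlog_per_s * burst_duration + raw_frame_bytes)
```

With a 0.5 s interval, 10 shots, 40 MB RAW frames, and a 30 MB/s card, the backlog grows at 50 MB/s for 5 s, so roughly 290 MB of pre-cache would be needed; when storage keeps up, only one frame of headroom remains.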
  • since the original image data is stored in the pre-cache area (running memory), its read speed is also fast, so the processor can directly read the image data generated from the original image data in the pre-cache area and control the display screen to display it, realizing the Quick view function during continuous shooting for a better interactive experience.
  • the shooting method further includes:
  • S502 Acquire the continuous shooting interval duration according to the continuous shooting instruction, and acquire the original image data at that interval.
  • the continuous shooting instruction includes the continuous shooting interval duration, that is, the time interval between shooting the Nth image and the (N+1)th image; the image sensor is controlled to acquire original image data at this interval and store it sequentially in the pre-buffer area to achieve continuous shooting.
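The interval-driven capture loop described above can be sketched as follows. This is a minimal illustration, not the patented implementation: `capture_frame` and `store_to_precache` are hypothetical callables standing in for the sensor and the RAM pre-cache write, and the drift-free scheduling via `time.monotonic` is an assumption.

```python
import time

def continuous_shoot(capture_frame, store_to_precache, interval_s, shot_count):
    """Capture `shot_count` frames, one every `interval_s` seconds,
    storing each into the pre-cache as soon as it is captured."""
    next_shot = time.monotonic()
    for _ in range(shot_count):
        frame = capture_frame()           # sensor produces RAW data
        store_to_precache(frame)          # fast write into the RAM pre-cache
        next_shot += interval_s           # schedule the (N+1)th shot
        time.sleep(max(0.0, next_shot - time.monotonic()))
```

Advancing `next_shot` by a fixed step (rather than sleeping a fixed amount after each capture) keeps the Nth-to-(N+1)th spacing constant even if a single capture runs long.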
  • displaying image data generated based on the original image data further includes:
  • S602 Generate corresponding intermediate image data according to the original image data, store the intermediate image data in the pre-buffer area, and generate image data according to the intermediate image data;
  • the corresponding intermediate image data is generated according to the original data.
  • the intermediate image data can be in YUV (a color coding format) format.
  • the intermediate image data is correspondingly stored in the pre-buffer area, and corresponding image data is generated according to the intermediate image data.
  • the image data is RGB image data.
  • the shooting method further includes:
  • S702 Generate corresponding target image data according to the intermediate image data, and store the target image data in a pre-cache area;
  • the corresponding target image data is generated according to the intermediate image data.
  • the target image data can be an image file in JPEG (Joint Photographic Experts Group, a common image format) format; after the target image data is generated, it is stored in the pre-cache area, and the corresponding intermediate image data is deleted from the pre-cache area at the same time to release storage space.
  • the target image data (i.e., JPEG format data) is only used for storage; there is no need to display the target image data.
  • the shooting method further includes:
  • S802 Acquire image processing information according to the continuous shooting instruction;
  • S804 Perform encoding processing on the intermediate image data according to the image processing information to generate target image data.
  • the continuous shooting instruction includes image processing information, which can specifically include the imaging direction of the target image data (such as forward, reverse, horizontal, vertical, or mirror flip, etc.).
  • the processor of the shooting device encodes the intermediate image data according to the image processing information, finally obtaining target image data consistent with the image processing information.
  • the shooting method further includes:
  • S902 Store each target image data in the memory according to the generation order of the multiple target image data, and correspondingly delete the target image data from the pre-cache area.
  • the multiple target image data in the pre-buffer area are sequentially stored in the memory according to the generation order of the multiple target image data.
  • compared with the pre-cache area (running memory), the data write speed of the memory is relatively slow, so as continuous shooting proceeds, the target image data accumulates in the pre-cache area. A to-be-stored queue is generated according to the order in which the multiple target image data were generated (i.e., the shooting order), and the target images are stored in the memory sequentially according to this queue. Whenever a target image data in the queue is successfully stored in the memory, it is deleted from the queue to release space in the pre-buffer area.
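The to-be-stored queue above can be sketched with a background writer thread. This is a schematic under assumptions not in the source: the class and method names are hypothetical, and `write_to_storage` stands in for the slow SD-card/HDD write.

```python
import queue
import threading

class StorageWriter:
    """Drains target images into slow storage in shooting order; each
    image leaves the queue (freeing pre-cache space) once safely written."""
    def __init__(self, write_to_storage):
        self._queue = queue.Queue()            # the to-be-stored queue (FIFO)
        self._write = write_to_storage
        self._worker = threading.Thread(target=self._drain, daemon=True)
        self._worker.start()

    def submit(self, name, jpeg_bytes):
        self._queue.put((name, jpeg_bytes))    # enqueue in generation order

    def _drain(self):
        while True:
            name, data = self._queue.get()
            self._write(name, data)            # slow write (e.g. SD card)
            self._queue.task_done()            # frame may now be released

    def wait(self):
        self._queue.join()                     # block until everything is stored
```

Because the queue is FIFO and drained by a single worker, images always reach storage in the order they were shot, while capture and encoding continue unblocked.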
  • the pre-cache area includes a first cache area, a second cache area, and a third cache area.
  • the original image data is stored in the first cache area
  • the intermediate image data is stored in the second cache area.
  • the target image data is stored in the third buffer area.
  • the pre-buffer area includes a first buffer area, which can be denoted as RAW buffer, and a second buffer area, which can be denoted as YUV buffer, and the third buffer area, which can be a JPEG buffer.
  • the first buffer area (RAW buffer) is used to buffer raw image data (RAW)
  • the second buffer area (YUV buffer) is used to buffer intermediate image data (YUV)
  • the third buffer area is used to buffer target image data (JPEG).
  • the third cache area may be divided in the running memory, or may be divided in an external storage space such as an HDD hard disk and/or an SD card.
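The three-area layout described above can be sketched as a trio of bounded FIFO buffers. This is only an illustrative model, not the patented data structure; the class, method names, and slot-count parameters are hypothetical, and strings stand in for real image payloads.

```python
from collections import deque

class PreCache:
    """Three bounded FIFO areas of the RAM pre-cache, one per image stage."""
    def __init__(self, raw_slots, yuv_slots, jpeg_slots):
        self.raw = deque(maxlen=raw_slots)    # RAW buffer: original image data
        self.yuv = deque(maxlen=yuv_slots)    # YUV buffer: intermediate data
        self.jpeg = deque(maxlen=jpeg_slots)  # JPEG buffer: target image data

    def cache_raw(self, frame):
        self.raw.append(frame)

    def raw_to_yuv(self, convert):
        # converting frees the RAW slot, mirroring "delete RAW after YUV"
        self.yuv.append(convert(self.raw.popleft()))

    def yuv_to_jpeg(self, encode):
        # encoding frees the YUV slot, mirroring "delete YUV after JPEG"
        self.jpeg.append(encode(self.yuv.popleft()))
```

Each promotion both produces the next-stage data and releases the previous-stage slot, matching the store-then-delete discipline the description applies at every stage.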
  • the shooting method further includes:
  • S1002 Obtain the storage capacity of respective data in the first cache area, the second cache area, and the third cache area;
  • S1004 Adjust the storage capacity of the first cache area, the second cache area, and the third cache area according to the storage capacity.
  • the stored amount of the corresponding image data in the first cache area, the second cache area, and the third cache area is monitored in real time, and the storage capacities of the three cache areas are dynamically adjusted according to the stored amounts.
  • specifically, if the stored amount of original image data in the first cache area is small and the first cache area is relatively idle, its storage capacity can be reduced accordingly; if the stored amount of target image data in the third cache area is large and the third cache area is nearly full, its storage capacity is increased accordingly, so as to ensure efficient use of the pre-cache area.
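One possible rebalancing policy matching the description — shrink idle areas, grow full ones — is sketched below. The proportional-to-fullness rule, the per-area floor, and the function name are all assumptions for illustration; the patent only requires that capacities be adjusted according to stored amounts.

```python
def rebalance(stored, total_slots, floor=2):
    """Redistribute a fixed budget of pre-cache slots among the
    RAW/YUV/JPEG areas in proportion to how full each one is,
    while guaranteeing every area a small floor of slots."""
    weights = [s + 1 for s in stored]                  # +1 so empty areas keep some slots
    scale = (total_slots - floor * len(stored)) / sum(weights)
    new = [floor + int(w * scale) for w in weights]
    # give any rounding remainder to the fullest (most pressured) area
    new[stored.index(max(stored))] += total_slots - sum(new)
    return new
```

For example, with stored amounts `[0, 2, 10]` (idle RAW area, nearly full JPEG area) and a 24-slot budget, the JPEG area receives most of the capacity while the idle RAW area shrinks toward the floor.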
  • the step of obtaining the original image data specifically includes: locking the image shooting parameters, and obtaining the original image according to the image shooting parameters.
  • the image shooting parameters are locked first, and the original image is acquired according to the locked parameters. This ensures that the multiple images obtained by continuous shooting have a consistent style, and avoids re-determining the shooting parameters for every image during continuous shooting, which would waste performance and slow the continuous shooting speed.
  • the image shooting parameters include any one or a combination of the following: image exposure parameters, image focus parameters, and image white balance parameters.
  • image shooting parameters generally include image exposure parameters, which affect the exposure (brightness) of the image; image focus parameters, which determine the focus position of the photographed object in the final target image data; and image white balance parameters, which affect the overall color tone of the image.
  • the step of displaying the image data generated from the original image data further includes: continuously acquiring real-time image data; displaying the real-time image data in the first display area, and displaying the image data in the second display area.
  • the display screen includes a first display area and a second display area.
  • the real-time image data is displayed in the first display area to realize the Live view function, and the image data is displayed in the second display area to realize the Quick view function.
  • the second display area is within the first display area.
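A picture-in-picture layout like this — the Quick view area nested inside the full-screen Live view area — can be computed as below. The quarter-scale size, bottom-right placement, margin, and function name are all hypothetical choices; the source only says the second area is within the first.

```python
def quickview_rect(screen_w, screen_h, scale=0.25, margin=16):
    """Place the Quick view area (second display area) inside the
    full-screen Live view area (first display area), bottom-right,
    returning (x, y, width, height) in pixels."""
    w, h = int(screen_w * scale), int(screen_h * scale)
    x = screen_w - w - margin   # inset from the right edge
    y = screen_h - h - margin   # inset from the bottom edge
    return x, y, w, h
```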
  • the continuous shooting method specifically includes:
  • the camera starts and the screen displays the Live view, and the sensor (image sensor) starts to detect the 3A parameters.
  • the 3A parameters are specifically the image exposure parameters, image focus parameters, and image white balance parameters.
  • the 0.5s shooting interval is determined according to the continuous shooting command.
  • it can also be a smaller shooting interval, such as 0.3s, or a larger shooting interval, such as 0.8s.
  • the image sensor (i.e., the sensor) is configured according to the locked 3A parameters, so as to control the sensor to acquire the original image data with the locked 3A parameters.
  • the sensor acquires and generates original image data, namely RAW images, according to the locked 3A parameters.
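The lock-then-configure step can be sketched as below. The sensor API here (`current_exposure`, `current_focus`, `current_white_balance`, `configure`) is entirely hypothetical — real sensor drivers expose different interfaces — and serves only to show freezing the 3A values once so every frame in the burst uses the same settings.

```python
def lock_3a(sensor):
    """Snapshot the sensor's current 3A values and re-apply them as
    fixed settings, so no frame in the burst re-runs auto algorithms."""
    params = {
        "exposure": sensor.current_exposure(),              # image exposure parameter
        "focus": sensor.current_focus(),                    # image focus parameter
        "white_balance": sensor.current_white_balance(),    # image white balance parameter
    }
    sensor.configure(**params)    # hypothetical driver call applying fixed values
    return params
```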
  • RAW buffer buffers one frame, and repeats S1104;
  • the pre-buffer area is divided in the running memory according to the continuous shooting instruction.
  • the pre-buffer area includes the first buffer area, the second buffer area and the third buffer area.
  • the first buffer area is the RAW buffer; the original image data (RAW image) obtained by the sensor is cached in the RAW buffer.
  • the real-time image obtained by the sensor is continuously displayed in the first area of the display screen.
  • the Quick view image generated according to the acquired RAW image is displayed in the second area of the display screen.
  • the corresponding YUV image is generated according to the RAW image buffered in the RAW buffer.
  • the generated YUV image is buffered in the second buffer area in the pre-buffer area, that is, the YUV buffer.
  • the corresponding RAW image is deleted in the RAW buffer to free up space.
  • the DSP encoder is configured according to the image processing information in the continuous shooting instruction to control the DSP encoder to encode the YUV image.
  • the JPEG image of the target image data is obtained by encoding the YUV image through the DSP encoder.
  • JPEG buffer buffers one frame
  • the generated JPEG image is buffered in the third buffer area in the pre-buffer area, that is, the JPEG buffer.
  • the JPEG images are written into the SD card and saved sequentially in a queue.
  • JPEG buffer releases one frame.
  • the corresponding JPEG image is deleted in the JPEG buffer to free up space.
  • when the 0.5 s timing interval is reached, the camera starts the following photographing process: lock the 3A parameters; stop Liveview; configure the sensor; generate a RAW image and send it to the RAW buffer divided from the running memory; restart Liveview; display the Quick view on the LCD; generate a YUV image from the RAW image and send it to the YUV buffer; release the RAW image generated by this photographing process from the RAW buffer; configure the DSP encoder (which controls the encoding method and is used to encode image data of different shooting modes, such as forward, vertical, and reverse); generate a JPEG image from the YUV image and send it to the JPEG buffer; release the YUV image generated by this photographing process from the YUV buffer; write the JPEG image generated by this photographing process to the SD card for storage; and finally release the JPEG image generated by this photographing process from the JPEG buffer.
  • when the camera starts the 0.5 s timed continuous shooting function, the RAW buffer, YUV buffer, and JPEG buffer each allocate a fixed-size storage area from the hardware memory (the capacities of the three can be set, and can also be dynamically adjusted according to the actual storage situation).
  • the storage area can cache several to dozens of RAW images, YUV images, and JPEG images.
  • the time required for the above process may exceed the 0.5 s continuous shooting interval.
  • the embodiment provided in this application uses a data buffering mechanism to process multiple steps of the above photographing process in parallel: after a RAW image is sent to the RAW buffer, the next photographing process starts immediately, that is, the next photographing process starts before the current one is fully completed. Through buffer-based data caching, different steps of multiple photographing processes can run completely independently, without blocking or waiting on steps of photographing processes started at different moments.
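The buffer-decoupled pipeline described above can be sketched with one thread per stage, coupled only through queues. This is a schematic of the technique, not the patented firmware: the stage/function names are hypothetical, `queue.Queue` stands in for the RAW/YUV/JPEG buffers, and simple callables stand in for the sensor, the YUV converter, the DSP encoder, and the SD-card write.

```python
import queue
import threading

DONE = object()  # end-of-stream marker passed down the pipeline

def stage(src, work, dst=None):
    """Run one pipeline stage in its own thread: consume items from
    `src`, process them, and forward results to `dst` (if any)."""
    def loop():
        while (item := src.get()) is not DONE:
            out = work(item)
            if dst is not None:
                dst.put(out)
        if dst is not None:
            dst.put(DONE)        # propagate shutdown downstream
    t = threading.Thread(target=loop)
    t.start()
    return t

def run_pipeline(raw_frames, to_yuv, to_jpeg, save):
    """Capture never blocks on conversion, encoding, or storage:
    each stage drains its own buffer independently."""
    raw_buf, yuv_buf, jpeg_buf = queue.Queue(), queue.Queue(), queue.Queue()
    threads = [
        stage(raw_buf, to_yuv, yuv_buf),    # RAW buffer  -> YUV buffer
        stage(yuv_buf, to_jpeg, jpeg_buf),  # YUV buffer  -> JPEG buffer
        stage(jpeg_buf, save),              # JPEG buffer -> storage
    ]
    for frame in raw_frames:                # the capture loop only enqueues
        raw_buf.put(frame)
    raw_buf.put(DONE)
    for t in threads:
        t.join()
```

Because each stage only touches its own input buffer, a new capture can begin as soon as the previous RAW frame is enqueued, exactly the "start the next photographing process immediately" behavior described above.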
  • herein, the term "plurality" refers to two or more. Unless otherwise clearly defined, the orientations or positional relationships indicated by terms such as "upper" and "lower" are based on the orientations or positional relationships shown in the drawings; they are only for convenience of describing the application and simplifying the description, and do not indicate or imply that the device or element referred to must have a specific orientation or be constructed and operated in a specific orientation, and therefore cannot be understood as limiting the application. Terms such as "connected", "installed", and "fixed" should be understood in a broad sense; for example, "connected" can be a fixed connection, a detachable connection, or an integral connection, and can be a direct connection or an indirect connection through an intermediate medium. For those of ordinary skill in the art, the specific meanings of the above terms in this application can be understood according to the specific circumstances.


Abstract

This application provides a shooting device, an unmanned aerial vehicle, a control terminal, and a shooting method. The shooting device includes an image sensor, a display screen, running memory, storage, a processor, and computer instructions stored in the storage and executable by the processor. When the processor executes the computer instructions, it implements: receiving a continuous shooting instruction, and dividing a pre-cache area in the running memory according to the continuous shooting instruction; controlling the image sensor to acquire original image data, and storing the original image data in the pre-cache area; and controlling the display screen to display image data generated from the original image data. The technical solution provided by this application increases the continuous shooting speed and realizes the Quick view function during continuous shooting, giving the shooting device a better interactive experience.

Description

Shooting device, unmanned aerial vehicle, control terminal and shooting method

Technical Field

This application relates to the technical field of shooting devices, and in particular to a shooting device, an unmanned aerial vehicle, a control terminal for an unmanned aerial vehicle, and a shooting method.

Background Art

In the related art, a shooting device has a Quick view (quick playback) function: during timed photographing, before the next photo is taken, the display of the shooting device continuously shows the previously completed photo. The shooting device also has a Live view (real-time preview) function: the display of the shooting device continuously shows the viewfinder image acquired by the sensor.

However, due to hardware limitations, current shooting devices have difficulty achieving fast continuous shooting, and have difficulty realizing the Quick view function during fast continuous shooting, so the interactive experience of shooting is poor.

Content of the Application

This application aims to solve at least one of the technical problems existing in the prior art or related art.

To this end, a first aspect of this application provides a shooting device.

A second aspect of this application provides an unmanned aerial vehicle.

A third aspect of this application provides a control terminal for an unmanned aerial vehicle.

A fourth aspect of this application provides a shooting method.

In view of this, the first aspect of this application provides a shooting device. The shooting device includes an image sensor, a display screen, running memory, storage, a processor, and computer instructions stored in the storage and executable by the processor. When the processor executes the computer instructions, it implements: receiving a continuous shooting instruction, and dividing a pre-cache area in the running memory according to the continuous shooting instruction; controlling the image sensor to acquire original image data, and storing the original image data in the pre-cache area; and controlling the display screen to display image data generated from the original image data.

In this technical solution, when the shooting device receives a continuous shooting instruction, before shooting starts, a pre-cache area is first divided in the running memory (RAM, Random Access Memory) of the shooting device according to the instruction; the size of the pre-cache area can be determined according to the shooting interval and the number of shots corresponding to the instruction. After shooting starts, the image sensor acquires the original image data of the first photo (such as an original image file in RAW format) and stores it in the pre-cache area. Since the pre-cache area is divided from the running memory, it has a very high write speed; compared with the current technical solution of storing the original image data directly in storage (such as an HDD (Hard Disk Drive) hard disk or SD (Secure Digital) memory card, which have larger capacity but slower write speeds), the data write speed is faster and the write time is shorter, so the next photo can be taken sooner, increasing the continuous shooting speed. At the same time, since the original image data is stored in the pre-cache area (running memory), its read speed is also fast, so the processor of the shooting device can directly read the image data generated from the original image data in the pre-cache area and control the display screen to display it, realizing the Quick view function during continuous shooting and giving the shooting device a better interactive experience.
In addition, the shooting device in the above technical solution provided by this application may further have the following additional technical features:

In the above technical solution, further, the process by which the processor controls the image sensor to acquire original image data when executing the computer instructions includes: acquiring the continuous shooting interval duration according to the continuous shooting instruction, and controlling the image sensor to acquire original image data at that interval.

In this technical solution, the continuous shooting instruction includes a continuous shooting interval duration, that is, the length of time between shooting the Nth image and the (N+1)th image. The image sensor is controlled to acquire original image data at this interval and store it sequentially in the pre-cache area, so as to achieve continuous shooting.

In any of the above technical solutions, further, when the processor executes the computer instructions, it implements: generating corresponding intermediate image data from the original image data, storing the intermediate image data in the pre-cache area, and generating image data from the intermediate image data; deleting the original image data from the pre-cache area, and displaying the image data.

In this technical solution, after the original data of any image is stored in the pre-cache area, corresponding intermediate image data is generated from the original data. Generally, the intermediate image data can be data in YUV format (a color encoding format). After the intermediate image data is generated, it is stored in the pre-cache area, and corresponding image data is generated from the intermediate image data; optionally, the image data is RGB image data. Finally, while the RGB image data is displayed, the corresponding original image data is deleted from the pre-cache area to release storage space.

In any of the above technical solutions, further, when the processor executes the computer instructions, it implements: generating corresponding target image data from the intermediate image data, and storing the target image data in the pre-cache area; deleting the intermediate image data from the pre-cache area.

In this technical solution, after the intermediate image data of any image (such as data in YUV format) is stored in the pre-cache area, corresponding target image data is generated from the intermediate image data. Generally, the target image data can be an image file in JPEG format (Joint Photographic Experts Group, a common image format). After the target image data is generated, it is stored in the pre-cache area, and the corresponding intermediate image data is deleted from the pre-cache area at the same time to release storage space.

The target image data (i.e., data in JPEG format) is only used for storage; it does not need to be displayed.

In any of the above technical solutions, further, the shooting device also includes an encoder, and the process by which the processor generates corresponding target image data from the intermediate image data when executing the computer instructions includes: acquiring image processing information according to the continuous shooting instruction; controlling the encoder, according to the image processing information, to encode the intermediate image data so as to generate the target image data.

In this technical solution, the continuous shooting instruction includes image processing information, which may specifically include the imaging orientation of the target image data (such as forward, reverse, horizontal, vertical, or mirror flip). The processor of the shooting device encodes the intermediate image data according to the image processing information, finally obtaining target image data consistent with the image processing information.

In any of the above technical solutions, further, when the processor executes the computer instructions, it implements: storing each target image data in the storage in the order in which the multiple target image data were generated, and correspondingly deleting the target image data from the pre-cache area.

In this technical solution, the multiple target image data in the pre-cache area are stored in the storage sequentially in the order in which they were generated. Specifically, compared with the pre-cache area (running memory), the data write speed of the storage is relatively slow, so as continuous shooting proceeds, the target image data accumulates in the pre-cache area. A to-be-stored queue is generated according to the order in which the multiple target image data were generated (i.e., the shooting order), and the target images are stored in the storage sequentially according to this queue. Whenever a target image data in the queue is successfully stored, it is deleted from the queue to release space in the pre-cache area.

In any of the above technical solutions, further, the pre-cache area includes a first cache area, a second cache area, and a third cache area; the original image data is stored in the first cache area, the intermediate image data is stored in the second cache area, and the target image data is stored in the third cache area.

In this technical solution, the pre-cache area includes a first cache area, which can be denoted RAW buffer, a second cache area, which can be denoted YUV buffer, and a third cache area, which can be denoted JPEG buffer. The first cache area (RAW buffer) is used to cache original image data (RAW), the second cache area (YUV buffer) is used to cache intermediate image data (YUV), and the third cache area is used to cache target image data (JPEG).

The third cache area may be divided from the running memory, or may be divided from external storage space such as an HDD hard disk and/or an SD card.

In any of the above technical solutions, further, when the processor executes the computer instructions, it implements: acquiring the stored amount of data in each of the first, second, and third cache areas, and adjusting the storage capacities of the first, second, and third cache areas respectively according to the stored amounts.

In this technical solution, the processor of the shooting device monitors in real time the stored amount of the corresponding image data in the first, second, and third cache areas, and dynamically adjusts the storage capacities of the three cache areas according to the stored amounts. Specifically, if the stored amount of original image data in the first cache area is small and the first cache area is relatively idle, its storage capacity can be reduced accordingly; if the stored amount of target image data in the third cache area is large and the third cache area is nearly full, its storage capacity is increased accordingly, so as to ensure efficient use of the pre-cache area.

In any of the above technical solutions, further, the process by which the processor controls the image sensor to acquire original image data when executing the computer instructions includes: locking the image shooting parameters of the image sensor, and acquiring the original image according to the image shooting parameters.

In this technical solution, when the image sensor is controlled to acquire original image data, the image shooting parameters of the image sensor are locked first, and the original image is acquired according to the locked parameters. This ensures that the multiple images obtained by continuous shooting have a consistent style, and avoids re-determining the shooting parameters for every image during continuous shooting, which would waste performance and slow the continuous shooting speed.

In any of the above technical solutions, further, the image shooting parameters include any one or a combination of the following: image exposure parameters, image focus parameters, and image white balance parameters.

In this technical solution, the image shooting parameters generally include image exposure parameters, which affect the exposure (brightness) of the image; image focus parameters, which determine the focus position of the photographed object in the final target image data; and image white balance parameters, which affect the overall color tone of the image.

In any of the above technical solutions, further, the display screen includes a first display area and a second display area, and when the processor executes the computer instructions, it implements: controlling the image sensor to continuously acquire real-time image data; controlling the display screen to display the real-time image data in the first display area, and controlling the display screen to display the image data in the second display area.

In this technical solution, the display screen of the shooting device includes a first display area and a second display area. The real-time image data is displayed in the first display area to realize the Live view function, and the image data is displayed in the second display area to realize the Quick view function. Optionally, the second display area is within the first display area.
The second aspect of this application provides an unmanned aerial vehicle. The unmanned aerial vehicle includes an image sensor, running memory, storage, a processor, and computer instructions stored in the storage and executable by the processor. When the processor executes the computer instructions, it implements: receiving a continuous shooting instruction, and dividing a pre-cache area in the running memory according to the continuous shooting instruction; controlling the image sensor to acquire original image data, storing the original image data in the pre-cache area, and sending image data generated from the original image data to a control terminal.

In this technical solution, the unmanned aerial vehicle receives a continuous shooting instruction from a terminal such as the control terminal or a mobile phone. When the continuous shooting instruction is received, before shooting starts, a pre-cache area is first divided in the running memory (RAM, Random Access Memory) of the unmanned aerial vehicle according to the instruction; the size of the pre-cache area can be determined according to the shooting interval and the number of shots corresponding to the instruction. After shooting starts, the image sensor acquires the original image data of the first photo (such as an original image file in RAW format) and stores it in the pre-cache area. Since the pre-cache area is divided from the running memory, it has a very high write speed; compared with the current technical solution of storing the original image data directly in storage (such as an HDD hard disk or SD memory card, which have larger capacity but slower write speeds), the data write speed is faster and the write time is shorter, so the next photo can be taken sooner, increasing the continuous shooting speed.

In the above technical solution, further, the process by which the processor controls the image sensor to acquire original image data when executing the computer instructions includes: acquiring the continuous shooting interval duration according to the continuous shooting instruction, and controlling the image sensor to acquire original image data at that interval.

In this technical solution, the continuous shooting instruction includes a continuous shooting interval duration, that is, the length of time between shooting the Nth image and the (N+1)th image. The image sensor is controlled to acquire original image data at this interval and store it sequentially in the pre-cache area, so as to achieve continuous shooting.

In any of the above technical solutions, further, when the processor executes the computer instructions, it implements: generating corresponding intermediate image data from the original image data, storing the intermediate image data in the pre-cache area, and generating image data from the intermediate image data; deleting the original image data from the pre-cache area, and sending the image data to the control terminal.

In this technical solution, after the original data of any image is stored in the pre-cache area, corresponding intermediate image data is generated from the original data. Generally, the intermediate image data can be data in YUV format (a color encoding format). After the intermediate image data is generated, it is stored in the pre-cache area, and the corresponding original image data is deleted from the pre-cache area at the same time to release storage space.

In any of the above technical solutions, further, when the processor executes the computer instructions, it implements: generating corresponding target image data from the intermediate image data, and storing the target image data in the pre-cache area; deleting the intermediate image data from the pre-cache area.

In this technical solution, after the intermediate image data of any image (such as data in YUV format) is stored in the pre-cache area, corresponding target image data is generated from the intermediate image data. Generally, the target image data can be an image file in JPEG format (Joint Photographic Experts Group, a common image format). After the target image data is generated, it is stored in the pre-cache area, and the corresponding intermediate image data is deleted from the pre-cache area at the same time to release storage space.

In any of the above technical solutions, further, the unmanned aerial vehicle also includes an encoder, and the process by which the processor generates corresponding target image data from the intermediate image data when executing the computer instructions includes: acquiring image processing information according to the continuous shooting instruction; controlling the encoder, according to the image processing information, to encode the intermediate image data so as to generate the target image data.

In this technical solution, the continuous shooting instruction includes image processing information, which may specifically include the imaging orientation of the target image data (such as forward, reverse, horizontal, vertical, or mirror flip). The processor of the unmanned aerial vehicle encodes the intermediate image data according to the image processing information, finally obtaining target image data consistent with the image processing information.

The target image data (i.e., data in JPEG format) is only used for storage; it does not need to be displayed.

In any of the above technical solutions, further, when the processor executes the computer instructions, it implements: storing each target image data in the storage in the order in which the multiple target image data were generated, and correspondingly deleting the target image from the pre-cache area.

In this technical solution, the multiple target image data in the pre-cache area are stored in the storage sequentially in the order in which they were generated. Specifically, compared with the pre-cache area (running memory), the data write speed of the storage is relatively slow, so as continuous shooting proceeds, the target image data accumulates in the pre-cache area. A to-be-stored queue is generated according to the order in which the multiple target image data were generated (i.e., the shooting order), and the target images are stored in the storage sequentially according to this queue. Whenever a target image data in the queue is successfully stored, it is deleted from the queue to release space in the pre-cache area.

In any of the above technical solutions, further, the pre-cache area includes a first cache area, a second cache area, and a third cache area; the original image data is stored in the first cache area, the intermediate image data is stored in the second cache area, and the target image data is stored in the third cache area.

In this technical solution, the pre-cache area includes a first cache area, which can be denoted RAW buffer, a second cache area, which can be denoted YUV buffer, and a third cache area, which can be denoted JPEG buffer. The first cache area (RAW buffer) is used to cache original image data (RAW), the second cache area (YUV buffer) is used to cache intermediate image data (YUV), and the third cache area is used to cache target image data (JPEG).

The third cache area may be divided from the running memory, or may be divided from external storage space such as an HDD hard disk and/or an SD card.

In any of the above technical solutions, further, when the processor executes the computer instructions, it implements: acquiring the stored amount of data in each of the first, second, and third cache areas, and adjusting the storage capacities of the first, second, and third cache areas respectively according to the stored amounts.

In this technical solution, the processor of the unmanned aerial vehicle monitors in real time the stored amount of the corresponding image data in the first, second, and third cache areas, and dynamically adjusts the storage capacities of the three cache areas according to the stored amounts. Specifically, if the stored amount of original image data in the first cache area is small and the first cache area is relatively idle, its storage capacity can be reduced accordingly; if the stored amount of target image data in the third cache area is large and the third cache area is nearly full, its storage capacity is increased accordingly, so as to ensure efficient use of the pre-cache area.

In any of the above technical solutions, further, the process by which the processor controls the image sensor to acquire original image data when executing the computer instructions includes: locking the image shooting parameters of the image sensor, and acquiring the original image according to the image shooting parameters.

In this technical solution, when the image sensor is controlled to acquire original image data, the image shooting parameters of the image sensor are locked first, and the original image is acquired according to the locked parameters. This ensures that the multiple images obtained by continuous shooting have a consistent style, and avoids re-determining the shooting parameters for every image during continuous shooting, which would waste performance and slow the continuous shooting speed.

In any of the above technical solutions, further, the image shooting parameters include any one or a combination of the following: image exposure parameters, image focus parameters, and image white balance parameters.

In this technical solution, the image shooting parameters generally include image exposure parameters, which affect the exposure (brightness) of the image; image focus parameters, which determine the focus position of the photographed object in the final target image data; and image white balance parameters, which affect the overall color tone of the image.
The third aspect of this application provides a control terminal for an unmanned aerial vehicle. The control terminal includes a display screen, storage, a processor, and computer instructions stored in the storage and executable by the processor. When the processor executes the computer instructions, it implements: sending a continuous shooting instruction to the unmanned aerial vehicle, so as to control the image sensor provided on the unmanned aerial vehicle to acquire original image data; receiving image data generated from the original image data, and controlling the display screen to display the image data.

In this technical solution, the control terminal of the unmanned aerial vehicle is used to control the operation of the unmanned aerial vehicle. Specifically, the control terminal sends a continuous shooting instruction to the unmanned aerial vehicle to control the image sensor provided on it to acquire image data. The unmanned aerial vehicle receives the continuous shooting instruction from a terminal such as the control terminal or a mobile phone. When the instruction is received, before shooting starts, a pre-cache area is first divided in the running memory (RAM, Random Access Memory) of the unmanned aerial vehicle according to the instruction; the size of the pre-cache area can be determined according to the shooting interval and the number of shots corresponding to the instruction. After shooting starts, the image sensor acquires the image data of the first photo and stores it in the pre-cache area. Meanwhile, since the pre-cache area has a fast read speed, the processor of the unmanned aerial vehicle synchronously acquires the image file from the pre-cache area and sends it to the control terminal; after receiving the image data, the control terminal displays it on the display screen.

In any of the above technical solutions, further, the continuous shooting instruction includes a continuous shooting interval duration, and the unmanned aerial vehicle controls the image sensor to acquire image data at that interval.

In this technical solution, the continuous shooting instruction includes a continuous shooting interval duration, that is, the length of time between shooting the Nth image and the (N+1)th image; the unmanned aerial vehicle is controlled to continuously acquire image data at this interval to achieve continuous shooting.

In any of the above technical solutions, further, the display screen includes a first display area and a second display area, and when the processor executes the computer instructions, it implements: continuously receiving real-time image data sent by the unmanned aerial vehicle; controlling the display screen to display the real-time image data in the first display area, and controlling the display screen to display the image data in the second display area.

In this technical solution, the display screen of the control terminal of the unmanned aerial vehicle includes a first display area and a second display area. The real-time image data is displayed in the first display area to realize the Live view function, and the image data is displayed in the second display area to realize the Quick view function. Optionally, the second display area is within the first display area.
The fourth aspect of this application provides a shooting method, including: receiving a continuous shooting instruction, and dividing a pre-cache area in the running memory of the shooting device according to the continuous shooting instruction; acquiring original image data, and storing the original image data in the pre-cache area; and displaying image data generated from the original image data.

In this technical solution, when a continuous shooting instruction is received, before shooting starts, a pre-cache area is first divided in the running memory (RAM, Random Access Memory) according to the instruction; the size of the pre-cache area can be determined according to the shooting interval and the number of shots corresponding to the instruction. After shooting starts, the image sensor acquires the original image data of the first photo (such as an original image file in RAW format) and stores it in the pre-cache area. Since the pre-cache area is divided from the running memory, it has a very high write speed; compared with the current technical solution of storing the original image data directly in storage (such as an HDD hard disk or SD memory card, which have larger capacity but slower write speeds), the data write speed is faster and the write time is shorter, so the next photo can be taken sooner, increasing the continuous shooting speed. At the same time, since the original image data is stored in the pre-cache area (running memory), its read speed is also fast, so the processor can directly read the image data generated from the original image data in the pre-cache area and control the display screen to display it, realizing the Quick view function during continuous shooting for a better interactive experience.

In the above technical solution, further, the shooting method also includes: acquiring the continuous shooting interval duration according to the continuous shooting instruction, and acquiring the original image data at that interval.

In this technical solution, the continuous shooting instruction includes a continuous shooting interval duration, that is, the length of time between shooting the Nth image and the (N+1)th image. The image sensor is controlled to acquire original image data at this interval and store it sequentially in the pre-cache area, so as to achieve continuous shooting.

In any of the above technical solutions, further, displaying the image data generated from the original image data includes: generating corresponding intermediate image data from the original image data, storing the intermediate image data in the pre-cache area, and generating image data from the intermediate image data; deleting the original image data from the pre-cache area, and displaying the image data.

In this technical solution, after the original data of any image is stored in the pre-cache area, corresponding intermediate image data is generated from the original data. Generally, the intermediate image data can be data in YUV format (a color encoding format). After the intermediate image data is generated, it is stored in the pre-cache area, and corresponding image data is generated from the intermediate image data; optionally, the image data is RGB image data. Finally, the corresponding original image data is deleted from the pre-cache area to release storage space.

In any of the above technical solutions, further, after the step of deleting the original image data from the pre-cache area, the shooting method also includes: generating corresponding target image data from the intermediate image data, and storing the target image data in the pre-cache area; deleting the intermediate image data from the pre-cache area.

In this technical solution, after the intermediate image data of any image (such as data in YUV format) is stored in the pre-cache area, corresponding target image data is generated from the intermediate image data. Generally, the target image data can be an image file in JPEG format (Joint Photographic Experts Group, a common image format). After the target image data is generated, it is stored in the pre-cache area, and the corresponding intermediate image data is deleted from the pre-cache area at the same time to release storage space.

The target image data (i.e., data in JPEG format) is only used for storage; it does not need to be displayed.

In any of the above technical solutions, further, the shooting method also includes: acquiring image processing information according to the continuous shooting instruction; encoding the intermediate image data according to the image processing information, so as to generate the target image data.

In this technical solution, the continuous shooting instruction includes image processing information, which may specifically include the imaging orientation of the target image data (such as forward, reverse, horizontal, vertical, or mirror flip). The processor of the shooting device encodes the intermediate image data according to the image processing information, finally obtaining target image data consistent with the image processing information.

In any of the above technical solutions, further, the shooting method also includes: storing each target image data in the storage in the order in which the multiple target image data were generated, and correspondingly deleting the target image from the pre-cache area.

In this technical solution, the multiple target image data in the pre-cache area are stored in the storage sequentially in the order in which they were generated. Specifically, compared with the pre-cache area (running memory), the data write speed of the storage is relatively slow, so as continuous shooting proceeds, the target image data accumulates in the pre-cache area. A to-be-stored queue is generated according to the order in which the multiple target image data were generated (i.e., the shooting order), and the target images are stored in the storage sequentially according to this queue. Whenever a target image data in the queue is successfully stored, it is deleted from the queue to release space in the pre-cache area.

In any of the above technical solutions, further, the pre-cache area includes a first cache area, a second cache area, and a third cache area; the original image data is stored in the first cache area, the intermediate image data is stored in the second cache area, and the target image data is stored in the third cache area.

In this technical solution, the pre-cache area includes a first cache area, which can be denoted RAW buffer, a second cache area, which can be denoted YUV buffer, and a third cache area, which can be denoted JPEG buffer. The first cache area (RAW buffer) is used to cache original image data (RAW), the second cache area (YUV buffer) is used to cache intermediate image data (YUV), and the third cache area is used to cache target image data (JPEG).

The third cache area may be divided from the running memory, or may be divided from external storage space such as an HDD hard disk and/or an SD card.

In any of the above technical solutions, further, the shooting method also includes: acquiring the stored amount of data in each of the first, second, and third cache areas, and adjusting the storage capacities of the first, second, and third cache areas respectively according to the stored amounts.

In this technical solution, the stored amount of the corresponding image data in the first, second, and third cache areas is monitored in real time, and the storage capacities of the three cache areas are dynamically adjusted according to the stored amounts. Specifically, if the stored amount of original image data in the first cache area is small and the first cache area is relatively idle, its storage capacity can be reduced accordingly; if the stored amount of target image data in the third cache area is large and the third cache area is nearly full, its storage capacity is increased accordingly, so as to ensure efficient use of the pre-cache area.

In any of the above technical solutions, further, the step of acquiring the original image data specifically includes: locking the image shooting parameters, and acquiring the original image according to the image shooting parameters.

In this technical solution, when acquiring the original image data, the image shooting parameters are locked first, and the original image is acquired according to the locked parameters. This ensures that the multiple images obtained by continuous shooting have a consistent style, and avoids re-determining the shooting parameters for every image during continuous shooting, which would waste performance and slow the continuous shooting speed.

In any of the above technical solutions, further, the image shooting parameters include any one or a combination of the following: image exposure parameters, image focus parameters, and image white balance parameters.

In this technical solution, the image shooting parameters generally include image exposure parameters, which affect the exposure (brightness) of the image; image focus parameters, which determine the focus position of the photographed object in the final target image data; and image white balance parameters, which affect the overall color tone of the image.

In any of the above technical solutions, further, the step of displaying the image data generated from the original image data also includes: continuously acquiring real-time image data; displaying the real-time image data in the first display area, and displaying the image data in the second display area.

In this technical solution, the display screen includes a first display area and a second display area. The real-time image data is displayed in the first display area to realize the Live view function, and the image data is displayed in the second display area to realize the Quick view function. Optionally, the second display area is within the first display area.
Brief Description of the Drawings

The above and/or additional aspects and advantages of this application will become apparent and easy to understand from the following description of the embodiments in conjunction with the drawings, in which:

Fig. 1 shows a structural block diagram of a shooting device according to an embodiment of this application;

Fig. 2 shows a structural block diagram of an unmanned aerial vehicle according to an embodiment of this application;

Fig. 3 shows a structural block diagram of a control terminal of an unmanned aerial vehicle according to an embodiment of this application;

Fig. 4 shows a flowchart of a shooting method according to an embodiment of this application;

Fig. 5 shows a flowchart of a shooting method according to another embodiment of this application;

Fig. 6 shows a flowchart of a shooting method according to yet another embodiment of this application;

Fig. 7 shows a flowchart of a shooting method according to a further embodiment of this application;

Fig. 8 shows a flowchart of a shooting method according to a further embodiment of this application;

Fig. 9 shows a flowchart of a shooting method according to a further embodiment of this application;

Fig. 10 shows a flowchart of a shooting method according to a further embodiment of this application;

Fig. 11 shows a flowchart of a shooting method according to a further embodiment of this application.

Detailed Description of the Embodiments

In order to understand the above objectives, features, and advantages of this application more clearly, this application is further described in detail below in conjunction with the drawings and specific embodiments. It should be noted that, where there is no conflict, the embodiments of this application and the features in the embodiments can be combined with each other.

Many specific details are set forth in the following description to facilitate a full understanding of this application; however, this application can also be implemented in other ways different from those described here, so the scope of protection of this application is not limited by the specific embodiments disclosed below.

The shooting device, unmanned aerial vehicle, control terminal of an unmanned aerial vehicle, and shooting method according to some embodiments of this application are described below with reference to Fig. 1 to Fig. 11.
As shown in Fig. 1, an embodiment of the first aspect of this application provides a shooting device 100. The shooting device includes an image sensor 102, a display screen 104, running memory 106, storage 108, a processor 110, and computer instructions stored in the storage and executable by the processor. When the processor 110 executes the computer instructions, it implements: receiving a continuous shooting instruction, and dividing a pre-cache area in the running memory according to the continuous shooting instruction; controlling the image sensor to acquire original image data, and storing the original image data in the pre-cache area; and controlling the display screen to display image data generated from the original image data.

In this embodiment, when the shooting device 100 receives a continuous shooting instruction, before shooting starts, a pre-cache area is first divided in the running memory 106 (RAM, Random Access Memory) of the shooting device 100 according to the instruction; the size of the pre-cache area can be determined according to the shooting interval and the number of shots corresponding to the instruction. After shooting starts, the image sensor 102 acquires the original image data of the first photo (such as an original image file in RAW format) and stores it in the pre-cache area. Since the pre-cache area is divided from the running memory 106, it has a very high write speed; compared with the current technical solution of storing the original image data directly in storage 108 (such as an HDD hard disk or SD memory card, which have larger capacity but slower write speeds), the data write speed is faster and the write time is shorter, so the next photo can be taken sooner, increasing the continuous shooting speed. At the same time, since the original image data is stored in the pre-cache area (running memory 106), its read speed is also fast, so the processor 110 of the shooting device 100 can directly read the image data generated from the original image data in the pre-cache area and control the display screen 104 to display it, realizing the Quick view function during continuous shooting and giving the shooting device 100 a better interactive experience.

In an embodiment of this application, further, as shown in Fig. 1, the process by which the processor 110 controls the image sensor 102 to acquire original image data when executing the computer instructions includes: acquiring the continuous shooting interval duration according to the continuous shooting instruction, and controlling the image sensor 102 to acquire original image data at that interval.

In this embodiment, the continuous shooting instruction includes a continuous shooting interval duration, that is, the length of time between shooting the Nth image and the (N+1)th image. The image sensor 102 is controlled to acquire original image data at this interval and store it sequentially in the pre-cache area, so as to achieve continuous shooting.

In an embodiment of this application, further, as shown in Fig. 1, when the processor 110 executes the computer instructions, it implements: generating corresponding intermediate image data from the original image data, storing the intermediate image data in the pre-cache area, and generating image data from the intermediate image data; deleting the original image data from the pre-cache area, and displaying the image data.

In this embodiment, after the original data of any image is stored in the pre-cache area, corresponding intermediate image data is generated from the original data. Generally, the intermediate image data can be data in YUV format (a color encoding format). After the intermediate image data is generated, it is stored in the pre-cache area, and corresponding image data is generated from the intermediate image data; optionally, the image data is RGB image data. Finally, while the RGB image data is displayed, the corresponding original image data is deleted from the pre-cache area to release storage space.

In an embodiment of this application, further, as shown in Fig. 1, when the processor 110 executes the computer instructions, it implements: generating corresponding target image data from the intermediate image data, and storing the target image data in the pre-cache area; deleting the intermediate image data from the pre-cache area.

In this embodiment, after the intermediate image data of any image (such as data in YUV format) is stored in the pre-cache area, corresponding target image data is generated from the intermediate image data. Generally, the target image data can be an image file in JPEG format (Joint Photographic Experts Group, a common image format). After the target image data is generated, it is stored in the pre-cache area, and the corresponding intermediate image data is deleted from the pre-cache area at the same time to release storage space.

The target image data (i.e., data in JPEG format) is only used for storage; it does not need to be displayed.

In an embodiment of this application, further, as shown in Fig. 1, the shooting device 100 also includes an encoder, and the process by which the processor 110 generates corresponding target image data from the intermediate image data when executing the computer instructions includes: acquiring image processing information according to the continuous shooting instruction; controlling the encoder, according to the image processing information, to encode the intermediate image data so as to generate the target image data.

In this embodiment, the continuous shooting instruction includes image processing information, which may specifically include the imaging orientation of the target image data (such as forward, reverse, horizontal, vertical, or mirror flip). The processor of the shooting device 100 encodes the intermediate image data according to the image processing information, finally obtaining target image data consistent with the image processing information.

In an embodiment of this application, further, as shown in Fig. 1, when the processor 110 executes the computer instructions, it implements: storing each target image data in the storage 108 in the order in which the multiple target image data were generated, and correspondingly deleting the target image data from the pre-cache area.

In this embodiment, the multiple target image data in the pre-cache area are stored in the storage 108 sequentially in the order in which they were generated. Specifically, compared with the pre-cache area (running memory 106), the data write speed of the storage 108 is relatively slow, so as continuous shooting proceeds, the target image data accumulates in the pre-cache area. A to-be-stored queue is generated according to the order in which the multiple target image data were generated (i.e., the shooting order), and the target images are stored in the storage 108 sequentially according to this queue. Whenever a target image data in the queue is successfully stored in the storage 108, it is deleted from the queue to release space in the pre-cache area.

In an embodiment of this application, further, as shown in Fig. 1, the pre-cache area includes a first cache area, a second cache area, and a third cache area; the original image data is stored in the first cache area, the intermediate image data is stored in the second cache area, and the target image data is stored in the third cache area.

In this embodiment, the pre-cache area includes a first cache area, which can be denoted RAW buffer, a second cache area, which can be denoted YUV buffer, and a third cache area, which can be denoted JPEG buffer. The first cache area (RAW buffer) is used to cache original image data (RAW), the second cache area (YUV buffer) is used to cache intermediate image data (YUV), and the third cache area is used to cache target image data (JPEG).

The third cache area may be divided from the running memory, or may be divided from external storage space such as an HDD hard disk and/or an SD card.

In an embodiment of this application, further, as shown in Fig. 1, when the processor 110 executes the computer instructions, it implements: acquiring the stored amount of data in each of the first, second, and third cache areas, and adjusting the storage capacities of the first, second, and third cache areas respectively according to the stored amounts.

In this embodiment, the processor 110 of the shooting device 100 monitors in real time the stored amount of the corresponding image data in the first, second, and third cache areas, and dynamically adjusts the storage capacities of the three cache areas according to the stored amounts. Specifically, if the stored amount of original image data in the first cache area is small and the first cache area is relatively idle, its storage capacity can be reduced accordingly; if the stored amount of target image data in the third cache area is large and the third cache area is nearly full, its storage capacity is increased accordingly, so as to ensure efficient use of the pre-cache area.

In an embodiment of this application, further, as shown in Fig. 1, the process by which the processor 110 controls the image sensor 102 to acquire original image data when executing the computer instructions includes: locking the image shooting parameters of the image sensor 102, and acquiring the original image according to the image shooting parameters.

In this embodiment, when the image sensor 102 is controlled to acquire original image data, the image shooting parameters of the image sensor 102 are locked first, and the original image is acquired according to the locked parameters. This ensures that the multiple images obtained by continuous shooting have a consistent style, and avoids re-determining the shooting parameters for every image during continuous shooting, which would waste performance and slow the continuous shooting speed.

In an embodiment of this application, further, as shown in Fig. 1, the image shooting parameters include any one or a combination of the following: image exposure parameters, image focus parameters, and image white balance parameters.

In this embodiment, the image shooting parameters generally include image exposure parameters, which affect the exposure (brightness) of the image; image focus parameters, which determine the focus position of the photographed object in the final target image data; and image white balance parameters, which affect the overall color tone of the image.

In an embodiment of this application, further, as shown in Fig. 1, the display screen 104 includes a first display area and a second display area, and when the processor 110 executes the computer instructions, it implements: controlling the image sensor 102 to continuously acquire real-time image data; controlling the display screen 104 to display the real-time image data in the first display area, and controlling the display screen 104 to display the image data in the second display area.

In this embodiment, the display screen 104 of the shooting device 100 includes a first display area and a second display area. The real-time image data is displayed in the first display area to realize the Live view function, and the image data is displayed in the second display area to realize the Quick view function. Optionally, the second display area is within the first display area.
As shown in FIG. 2, an embodiment of the second aspect of the present application provides an unmanned aerial vehicle 200. The unmanned aerial vehicle 200 includes an image sensor 202, a running memory 204, a memory 206, a processor 208, and computer instructions stored on the memory and executable by the processor. When executing the computer instructions, the processor 208 implements: receiving a burst shooting instruction, and partitioning a pre-cache area in the running memory according to the burst shooting instruction; controlling the image sensor to acquire original image data, storing the original image data to the pre-cache area, and sending image data generated according to the original image data to a control terminal.
In this embodiment, the unmanned aerial vehicle 200 receives a burst shooting instruction from a terminal such as the control terminal or a mobile phone. When the burst shooting instruction is received, before shooting starts, a pre-cache area is first partitioned in the running memory 204 (RAM, Random Access Memory) of the unmanned aerial vehicle 200 according to the burst shooting instruction; the size of the pre-cache area may be determined specifically by the shooting interval and the shot count corresponding to the instruction. After shooting starts, the image sensor 202 acquires the original image data of the first picture (for example an original image file in RAW format), and once acquired, this original image data is stored to the pre-cache area. Because the pre-cache area is partitioned from the running memory 204 and has an extremely high write speed, data is written faster, and the write takes less time, than in current technical solutions where the original image data is stored directly to the memory 206 (such as an HDD hard disk or SD memory card, which has large capacity but a slow write speed). The next picture can therefore be taken sooner, which increases the burst shooting speed.
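A rough back-of-the-envelope model of how the interval and shot count could drive the pre-cache size: frames arrive at the burst rate but drain to slow storage at the storage's write rate, so the pre-cache must absorb the backlog built up over the whole burst. The application only states that the size follows from the interval and the shot count; the formula below is an illustrative assumption.

```python
def precache_bytes(interval_s: float, shot_count: int, frame_bytes: int,
                   flush_bytes_per_s: float) -> int:
    """Estimate RAM to reserve: bytes produced during the burst minus bytes
    the slow storage can drain over the same period, floored at one frame."""
    burst_s = interval_s * (shot_count - 1)          # duration of the burst
    produced = shot_count * frame_bytes              # total bytes captured
    drained = int(flush_bytes_per_s * burst_s)       # bytes flushed meanwhile
    return max(frame_bytes, produced - drained)

# Example: 20 shots at 0.5 s, 8 MB RAW frames, 10 MB/s SD-card write speed.
needed = precache_bytes(0.5, 20, 8_000_000, 10_000_000)
```

When storage is fast enough to keep up, the estimate collapses to a single frame's worth of buffer; when it cannot keep up, the reserve grows with the shot count, matching the behavior described above.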
In an embodiment of the present application, further, the process of controlling the image sensor 202 to acquire original image data, implemented when the processor 208 executes the computer instructions, includes: acquiring a burst interval duration according to the burst shooting instruction, and controlling the image sensor 202 to acquire the original image data according to that interval duration.
In this embodiment, the burst shooting instruction contains the burst interval duration, that is, the time between taking the N-th image and taking the (N+1)-th image. The image sensor 202 is controlled according to this interval to acquire original image data and store it to the pre-cache area in sequence, thereby implementing burst shooting.
In an embodiment of the present application, further, when the processor 208 executes the computer instructions the following is implemented: generating corresponding intermediate image data according to the original image data, storing the intermediate image data to the pre-cache area, and generating the image data according to the intermediate image data; deleting the original image data from the pre-cache area, and sending the image data to the control terminal.
In this embodiment, after the original data of any image has been stored to the pre-cache area, corresponding intermediate image data is generated from it. Generally, the intermediate image data may be data in YUV format (a color encoding format). Once the intermediate image data has been generated, it is stored to the pre-cache area, and at the same time the corresponding original image data is deleted from the pre-cache area to free its storage space.
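The cache-then-release handoff of this stage can be sketched as follows. The buffers are plain lists and the "conversion" is a placeholder dict; real demosaicing/color conversion is outside the scope of this sketch.

```python
def raw_to_yuv_stage(raw_buffer: list, yuv_buffer: list) -> None:
    """Convert one cached RAW frame to YUV, cache the result, then delete
    the RAW frame so its pre-cache space is freed (names are illustrative)."""
    raw = raw_buffer[0]                            # oldest cached RAW frame
    yuv = {"format": "YUV", "source": raw["id"]}   # stand-in for conversion
    yuv_buffer.append(yuv)    # YUV buffer caches one frame...
    raw_buffer.pop(0)         # ...and only then is the RAW frame released

raw_buffer = [{"id": 1, "format": "RAW"}]
yuv_buffer = []
raw_to_yuv_stage(raw_buffer, yuv_buffer)
```

Releasing the RAW frame only after the YUV frame is safely cached means a failure mid-conversion never loses the picture, at the cost of both copies briefly coexisting.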
In an embodiment of the present application, further, when the processor 208 executes the computer instructions the following is implemented: generating corresponding target image data according to the intermediate image data, and storing the target image data to the pre-cache area; deleting the intermediate image data from the pre-cache area.
In this embodiment, after the intermediate image data of any image (for example data in YUV format) has been stored to the pre-cache area, corresponding target image data is generated from it. Generally, the target image data may be an image file in JPEG format (Joint Photographic Experts Group, a common image format). Once the target image data has been generated, it is stored to the pre-cache area, and at the same time the corresponding intermediate image data is deleted from the pre-cache area to free its storage space.
In an embodiment of the present application, further, the unmanned aerial vehicle 200 further includes an encoder, and the process of generating corresponding target image data according to the intermediate image data, implemented when the processor 208 executes the computer instructions, includes: acquiring image processing information according to the burst shooting instruction; and controlling the encoder according to the image processing information to encode the intermediate image data, so as to generate the target image data.
In this embodiment, the burst shooting instruction contains image processing information, which may specifically include the imaging orientation of the target image data (such as upright, reversed, landscape, portrait or mirror-flipped). The processor of the unmanned aerial vehicle 200 encodes the intermediate image data according to the image processing information, and finally obtains target image data that matches the image processing information.
The target image data (that is, the data in JPEG format) is only used for storage; the target image data does not need to be displayed.
In an embodiment of the present application, further, when the processor 208 executes the computer instructions the following is implemented: storing each target image data to the memory 206 in the order in which the multiple target image data are generated, and correspondingly deleting the target image data from the pre-cache area.
In this embodiment, the multiple target image data in the pre-cache area are stored to the memory 206 one after another in the order in which they were generated. Specifically, compared with the pre-cache area (the running memory 204), the data write speed of the memory 206 is relatively slow, so as the burst shooting proceeds, the generated target image data piles up in the pre-cache area. A to-be-stored queue is built according to the chronological order in which the multiple target image data were generated (that is, the shooting order), and the multiple target images are stored to the memory 206 one by one according to this queue; each time a target image data in the queue has been successfully stored to the memory 206, it is deleted from the queue to free space in the pre-cache area.
In an embodiment of the present application, further, the pre-cache area includes a first cache area, a second cache area and a third cache area; the original image data is stored in the first cache area, the intermediate image data is stored to the second cache area, and the target image data is stored to the third cache area.
In this embodiment, the pre-cache area includes a first cache area, which may be denoted the RAW buffer, a second cache area, which may be denoted the YUV buffer, and a third cache area, which may be denoted the JPEG buffer. The first cache area (RAW buffer) is used to cache the original image data (RAW), the second cache area (YUV buffer) is used to cache the intermediate image data (YUV), and the third cache area is used to cache the target image data (JPEG).
The third cache area may be partitioned from the running memory, or may be partitioned from external storage space such as an HDD hard disk and/or an SD card.
In an embodiment of the present application, further, when the processor 208 executes the computer instructions the following is implemented: acquiring the amount of data stored in each of the first cache area, the second cache area and the third cache area, and adjusting the storable capacities of the first cache area, the second cache area and the third cache area respectively according to those amounts.
In this embodiment, the processor 208 of the unmanned aerial vehicle monitors in real time the stored amount of the corresponding image data in the first, second and third cache areas, and dynamically adjusts the storable capacity of each area accordingly. Specifically, if the amount of original image data stored in the first cache area is small and the first cache area is largely idle, its storable capacity may be reduced correspondingly; if the stored amount of target image data in the third cache area is large and that area is about to become full, its storable capacity is increased correspondingly, so as to keep the space of the pre-cache area efficiently utilized.
In an embodiment of the present application, further, the process of controlling the image sensor 202 to acquire original image data, implemented when the processor 208 executes the computer instructions, includes: locking the image shooting parameters of the image sensor 202, and acquiring the original images according to those image shooting parameters.
In this embodiment, when the image sensor 202 is controlled to acquire original image data, the image shooting parameters of the image sensor 202 are first locked, and the original images are acquired according to the locked parameters. This ensures that the multiple images obtained in the burst share a consistent style, and also avoids re-determining the shooting parameters for every image in the burst, which would waste performance and slow the burst down.
In an embodiment of the present application, further, the image shooting parameters include any one or a combination of the following: an image exposure parameter, an image focus parameter and an image white-balance parameter.
In this embodiment, the image shooting parameters generally include the image exposure parameter, which affects the exposure (brightness) of the resulting image; they further include the image focus parameter, which affects the focus position of the photographed object in the final target image data; and they further include the image white-balance parameter, which affects the overall color tone of the resulting image.
As shown in FIG. 3, an embodiment of the third aspect of the present application provides a control terminal 300 of an unmanned aerial vehicle. The control terminal 300 includes a display screen 302, a memory 304, a processor 306, and computer instructions stored on the memory and executable by the processor. When executing the computer instructions, the processor 306 implements: sending a burst shooting instruction to the unmanned aerial vehicle, so as to control an image sensor provided on the unmanned aerial vehicle to acquire original image data; receiving image data generated according to the original image data, and controlling the display screen to display the image data.
In this embodiment, the control terminal 300 of the unmanned aerial vehicle is used to control the operation of the unmanned aerial vehicle. Specifically, the control terminal 300 sends a burst shooting instruction to the unmanned aerial vehicle to control the image sensor provided on it to acquire image data. The unmanned aerial vehicle receives the burst shooting instruction from a terminal such as the control terminal 300 or a mobile phone; when the instruction is received, before shooting starts, a pre-cache area is first partitioned in the running memory (RAM, Random Access Memory) of the unmanned aerial vehicle according to the instruction, and the size of the pre-cache area may be determined specifically by the shooting interval and the shot count corresponding to the instruction. After shooting starts, the image sensor acquires the image data of the first picture and, once acquired, stores it to the pre-cache area. Meanwhile, because the pre-cache area has a high read speed, the processor of the unmanned aerial vehicle synchronously reads the image files in the pre-cache area and sends them to the control terminal 300; after receiving the image data, the control terminal 300 displays it on the display screen 302.
In an embodiment of the present application, further, the burst shooting instruction includes a burst interval duration, and the unmanned aerial vehicle controls the image sensor to acquire the image data according to that interval duration.
In this embodiment, the burst shooting instruction contains the burst interval duration, that is, the time between taking the N-th image and taking the (N+1)-th image; the unmanned aerial vehicle is controlled according to this interval to acquire image data continuously, thereby implementing burst shooting.
In an embodiment of the present application, further, the display screen 302 includes a first display area and a second display area, and when the processor 306 executes the computer instructions the following is implemented: continuously receiving real-time image data sent by the unmanned aerial vehicle; controlling the display screen 302 to display the real-time image data in the first display area, and controlling the display screen 302 to display the image data in the second display area.
In this embodiment, the display screen 302 of the control terminal 300 of the unmanned aerial vehicle includes a first display area and a second display area. The real-time image data is displayed in the first display area, implementing the Live view function, and the image data is displayed in the second display area, implementing the Quick view function. Optionally, the second display area lies inside the first display area.
As shown in FIG. 4, an embodiment of the fourth aspect of the present application provides a photographing method, including:
S402, receiving a burst shooting instruction, and partitioning a pre-cache area in the running memory of a photographing apparatus according to the burst shooting instruction;
S404, acquiring original image data, and storing the original image data to the pre-cache area;
S406, displaying image data generated according to the original image data.
In this embodiment, when the burst shooting instruction is received, before shooting starts, a pre-cache area is first partitioned in the running memory (RAM, Random Access Memory) according to the burst shooting instruction; the size of the pre-cache area may be determined specifically by the shooting interval and the shot count corresponding to the instruction. After shooting starts, the image sensor acquires the original image data of the first picture (for example an original image file in RAW format) and, once acquired, stores it to the pre-cache area. Because the pre-cache area is partitioned from the running memory and has an extremely high write speed, data is written faster, and the write takes less time, than in current technical solutions where the original image data is stored directly to the memory (such as an HDD hard disk or SD memory card, which has large capacity but a slow write speed); the next picture can therefore be taken sooner, which increases the burst shooting speed. Meanwhile, since the original image data is stored in the pre-cache area (the running memory), its read speed is also high, so the processor can directly read from the pre-cache area the image data generated according to the original image data and control the display screen to display it, implementing the Quick view function during burst shooting and providing a good interactive experience.
In an embodiment of the present application, further, as shown in FIG. 5, the photographing method further includes:
S502, acquiring a burst interval duration according to the burst shooting instruction;
S504, acquiring the original image data according to the interval duration.
In this embodiment, the burst shooting instruction contains the burst interval duration, that is, the time between taking the N-th image and taking the (N+1)-th image; the image sensor is controlled according to this interval to acquire original image data and store it to the pre-cache area in sequence, thereby implementing burst shooting.
In an embodiment of the present application, further, as shown in FIG. 6, displaying the image data generated according to the original image data further includes:
S602, generating corresponding intermediate image data according to the original image data, storing the intermediate image data to the pre-cache area, and generating the image data according to the intermediate image data;
S604, deleting the original image data from the pre-cache area, and displaying the image data.
In this embodiment, after the original data of any image has been stored to the pre-cache area, corresponding intermediate image data is generated from it. Generally, the intermediate image data may be data in YUV format (a color encoding format). Once the intermediate image data has been generated, it is stored to the pre-cache area, and at the same time the corresponding image data is generated according to the intermediate image data; optionally, the image data is RGB image data. Finally, the corresponding original image data is deleted from the pre-cache area to free its storage space.
In an embodiment of the present application, further, as shown in FIG. 7, after the step of deleting the original image data from the pre-cache area, the photographing method further includes:
S702, generating corresponding target image data according to the intermediate image data, and storing the target image data to the pre-cache area;
S704, deleting the intermediate image data from the pre-cache area.
In this embodiment, after the intermediate image data of any image (for example data in YUV format) has been stored to the pre-cache area, corresponding target image data is generated from it. Generally, the target image data may be an image file in JPEG format (Joint Photographic Experts Group, a common image format). Once the target image data has been generated, it is stored to the pre-cache area, and at the same time the corresponding intermediate image data is deleted from the pre-cache area to free its storage space.
The target image data (that is, the data in JPEG format) is only used for storage; the target image data does not need to be displayed.
In an embodiment of the present application, further, as shown in FIG. 8, the photographing method further includes:
S802, acquiring image processing information according to the burst shooting instruction;
S804, encoding the intermediate image data according to the image processing information, so as to generate the target image data.
In this embodiment, the burst shooting instruction contains image processing information, which may specifically include the imaging orientation of the target image data (such as upright, reversed, landscape, portrait or mirror-flipped). The processor of the photographing apparatus encodes the intermediate image data according to the image processing information, and finally obtains target image data that matches the image processing information.
In an embodiment of the present application, further, as shown in FIG. 9, the photographing method further includes:
S902, storing each target image data to a memory in the order in which the multiple target image data are generated;
S904, correspondingly deleting the target image data from the pre-cache area.
In this embodiment, the multiple target image data in the pre-cache area are stored to the memory one after another in the order in which they were generated. Specifically, compared with the pre-cache area (the running memory), the data write speed of the memory is relatively slow, so as the burst shooting proceeds, the generated target image data piles up in the pre-cache area. A to-be-stored queue is built according to the chronological order in which the multiple target image data were generated (that is, the shooting order), and the multiple target images are stored to the memory one by one according to this queue; each time a target image data in the queue has been successfully stored to the memory, it is deleted from the queue to free space in the pre-cache area.
In an embodiment of the present application, further, the pre-cache area includes a first cache area, a second cache area and a third cache area; the original image data is stored in the first cache area, the intermediate image data is stored to the second cache area, and the target image data is stored to the third cache area.
In this embodiment, the pre-cache area includes a first cache area, which may be denoted the RAW buffer, a second cache area, which may be denoted the YUV buffer, and a third cache area, which may be denoted the JPEG buffer. The first cache area (RAW buffer) is used to cache the original image data (RAW), the second cache area (YUV buffer) is used to cache the intermediate image data (YUV), and the third cache area is used to cache the target image data (JPEG).
The third cache area may be partitioned from the running memory, or may be partitioned from external storage space such as an HDD hard disk and/or an SD card.
In an embodiment of the present application, further, as shown in FIG. 10, the photographing method further includes:
S1002, acquiring the amount of data stored in each of the first cache area, the second cache area and the third cache area;
S1004, adjusting the storable capacities of the first cache area, the second cache area and the third cache area respectively according to those amounts.
In this embodiment, the stored amount of the corresponding image data in the first, second and third cache areas is monitored in real time, and the storable capacity of each area is adjusted dynamically accordingly. Specifically, if the amount of original image data stored in the first cache area is small and the first cache area is largely idle, its storable capacity may be reduced correspondingly; if the stored amount of target image data in the third cache area is large and that area is about to become full, its storable capacity is increased correspondingly, so as to keep the space of the pre-cache area efficiently utilized.
In an embodiment of the present application, further, the step of acquiring the original image data specifically includes: locking the image shooting parameters, and acquiring the original images according to those image shooting parameters.
In this embodiment, when acquiring original image data, the image shooting parameters are first locked, and the original images are acquired according to the locked parameters. This ensures that the multiple images obtained in the burst share a consistent style, and also avoids re-determining the shooting parameters for every image in the burst, which would waste performance and slow the burst down.
In an embodiment of the present application, further, the image shooting parameters include any one or a combination of the following: an image exposure parameter, an image focus parameter and an image white-balance parameter.
In this embodiment, the image shooting parameters generally include the image exposure parameter, which affects the exposure (brightness) of the resulting image; they further include the image focus parameter, which affects the focus position of the photographed object in the final target image data; and they further include the image white-balance parameter, which affects the overall color tone of the resulting image.
In an embodiment of the present application, further, the step of displaying the image data generated according to the original image data further includes: continuously acquiring real-time image data; displaying the real-time image data in a first display area, and displaying the image data in a second display area.
In this embodiment, the display screen includes a first display area and a second display area. The real-time image data is displayed in the first display area, implementing the Live view function, and the image data is displayed in the second display area, implementing the Quick view function. Optionally, the second display area lies inside the first display area.
In an embodiment of the present application, further, as shown in FIG. 11, taking a burst interval of 0.5 s as an example, the burst shooting method specifically includes:
S1102, starting the camera;
In this step, when the burst shooting instruction is received, the camera starts and Live view is shown on the screen; at the same time the sensor (image sensor) begins measuring the 3A parameters, namely the image exposure parameter, the image focus parameter and the image white-balance parameter.
S1104, judging whether the 0.5 s shooting interval has been reached; when the judgment result is no, repeating S1104; when the judgment result is yes, proceeding to S1106;
In this step, the 0.5 s shooting interval is determined according to the burst shooting instruction; of course, it may also be a smaller interval, such as 0.3 s, or a larger interval, such as 0.8 s.
S1106, locking the 3A parameters;
In this step, before the first image is taken, the 3A parameters are first locked to keep the image style consistent.
S1108, configuring the sensor;
In this step, after the 3A parameters are locked, the image sensor (the sensor) is configured according to the locked 3A parameters, so that the sensor acquires the original image data with the locked 3A parameters.
S1110, generating a RAW image;
In this step, the sensor acquires and generates the original image data, namely the RAW image, according to the locked 3A parameters.
S1112, caching one frame in the RAW buffer, and repeating S1104;
In this step, a pre-cache area is partitioned in the running memory according to the burst shooting instruction; the pre-cache area includes a first cache area, a second cache area and a third cache area, the first cache area being the RAW buffer, and the original image data (RAW image) acquired by the sensor is cached in the RAW buffer.
S1114, starting Live view;
In this step, the real-time image acquired by the sensor continues to be displayed in the first area of the display screen.
S1116, displaying Quick view;
In this step, the Quick view image generated from the acquired RAW image is displayed in the second area of the display screen.
S1118, generating a YUV image;
In this step, the corresponding YUV image is generated from the RAW image cached in the RAW buffer.
S1120, caching one frame in the YUV buffer;
In this step, the generated YUV image is cached in the second cache area of the pre-cache area, that is, the YUV buffer.
S1122, releasing one frame from the RAW buffer;
In this step, after the generated YUV image has been cached in the YUV buffer, the corresponding RAW image is deleted from the RAW buffer to free space.
S1124, configuring the DSP encoder;
In this step, the DSP encoder is configured according to the image processing information in the burst shooting instruction, so that the DSP encoder encodes the YUV image.
S1126, generating a JPEG image;
In this step, the YUV image is encoded by the DSP encoder to obtain the JPEG image of the target image data.
S1128, caching one frame in the JPEG buffer;
In this step, the generated JPEG image is cached in the third cache area of the pre-cache area, that is, the JPEG buffer.
S1130, releasing one frame from the YUV buffer;
In this step, after the generated JPEG image has been cached in the JPEG buffer, the corresponding YUV image is deleted from the YUV buffer to free space.
S1132, storing to the SD card;
In this step, the JPEG images are written to the SD card for storage one by one, as a queue, in the order in which the JPEG images were generated.
S1134, releasing one frame from the JPEG buffer.
In this step, after a JPEG image has been stored to the SD card, the corresponding JPEG image is deleted from the JPEG buffer to free space.
In this embodiment, when the 0.5 s timer interval is reached, the camera starts the following photographing flow: lock the 3A parameters, stop Live view, configure the sensor, generate the RAW image, send the generated RAW image into the RAW buffer partitioned from the running memory for caching, restart Live view, display Quick view on the LCD display screen, generate the YUV image from the RAW image, send the generated YUV image into the YUV buffer for caching, release from the RAW buffer the RAW image data generated by this photographing flow, configure the DSP encoder (which controls the encoding mode and is used to encode image data of different shooting modes such as upright, portrait and inverted), generate the JPEG image from the YUV image, send the generated JPEG image into the JPEG buffer for caching, release from the YUV buffer the YUV image generated by this photographing flow, write the JPEG image generated by this photographing flow to the SD card for storage, and finally release from the JPEG buffer the JPEG image generated by this photographing flow.
The RAW buffer, YUV buffer and JPEG buffer are each allocated as a fixed-size storage region from the hardware memory when the camera starts the 0.5 s timed burst function (the capacities of all three can be set, and can also be adjusted dynamically according to the actual storage situation); optionally, each storage region can cache from a few to a few dozen RAW, YUV or JPEG images.
The time required by the above flow may exceed the 0.5 s burst interval. Through the data caching mechanism, the embodiments provided by the present application process the multiple steps of the above photographing flow in parallel: as soon as a RAW image has been sent into the RAW buffer, the next photographing flow is started, that is, the next flow starts before the current flow has fully finished. Through buffer-based data caching, the different steps of multiple photographing flows run fully independently, neither blocking nor waiting for the steps of photographing flows started at other times.
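The buffer-decoupled parallelism described above can be sketched with one thread per stage joined only by bounded queues. This is a schematic of the buffering idea under simplified assumptions (string tokens instead of image data, `queue.Queue` instead of the device's real buffers), not the device's actual driver code.

```python
import queue
import threading

def pipeline(n_frames: int) -> list:
    """Capture, RAW->YUV, YUV->JPEG and SD-card write run as independent
    stages; a slow later stage fills its input buffer instead of blocking
    the capture of the next frame."""
    raw_q, yuv_q, jpeg_q = (queue.Queue(maxsize=8) for _ in range(3))
    sd_card = []

    def capture():                     # next shot starts once RAW is buffered
        for i in range(n_frames):
            raw_q.put(f"RAW{i}")
        raw_q.put(None)                # sentinel: burst finished

    def convert():                     # RAW -> YUV stage
        while (raw := raw_q.get()) is not None:
            yuv_q.put(raw.replace("RAW", "YUV"))
        yuv_q.put(None)

    def encode():                      # YUV -> JPEG stage (simulated encoder)
        while (yuv := yuv_q.get()) is not None:
            jpeg_q.put(yuv.replace("YUV", "JPEG"))
        jpeg_q.put(None)

    def store():                       # queued write-out to the SD card
        while (jpeg := jpeg_q.get()) is not None:
            sd_card.append(jpeg)

    threads = [threading.Thread(target=t) for t in (capture, convert, encode, store)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    return sd_card
```

Because every queue is FIFO with a single consumer, frame order is preserved end to end, while the bounded `maxsize` plays the role of the fixed buffer capacities: a stage that gets too far ahead simply waits for downstream space.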
In the description of the present application, the term "multiple" means two or more. Unless otherwise explicitly defined, orientation or position terms such as "upper" and "lower" indicate orientations or positional relationships based on the drawings; they are used only for convenience and simplicity of description, and do not indicate or imply that the referenced device or element must have a specific orientation or be constructed and operated in a specific orientation, and therefore cannot be understood as limiting the present application. Terms such as "connected", "mounted" and "fixed" should be understood broadly; for example, "connected" may be a fixed connection, a detachable connection, or an integral connection, and may be a direct connection or an indirect connection through an intermediary. For a person of ordinary skill in the art, the specific meanings of the above terms in the present application can be understood according to the specific situation.
In the description of the present application, the terms "one embodiment", "some embodiments", "specific embodiments" and the like mean that specific features, structures, materials or characteristics described in connection with the embodiment or example are included in at least one embodiment or example of the present application. In the present application, schematic expressions of these terms do not necessarily refer to the same embodiment or example; moreover, the specific features, structures, materials or characteristics described may be combined in a suitable manner in any one or more embodiments or examples.
The above are only preferred embodiments of the present application and are not intended to limit the present application; for those skilled in the art, the present application may have various modifications and variations. Any modification, equivalent replacement, improvement and the like made within the spirit and principles of the present application shall be included within the scope of protection of the present application.

Claims (35)

  1. A photographing apparatus, wherein the photographing apparatus comprises an image sensor, a display screen, a running memory, a memory, a processor, and computer instructions stored on the memory and executable by the processor, and the processor, when executing the computer instructions, implements:
    receiving a burst shooting instruction, and partitioning a pre-cache area in the running memory according to the burst shooting instruction;
    controlling the image sensor to acquire original image data, and storing the original image data to the pre-cache area;
    controlling the display screen to display image data generated according to the original image data.
  2. The photographing apparatus according to claim 1, wherein the process of controlling the image sensor to acquire the original image data, implemented when the processor executes the computer instructions, comprises:
    acquiring a burst interval duration according to the burst shooting instruction, and controlling the image sensor to acquire the original image data according to the interval duration.
  3. The photographing apparatus according to claim 1, wherein the processor, when executing the computer instructions, implements:
    generating corresponding intermediate image data according to the original image data, storing the intermediate image data to the pre-cache area, and generating the image data according to the intermediate image data;
    deleting the original image data from the pre-cache area, and displaying the image data.
  4. The photographing apparatus according to claim 3, wherein the processor, when executing the computer instructions, implements:
    generating corresponding target image data according to the intermediate image data, and storing the target image data to the pre-cache area;
    deleting the intermediate image data from the pre-cache area.
  5. The photographing apparatus according to claim 4, wherein the photographing apparatus further comprises an encoder, and the process of generating the corresponding target image data according to the intermediate image data, implemented when the processor executes the computer instructions, comprises:
    acquiring image processing information according to the burst shooting instruction;
    controlling the encoder according to the image processing information to encode the intermediate image data, so as to generate the target image data.
  6. The photographing apparatus according to claim 4, wherein the processor, when executing the computer instructions, implements:
    storing each target image data to the memory in the order in which the multiple target image data are generated, and correspondingly deleting the target image data from the pre-cache area.
  7. The photographing apparatus according to claim 4, wherein the pre-cache area comprises a first cache area, a second cache area and a third cache area; the original image data is stored in the first cache area, the intermediate image data is stored to the second cache area, and the target image data is stored to the third cache area.
  8. The photographing apparatus according to claim 7, wherein the processor, when executing the computer instructions, implements:
    acquiring the amount of data stored in each of the first cache area, the second cache area and the third cache area, and adjusting the storable capacities of the first cache area, the second cache area and the third cache area respectively according to the stored amounts.
  9. The photographing apparatus according to claim 1, wherein the process of controlling the image sensor to acquire the original image data, implemented when the processor executes the computer instructions, comprises:
    locking image shooting parameters of the image sensor, and acquiring the original images according to the image shooting parameters.
  10. The photographing apparatus according to claim 9, wherein the image shooting parameters comprise any one or a combination of the following:
    an image exposure parameter, an image focus parameter, and an image white-balance parameter.
  11. The photographing apparatus according to any one of claims 1 to 10, wherein the display screen comprises a first display area and a second display area, and the processor, when executing the computer instructions, implements:
    controlling the image sensor to continuously acquire real-time image data;
    controlling the display screen to display the real-time image data in the first display area, and controlling the display screen to display the image data in the second display area.
  12. An unmanned aerial vehicle, wherein the unmanned aerial vehicle comprises an image sensor, a running memory, a memory, a processor, and computer instructions stored on the memory and executable by the processor, and the processor, when executing the computer instructions, implements:
    receiving a burst shooting instruction, and partitioning a pre-cache area in the running memory according to the burst shooting instruction;
    controlling the image sensor to acquire original image data, storing the original image data to the pre-cache area, and sending image data generated according to the original image data to a control terminal.
  13. The unmanned aerial vehicle according to claim 12, wherein the process of controlling the image sensor to acquire the original image data, implemented when the processor executes the computer instructions, comprises:
    acquiring a burst interval duration according to the burst shooting instruction, and controlling the image sensor to acquire the original image data according to the interval duration.
  14. The unmanned aerial vehicle according to claim 12, wherein the processor, when executing the computer instructions, implements:
    generating corresponding intermediate image data according to the original image data, storing the intermediate image data to the pre-cache area, and generating the image data according to the intermediate image data;
    deleting the original image data from the pre-cache area, and sending the image data to the control terminal.
  15. The unmanned aerial vehicle according to claim 14, wherein the processor, when executing the computer instructions, implements:
    generating corresponding target image data according to the intermediate image data, and storing the target image data to the pre-cache area;
    deleting the intermediate image data from the pre-cache area.
  16. The unmanned aerial vehicle according to claim 15, wherein the unmanned aerial vehicle further comprises an encoder, and the process of generating the corresponding target image data according to the intermediate image data, implemented when the processor executes the computer instructions, comprises:
    acquiring image processing information according to the burst shooting instruction;
    controlling the encoder according to the image processing information to encode the intermediate image data, so as to generate the target image data.
  17. The unmanned aerial vehicle according to claim 15, wherein the processor, when executing the computer instructions, implements:
    storing each target image data to the memory in the order in which the multiple target image data are generated, and correspondingly deleting the target image data from the pre-cache area.
  18. The unmanned aerial vehicle according to claim 15, wherein the pre-cache area comprises a first cache area, a second cache area and a third cache area; the original image data is stored in the first cache area, the intermediate image data is stored to the second cache area, and the target image data is stored to the third cache area.
  19. The unmanned aerial vehicle according to claim 18, wherein the processor, when executing the computer instructions, implements:
    acquiring the amount of data stored in each of the first cache area, the second cache area and the third cache area, and adjusting the storable capacities of the first cache area, the second cache area and the third cache area respectively according to the stored amounts.
  20. The unmanned aerial vehicle according to claim 12, wherein the process of controlling the image sensor to acquire the original image data, implemented when the processor executes the computer instructions, comprises:
    locking image shooting parameters of the image sensor, and acquiring the original images according to the image shooting parameters.
  21. The unmanned aerial vehicle according to claim 20, wherein the image shooting parameters comprise any one or a combination of the following:
    an image exposure parameter, an image focus parameter, and an image white-balance parameter.
  22. A control terminal of an unmanned aerial vehicle, wherein the control terminal of the unmanned aerial vehicle comprises a display screen, a memory, a processor, and computer instructions stored on the memory and executable by the processor, and the processor, when executing the computer instructions, implements:
    sending a burst shooting instruction to the unmanned aerial vehicle, so as to control an image sensor provided on the unmanned aerial vehicle to acquire original image data;
    receiving image data generated according to the original image data, and controlling the display screen to display the image data.
  23. The control terminal of the unmanned aerial vehicle according to claim 22, wherein the burst shooting instruction comprises a burst interval duration, and the unmanned aerial vehicle controls the image sensor to acquire the image data according to the interval duration.
  24. The control terminal of the unmanned aerial vehicle according to claim 22, wherein the display screen comprises a first display area and a second display area, and the processor, when executing the computer instructions, implements:
    continuously receiving real-time image data sent by the unmanned aerial vehicle;
    controlling the display screen to display the real-time image data in the first display area, and controlling the display screen to display the image data in the second display area.
  25. A photographing method, comprising:
    receiving a burst shooting instruction, and partitioning a pre-cache area in the running memory of a photographing apparatus according to the burst shooting instruction;
    acquiring original image data, and storing the original image data to the pre-cache area;
    displaying image data generated according to the original image data.
  26. The photographing method according to claim 25, further comprising:
    acquiring a burst interval duration according to the burst shooting instruction, and acquiring the original image data according to the interval duration.
  27. The photographing method according to claim 25, wherein displaying the image data generated according to the original image data comprises:
    generating corresponding intermediate image data according to the original image data, storing the intermediate image data to the pre-cache area, and generating the image data according to the intermediate image data;
    deleting the original image data from the pre-cache area, and displaying the image data.
  28. The photographing method according to claim 27, wherein after the step of deleting the original image data from the pre-cache area, the photographing method further comprises:
    generating corresponding target image data according to the intermediate image data, and storing the target image data to the pre-cache area;
    deleting the intermediate image data from the pre-cache area.
  29. The photographing method according to claim 28, further comprising:
    acquiring image processing information according to the burst shooting instruction;
    encoding the intermediate image data according to the image processing information, so as to generate the target image data.
  30. The photographing method according to claim 28, further comprising:
    storing each target image data to a memory in the order in which the multiple target image data are generated, and correspondingly deleting the target image data from the pre-cache area.
  31. The photographing method according to claim 28, wherein the pre-cache area comprises a first cache area, a second cache area and a third cache area; the original image data is stored in the first cache area, the intermediate image data is stored to the second cache area, and the target image data is stored to the third cache area.
  32. The photographing method according to claim 31, further comprising:
    acquiring the amount of data stored in each of the first cache area, the second cache area and the third cache area, and adjusting the storable capacities of the first cache area, the second cache area and the third cache area respectively according to the stored amounts.
  33. The photographing method according to claim 25, wherein the step of acquiring the original image data specifically comprises:
    locking image shooting parameters, and acquiring the original images according to the image shooting parameters.
  34. The photographing method according to claim 33, wherein the image shooting parameters comprise any one or a combination of the following:
    an image exposure parameter, an image focus parameter, and an image white-balance parameter.
  35. The photographing method according to any one of claims 25 to 34, wherein the step of displaying the image data generated according to the original image data further comprises:
    continuously acquiring real-time image data;
    displaying the real-time image data in a first display area, and displaying the image data in a second display area.
PCT/CN2019/087116 2019-05-15 2019-05-15 Photographing apparatus, unmanned aerial vehicle, control terminal and photographing method WO2020227997A1 (zh)

Priority Applications (3)

Application Number Priority Date Filing Date Title
PCT/CN2019/087116 WO2020227997A1 (zh) 2019-05-15 2019-05-15 Photographing apparatus, unmanned aerial vehicle, control terminal and photographing method
CN201980007822.3A CN111567033A (zh) 2019-05-15 2019-05-15 Photographing apparatus, unmanned aerial vehicle, control terminal and photographing method
US17/513,870 US20220053126A1 (en) 2019-05-15 2021-10-28 Photographing apparatus, unmanned aerial vehicle, control terminal and method for photographing

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
PCT/CN2019/087116 WO2020227997A1 (zh) 2019-05-15 2019-05-15 Photographing apparatus, unmanned aerial vehicle, control terminal and photographing method

Related Child Applications (1)

Application Number Title Priority Date Filing Date
US17/513,870 Continuation US20220053126A1 (en) 2019-05-15 2021-10-28 Photographing apparatus, unmanned aerial vehicle, control terminal and method for photographing

Publications (1)

Publication Number Publication Date
WO2020227997A1 true WO2020227997A1 (zh) 2020-11-19

Family

ID=72074005

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2019/087116 WO2020227997A1 (zh) Photographing apparatus, unmanned aerial vehicle, control terminal and photographing method

Country Status (3)

Country Link
US (1) US20220053126A1 (zh)
CN (1) CN111567033A (zh)
WO (1) WO2020227997A1 (zh)

Families Citing this family (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112312023B (zh) * 2020-10-30 2022-04-08 北京小米移动软件有限公司 一种相机的缓存队列分配方法、装置、电子设备和存储介质
CN112422832B (zh) * 2020-11-20 2022-07-15 展讯通信(天津)有限公司 图像数据的传输方法、移动终端及存储介质
CN112672046B (zh) * 2020-12-18 2022-05-03 闻泰通讯股份有限公司 连拍图像的存储方法、装置、电子设备和存储介质
CN112925478B (zh) * 2021-01-29 2022-10-25 惠州Tcl移动通信有限公司 相机存储空间控制方法、智能终端及计算机可读存储介质
CN116028383B (zh) * 2022-08-22 2023-10-20 荣耀终端有限公司 缓存管理方法及电子设备
CN116668836B (zh) * 2022-11-22 2024-04-19 荣耀终端有限公司 拍照处理方法和电子设备

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101496395A (zh) * 2006-09-20 2009-07-29 卡西欧计算机株式会社 摄像设备和图像显示控制方法
JP2013135459A (ja) * 2011-12-27 2013-07-08 Canon Marketing Japan Inc 撮影装置とその制御方法及びプログラム
CN103327252A (zh) * 2013-06-26 2013-09-25 深圳市中兴移动通信有限公司 拍摄装置及其拍摄方法
CN105791664A (zh) * 2014-12-23 2016-07-20 中国移动通信集团公司 一种终端摄像处理方法、装置及终端
CN105827942A (zh) * 2015-09-24 2016-08-03 维沃移动通信有限公司 一种快速拍照方法及电子设备

Family Cites Families (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102014265A (zh) * 2007-01-15 2011-04-13 松下电器产业株式会社 摄像装置
KR20140035769A (ko) * 2012-09-14 2014-03-24 삼성전자주식회사 연속 촬영 이미지 데이터를 처리할 수 있는 방법들과 장치들
JP6020545B2 (ja) * 2014-12-24 2016-11-02 カシオ計算機株式会社 撮影装置、撮影条件設定方法及び撮影条件設定プログラム
CN105827951B (zh) * 2016-01-29 2019-05-17 维沃移动通信有限公司 一种运动对象拍照方法及移动终端
CN108965689A (zh) * 2017-05-27 2018-12-07 昊翔电能运动科技(昆山)有限公司 无人机拍摄方法及装置、无人机和地面控制装置
CN107360376A (zh) * 2017-08-30 2017-11-17 努比亚技术有限公司 一种拍摄方法及终端
CN108322656B (zh) * 2018-03-09 2022-02-15 深圳市道通智能航空技术股份有限公司 一种拍摄方法、拍摄装置及拍摄系统
CN207943180U (zh) * 2018-03-13 2018-10-09 济南赛尔无人机科技有限公司 一种用于无人机的照相装置
KR102061867B1 (ko) * 2018-09-10 2020-01-02 한성욱 이미지 생성 장치 및 그 방법
CN109120853B (zh) * 2018-09-27 2020-09-08 维沃移动通信有限公司 一种长曝光图像拍摄方法及终端


Also Published As

Publication number Publication date
US20220053126A1 (en) 2022-02-17
CN111567033A (zh) 2020-08-21


Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 19928690

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 19928690

Country of ref document: EP

Kind code of ref document: A1