WO2016011859A1 - Method for shooting a light painting video, mobile terminal and computer storage medium - Google Patents

Method for shooting a light painting video, mobile terminal and computer storage medium

Info

Publication number
WO2016011859A1
Authority
WO
WIPO (PCT)
Prior art keywords
pixel
image
composite image
video
mobile terminal
Prior art date
Application number
PCT/CN2015/081871
Other languages
English (en)
French (fr)
Inventor
刘林汶
里强
苗雷
Original Assignee
努比亚技术有限公司
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 努比亚技术有限公司 filed Critical 努比亚技术有限公司
Priority to US15/327,627 priority Critical patent/US10129488B2/en
Publication of WO2016011859A1 publication Critical patent/WO2016011859A1/zh

Classifications

    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N5/00Details of television systems
    • H04N5/222Studio circuitry; Studio devices; Studio equipment
    • H04N5/262Studio circuits, e.g. for mixing, switching-over, change of character of image, other special effects ; Cameras specially adapted for the electronic generation of special effects
    • H04N5/2621Cameras specially adapted for the electronic generation of special effects during image pickup, e.g. digital cameras, camcorders, video cameras having integrated special effects capability
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04LTRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L65/00Network arrangements, protocols or services for supporting real-time applications in data packet communication
    • H04L65/60Network streaming of media packets
    • H04L65/70Media network packetisation
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/56Cameras or camera modules comprising electronic image sensors; Control thereof provided with illuminating means
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/57Mechanical or electrical details of cameras or camera modules specially adapted for being embedded in other devices
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/60Control of cameras or camera modules
    • H04N23/63Control of cameras or camera modules by using electronic viewfinders
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/70Circuitry for compensating brightness variation in the scene
    • H04N23/74Circuitry for compensating brightness variation in the scene by influencing the scene brightness using illuminating means

Definitions

  • the present invention relates to the field of camera technology, and in particular, to a method for shooting a light painting video, a mobile terminal, and a computer storage medium.
  • the shooting functions of current mobile terminals rely on the camera hardware and the processing algorithms provided by the chip supplier, and offer only a few fixed shooting modes such as focus and white balance.
  • in recent years a shooting mode called light painting photography has been launched, and users can use light painting photography to create art.
  • light painting photography refers to a shooting mode that uses a long exposure to create a special image through changes of the light source during the exposure. Because a long exposure is required, corresponding photosensitive hardware is needed, and photosensitive hardware capable of supporting long exposures is relatively expensive; therefore, only professional imaging devices such as SLR cameras currently offer light painting photography.
  • the main purpose of the present invention is to realize the shooting of light painting video, satisfy the diverse needs of the user, and improve the user experience.
  • to achieve this, the present invention provides a method of shooting a light painting video, the method comprising the following steps:
  • after the shooting starts, light painting images are continuously collected by the camera; the light painting images are read at intervals and a composite image is generated from the current light painting image and the previously collected light painting images; the composite image is captured, video encoding is performed on the captured composite images, and a light painting video is generated from the encoded composite images.
  • the step of generating a composite image from the current light painting image and the previously acquired light painting images comprises:
  • pixels satisfying the preset condition are selected from the current light painting image and the previously acquired light painting images, and addition is performed on the pixels at the same position to generate a composite image.
  • the selecting the pixel that meets the preset condition comprises: determining whether the brightness parameter of the pixel is greater than a preset threshold, and if yes, determining that the pixel meets the preset condition and selecting the pixel.
  • alternatively, the selecting the pixel that meets the preset condition comprises: determining whether the pixel is a mutated pixel;
  • if the pixel is a mutated pixel, calculating the average value of the brightness parameters of a preset number of pixels around the mutated pixel, and determining whether the average is greater than a preset threshold; if yes, determining that the mutated pixel meets the preset condition and selecting the mutated pixel;
  • if the pixel is not a mutated pixel, further determining whether the brightness parameter of the pixel is greater than the preset threshold; if yes, determining that the pixel meets the preset condition and selecting the pixel.
  • before the step of performing video encoding on the captured composite image, the method for shooting the light painting video further includes: performing special effect processing on the captured composite image.
  • the present invention further provides a mobile terminal, where the mobile terminal includes:
  • an acquisition module configured to continuously collect light painting images through the camera after the shooting starts;
  • an image generation module configured to read the light painting images at intervals, and generate a composite image from the current light painting image and the previously acquired light painting images;
  • a video generation module configured to capture the composite image, perform video encoding on the captured composite images, and generate a light painting video from the encoded composite images.
  • the image generation module is configured to:
  • pixels satisfying the preset condition are selected, and addition is performed on the pixels at the same position to generate a composite image.
  • the image generating module is further configured to: determine whether the brightness parameter of the pixel is greater than a preset threshold, and if yes, determine that the pixel meets the preset condition and select the pixel.
  • the image generating module is further configured to: determine whether the pixel is a mutated pixel;
  • if the pixel is a mutated pixel, calculate the average value of the brightness parameters of a preset number of pixels around the mutated pixel, and determine whether the average is greater than a preset threshold; if yes, determine that the mutated pixel meets the preset condition and select the mutated pixel;
  • if the pixel is not a mutated pixel, further determine whether the brightness parameter of the pixel is greater than the preset threshold; if yes, determine that the pixel meets the preset condition and select the pixel.
  • the mobile terminal further includes:
  • a processing module configured to perform special effect processing on the captured composite image.
  • the present invention also provides a computer storage medium having stored therein computer executable instructions for performing the above processing.
  • in the present invention, after shooting starts, light painting images are continuously collected with the camera and read at intervals, and a composite image is generated from the current light painting image and the previously collected light painting images; the composite image is captured, video encoding is performed on the captured composite images, and a light painting video is generated from the encoded composite images, thereby realizing the shooting of a light painting video.
  • the user can thus use the camera to capture a video showing the movement of the light source, or apply the technique to similar scenarios, which satisfies the diverse needs of the user and improves the user experience.
  • because each composite image is encoded as it is produced, the generated composite images do not need to be stored, so the video file finally obtained is not large and does not occupy too much storage space.
  • FIG. 1 is a schematic flowchart of a first embodiment of the method for shooting a light painting video according to the present invention;
  • FIG. 2 is a schematic flowchart of a second embodiment of the method for shooting a light painting video according to the present invention;
  • FIG. 3 is a schematic diagram of functional modules of a first embodiment of a mobile terminal according to the present invention.
  • FIG. 4 is a schematic diagram of functional modules of a second embodiment of a mobile terminal according to the present invention.
  • FIG. 5 is a schematic diagram of an electrical structure of an apparatus for photographing a light-drawn video according to an embodiment of the present invention.
  • Embodiments of the present invention provide a method for shooting a light painting video.
  • referring to FIG. 1, FIG. 1 is a schematic flowchart of a first embodiment of the method for shooting a light painting video according to the present invention.
  • in one embodiment, the method of shooting a light painting video includes:
  • Step S10, after the shooting starts, continuously collecting light painting images through the camera;
  • the invention adds a light painting photography mode to the shooting function of the mobile terminal; the user can select either the light painting photography mode or the ordinary photography mode for shooting. In the light painting mode, parameters such as ISO, picture quality and scene mode are adjusted and constrained in advance according to the requirements of light painting scenes, and these parameters are output to the relevant hardware devices so that they can sample or process the collected image data accordingly.
  • when the user selects the light painting photography mode and presses the shooting button or triggers the virtual shooting button, the mobile terminal starts light painting shooting and the camera continuously collects light painting images; the rate at which the camera collects light painting images can be set in advance. To keep the light painting coherent, the camera needs to capture at least ten-odd images per second, while the subsequent compositing of the images often cannot keep up with the acquisition rate, so the light painting images are preferably buffered in a cache module (of course, if the processing speed of the mobile terminal is fast enough, the cache may be omitted).
  • further, during acquisition the mobile terminal can adjust the acquisition rate in real time according to the remaining space of the cache module, which both makes maximal use of the terminal's processing capability and prevents data overflow, and hence data loss, caused by acquiring too fast.
  • Step S20, reading the light painting images at intervals, and generating a composite image from the current light painting image and the previously acquired light painting images;
  • the image synthesis module in the mobile terminal, which processes the light painting images to generate composite images, either receives and reads the collected light painting images directly at intervals, or reads them from the cache module at intervals in real time for compositing and then resets the cache module, clearing its data to make room for subsequent data.
  • the rate or interval at which the image synthesis module reads the light painting images may be preset, or may depend on the computing speed of the mobile terminal.
  • the image synthesis module superimposes the pixels of the current light painting image onto those of the previously acquired light painting images to generate a composite image; since the camera collects light painting images continuously, composite images are also generated continuously in real time.
  • during shooting, the first light painting image collected is taken as the image to be composited; after the second light painting image is collected it is combined with the image to be composited into the current composite image, and each subsequently collected light painting image is combined with the previously generated composite image, finally yielding a composite image formed from all the light painting images captured.
  • the reading at intervals may be performed periodically; the length of the period can be set according to actual conditions.
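To make the iterative compositing concrete, here is a minimal sketch of the read-at-intervals loop, assuming Python with NumPy, a `frame_cache` queue filled by the acquisition thread, and a `merge` function implementing the pixel-selection rule described below; all names are illustrative, not part of the patent.

```python
import queue
import numpy as np

def composite_loop(frame_cache: "queue.Queue[np.ndarray]", merge, on_composite):
    """Read buffered light painting frames at intervals and fold them into one
    running composite image, handing each new composite to `on_composite`
    (e.g. the preview display and the video-capture thread)."""
    composite = frame_cache.get()      # first frame becomes the image to be composited
    on_composite(composite)
    while True:
        frame = frame_cache.get()      # blocks until the next frame is available
        if frame is None:              # acquisition thread signals end of shooting
            break
        composite = merge(composite, frame)
        on_composite(composite)
```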
  • preferably, the image synthesis module selects, from the current light painting image and the previously acquired light painting images, the pixels that satisfy a preset condition, and then performs the addition on those pixels.
  • specifically, in one implementation, when judging whether a pixel satisfies the preset condition, the image synthesis module may directly determine whether the pixel's brightness parameter is greater than a threshold; if so, the pixel is judged to satisfy the preset condition.
  • after selecting, from the current light painting image and the previously acquired light painting images, the pixels whose brightness parameter exceeds the threshold (i.e. the absolute brightness of a point in the image exceeds the threshold), the image synthesis module performs the addition only on these pixels, which filters out the lower-brightness pixels to some extent and prevents the accumulation of ambient light from polluting the final composite image.
  • the threshold may be determined from the average brightness of the image; the brightness parameter is an optical parameter such as an RGB value or a YUV value.
  • for example, suppose a light painting image contains n pixel units (pixel unit 1, pixel unit 2, ..., pixel unit n), the pixel parameters of pixel units 101-200 exceed the threshold in the current light painting image, and the brightness parameters of pixel units 1-100 exceed the threshold in the past light painting image; the addition is then performed on the current and past pixel parameters of pixel units 1-200.
  • if the brightness parameter of pixel unit 1 is 10 in the current light painting image and 100 in the past light painting image, its brightness parameter in the composite image after the addition is 100 + 10 = 110.
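A minimal sketch of this thresholded accumulation, assuming 8-bit grayscale brightness values in NumPy arrays; the threshold value and the clipping to 255 are illustrative choices, not specified by the patent.

```python
import numpy as np

def threshold_accumulate(composite, current, threshold=60):
    """Add `current` into `composite` only where either frame is bright enough.

    Pixels whose brightness exceeds `threshold` in the current or the past
    image are summed at the same position; dimmer pixels keep the existing
    composite value, which limits the build-up of ambient light.
    """
    past = composite.astype(np.uint16)
    cur = current.astype(np.uint16)
    selected = (cur > threshold) | (past > threshold)
    summed = np.clip(past + cur, 0, 255)
    return np.where(selected, summed, past).astype(np.uint8)
```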
  • in addition, the image synthesis module performs noise reduction on the composite image, and controls the blending ratio of the newly synthesized image according to the exposure level of the existing image to suppress overexposure.
  • in another implementation, the pixels satisfying the preset condition may also be selected by the following steps: determine whether the pixel is a mutated pixel;
  • if the pixel is a mutated pixel, calculate the average of the brightness parameters of a preset number of pixels around the mutated pixel, and determine whether the average is greater than a preset threshold; if so, the mutated pixel satisfies the preset condition and is selected;
  • if the pixel is not a mutated pixel, further determine whether the pixel's brightness parameter is greater than the preset threshold; if so, the pixel satisfies the preset condition and is selected.
  • the image synthesis module compares a pixel's brightness parameter with the average brightness parameter of several surrounding pixels (preferably eight); if it is higher or lower than the average by a preset multiple, the pixel is judged to be a mutated pixel.
  • the preset multiple is preferably 2 times above the average or 0.5 times below the average.
  • if the pixel is a mutated pixel, the average of the brightness parameters of its surrounding pixels is taken; the surrounding pixels are preferably several pixels around it, and the preset number is preferably eight. After the average of the brightness parameters of the preset number of surrounding pixels is calculated, it is compared with the preset threshold: if it is greater, the mutated pixel satisfies the preset condition and is selected for the subsequent addition that generates the composite image, which removes noise points from the image and prevents them from degrading the final composite; if it is less than or equal to the preset threshold, the mutated pixel does not satisfy the preset condition and is not selected.
  • if the pixel is not a mutated pixel, its brightness parameter is compared directly with the preset threshold: if it is greater, the pixel satisfies the preset condition and is selected for the subsequent addition that generates the composite image; if it is less than or equal to the threshold, the pixel does not satisfy the preset condition and is not selected.
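A sketch of this per-pixel decision rule, assuming grayscale brightness values and the preferred 8-pixel neighbourhood; the threshold and the preset multiples are the illustrative values mentioned above.

```python
import numpy as np

def satisfies_condition(image, y, x, threshold=60, high_mult=2.0, low_mult=0.5):
    """Return True if pixel (y, x) should take part in the addition.

    A pixel is treated as mutated (a likely noise point) when its brightness
    deviates from the mean of its neighbours by more than the preset
    multiples; a mutated pixel is kept only if that neighbourhood mean itself
    exceeds the threshold, otherwise the pixel's own brightness is compared
    with the threshold.
    """
    h, w = image.shape
    patch = image[max(0, y - 1):min(h, y + 2), max(0, x - 1):min(w, x + 2)].astype(np.float32)
    neighbour_mean = (patch.sum() - float(image[y, x])) / (patch.size - 1)
    value = float(image[y, x])
    mutated = value > high_mult * neighbour_mean or value < low_mult * neighbour_mean
    if mutated:
        return neighbour_mean > threshold
    return value > threshold
```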
  • although the composite images are generated continuously, generation is limited by the processing speed of the image synthesis module, so adjacent composite images are in fact separated by a certain time interval: the faster the computation, the shorter the interval.
  • at the same time, the speed of image generation in turn affects the speed of collecting image data: the faster composite images are generated, the faster the image data in the cache module is read out and the faster its space is freed, so the mobile terminal can also collect light painting image data faster.
  • the mobile terminal displays the composite image on the display screen in real time so that the user can preview the current light painting effect.
  • to keep the preview smooth, the composite image displayed by the mobile terminal is a compressed small-size thumbnail while the full-size image is stored, i.e. display and storage run as two threads.
  • when the user presses the capture button again or presses the end button, the shooting ends.
  • the mobile terminal may store every composite image locally, or may store only the last composite image generated when the shooting ends.
  • Step S30, capturing the composite image;
  • Step S31, performing video encoding on the captured composite images, and generating a light painting video from the video-encoded composite images.
  • after the composite images corresponding to the light painting images are generated, the composite images may be captured continuously or at intervals, and video encoding is performed on them to generate the light painting video.
  • continuous capture means that every time a composite image is generated, it is captured for encoding, i.e. all generated composite images become material for the video.
  • generating composite images and capturing them for encoding run as two synchronized threads; because the composite images are encoded while shooting, the generated composite images do not need to be stored.
  • interval capture means selectively capturing part of the composite images as the material of the video.
  • the interval mode can be a manual interval mode or an automatic interval mode.
  • the manual interval mode provides an operation interface so that the user can tap to trigger the capture of image data, e.g. tapping the screen captures the currently generated composite image (when there is a preview, the current preview image); the automatic interval mode captures a composite image at a preset time interval, i.e. one composite image every preset period.
  • the capture interval is preferably longer than the interval at which the camera collects images (i.e. the exposure time), so that the same composite image is not captured two or more times and the size of the final video file is reduced.
  • for example, a composite image — the currently generated composite image, i.e. the light painting photo at the current moment — may be captured every 1 to 2 minutes. The captured composite images are then video-encoded into common video encodings such as MPEG-4, H.264, H.263 or VP8 for later generation of the video file; the method for encoding the composite images is the same as in the prior art and is not repeated here.
  • video file formats include, but are not limited to, mp4, 3gp, avi, rmvb and the like.
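A rough sketch of the automatic-interval capture loop feeding a video encoder, assuming OpenCV is available and that `get_composite()` returns the most recently generated composite image as a BGR array from the synthesis thread; the codec, file name and timings are illustrative assumptions.

```python
import time
import cv2

def record_light_painting_video(get_composite, out_path="light_painting.mp4",
                                fps=25, capture_interval=2.0, duration=120.0):
    """Grab the latest composite image every `capture_interval` seconds and
    append it to a video file; frames are encoded immediately, so no
    intermediate composite images need to be stored."""
    first = get_composite()
    height, width = first.shape[:2]
    writer = cv2.VideoWriter(out_path, cv2.VideoWriter_fourcc(*"mp4v"), fps, (width, height))
    writer.write(first)
    start = time.time()
    while time.time() - start < duration:
        time.sleep(capture_interval)
        writer.write(get_composite())
    writer.release()
```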
  • in this embodiment, after shooting starts the camera continuously collects light painting images, which are read at intervals, and a composite image is generated from the current light painting image and the previously collected light painting images; the composite image is captured, video encoding is performed on the captured composite images, and a light painting video is generated from the encoded composite images, thereby realizing the shooting of a light painting video.
  • the user can thus use the shooting device to capture a video showing the movement of the light source, or apply the technique to similar scenarios, which satisfies the diverse needs of the user and improves the user experience.
  • because the composite images are encoded while shooting, the generated composite images do not need to be stored, so the video file finally obtained is not large and does not occupy too much storage space.
  • referring to FIG. 2, FIG. 2 is a schematic flowchart of a second embodiment of the method for shooting a light painting video according to the present invention.
  • based on the first embodiment, before step S31 is performed the method further includes:
  • Step S40, performing special effect processing on the captured composite image.
  • further, to make shooting more interesting for the user, special effect processing is performed on the captured composite image before it is encoded; the special effect processing includes basic effect processing, filter effect processing and/or special scene effect processing.
  • basic effect processing covers noise reduction, brightness, chromaticity and the like;
  • filter effect processing covers sketch, negative, black-and-white and the like;
  • special scene effect processing covers rendering common weather, starry sky and so on.
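As an illustration of the filter effects named above, a minimal sketch of a negative and a black-and-white filter on an 8-bit BGR frame (NumPy assumed; the exact effects used by the terminal are not specified in the patent).

```python
import numpy as np

def apply_filter(frame, effect="negative"):
    """Apply a simple filter effect to an 8-bit BGR frame."""
    if effect == "negative":
        return 255 - frame                           # negative-film look
    if effect == "black_and_white":
        gray = frame.mean(axis=2).astype(np.uint8)
        return np.stack([gray, gray, gray], axis=2)  # simple desaturation
    return frame                                     # unknown effect: leave unchanged
```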
  • further, so that the user can record sound while recording the video, the following is also performed while the composite images are captured and encoded: turning on the audio device and receiving audio data; and encoding the audio data.
  • there are two main sources of audio data: microphone capture or a custom audio file.
  • when the audio source is a custom audio file, the audio file is first decoded to obtain the raw audio data.
  • preferably, before the audio data is encoded, special effect processing is also performed on the received audio data, including effect recording, voice changing, pitch changing and/or speed changing.
  • with the audio recording function added, the video file is generated as follows: upon the user's end-of-shooting instruction, the encoded image data and the encoded audio data are assembled into a video file in the video file format set by the user.
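A sketch of the final step that combines the encoded video and audio streams into one file, here shown by shelling out to the ffmpeg command-line tool; the use of ffmpeg and the file names are assumptions for illustration, not part of the patent.

```python
import subprocess

def mux_video_and_audio(video_path, audio_path, out_path="light_painting_final.mp4"):
    """Remux an already-encoded video stream and audio stream into one file."""
    subprocess.run(
        ["ffmpeg", "-y",
         "-i", video_path,   # encoded composite-image video
         "-i", audio_path,   # encoded audio (microphone or custom file)
         "-c:v", "copy",     # streams are already encoded, so just remux
         "-c:a", "copy",
         "-shortest",        # stop at the end of the shorter stream
         out_path],
        check=True,
    )
```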
  • the invention also provides a mobile terminal.
  • FIG. 3 is a schematic diagram of functional modules of a first embodiment of a mobile terminal according to the present invention.
  • the mobile terminal includes:
  • the acquisition module 10 is configured to continuously collect light painting images through the camera after the shooting starts;
  • the image generation module 20 is configured to read the light painting images at intervals, and generate a composite image from the current light painting image and the previously acquired light painting images;
  • the video generation module 30 is configured to capture the composite image, perform video encoding on the captured composite images, and generate a light painting video from the video-encoded composite images.
  • the invention adds a light painting photography mode to the shooting function of the mobile terminal; the user can select either the light painting photography mode or the ordinary photography mode for shooting. In the light painting mode, parameters such as ISO, picture quality and scene mode are adjusted and constrained in advance according to the requirements of light painting scenes, and these parameters are output to the relevant hardware devices so that they can sample or process the collected image data accordingly.
  • when the user selects the light painting photography mode and presses the shooting button or triggers the virtual shooting button, the mobile terminal starts light painting shooting and the acquisition module 10 continuously collects light painting images with the camera; the rate at which the camera collects light painting images can be set in advance.
  • to keep the light painting coherent, the camera needs to capture at least ten-odd images per second, while the subsequent compositing of the images often cannot keep up with the acquisition rate, so the light painting images are preferably buffered in a cache module (of course, if the processing speed of the mobile terminal is fast enough, the cache may be omitted).
  • further, during acquisition the mobile terminal can adjust the acquisition rate in real time according to the remaining space of the cache module, which both makes maximal use of the terminal's processing capability and prevents data overflow, and hence data loss, caused by acquiring too fast.
  • the image generation module 20, through the image synthesis module in the mobile terminal that processes the light painting images to generate composite images, either receives and reads the collected light painting images directly at intervals, or reads them from the cache module at intervals in real time for compositing and then resets the cache module, clearing its data to make room for subsequent data.
  • the rate or interval at which the image synthesis module reads the light painting images may be preset, or may depend on the computing speed of the mobile terminal.
  • the image synthesis module superimposes the pixels of the current light painting image onto those of the previously acquired light painting images to generate a composite image; since the camera collects light painting images continuously, composite images are also generated continuously in real time.
  • during shooting, the first light painting image collected is taken as the image to be composited; after the second light painting image is collected it is combined with the image to be composited into the current composite image, and each subsequently collected light painting image is combined with the previously generated composite image, finally yielding a composite image formed from all the light painting images captured.
  • preferably, the image synthesis module selects, from the current light painting image and the previously acquired light painting images, the pixels that satisfy a preset condition, and then performs the addition on those pixels.
  • specifically, in one implementation, when judging whether a pixel satisfies the preset condition, the image synthesis module may directly determine whether the pixel's brightness parameter is greater than a threshold; if so, the pixel is judged to satisfy the preset condition.
  • after selecting, from the current light painting image and the previously acquired light painting images, the pixels whose brightness parameter exceeds the threshold (i.e. the absolute brightness of a point in the image exceeds the threshold), the image synthesis module performs the addition only on these pixels, which filters out the lower-brightness pixels to some extent and prevents the accumulation of ambient light from polluting the final composite image.
  • the threshold may be determined from the average brightness of the image; the brightness parameter is an optical parameter such as an RGB value or a YUV value.
  • in addition, the image synthesis module performs noise reduction on the composite image, and controls the blending ratio of the newly synthesized image according to the exposure level of the existing image to suppress overexposure.
  • in another implementation, the pixels satisfying the preset condition may also be selected by the following steps: determine whether the pixel is a mutated pixel;
  • if the pixel is a mutated pixel, calculate the average of the brightness parameters of a preset number of pixels around the mutated pixel, and determine whether the average is greater than a preset threshold; if so, the mutated pixel satisfies the preset condition and is selected;
  • if the pixel is not a mutated pixel, further determine whether the pixel's brightness parameter is greater than the preset threshold; if so, the pixel satisfies the preset condition and is selected.
  • the image synthesis module compares a pixel's brightness parameter with the average brightness parameter of several surrounding pixels (preferably eight); if it is higher or lower than the average by a preset multiple, the pixel is judged to be a mutated pixel.
  • the preset multiple is preferably 2 times above the average or 0.5 times below the average.
  • if the pixel is a mutated pixel, the average of the brightness parameters of its surrounding pixels is taken; the surrounding pixels are preferably several pixels around it, and the preset number is preferably eight. After the average of the brightness parameters of the preset number of surrounding pixels is calculated, it is compared with the preset threshold: if it is greater, the mutated pixel satisfies the preset condition and is selected for the subsequent addition that generates the composite image, which removes noise points from the image and prevents them from degrading the final composite; if it is less than or equal to the preset threshold, the mutated pixel does not satisfy the preset condition and is not selected.
  • if the pixel is not a mutated pixel, its brightness parameter is compared directly with the preset threshold: if it is greater, the pixel satisfies the preset condition and is selected for the subsequent addition that generates the composite image; if it is less than or equal to the threshold, the pixel does not satisfy the preset condition and is not selected.
  • although the composite images are generated continuously, generation is limited by the processing speed of the image synthesis module, so adjacent composite images are in fact separated by a certain time interval: the faster the computation, the shorter the interval.
  • at the same time, the speed of image generation in turn affects the speed of collecting image data: the faster composite images are generated, the faster the image data in the cache module is read out and the faster its space is freed, so the mobile terminal can also collect light painting image data faster.
  • the mobile terminal displays the composite image on the display screen in real time so that the user can preview the current light painting effect.
  • to keep the preview smooth, the composite image displayed by the mobile terminal is a compressed small-size thumbnail while the full-size image is stored, i.e. display and storage run as two threads.
  • when the user presses the capture button again or presses the end button, the shooting ends.
  • the mobile terminal may store every composite image locally, or may store only the last composite image generated when the shooting ends.
  • after the composite images corresponding to the light painting images are generated, the video generation module 30 may capture the composite images continuously or at intervals, and perform video encoding on them to generate the light painting video. Continuous capture means that every time a composite image is generated, it is captured for encoding, i.e. all generated composite images become material for the video. Generating composite images and capturing them for encoding run as two synchronized threads; because the composite images are encoded while shooting, the generated composite images do not need to be stored.
  • interval capture means selectively capturing part of the composite images as the material of the video.
  • the interval mode can be a manual interval mode or an automatic interval mode.
  • the manual interval mode provides an operation interface so that the user can tap to trigger the capture of image data, e.g. tapping the screen captures the currently generated composite image (when there is a preview, the current preview image); the automatic interval mode captures a composite image at a preset time interval, i.e. one composite image every preset period.
  • the capture interval is preferably longer than the interval at which the camera collects images (i.e. the exposure time), so that the same composite image is not captured two or more times and the size of the final video file is reduced.
  • for example, a composite image — the currently generated composite image, i.e. the light painting photo at the current moment — may be captured every 1 to 2 minutes. The captured composite images are then video-encoded into common video encodings such as MPEG-4, H.264, H.263 or VP8 for later generation of the video file; the method for encoding the composite images is the same as in the prior art and is not repeated here.
  • video file formats include, but are not limited to, mp4, 3gp, avi, rmvb and the like.
  • in this embodiment, after shooting starts the camera continuously collects light painting images, which are read at intervals, and a composite image is generated from the current light painting image and the previously collected light painting images; the composite image is captured, video encoding is performed on the captured composite images, and a light painting video is generated from the encoded composite images, thereby realizing the shooting of a light painting video.
  • the user can thus use the shooting device to capture a video showing the movement of the light source, or apply the technique to similar scenarios, which satisfies the diverse needs of the user and improves the user experience.
  • because the composite images are encoded while shooting, the generated composite images do not need to be stored, so the video file finally obtained is not large and does not occupy too much storage space.
  • FIG. 4 is a schematic diagram of functional modules of a second embodiment of a mobile terminal according to the present invention.
  • the mobile terminal further includes:
  • the processing module 40 is configured to perform special effect processing on the captured composite image.
  • further, to make shooting more interesting for the user, before the captured composite image is encoded, the processing module 40 performs special effect processing on it; the special effect processing includes basic effect processing, filter effect processing and/or special scene effect processing.
  • basic effect processing covers noise reduction, brightness, chromaticity and the like;
  • filter effect processing covers sketch, negative, black-and-white and the like;
  • special scene effect processing covers rendering common weather, starry sky and so on.
  • further, so that the user can record sound while recording the video, the following is also performed while the composite images are captured and encoded: turning on the audio device and receiving audio data; and encoding the audio data.
  • there are two main sources of audio data: microphone capture or a custom audio file.
  • when the audio source is a custom audio file, the audio file is first decoded to obtain the raw audio data.
  • preferably, before the audio data is encoded, special effect processing is also performed on the received audio data, including effect recording, voice changing, pitch changing and/or speed changing.
  • with the audio recording function added, the video file is generated as follows: upon the user's end-of-shooting instruction, the encoded image data and the encoded audio data are assembled into a video file in the video file format set by the user.
  • in the several embodiments provided in this application, it should be understood that the disclosed device and method may be implemented in other ways.
  • the device embodiments described above are merely illustrative; for example, the division into units is only a division by logical function, and other divisions are possible in actual implementation: multiple units or components may be combined or integrated into another system, or some features may be ignored or not performed.
  • the couplings, direct couplings or communication connections between the components shown or discussed may be indirect couplings or communication connections through interfaces, devices or units, and may be electrical, mechanical or of other forms.
  • the units described above as separate components may or may not be physically separate, and the components shown as units may or may not be physical units, i.e. they may be located in one place or distributed over multiple network units; some or all of the units may be selected according to actual needs to achieve the purpose of the solution of this embodiment.
  • in addition, the functional units in the embodiments of the present invention may all be integrated into one processing module, each unit may stand alone as one unit, or two or more units may be integrated into one unit; the integrated unit may be implemented in the form of hardware, or in the form of hardware plus software functional units.
  • those of ordinary skill in the art will understand that all or part of the steps of the above method embodiments may be completed by hardware related to program instructions; the foregoing program may be stored in a computer-readable storage medium and, when executed, performs the steps of the above method embodiments.
  • the foregoing storage medium includes any medium that can store program code, such as a removable storage device, a read-only memory (ROM), a random access memory (RAM), a magnetic disk or an optical disk.
  • the embodiment of the present invention further provides a computer storage medium in which computer-executable instructions are stored; the computer-executable instructions are used to perform at least one of the above methods for shooting a light painting video, specifically the method shown in FIG. 1 and/or FIG. 2.
  • the computer storage medium may be any of various types of storage media such as ROM/RAM, a magnetic disk, an optical disk, a DVD or a USB flash drive.
  • the computer storage medium may optionally be a non-transitory storage medium.
  • it is worth noting that the device for shooting a light painting video in the embodiments of this application may correspond to any structure capable of performing the above functions, for example various types of processors with information-processing capability.
  • the processor may be an application processor (AP), a central processing unit (CPU), a digital signal processor (DSP), a field-programmable gate array (FPGA) or another information-processing structure or chip that can implement the above functions by executing specified code.
  • Fig. 5 is a block diagram showing a main electrical configuration of a camera according to an embodiment of the present invention.
  • the photographic lens 101 is composed of a plurality of optical lenses for forming a subject image, and is a single focus lens or a zoom lens.
  • the photographic lens 101 can be moved in the optical axis direction by the lens driving unit 111; the focus position of the photographic lens 101 is controlled based on a control signal from the lens drive control unit 112, and in the case of a zoom lens the focal length is also controlled.
  • the lens drive control circuit 112 performs drive control of the lens driving unit 111 in accordance with control commands from the microcomputer 107.
  • An imaging element 102 is disposed in the vicinity of a position where the subject image is formed by the photographing lens 101 on the optical axis of the photographing lens 101.
  • the imaging element 102 functions as an imaging unit that captures a subject image and acquires captured image data.
  • photodiodes constituting the pixels are two-dimensionally arranged in a matrix on the imaging element 102; each photodiode generates a photoelectric conversion current corresponding to the amount of received light, and this current is accumulated as charge by a capacitor connected to each photodiode.
  • the front surface of each pixel is provided with a Bayer array of RGB color filters.
  • the imaging element 102 is connected to an imaging circuit 103 that performs charge accumulation control and image signal readout control in the imaging element 102, and performs waveform shaping after reducing the reset noise of the read image signal (analog image signal). Further, gain improvement or the like is performed to obtain an appropriate signal level.
  • the imaging circuit 103 is connected to the A/D conversion unit 104, which performs analog-to-digital conversion on the analog image signal, and outputs a digital image signal (hereinafter referred to as image data) to the bus 199.
  • the bus 199 is a transmission path for transmitting various data read or generated inside the camera.
  • in addition to the A/D conversion unit 104, an image processor 105 and a JPEG processor 106 are connected to the bus 199.
  • the image processor 105 performs various kinds of image processing, such as OB subtraction, white balance adjustment, color matrix calculation, gamma conversion, color difference signal processing, noise removal, simultaneous processing and edge processing, on the image data output from the imaging element 102.
  • the JPEG processor 106 compresses the image data read out from the SDRAM 108 in accordance with the JPEG compression method. Further, the JPEG processor 106 performs decompression of JPEG image data for image reproduction display.
  • for image reproduction and display, a file recorded on the recording medium 115 is read out and decompressed in the JPEG processor 106, and the decompressed image data is temporarily stored in the SDRAM 108 and displayed on the LCD 116.
  • the JPEG method is adopted as the image compression/decompression method.
  • the compression/decompression method is not limited thereto, and other compression/decompression methods such as MPEG, TIFF, and H.264 may be used.
  • the operation unit 113 includes, but is not limited to, physical or virtual buttons; the physical or virtual buttons may be operation members such as a power button, a camera button, an edit button, a moving-image button, a playback button, a menu button, a cross key, an OK button, a delete button and an enlarge button.
  • the operation unit 113 detects the operation states of these operation members.
  • the detection result is output to the microcomputer 107.
  • in addition, a touch panel is provided on the front surface of the LCD 116 serving as the display portion; it detects the position touched by the user and outputs that position to the microcomputer 107.
  • the microcomputer 107 executes various processing sequences corresponding to the user's operation based on the detection results from the operation members of the operation unit 113, or alternatively based on the detection result of the touch panel on the front of the LCD 116.
  • the flash memory 114 stores programs for executing various processing sequences of the microcomputer 107.
  • the microcomputer 107 performs overall control of the camera in accordance with the program. Further, the flash memory 114 stores various adjustment values of the camera, and the microcomputer 107 reads out the adjustment value, and performs control of the camera in accordance with the adjustment value.
  • the SDRAM 108 is an electrically rewritable volatile memory for temporarily storing image data or the like.
  • the SDRAM 108 temporarily stores image data output from the A/D conversion unit 104 and image data processed in the image processor 105, the JPEG processor 106, and the like.
  • the microcomputer 107 functions as a control unit of the entire camera, and collectively controls various processing sequences of the camera.
  • the microcomputer 107 is connected to the operation unit 113 and the flash memory 114.
  • by executing a program, the microcomputer 107 can control the device in this embodiment to perform the operations described above, including generating a video file from the encoded image data.
  • in one implementation, synthesizing the current image with the past image comprises: performing image synthesis based on the brightness information of the current image and the past image.
  • performing image synthesis according to the brightness information of the current image and the past image comprises: determining whether the brightness of a pixel in the current image is greater than the brightness of the pixel at the same position in the past image; if so, replacing the pixel of the past image at that position with the pixel of the current image, and performing the image synthesis accordingly.
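A minimal sketch of this brightness-based replacement (a per-pixel "lighten" merge), assuming BGR uint8 frames and using the channel mean as the brightness measure; the brightness measure is an assumption, since the patent only requires comparing brightness at the same position.

```python
import numpy as np

def lighten_merge(past, current):
    """At each position keep the brighter of the two pixels."""
    past_brightness = past.mean(axis=2, keepdims=True)
    current_brightness = current.mean(axis=2, keepdims=True)
    return np.where(current_brightness > past_brightness, current, past)
```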
  • the camera is a front camera
  • the step of acquiring an image by the camera every preset time further comprises: performing image processing on the image.
  • the step of performing the encoding process on the captured composite image further includes: performing special effect processing on the captured composite image, where the special effect processing includes basic effect processing, filter effect processing, and/or special scene effects. deal with.
  • the memory interface 109 is connected to the recording medium 115, and performs control for writing image data and a file header attached to the image data to the recording medium 115 and reading from the recording medium 115.
  • the recording medium 115 is, for example, a recording medium such as a memory card that can be detachably attached to the camera body.
  • the recording medium 115 is not limited thereto, and may be a hard disk or the like built in the camera body.
  • the LCD driver 110 is connected to the LCD 116. Image data processed by the image processor 105 is stored in the SDRAM; the stored image data is read out and displayed on the LCD 116, or the compressed image data stored by the JPEG processor 106 in the SDRAM is read out, decompressed by the JPEG processor 106, and the decompressed image data is displayed on the LCD 116.
  • the LCD 116 is disposed on the back surface of the camera body or the like to perform image display.
  • the LCD 116 is provided with a touch panel that detects a user's touch operation.
  • in this embodiment, a liquid crystal display panel (LCD 116) is used as the display portion.
  • however, the present invention is not limited thereto, and various other display panels such as organic EL panels may be employed.

Abstract

A method for shooting a light painting video, comprising the following steps: after shooting starts, continuously collecting light painting images through a camera; reading the light painting images at intervals, and generating a composite image from the current light painting image and the previously collected light painting images; capturing the composite image, performing video encoding on the captured composite images, and generating a light painting video from the video-encoded composite images. A mobile terminal is also disclosed.

Description

Method for shooting a light painting video, mobile terminal and computer storage medium
Technical Field
The present invention relates to the field of camera technology, and in particular to a method for shooting a light painting video, a mobile terminal and a computer storage medium.
Background
With the continuous improvement of the camera hardware of mobile terminals such as mobile phones and tablet computers, the shooting functions of mobile terminals have become increasingly diverse, and users' requirements for them have also grown. The shooting functions of current mobile terminals depend on the camera hardware and the processing algorithms provided by the chip supplier, and offer only a few fixed shooting modes such as focus and white balance. In recent years a light painting photography mode has emerged, with which users can create art. Light painting photography is a shooting mode that uses a long exposure to create a special image through changes of the light source during the exposure. Because a long exposure is required, corresponding photosensitive hardware is needed, and photosensitive hardware that supports long exposures is relatively expensive, so at present only professional imaging devices such as SLR cameras offer light painting photography.
Current light painting photography can only produce light painting photos, i.e. the final result is a single static image showing the trail of the light source; it cannot produce a dynamic video showing the movement of the light source. The prior art therefore offers no solution for shooting a light painting video, which fails to meet users' diverse needs and affects the user experience.
Summary of the Invention
The main purpose of the present invention is to realize the shooting of light painting video, meet users' diverse needs and improve the user experience.
To achieve the above purpose, the present invention provides a method for shooting a light painting video, comprising the following steps:
after shooting starts, continuously collecting light painting images through a camera;
reading the light painting images at intervals, and generating a composite image from the current light painting image and the previously collected light painting images;
capturing the composite image, performing video encoding on the captured composite images, and generating a light painting video from the video-encoded composite images.
Preferably, the step of generating a composite image from the current light painting image and the previously collected light painting images comprises:
selecting, from the current light painting image and the previously collected light painting images, the pixels that satisfy a preset condition, and performing an addition on the pixels at the same position to generate the composite image.
Preferably, selecting the pixels that satisfy the preset condition comprises:
determining whether the brightness parameter of a pixel is greater than a preset threshold; if so, determining that the pixel satisfies the preset condition and selecting the pixel.
Preferably, selecting the pixels that satisfy the preset condition comprises:
determining whether a pixel is a mutated pixel;
if the pixel is a mutated pixel, calculating the average of the brightness parameters of a preset number of pixels around the mutated pixel, and determining whether the average is greater than a preset threshold; if so, determining that the mutated pixel satisfies the preset condition and selecting the mutated pixel;
if the pixel is not a mutated pixel, further determining whether the brightness parameter of the pixel is greater than the preset threshold; if so, determining that the pixel satisfies the preset condition and selecting the pixel.
Preferably, before the step of performing video encoding on the captured composite image, the method for shooting a light painting video further comprises:
performing special effect processing on the captured composite image.
In addition, to achieve the above purpose, the present invention further provides a mobile terminal, comprising:
an acquisition module configured to continuously collect light painting images through a camera after shooting starts;
an image generation module configured to read the light painting images at intervals and generate a composite image from the current light painting image and the previously collected light painting images;
a video generation module configured to capture the composite image, perform video encoding on the captured composite images, and generate a light painting video from the video-encoded composite images.
Preferably, the image generation module is configured to:
select, from the current light painting image and the previously collected light painting images, the pixels that satisfy a preset condition, and perform an addition on the pixels at the same position to generate the composite image.
Preferably, the image generation module is further configured to:
determine whether the brightness parameter of a pixel is greater than a preset threshold; if so, determine that the pixel satisfies the preset condition and select the pixel.
Preferably, the image generation module is further configured to:
determine whether a pixel is a mutated pixel;
if the pixel is a mutated pixel, calculate the average of the brightness parameters of a preset number of pixels around the mutated pixel, and determine whether the average is greater than a preset threshold; if so, determine that the mutated pixel satisfies the preset condition and select the mutated pixel;
if the pixel is not a mutated pixel, further determine whether the brightness parameter of the pixel is greater than the preset threshold; if so, determine that the pixel satisfies the preset condition and select the pixel.
Preferably, the mobile terminal further comprises:
a processing module configured to perform special effect processing on the captured composite image.
In addition, to achieve the above purpose, the present invention further provides a computer storage medium in which computer-executable instructions are stored, the computer-executable instructions being used to perform the above processing.
In the present invention, after shooting starts, light painting images are continuously collected with the camera and read at intervals, and a composite image is generated from the current light painting image and the previously collected light painting images; the composite image is captured, video encoding is performed on the captured composite images, and a light painting video is generated from the encoded composite images, thereby realizing the shooting of a light painting video. The user can thus use the shooting device to capture a video showing the movement of the light source, or apply the technique to similar scenarios, which meets users' diverse needs and improves the user experience. At the same time, because the composite images are encoded while shooting, the generated composite images do not need to be stored, so the resulting video file is not large and does not occupy much storage space.
Brief Description of the Drawings
FIG. 1 is a schematic flowchart of a first embodiment of the method for shooting a light painting video according to the present invention;
FIG. 2 is a schematic flowchart of a second embodiment of the method for shooting a light painting video according to the present invention;
FIG. 3 is a schematic diagram of the functional modules of a first embodiment of the mobile terminal according to the present invention;
FIG. 4 is a schematic diagram of the functional modules of a second embodiment of the mobile terminal according to the present invention;
FIG. 5 is a schematic diagram of the electrical structure of a device for shooting a light painting video according to an embodiment of the present invention.
The realization of the purpose, the functional features and the advantages of the present invention will be further described with reference to the accompanying drawings in conjunction with the embodiments.
Detailed Description of the Embodiments
It should be understood that the specific embodiments described here are only intended to explain the present invention and are not intended to limit it.
An embodiment of the present invention provides a method for shooting a light painting video.
Referring to FIG. 1, FIG. 1 is a schematic flowchart of a first embodiment of the method for shooting a light painting video according to the present invention.
In one embodiment, the method for shooting a light painting video comprises:
Step S10, after shooting starts, continuously collecting light painting images through the camera;
The present invention adds a light painting photography mode to the shooting function of the mobile terminal; the user can select either the light painting photography mode or the ordinary photography mode for shooting. In the light painting mode, parameters such as ISO, picture quality and scene mode are adjusted and constrained in advance according to the requirements of light painting scenes, and these parameters are output to the relevant hardware devices so that they can sample or process the collected image data accordingly.
When the user has selected the light painting photography mode and presses the shooting button or triggers the virtual shooting button, the mobile terminal starts light painting shooting and continuously collects light painting images with the camera; the rate at which the camera collects light painting images can be set in advance. To keep the light painting coherent, the camera needs to capture at least ten-odd images per second, while the subsequent compositing of the images often cannot keep up with the acquisition rate, so the light painting images are preferably buffered in a cache module (of course, if the processing speed of the mobile terminal is fast enough, the cache may be omitted). Furthermore, during acquisition the mobile terminal can adjust the acquisition rate in real time according to the remaining space of the cache module, which both makes maximal use of the terminal's processing capability and prevents data overflow, and hence data loss, caused by acquiring too fast.
Step S20, reading the light painting images at intervals, and generating a composite image from the current light painting image and the previously collected light painting images;
The image synthesis module in the mobile terminal, which processes the light painting images to generate composite images, either receives and reads the collected light painting images directly at intervals, or reads them from the cache module at intervals in real time for compositing and then resets the cache module, clearing its data to make room for subsequent data. The rate or interval at which the image synthesis module reads the light painting images may be preset, or may depend on the computing speed of the mobile terminal. The image synthesis module superimposes the pixels of the current light painting image onto those of the previously collected light painting images to generate one composite image; since the camera collects light painting images continuously, composite images are also generated continuously in real time. During shooting, the first light painting image collected is taken as the image to be composited; after the second light painting image is collected it is combined with the image to be composited into the current composite image, and each subsequently collected light painting image is combined with the previously generated composite image, finally yielding a composite image formed from all the light painting images captured.
The reading at intervals may be performed periodically; the length of the period can be set according to the actual situation.
Preferably, the image synthesis module selects, from the current light painting image and the previously collected light painting images, the pixels that satisfy a preset condition, and then performs the addition on those pixels.
Specifically, in one implementation, when judging whether a pixel satisfies the preset condition, the image synthesis module may directly check whether the pixel's brightness parameter is greater than a threshold; if so, the pixel is judged to satisfy the preset condition. After selecting, from the current light painting image and the previously collected light painting images, the pixels whose brightness parameter exceeds the threshold (i.e. the absolute brightness of a point in the image exceeds the threshold), the module performs the addition only on these pixels, which filters out the lower-brightness pixels to some extent and prevents the accumulation of ambient light from polluting the final composite image. The threshold may be determined from the average brightness of the image; the brightness parameter is an optical parameter such as an RGB value or a YUV value.
For example, suppose a light painting image contains n pixel units (pixel unit 1, pixel unit 2, ..., pixel unit n), the pixel parameters of pixel units 101-200 exceed the threshold in the current light painting image, and the brightness parameters of pixel units 1-100 exceed the threshold in the past light painting image; the addition is then performed on the current and past pixel parameters of pixel units 1-200. If the brightness parameter of pixel unit 1 is 10 in the current light painting image and 100 in the past light painting image, its brightness parameter in the composite image after the addition is 100 + 10 = 110. In addition, the image synthesis module performs noise reduction on the composite image, and controls the blending ratio of the newly synthesized image according to the exposure level of the existing image to suppress overexposure.
In another implementation, the pixels satisfying the preset condition may also be selected by the following steps:
determining whether a pixel is a mutated pixel;
if the pixel is a mutated pixel, calculating the average of the brightness parameters of a preset number of pixels around the mutated pixel, and determining whether the average is greater than a preset threshold; if so, determining that the mutated pixel satisfies the preset condition and selecting it;
if the pixel is not a mutated pixel, further determining whether the pixel's brightness parameter is greater than the preset threshold; if so, determining that the pixel satisfies the preset condition and selecting it.
The image synthesis module compares a pixel's brightness parameter with the average brightness parameter of several surrounding pixels (preferably eight); if it is higher or lower than the average by a preset multiple, the pixel is judged to be a mutated pixel. The preset multiple is preferably 2 times above the average or 0.5 times below it.
If the pixel is a mutated pixel, the average of the brightness parameters of its surrounding pixels is taken; the surrounding pixels are preferably several pixels around it, and the preset number is preferably eight. After the average of the brightness parameters of the preset number of pixels around the mutated pixel is calculated, it is compared with the preset threshold: if the average is greater than the preset threshold, the mutated pixel is judged to satisfy the preset condition and is selected for the subsequent addition that generates the composite image, which removes noise points from the image and prevents them from degrading the final composite; if the average is less than or equal to the preset threshold, the mutated pixel is judged not to satisfy the preset condition and is not selected.
If the pixel is not a mutated pixel, its brightness parameter is compared directly with the preset threshold: if it is greater, the pixel satisfies the preset condition and is selected for the subsequent addition that generates the composite image; if it is less than or equal to the threshold, the pixel does not satisfy the preset condition and is not selected.
Since only the brighter regions of the images are superimposed during compositing and the other regions are not, bright areas stay bright and dark areas stay dark, which improves the light painting effect of the composite image.
Although the composite images are generated continuously, generation is limited by the processing speed of the image synthesis module, so adjacent composite images are in fact separated by a certain time interval: the faster the computation, the shorter the interval. At the same time, the image generation speed in turn affects the acquisition speed: the faster composite images are generated, the faster the image data in the cache module is read out and the faster its space is freed, so the mobile terminal can also collect light painting image data faster.
The mobile terminal displays the composite image on the screen in real time so that the user can preview the current light painting effect. To keep the preview smooth, the composite image displayed by the mobile terminal is a compressed small-size thumbnail while the full-size image is stored, i.e. display and storage run as two threads. When the user presses the shooting button again or presses the end button, shooting ends. The mobile terminal may store every composite image locally, or may store only the last composite image generated when shooting ends.
Step S30, capturing the composite image;
Step S31, performing video encoding on the captured composite images, and generating a light painting video from the video-encoded composite images.
After the composite images corresponding to the light painting images are generated, the composite images may be captured continuously or at intervals, and video encoding is performed on them to generate the light painting video. Continuous capture means that every time a composite image is generated, it is captured for encoding, i.e. all generated composite images become material for the video. Generating composite images and capturing them for encoding run as two synchronized threads; because the composite images are encoded while shooting, the generated composite images do not need to be stored.
Interval capture means selectively capturing part of the composite images as the material of the video. The interval mode can be a manual interval mode or an automatic interval mode. The manual interval mode provides an operation interface so that the user can tap to trigger the capture of image data, e.g. tapping the screen captures the currently generated composite image (when there is a preview, the current preview image); the automatic interval mode captures a composite image at a preset time interval, i.e. one composite image every preset period. The capture interval is preferably longer than the camera's acquisition interval (i.e. the exposure time), so that the same composite image is not captured two or more times and the size of the final video file is reduced. For example, a composite image — the currently generated composite image, i.e. the light painting photo at the current moment — may be captured every 1 to 2 minutes. The captured composite images are then video-encoded into common video encodings such as MPEG-4, H.264, H.263 or VP8 for later generation of the video file; the method for encoding the composite images is the same as in the prior art and is not repeated here.
In addition, capturing a composite image every preset time may also be implemented as capturing a composite image every time the camera has collected a preset number of images. For example, if the camera collects one image every 10 s (i.e. the exposure time is 10 s) and the shooting device captures a composite image after every 3 images collected by its camera, this is equivalent to capturing a composite image every 3 × 10 s = 30 s.
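A minimal sketch of this frame-count rule (the counts and exposure time are the example values above; the function name is illustrative):

```python
def should_capture(frames_collected, frames_per_capture=3):
    """Capture one composite image after every `frames_per_capture` camera frames.

    With one camera frame every 10 s and frames_per_capture = 3, this is
    equivalent to capturing a composite image roughly every 30 s.
    """
    return frames_collected > 0 and frames_collected % frames_per_capture == 0
```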
The captured composite images are video-encoded, and after shooting ends a video file is generated from the encoded composite images; the format of the generated video file can be specified by the user. Video file formats include, but are not limited to, mp4, 3gp, avi, rmvb and so on.
In this embodiment, after shooting starts the camera continuously collects light painting images, which are read at intervals, and a composite image is generated from the current light painting image and the previously collected light painting images; the composite image is captured, video encoding is performed on the captured composite images, and a light painting video is generated from the encoded composite images, thereby realizing the shooting of a light painting video. The user can thus use the shooting device to capture a video showing the movement of the light source, or apply the technique to similar scenarios, which meets users' diverse needs and improves the user experience. At the same time, because the composite images are encoded while shooting, the generated composite images do not need to be stored, so the resulting video file is not large and does not occupy much storage space.
Referring to FIG. 2, FIG. 2 is a schematic flowchart of a second embodiment of the method for shooting a light painting video according to the present invention.
Based on the first embodiment of the method for shooting a light painting video of the present invention, before step S31 is performed the method further comprises:
Step S40, performing special effect processing on the captured composite image.
Further, to make shooting more interesting for the user, special effect processing is performed on the captured composite image before it is encoded; the special effect processing includes basic effect processing, filter effect processing and/or special scene effect processing. Basic effect processing covers noise reduction, brightness, chromaticity and the like; filter effect processing covers sketch, negative, black-and-white and the like; special scene effect processing covers rendering common weather, starry sky and so on.
Further, so that the user can record sound while recording the video, the following is also performed while the composite images are captured and encoded: turning on the audio device and receiving audio data; and encoding the audio data. There are two main sources of audio data: microphone capture or a custom audio file. When the audio source is a custom audio file, the file is first decoded to obtain the raw audio data. Preferably, before the audio data is encoded, special effect processing is also applied to the received audio data, including effect recording, voice changing, pitch changing and/or speed changing.
With the audio recording function added, the video file is generated as follows: upon the user's end-of-shooting instruction, the encoded image data and the encoded audio data are assembled into a video file in the video file format set by the user.
To make operation more convenient and practical, an operation interface may also be provided for the user to set the way composite images are captured (interval capture or continuous capture), the interval time for interval capture, whether to perform special effect processing, whether to enable audio recording, and so on.
The present invention further provides a mobile terminal.
Referring to FIG. 3, FIG. 3 is a schematic diagram of the functional modules of a first embodiment of the mobile terminal according to the present invention.
In one embodiment, the mobile terminal comprises:
an acquisition module 10 configured to continuously collect light painting images through the camera after shooting starts;
an image generation module 20 configured to read the light painting images at intervals and generate a composite image from the current light painting image and the previously collected light painting images;
a video generation module 30 configured to capture the composite image, perform video encoding on the captured composite images, and generate a light painting video from the video-encoded composite images.
The present invention adds a light painting photography mode to the shooting function of the mobile terminal; the user can select either the light painting photography mode or the ordinary photography mode for shooting. In the light painting mode, parameters such as ISO, picture quality and scene mode are adjusted and constrained in advance according to the requirements of light painting scenes, and these parameters are output to the relevant hardware devices so that they can sample or process the collected image data accordingly.
When the user has selected the light painting photography mode and presses the shooting button or triggers the virtual shooting button, the mobile terminal starts light painting shooting and the acquisition module 10 continuously collects light painting images with the camera; the rate at which the camera collects light painting images can be set in advance. To keep the light painting coherent, the camera needs to capture at least ten-odd images per second, while the subsequent compositing of the images often cannot keep up with the acquisition rate, so the light painting images are preferably buffered in a cache module (of course, if the processing speed of the mobile terminal is fast enough, the cache may be omitted). Furthermore, during acquisition the mobile terminal can adjust the acquisition rate in real time according to the remaining space of the cache module, which both makes maximal use of the terminal's processing capability and prevents data overflow, and hence data loss, caused by acquiring too fast.
The image generation module 20, through the image synthesis module in the mobile terminal that processes the light painting images to generate composite images, either receives and reads the collected light painting images directly at intervals, or reads them from the cache module at intervals in real time for compositing and then resets the cache module, clearing its data to make room for subsequent data. The rate or interval at which the image synthesis module reads the light painting images may be preset, or may depend on the computing speed of the mobile terminal. The image synthesis module superimposes the pixels of the current light painting image onto those of the previously collected light painting images to generate one composite image; since the camera collects light painting images continuously, composite images are also generated continuously in real time. During shooting, the first light painting image collected is taken as the image to be composited; after the second light painting image is collected it is combined with the image to be composited into the current composite image, and each subsequently collected light painting image is combined with the previously generated composite image, finally yielding a composite image formed from all the light painting images captured.
Preferably, the image synthesis module selects, from the current light painting image and the previously collected light painting images, the pixels that satisfy a preset condition, and then performs the addition on those pixels.
Specifically, in one implementation, when judging whether a pixel satisfies the preset condition, the image synthesis module may directly check whether the pixel's brightness parameter is greater than a threshold; if so, the pixel is judged to satisfy the preset condition. After selecting, from the current light painting image and the previously collected light painting images, the pixels whose brightness parameter exceeds the threshold (i.e. the absolute brightness of a point in the image exceeds the threshold), the module performs the addition only on these pixels, which filters out the lower-brightness pixels to some extent and prevents the accumulation of ambient light from polluting the final composite image. The threshold may be determined from the average brightness of the image; the brightness parameter is an optical parameter such as an RGB value or a YUV value.
For example, suppose a light painting image contains n pixel units (pixel unit 1, pixel unit 2, ..., pixel unit n), the pixel parameters of pixel units 101-200 exceed the threshold in the current light painting image, and the brightness parameters of pixel units 1-100 exceed the threshold in the past light painting image; the addition is then performed on the current and past pixel parameters of pixel units 1-200. If the brightness parameter of pixel unit 1 is 10 in the current light painting image and 100 in the past light painting image, its brightness parameter in the composite image after the addition is 100 + 10 = 110. In addition, the image synthesis module performs noise reduction on the composite image, and controls the blending ratio of the newly synthesized image according to the exposure level of the existing image to suppress overexposure.
In another implementation, the pixels satisfying the preset condition may also be selected through the following steps:
determining whether the pixel is an abrupt-change pixel;
if the pixel is an abrupt-change pixel, calculating the average of the brightness parameters of a preset number of pixels surrounding the abrupt-change pixel, and determining whether the average is greater than a preset threshold; if so, determining that the abrupt-change pixel satisfies the preset condition and selecting it;
if the pixel is not an abrupt-change pixel, further determining whether the brightness parameter of the pixel is greater than the preset threshold; if so, determining that the pixel satisfies the preset condition and selecting it.
The image composition module compares the brightness parameter of a pixel with the average brightness parameter of a number of surrounding pixels (preferably 8); if it is higher or lower than the average by a preset factor, the pixel is determined to be an abrupt-change pixel. The preset factor is preferably more than 2 times the average, or less than 0.5 times the average.
If the pixel is an abrupt-change pixel, the average of the brightness parameters of its surrounding pixels is taken, where the surrounding pixels are preferably a number of pixels around it, the preset number preferably being 8. After the average brightness of the preset number of surrounding pixels is calculated, it is compared with the preset threshold: if the average exceeds the threshold, the abrupt-change pixel is determined to satisfy the preset condition and is selected, the subsequent addition is performed, and the composite image is generated, so that noise points in the image are excluded and prevented from degrading the final composite image; if the average is less than or equal to the threshold, the abrupt-change pixel is determined not to satisfy the preset condition and is not selected.
If the pixel is not an abrupt-change pixel, its brightness parameter is compared directly with the preset threshold. If it is greater than the preset threshold, the pixel is determined to satisfy the preset condition and is selected, the subsequent addition is performed, and the composite image is generated; if it is less than or equal to the preset threshold, the pixel is determined not to satisfy the preset condition and is not selected.
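A minimal sketch of this selection rule on a single-channel brightness image, assuming an 8-neighbour window and the factors of 2 and 0.5 mentioned above; boundary handling and the choice of brightness parameter are simplifications:

```python
import numpy as np

def pixel_satisfies_condition(brightness, y, x, threshold, hi=2.0, lo=0.5):
    """Apply the abrupt-pixel rule to pixel (y, x) of a 2-D brightness array."""
    window = brightness[max(y - 1, 0):y + 2, max(x - 1, 0):x + 2].astype(np.float32)
    neighbor_avg = (window.sum() - brightness[y, x]) / (window.size - 1)
    abrupt = brightness[y, x] > hi * neighbor_avg or brightness[y, x] < lo * neighbor_avg
    if abrupt:
        return neighbor_avg > threshold      # judge a suspected noise point by its surroundings
    return brightness[y, x] > threshold      # ordinary pixel: compare its own brightness
```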
Because only the brighter regions of the images are superimposed during composition and the remaining regions are not, bright areas stay bright and dark areas stay dark, which improves the light-painting effect of the composite image.
Although the composite images are generated continuously, the processing speed of the image composition module means that there is in fact a certain time interval between adjacent generated images; the faster the computation, the shorter the interval. The generation speed in turn affects the capture speed: the faster images are generated, the faster image data is read from the buffer module, the faster the buffer space is freed, and the faster the mobile terminal can capture light-painting image data.
The mobile terminal displays the composite image on the screen in real time so that the user can preview the current light-painting effect. For smooth previewing, the composite image displayed is a compressed small-size thumbnail while the full-size image is stored, i.e., display and storage run as two threads. Shooting ends when the user presses the shutter key again or presses an end key. The mobile terminal may store every composite image locally, or store only the last composite image generated when shooting ends.
After the composite images corresponding to the light-painting images are generated, the video generation module 30 may grab the composite images continuously or at intervals and perform video encoding on them to generate the light-painting video. Continuous grabbing means that each composite image is grabbed for encoding as soon as it is generated, that is, all generated composite images are used as material for the composite video. Generating composite images and grabbing them for encoding run as two synchronized threads; because the composite images are encoded while shooting is still in progress, the generated composite images do not need to be stored.
Interval grabbing means that only some of the composite images are selectively grabbed as material for the composite video. The interval mode may be a manual interval mode or an automatic interval mode. In the manual interval mode, an operation interface is provided so that the user can trigger a grab of image data by tapping, for example tapping the screen to grab the currently generated composite image (when a preview is shown, this is the current preview image). In the automatic interval mode, composite images are grabbed at a preset time interval, that is, one composite image is grabbed every preset period. The grabbing interval is preferably longer than the interval at which the camera captures images (i.e., the exposure time), so that the same composite image is not grabbed two or more times and the size of the final video file is reduced. For example, one composite image may be grabbed every 1 to 2 minutes; this composite image is the composite image generated at that moment, i.e., the light-painting photograph at the current time. The grabbed composite images are then video-encoded into a common video coding format such as MPEG-4, H.264, H.263 or VP8 for later generation of the video file. The method of encoding the composite images is the same as in the prior art and is not described further here.
In addition, grabbing one composite image every preset period may also be implemented as grabbing one composite image after the camera has captured a preset number of images. For example, if the camera captures one image every 10 s (i.e., the exposure time is 10 s) and the shooting device grabs a composite image after every 3 captured images, this is equivalent to grabbing one composite image every 3 × 10 s = 30 s.
Video encoding is performed on the grabbed composite images, and after shooting ends a video file is generated from the encoded composite images. The format of the generated video file may be specified by the user and includes, but is not limited to, mp4, 3gp, avi and rmvb.
In this embodiment, after shooting starts, the camera continuously captures light-painting images, the light-painting images are read at intervals, and a composite image is generated from the current light-painting image and the previously captured light-painting images; the composite images are grabbed, video encoding is performed on the grabbed composite images, and a light-painting video is generated from the encoded composite images, thereby realizing the shooting of a light-painting video. The user can thus use the shooting device to record a video showing the movement of a light source, or apply it to similar scenarios, which meets diverse user needs and improves the user experience. Moreover, because the composite images are encoded while shooting is in progress, the generated composite images do not need to be stored, so the resulting video file is not large and does not occupy much storage space.
Referring to FIG. 4, FIG. 4 is a schematic diagram of the functional modules of a second embodiment of the mobile terminal according to the present invention.
On the basis of the first embodiment of the mobile terminal according to the present invention, the mobile terminal further includes:
a processing module 40, configured to perform special-effect processing on the grabbed composite images.
Further, to make shooting more engaging for the user, before the grabbed composite image is encoded, the processing module 40 also applies special-effect processing to it. The special-effect processing includes basic effect processing, filter effect processing and/or special scene effect processing. Basic effect processing includes noise reduction, brightness and chroma adjustment; filter effect processing includes sketch, negative and black-and-white effects; special scene effect processing includes rendering the image as common weather scenes, a starry sky, and the like.
Further, so that the user can record sound while recording the video, while the composite images are being grabbed and encoded the method also includes: turning on an audio device and receiving audio data; and encoding the audio data. There are two main sources of audio data: microphone capture or a user-defined audio file. When the audio source is a user-defined audio file, the audio file is first decoded to obtain the raw audio data. Preferably, before the audio data is encoded, special-effect processing is also applied to the received audio data, including effect recording, voice changing, pitch shifting and/or speed changing.
With the audio recording function added, the video file is generated as follows: upon the user's instruction to end shooting, the encoded image data and the encoded audio data are combined into a video file in the video file format set by the user.
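One possible way, not specified by the disclosure, to combine the separately encoded streams into the user-chosen container is an external ffmpeg call; the file names here are placeholders:

```python
import subprocess

def mux_video_and_audio(video_stream="video.h264", audio_stream="audio.aac",
                        out_file="light_painting.mp4"):
    """Remux the already-encoded video and audio streams into the user-chosen container."""
    subprocess.run(
        ["ffmpeg", "-i", video_stream, "-i", audio_stream,
         "-c", "copy",                   # streams are already encoded; copy, don't re-encode
         out_file],
        check=True)
```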
To make operation more convenient and practical, an operation interface may also be provided for the user to set the way composite images are grabbed (interval grabbing or continuous grabbing), the interval used for interval grabbing, whether special-effect processing is applied, whether audio recording is enabled, and so on.
The above are merely preferred embodiments of the present invention and do not limit its patent scope; any equivalent structural or process transformation made using the contents of the description and drawings of the present invention, or any direct or indirect application in other related technical fields, is likewise included within the patent protection scope of the present invention.
It should be understood that, in the several embodiments provided in this application, the disclosed device and method may be implemented in other ways. The device embodiments described above are merely illustrative; for example, the division into units is only a logical functional division, and other divisions are possible in actual implementation, e.g., multiple units or components may be combined or integrated into another system, or some features may be omitted or not executed. In addition, the coupling, direct coupling or communication connection between the components shown or discussed may be indirect coupling or communication connection through interfaces, devices or units, and may be electrical, mechanical or of another form.
The units described as separate components may or may not be physically separate, and components shown as units may or may not be physical units, i.e., they may be located in one place or distributed over multiple network units; some or all of the units may be selected according to actual needs to achieve the purpose of the solution of this embodiment.
In addition, the functional units in the embodiments of the present invention may all be integrated in one processing module, each unit may serve as a separate unit, or two or more units may be integrated into one unit; the integrated unit may be implemented in the form of hardware, or in the form of hardware plus software functional units.
A person of ordinary skill in the art will understand that all or part of the steps of the above method embodiments may be carried out by hardware related to program instructions; the program may be stored in a computer-readable storage medium and, when executed, performs the steps of the above method embodiments; the storage medium includes any medium capable of storing program code, such as a removable storage device, a read-only memory (ROM), a random access memory (RAM), a magnetic disk or an optical disc.
An embodiment of the present invention further provides a computer storage medium storing computer-executable instructions for performing at least one of the above methods for shooting a light-painting video, specifically the method shown in FIG. 1 and/or FIG. 2.
The computer storage medium may be any of various types of storage media such as ROM/RAM, a magnetic disk, an optical disc, a DVD or a USB flash drive; in this embodiment the computer storage medium is preferably a non-transitory storage medium.
It is worth noting that in the device for shooting a light-painting video described in the embodiments of this application, the image capture module, the image composition module and the video generation module may correspond to any structure capable of performing the above functions, for example various types of processors with information processing capability. The processor may include an information processing structure or chip such as an application processor (AP), a central processing unit (CPU), a digital signal processor (DSP) or a field programmable gate array (FPGA), and may implement the above functions by executing specified code.
FIG. 5 is a block diagram showing the main electrical structure of a camera according to one embodiment of the present invention. The photographing lens 101 consists of a plurality of optical lenses for forming a subject image and is a single-focus lens or a zoom lens. The photographing lens 101 can be moved along the optical axis by a lens drive unit 111; its focus position is controlled according to control signals from a lens drive control unit 112 and, in the case of a zoom lens, its focal length is controlled as well. The lens drive control unit 112 controls the drive of the lens drive unit 111 according to control commands from a microcomputer 107.
An image sensor 102 is arranged on the optical axis of the photographing lens 101, near the position where the lens forms the subject image. The image sensor 102 functions as an imaging unit that captures the subject image and obtains captured image data. Photodiodes constituting the pixels are arranged two-dimensionally in a matrix on the image sensor 102. Each photodiode produces a photoelectric conversion current corresponding to the amount of received light, and this current is accumulated as charge by a capacitor connected to the photodiode. The front surface of each pixel is provided with a Bayer-array RGB color filter.
The image sensor 102 is connected to an imaging circuit 103, which controls charge accumulation and image signal readout in the image sensor 102, reduces reset noise in the read-out image signal (an analog image signal), shapes its waveform, and raises the gain and the like so that the signal reaches an appropriate level.
The imaging circuit 103 is connected to an A/D converter 104, which converts the analog image signal to digital form and outputs a digital image signal (hereinafter referred to as image data) to a bus 199.
The bus 199 is a transfer path for conveying various data read out or generated inside the camera. Connected to the bus 199 are the A/D converter 104, as well as an image processor 105, a JPEG processor 106, the microcomputer 107, an SDRAM (Synchronous DRAM) 108, a memory interface (hereinafter memory I/F) 109, and an LCD (Liquid Crystal Display) driver 110.
The image processor 105 performs various kinds of image processing on the image data based on the output of the image sensor 102, such as OB subtraction, white balance adjustment, color matrix operations, gamma conversion, color-difference signal processing, noise removal, synchronization (demosaicing) and edge processing. When image data is recorded on a recording medium 115, the JPEG processor 106 compresses the image data read from the SDRAM 108 according to the JPEG compression scheme. The JPEG processor 106 also decompresses JPEG image data for image reproduction and display: the file recorded on the recording medium 115 is read out, decompressed in the JPEG processor 106, temporarily stored in the SDRAM 108, and displayed on an LCD 116. In this embodiment JPEG is used as the image compression/decompression scheme, but the scheme is not limited to this; other compression/decompression schemes such as MPEG, TIFF or H.264 may of course be used.
An operation unit 113 includes, but is not limited to, physical or virtual keys; these may be operation members such as a power button, a shutter key, an edit key, a moving-image button, a playback button, a menu button, a cross key, an OK button, a delete button, a magnify button and other input buttons and keys, and the operation unit detects the operation state of these operation members.
The detection results are output to the microcomputer 107. In addition, a touch panel is provided on the front surface of the LCD 116 serving as the display unit; it detects the user's touch position and outputs that position to the microcomputer 107. The microcomputer 107 executes various processing sequences corresponding to the user's operation according to the detection results from the operation members of the operation unit 113; likewise, the microcomputer 107 may execute various processing sequences corresponding to the user's operation according to the detection results from the touch panel in front of the LCD 116.
A flash memory 114 stores the programs for executing the various processing sequences of the microcomputer 107, and the microcomputer 107 controls the whole camera according to these programs. The flash memory 114 also stores various adjustment values of the camera; the microcomputer 107 reads them out and controls the camera according to those values. The SDRAM 108 is an electrically rewritable volatile memory for temporarily storing image data and the like; it temporarily stores the image data output from the A/D converter 104 and the image data processed by the image processor 105, the JPEG processor 106 and so on.
The microcomputer 107 functions as the control unit of the whole camera and centrally controls its various processing sequences. The operation unit 113 and the flash memory 114 are connected to the microcomputer 107.
By executing a program, the microcomputer 107 can control the device of this embodiment to perform the following operations:
after shooting starts, capturing one image through the camera every preset period;
compositing the current image with the past images to generate a composite image;
grabbing the composite image and encoding the grabbed composite image; and
when shooting ends, generating a video file from the encoded image data.
Optionally, compositing the current image with the past images includes:
performing image composition according to brightness information of the current image and the past images.
Optionally, performing image composition according to the brightness information of the current image and the past images includes: determining whether the brightness of a pixel in the current image is greater than the brightness of the pixel at the same position in the past image, and if so, replacing the pixel at that position in the past image with the pixel from the current image, and performing the composition accordingly.
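A brief NumPy sketch of this brightness-based replacement (a "lighten"-style blend); deriving per-pixel brightness from the RGB sum is an assumption made for the example:

```python
import numpy as np

def lighten_blend(past_rgb, current_rgb):
    """At each position keep the pixel from whichever image is brighter there."""
    past_lum = past_rgb.astype(np.float32).sum(axis=-1)
    curr_lum = current_rgb.astype(np.float32).sum(axis=-1)
    take_current = (curr_lum > past_lum)[..., None]      # broadcast the mask over RGB channels
    return np.where(take_current, current_rgb, past_rgb)
```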
Optionally, the camera is a front-facing camera, and after the step of capturing one image through the camera every preset period, the method further includes: mirroring the image.
Optionally, before the step of encoding the grabbed composite image, the method further includes: performing special-effect processing on the grabbed composite image, the special-effect processing including basic effect processing, filter effect processing and/or special scene effect processing.
The memory interface 109 is connected to the recording medium 115 and controls the writing to and reading from the recording medium 115 of image data and of data such as the file header attached to the image data. The recording medium 115 is, for example, a memory card that can be freely attached to and detached from the camera body, but is not limited to this and may also be a hard disk or the like built into the camera body.
The LCD driver 110 is connected to the LCD 116. Image data processed by the image processor 105 is stored in the SDRAM and, when display is required, is read from the SDRAM and displayed on the LCD 116; alternatively, image data compressed by the JPEG processor 106 is stored in the SDRAM and, when display is required, the JPEG processor 106 reads the compressed image data from the SDRAM, decompresses it, and the decompressed image data is displayed via the LCD 116.
The LCD 116 is arranged on the back of the camera body or the like and displays images. The LCD 116 is provided with a touch panel that detects the user's touch operations. In this embodiment a liquid crystal display panel (LCD 116) is used as the display unit, but the display unit is not limited to this; various other display panels such as organic EL panels may also be used.
The above are merely specific embodiments of the present invention, but the protection scope of the present invention is not limited thereto; any change or substitution readily conceivable by a person skilled in the art within the technical scope disclosed by the present invention shall be covered by the protection scope of the present invention. Therefore, the protection scope of the present invention shall be defined by the protection scope of the claims.

Claims (11)

  1. A method for shooting a light-painting video, the method comprising the following steps:
    after shooting starts, continuously capturing light-painting images through a camera;
    reading the light-painting images at intervals, and generating a composite image according to the current light-painting image and the previously captured light-painting images; and
    grabbing the composite images, performing video encoding on the grabbed composite images, and generating a light-painting video according to the video-encoded composite images.
  2. The method for shooting a light-painting video according to claim 1, wherein the step of generating a composite image according to the current light-painting image and the previously captured light-painting images comprises:
    selecting, from the current light-painting image and the previously captured light-painting images, pixels that satisfy a preset condition, and performing an addition operation on pixels at the same position to generate the composite image.
  3. The method for shooting a light-painting video according to claim 2, wherein selecting the pixels that satisfy the preset condition comprises:
    determining whether a brightness parameter of the pixel is greater than a preset threshold, and if so, determining that the pixel satisfies the preset condition and selecting the pixel.
  4. The method for shooting a light-painting video according to claim 2, wherein selecting the pixels that satisfy the preset condition comprises:
    determining whether the pixel is an abrupt-change pixel;
    if the pixel is an abrupt-change pixel, calculating an average of the brightness parameters of a preset number of pixels surrounding the abrupt-change pixel, and determining whether the average is greater than a preset threshold; if so, determining that the abrupt-change pixel satisfies the preset condition and selecting the abrupt-change pixel; and
    if the pixel is not an abrupt-change pixel, further determining whether the brightness parameter of the pixel is greater than the preset threshold; if so, determining that the pixel satisfies the preset condition and selecting the pixel.
  5. The method for shooting a light-painting video according to any one of claims 1 to 4, wherein before the step of performing video encoding on the grabbed composite images, the method further comprises:
    performing special-effect processing on the grabbed composite images.
  6. A mobile terminal, comprising:
    a capture module, configured to continuously capture light-painting images after shooting starts;
    an image generation module, configured to read the light-painting images at intervals and generate a composite image according to the current light-painting image and the previously captured light-painting images; and
    a video generation module, configured to grab the composite images, perform video encoding on the grabbed composite images, and generate a light-painting video according to the video-encoded composite images.
  7. The mobile terminal according to claim 6, wherein the image generation module is configured to select, from the current light-painting image and the previously captured light-painting images, pixels that satisfy a preset condition, and to perform an addition operation on pixels at the same position to generate the composite image.
  8. The mobile terminal according to claim 7, wherein the image generation module is configured to determine whether a brightness parameter of the pixel is greater than a preset threshold, and if so, to determine that the pixel satisfies the preset condition and select the pixel.
  9. The mobile terminal according to claim 7, wherein the image generation module is configured to determine whether the pixel is an abrupt-change pixel; if the pixel is an abrupt-change pixel, to calculate an average of the brightness parameters of a preset number of pixels surrounding the abrupt-change pixel and determine whether the average is greater than a preset threshold, and if so, to determine that the abrupt-change pixel satisfies the preset condition and select the abrupt-change pixel; and if the pixel is not an abrupt-change pixel, to further determine whether the brightness parameter of the pixel is greater than the preset threshold, and if so, to determine that the pixel satisfies the preset condition and select the pixel.
  10. The mobile terminal according to any one of claims 6 to 9, further comprising:
    a processing module, configured to perform special-effect processing on the grabbed composite images.
  11. A computer storage medium storing computer-executable instructions for performing at least one of the methods according to claims 1 to 5.
PCT/CN2015/081871 2014-07-23 2015-06-18 拍摄光绘视频的方法、移动终端和计算机存储介质 WO2016011859A1 (zh)

Priority Applications (1)

Application Number Priority Date Filing Date Title
US15/327,627 US10129488B2 (en) 2014-07-23 2015-06-18 Method for shooting light-painting video, mobile terminal and computer storage medium

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN201410352575.X 2014-07-23
CN201410352575.XA CN104104798A (zh) 2014-07-23 2014-07-23 拍摄光绘视频的方法和移动终端

Publications (1)

Publication Number Publication Date
WO2016011859A1 true WO2016011859A1 (zh) 2016-01-28

Family

ID=51672589

Family Applications (2)

Application Number Title Priority Date Filing Date
PCT/CN2015/081871 WO2016011859A1 (zh) 2014-07-23 2015-06-18 拍摄光绘视频的方法、移动终端和计算机存储介质
PCT/CN2015/082987 WO2016011877A1 (zh) 2014-07-23 2015-06-30 拍摄光绘视频的方法和移动终端、存储介质

Family Applications After (1)

Application Number Title Priority Date Filing Date
PCT/CN2015/082987 WO2016011877A1 (zh) 2014-07-23 2015-06-30 拍摄光绘视频的方法和移动终端、存储介质

Country Status (3)

Country Link
US (1) US10129488B2 (zh)
CN (1) CN104104798A (zh)
WO (2) WO2016011859A1 (zh)

Families Citing this family (12)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104079833A (zh) 2014-07-02 2014-10-01 深圳市中兴移动通信有限公司 拍摄星轨视频的方法和装置
CN104104798A (zh) 2014-07-23 2014-10-15 深圳市中兴移动通信有限公司 拍摄光绘视频的方法和移动终端
CN105072350B (zh) 2015-06-30 2019-09-27 华为技术有限公司 一种拍照方法及装置
US20170208354A1 (en) * 2016-01-15 2017-07-20 Hi Pablo Inc System and Method for Video Data Manipulation
CN106331482A (zh) * 2016-08-23 2017-01-11 努比亚技术有限公司 一种照片处理装置和方法
CN106534552B (zh) * 2016-11-11 2019-08-16 努比亚技术有限公司 移动终端及其拍照方法
CN106713745A (zh) * 2016-11-28 2017-05-24 努比亚技术有限公司 一种实现光绘摄影的方法、装置及拍摄设备
CN106686297A (zh) * 2016-11-28 2017-05-17 努比亚技术有限公司 一种实现光绘摄影的方法、装置及拍摄设备
CN106713777A (zh) * 2016-11-28 2017-05-24 努比亚技术有限公司 一种实现光绘摄影的方法、装置及拍摄设备
WO2018119632A1 (zh) * 2016-12-27 2018-07-05 深圳市大疆创新科技有限公司 图像处理的方法、装置和设备
KR102401659B1 (ko) * 2017-03-23 2022-05-25 삼성전자 주식회사 전자 장치 및 이를 이용한 카메라 촬영 환경 및 장면에 따른 영상 처리 방법
CN110913118B (zh) * 2018-09-17 2021-12-17 腾讯数码(天津)有限公司 视频处理方法、装置及存储介质

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103595925A (zh) * 2013-11-15 2014-02-19 深圳市中兴移动通信有限公司 照片合成视频的方法和装置
WO2014035642A1 (en) * 2012-08-28 2014-03-06 Mri Lightpainting Llc Light painting live view
CN103634530A (zh) * 2012-08-27 2014-03-12 三星电子株式会社 拍摄装置及其控制方法
CN103888683A (zh) * 2014-03-24 2014-06-25 深圳市中兴移动通信有限公司 移动终端及其拍摄方法
CN104104798A (zh) * 2014-07-23 2014-10-15 深圳市中兴移动通信有限公司 拍摄光绘视频的方法和移动终端

Family Cites Families (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2006243701A (ja) * 2005-02-07 2006-09-14 Fuji Photo Film Co Ltd カメラ及びレンズ装置
US9307212B2 (en) * 2007-03-05 2016-04-05 Fotonation Limited Tone mapping for low-light video frame enhancement
US9813638B2 (en) * 2012-08-28 2017-11-07 Hi Pablo, Inc. Lightpainting live view
US8830367B1 (en) * 2013-10-21 2014-09-09 Gopro, Inc. Frame manipulation to reduce rolling shutter artifacts

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103634530A (zh) * 2012-08-27 2014-03-12 三星电子株式会社 拍摄装置及其控制方法
WO2014035642A1 (en) * 2012-08-28 2014-03-06 Mri Lightpainting Llc Light painting live view
CN103595925A (zh) * 2013-11-15 2014-02-19 深圳市中兴移动通信有限公司 照片合成视频的方法和装置
CN103888683A (zh) * 2014-03-24 2014-06-25 深圳市中兴移动通信有限公司 移动终端及其拍摄方法
CN104104798A (zh) * 2014-07-23 2014-10-15 深圳市中兴移动通信有限公司 拍摄光绘视频的方法和移动终端

Also Published As

Publication number Publication date
WO2016011877A1 (zh) 2016-01-28
US10129488B2 (en) 2018-11-13
CN104104798A (zh) 2014-10-15
US20170208259A1 (en) 2017-07-20

Similar Documents

Publication Publication Date Title
WO2016011859A1 (zh) 拍摄光绘视频的方法、移动终端和计算机存储介质
WO2016000515A1 (zh) 拍摄星轨视频的方法、装置和计算机存储介质
WO2016023406A1 (zh) 物体运动轨迹的拍摄方法、移动终端和计算机存储介质
US8937677B2 (en) Digital photographing apparatus, method of controlling the same, and computer-readable medium
WO2016045457A1 (zh) 拍摄方法、装置和计算机存储介质
JP4787180B2 (ja) 撮影装置及び撮影方法
KR101913837B1 (ko) 파노라마 영상 생성 방법 및 이를 적용한 영상기기
JP6325841B2 (ja) 撮像装置、撮像方法、およびプログラム
US20170134634A1 (en) Photographing apparatus, method of controlling the same, and computer-readable recording medium
US20130162853A1 (en) Digital photographing apparatus and method of controlling the same
WO2016029746A1 (zh) 拍摄方法、拍摄装置及计算机存储介质
WO2016008359A1 (zh) 物体运动轨迹图像的合成方法、装置及计算机存储介质
WO2016000514A1 (zh) 拍摄星云视频的方法和装置和计算机存储介质
WO2017080348A2 (zh) 一种基于场景的拍照装置、方法、计算机存储介质
US10127455B2 (en) Apparatus and method of providing thumbnail image of moving picture
US8654204B2 (en) Digtal photographing apparatus and method of controlling the same
JP2015177221A (ja) 撮像装置、撮像方法、データ記録装置、及びプログラム
WO2017128914A1 (zh) 一种拍摄方法及装置
WO2016169488A1 (zh) 图像处理方法、装置、计算机存储介质和终端
JPWO2018235382A1 (ja) 撮像装置、撮像装置の制御方法、及び撮像装置の制御プログラム
JP2011239267A (ja) 撮像装置及び画像処理装置
US10762600B2 (en) Image processing apparatus, image processing method, and non-transitory computer-readable recording medium
JP5530304B2 (ja) 撮像装置および撮影画像表示方法
WO2017071560A1 (zh) 图片处理方法及装置
WO2006054763A1 (ja) 電子カメラ

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 15824067

Country of ref document: EP

Kind code of ref document: A1

WWE Wipo information: entry into national phase

Ref document number: 15327627

Country of ref document: US

NENP Non-entry into the national phase

Ref country code: DE

32PN Ep: public notification in the ep bulletin as address of the adressee cannot be established

Free format text: NOTING OF LOSS OF RIGHTS PURSUANT TO RULE 112(1) EPC (EPO FORM 1205A DATED 26/06/17)

122 Ep: pct application non-entry in european phase

Ref document number: 15824067

Country of ref document: EP

Kind code of ref document: A1