WO2016029746A1 - Photographing method, photographing apparatus, and computer storage medium - Google Patents
Photographing method, photographing apparatus, and computer storage medium
- Publication number
- WO2016029746A1 (PCT/CN2015/083728)
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- image
- light
- drawing area
- shooting
- effect processing
- Prior art date
Classifications
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N5/00—Details of television systems
- H04N5/222—Studio circuitry; Studio devices; Studio equipment
- H04N5/262—Studio circuits, e.g. for mixing, switching-over, change of character of image, other special effects ; Cameras specially adapted for the electronic generation of special effects
- H04N5/265—Mixing
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N23/00—Cameras or camera modules comprising electronic image sensors; Control thereof
- H04N23/70—Circuitry for compensating brightness variation in the scene
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N23/00—Cameras or camera modules comprising electronic image sensors; Control thereof
- H04N23/70—Circuitry for compensating brightness variation in the scene
- H04N23/741—Circuitry for compensating brightness variation in the scene by increasing the dynamic range of the image compared to the dynamic range of the electronic image sensors
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N23/00—Cameras or camera modules comprising electronic image sensors; Control thereof
- H04N23/60—Control of cameras or camera modules
- H04N23/63—Control of cameras or camera modules by using electronic viewfinders
Definitions
- the present invention relates to the field of imaging technology, and in particular, to a photographing method, a photographing apparatus, and a computer storage medium.
- Light painting photography refers to a shooting mode that uses a long exposure to create a special image from the changes of a light source during the exposure. Because it requires a long exposure, corresponding photosensitive hardware is needed, and photosensitive hardware that supports long exposures is expensive. Only professional camera devices such as SLR cameras are equipped with such hardware; ordinary digital shooting devices such as compact cameras and mobile phones are unlikely to be equipped with such expensive hardware.
- In light painting photography, the trajectory of the light source is recorded in the final image. Therefore, whether a professional shooting device or an ordinary digital shooting device is used, light painting can only be performed in a dark shooting environment. In a bright shooting environment, the final image will contain not only the light-painting highlights but also a number of other highlights that contaminate the image and prevent a clear light trajectory from being captured.
- Moreover, a traditional SLR camera can only take photographs, that is, produce a static image showing the finished light-painting work; it cannot capture a dynamic video showing the creative process of the work.
- In summary, existing shooting devices can only perform light painting photography in a dark shooting environment (such as at night) and cannot perform it in a bright shooting environment (such as daylight), which cannot satisfy users' desire to create light paintings anytime and anywhere.
- The main purpose of the embodiments of the present invention is to provide a photographing method, a photographing apparatus, and a computer storage medium that can capture light-painting video in a bright shooting environment, satisfying users' need to create light paintings anytime and anywhere and thereby improving the user experience.
- an embodiment of the present invention provides a shooting method, where the method includes:
- the encoded image data is generated as a video file.
- the identifying the light drawing area in the currently read image includes:
- identifying the bright spot area at the preset position of the auxiliary target as the light drawing area.
- the identifying the light drawing area in the currently read image includes:
- The method further includes: if the currently read image is the first image acquired, directly using this image as the base image for the next image synthesis, or using a preset background image as the base image for the current image synthesis.
- the step of performing the encoding process on the captured composite image further includes:
- Special effects processing is performed on the captured composite image, which includes basic effect processing, filter effect processing, and/or special scene effect processing.
- The embodiments of the present invention also provide a photographing apparatus, including an image acquisition module, an image synthesis module, and a video generation module, wherein:
- the image acquisition module is configured to continuously acquire image data;
- the image synthesis module is configured to read an acquired image, search the currently read image according to pre-stored features, identify the light drawing area in the currently read image, extract the light drawing area and superimpose it on the corresponding position of the base image for image synthesis, generate a composite image, and use the composite image as the base image for the next image synthesis;
- the video generation module is configured to capture the composite image, encode the captured composite image, and generate the encoded image data as a video file.
- The image synthesis module is further configured to: search for an auxiliary target matching the pre-stored features in the currently read image, and identify the bright spot area at the preset position of the auxiliary target as the light drawing area.
- The image synthesis module is further configured to: acquire the position of the light drawing area in the previously read image; search, within a preset range of the corresponding position in the currently read image, for a light-painting bright spot matching the pre-stored features; and identify the area where the bright spot is located as the light drawing area.
- The image synthesis module is further configured to: if the currently read image is the first image acquired, directly use this image as the base image for the next image synthesis, or use a preset background image as the base image for the current image synthesis.
- The photographing apparatus further includes a special effect processing module configured to perform special effect processing on the captured composite image, where the special effect processing includes basic effect processing, filter effect processing, and/or special scene effect processing.
- The embodiments of the present invention also provide a computer storage medium, which stores a computer program, the computer program being used to perform the above shooting method.
- In the embodiments of the present invention, the auxiliary target or the light-painting bright spot itself is tracked, the light-painting area in the image is then identified according to the auxiliary target or bright spot, and finally the light-painting area is extracted, superimposed, and synthesized to generate a composite image. Since only the light-painting area is superimposed, other bright-spot areas in the image do not appear in the composite image and the composite image is not contaminated, so a clear light-painting trajectory can be recorded in the final composite image, enabling light painting photography in a bright shooting environment.
- The light-painting images at different moments are encoded and finally combined into a video file, enabling the capture of light-painting video in a bright environment. This not only broadens the application scenes of light painting photography and satisfies users' need to create light paintings anytime and anywhere, but also lets users capture video showing the creative process of a light-painting work, or apply the method to similar scenarios, meeting users' diverse needs and improving the user experience.
- Figure 1 is a flow chart showing a first embodiment of the photographing method of the present invention.
- Figure 2 is a flow chart showing a second embodiment of the photographing method of the present invention.
- Figure 3 is a block diagram showing a first embodiment of the photographing apparatus of the present invention.
- Figure 4 is a block diagram showing a second embodiment of the photographing apparatus of the present invention.
- Fig. 5 is a block diagram showing a main electrical configuration of an image pickup apparatus according to an embodiment of the present invention.
- The photographing method of the embodiments of the present invention is mainly applied to light painting photography, and can also be applied to application scenarios similar to light painting photography; it is not limited herein.
- The following embodiments are described in detail by taking light painting photography as an example.
- In the embodiments of the present invention, the capture of light-painting video in a bright environment is achieved by: continuously acquiring images; reading an acquired image, searching the currently read image according to pre-stored features, and identifying the light drawing area in the currently read image; extracting the light drawing area and superimposing it on the corresponding position of the base image for image synthesis, generating a composite image, and using the composite image as the base image for the next image synthesis; capturing the composite image and encoding the captured composite image; and, at the end of shooting, generating the encoded image data as a video file.
- the photographing method includes the following steps:
- Step S101: acquire and store the features of the auxiliary target. Before shooting starts, the shooting device may directly acquire ready-made feature data from outside, or prompt the user to select the auxiliary target on the preview interface and then extract and store the feature data of the selected auxiliary target.
- The auxiliary target preferably has distinctive features, so that the photographing device can track it according to the feature parameters during subsequent shooting.
- The auxiliary target may be a stylus, a human hand, or a combination of both, and the stylus may be any object that can emit light.
- Step S102: after shooting starts, the camera continuously acquires images.
- When the user selects the light-painting shooting mode and presses the shooting button or triggers the virtual shooting button, the shooting device starts light-painting shooting and continuously acquires images through the camera; the rate at which the camera acquires images can be preset.
- Step S103: read the acquired image and search for the auxiliary target in the currently read image according to the pre-stored features.
- The photographing device continuously or intermittently reads the acquired images and searches each currently read image for the auxiliary target matching the pre-stored features, thereby tracking the auxiliary target.
- Step S104: identify the bright spot area at the preset position of the auxiliary target as the light drawing area.
- Taking the stylus as an example, the photographing device searches for a bright spot area (a region whose brightness is greater than a preset value) at the light-emitting position at the tip of the stylus and recognizes the found bright spot area as the light-painting area.
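The bright-spot search can be sketched as a simple brightness-threshold test in the neighborhood of the stylus tip. This is an illustrative sketch only, not the patented implementation; the threshold value, search radius, and function name are assumptions.

```python
import numpy as np

BRIGHTNESS_THRESHOLD = 230  # assumed "preset value" for an 8-bit grayscale image


def find_light_region(gray, tip_xy, radius=40):
    """Search the neighborhood of the stylus tip for pixels brighter than the
    preset value and return their bounding box (x0, y0, x1, y1), or None."""
    x, y = tip_xy
    h, w = gray.shape
    x0, x1 = max(0, x - radius), min(w, x + radius)
    y0, y1 = max(0, y - radius), min(h, y + radius)
    window = gray[y0:y1, x0:x1]
    ys, xs = np.nonzero(window > BRIGHTNESS_THRESHOLD)
    if xs.size == 0:
        return None  # no bright spot near the tip in this frame
    return (x0 + int(xs.min()), y0 + int(ys.min()),
            x0 + int(xs.max()) + 1, y0 + int(ys.max()) + 1)
```

Restricting the search to the tip neighborhood is what keeps the other highlights of a bright scene from being mistaken for the light-painting spot.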
- Step S105: extract the light-painting area, superimpose it on the corresponding position of the base image for image synthesis, generate a composite image, and use the composite image as the base image for the next image synthesis.
- The photographing device acquires the coordinate position of the light drawing area in the original image, extracts the light drawing area, superimposes it at the corresponding coordinate position of the base image, and synthesizes it into the base image to generate a composite image.
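The extract-and-superimpose step can be sketched as copying only the bright pixels of the light-drawing region onto the same coordinates of the base image; the result then serves as the base for the next synthesis. A minimal sketch assuming 8-bit RGB frames and a brightness threshold; the names are illustrative, not from the patent.

```python
import numpy as np


def composite(base, frame, box, threshold=230):
    """Superimpose the light-drawing region `box` (x0, y0, x1, y1) of the
    current frame onto the corresponding position of the base image.
    Only pixels brighter than the threshold are copied, so other image
    content inside the box does not contaminate the composite."""
    x0, y0, x1, y1 = box
    out = base.copy()
    region = frame[y0:y1, x0:x1]
    mask = region.max(axis=-1) > threshold  # bright (light-painting) pixels only
    out[y0:y1, x0:x1][mask] = region[mask]
    return out  # use as the base image for the next synthesis
```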
- If the currently read image is the first image acquired, it is directly used as the base image for the next image synthesis, and the background of the final composite image is the background of the shooting scene.
- Alternatively, the user is allowed to preset a background image, which may be selected from existing pictures, or a picture may be taken as the background image. If the currently read image is the first image acquired, the preset background image is used as the base image for the current image synthesis, and the light drawing area of the first image is extracted and superimposed on the corresponding position of the preset background image, so that any background can be set for the composite image.
- After completing one image synthesis, the image synthesis thread returns to step S103, continues to read acquired images, and performs the next image synthesis, and so on.
- Image acquisition and image synthesis are performed simultaneously; since the camera continuously acquires images, composite images are also continuously generated in real time.
- Step S106: capture the composite image and encode the captured composite image.
- The photographing device can capture composite images continuously or intermittently.
- This embodiment preferably captures continuously; that is, each time a composite image is generated, it is captured, so that all generated composite images serve as material for the composite video.
- The generation of the composite image and the capture of the composite image for encoding are performed simultaneously by two threads; since each composite image is encoded as it is captured, the generated composite images do not need to be stored.
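The two-thread arrangement described above — one thread producing composite images, another capturing and encoding them so that no intermediate frame needs to be stored — can be sketched with a bounded queue. The function names and sentinel are illustrative assumptions, not the patented implementation.

```python
import queue
import threading


def run_pipeline(frames, encode):
    """One thread stands in for image synthesis (the producer); a second
    thread captures each composite image and encodes it as it arrives."""
    q = queue.Queue(maxsize=8)  # bounded: synthesis never runs far ahead
    encoded = []

    def encoder():
        while True:
            frame = q.get()
            if frame is None:  # sentinel: shooting has ended
                break
            encoded.append(encode(frame))

    t = threading.Thread(target=encoder)
    t.start()
    for f in frames:  # the synthesis thread hands over each composite image
        q.put(f)
    q.put(None)
    t.join()
    return encoded  # encoded data, ready to be written into a video file
```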
- Step S107: at the end of shooting, generate the encoded image data as a video file.
- Video file formats include, but are not limited to, mp4, 3gp, avi, rmvb, and the like.
- During shooting, the photographing device also displays the composite image on the display screen in real time, so that the user can preview the current light-painting effect.
- Preferably, the composite image displayed by the photographing device is a compressed, small-size thumbnail.
- In this embodiment, the light drawing area in the image is identified according to the auxiliary target, and the light drawing area is extracted, superimposed, and synthesized. Since only the light-painting area is superimposed, other bright-spot areas in the image do not appear in the composite image and the composite image is not contaminated, so a clear light-painting trajectory can be recorded in the final composite image, achieving light painting photography in a bright shooting environment. The light-painting images at different moments are encoded and finally combined into a video file, enabling the capture of light-painting video in a bright environment.
- Users can use the photographing device to capture a video showing the creation process of a light-painting work, or apply the method to similar scenarios, which satisfies users' diverse needs and improves the user experience.
- Since each composite image is encoded as it is captured, the generated composite images need not be stored, so the final video file is not large and does not occupy too much storage space.
- the photographing method includes the following steps:
- Step S201: acquire and store the features of the light-painting bright spot.
- The photographing device can obtain ready-made feature data directly from outside before shooting begins.
- Alternatively, the user is prompted to select the light-painting bright spot on the preview interface, and the feature data of the selected bright spot is extracted and stored.
- The features of the light-painting bright spot include the brightness, area, contour, diameter, and so on of the bright spot.
- The light-painting bright spot is the bright spot formed by the light emitted by the stylus.
- The features of the light-painting bright spot are stored in advance so that the bright spot can be tracked directly.
- Step S202: after shooting starts, the camera continuously acquires images.
- Step S203: read the acquired image and acquire the position of the light drawing area in the previously read image.
- Each time an image is synthesized, the photographing device can cache the position (such as the position coordinates) of the light drawing area in the image read that time, so that at the next image synthesis the position of the light drawing area in the previous image can be obtained directly from the cache.
- The photographing device may also use the position of the end of the light-painting trajectory in the base image as the position of the light drawing area in the previous image.
- The user may also specify the position, or a preset position may be read.
- Step S204: search for a light-painting bright spot matching the pre-stored features within a preset range of the corresponding position in the currently read image, and identify the area where the bright spot is located as the light drawing area.
- The trajectory of the light-painting bright spot is regular and continuous, so the bright spots in two adjacent acquired images are close together rather than far apart. Therefore, in this embodiment, when tracking the bright spot, the search is performed only within a preset range of the position of the light-painting area in the previous image, which avoids misjudging bright spots in other areas as the light-painting bright spot.
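The locality argument above can be sketched as follows: among the candidate bright spots detected in the current frame, only those inside a search radius around the previous light-painting position are considered, and the nearest one is chosen. A sketch under assumed data shapes; the radius and names are illustrative.

```python
def track_light_spot(spots, prev_xy, search_radius=60):
    """`spots` is a list of (center, area) candidates detected in the current
    frame. Keep only candidates within the search radius of the previous
    light-painting position and return the nearest one, or None."""
    px, py = prev_xy

    def dist2(spot):
        (cx, cy), _area = spot
        return (cx - px) ** 2 + (cy - py) ** 2

    nearby = [s for s in spots if dist2(s) <= search_radius ** 2]
    if not nearby:
        return None
    return min(nearby, key=dist2)
```

A distant highlight (for example, a street lamp) is rejected simply because it falls outside the search radius.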
- Step S205: extract the light-painting area, superimpose it on the corresponding position of the base image for image synthesis, generate a composite image, and use the composite image as the base image for the next image synthesis.
- Step S206: capture the composite image and encode the captured composite image.
- Step S207: when shooting ends, generate the encoded image data as a video file.
- In this embodiment, the light drawing area in the image is identified according to the light-painting bright spot, and the light drawing area is extracted for superposition and synthesis. Since only the light-painting area is superimposed, other bright-spot areas in the image do not appear in the composite image and the composite image is not contaminated, so a clear light-painting trajectory can be recorded in the final composite image, enabling the capture of light-painting video in a bright shooting environment.
- The photographing apparatus may be an ordinary digital shooting device such as a compact camera, or a terminal device with an image capture function such as a mobile phone or tablet computer.
- a photographing apparatus that implements the above-described photographing method includes an image acquisition module 31, an image synthesis module 32, and a video generation module 33.
- The image acquisition module 31 is configured to invoke the camera to continuously acquire images.
- When the user selects the light-painting shooting mode and presses the shooting button or triggers the virtual shooting button, the shooting device starts light-painting shooting, and the image acquisition module 31 continuously acquires images through the camera; the rate at which the camera acquires images can be preset.
- The image synthesis module 32 is configured to continuously or intermittently read the acquired images, search the currently read image according to the pre-stored features, identify the light-painting area in the currently read image, extract the light-painting area and superimpose it at the corresponding position of the base image for image synthesis, generate a composite image, and use the composite image as the base image for the next image synthesis.
- The image synthesis module 32 may identify the light-painting areas in the image by tracking the auxiliary target.
- The photographing apparatus may directly acquire ready-made feature data of the auxiliary target from outside, or may prompt the user to select the auxiliary target on the preview interface and then extract and store the feature data of the selected auxiliary target.
- The auxiliary target preferably has distinctive features, so that the photographing device can track it according to the feature parameters during subsequent shooting.
- The auxiliary target may be a stylus, a human hand, or a combination of both, and the stylus may be any object that can emit light.
- The image synthesis module 32 reads the acquired image, searches the currently read image for the auxiliary target matching the pre-stored features, and identifies the bright spot area at the preset position of the auxiliary target as the light drawing area.
- Taking the stylus as an example, the image synthesis module 32 searches for a bright spot area (a region whose brightness is greater than a preset value) at the light-emitting position at the tip of the stylus and identifies the found bright spot area as the light-painting area.
- The image synthesis module 32 may also identify the light-painting areas in the image by directly tracking the light-painting bright spot itself.
- The photographing device can directly acquire ready-made feature data of the light-painting bright spot from outside, or prompt the user to select the light-painting bright spot on the preview interface and then extract and store the feature data of the selected bright spot.
- The features of the light-painting bright spot include the brightness, area, contour, diameter, and so on of the bright spot.
- The image synthesis module 32 reads the acquired image, acquires the position of the light drawing area in the previously read image, searches for a light-painting bright spot matching the pre-stored features within a preset range of the corresponding position in the currently read image, and identifies the area where the bright spot is located as the light drawing area.
- Each time an image is synthesized, the image synthesis module 32 may cache the position of the light drawing area in the image read that time, so that at the next image synthesis the position of the light drawing area in the previous image can be obtained directly from the cache.
- The image synthesis module 32 may also use the position of the end of the light-painting trajectory in the base image as the position of the light drawing area in the previous image.
- The image synthesis module 32 acquires the coordinate position of the light drawing area in the original image, extracts the light drawing area, superimposes it at the corresponding coordinate position of the base image, and synthesizes it into the base image to generate a composite image.
- If the currently read image is the first image acquired, it is directly used as the base image for the next image synthesis, and the background of the final composite image is the background of the shooting scene.
- Alternatively, the user is allowed to preset a background image, which may be selected from existing pictures, or a picture may be taken as the background image.
- If the currently read image is the first image acquired, the preset background image is used as the base image for the current image synthesis, and the light drawing area of the first image is extracted and superimposed on the corresponding position of the preset background image, so that any background can be set for the composite image.
- the synthesized image can be displayed in real time through the display.
- After completing one image synthesis, the acquired images continue to be read and the next image synthesis is performed, repeating this loop until shooting ends.
- Image acquisition and image synthesis are performed simultaneously; since the camera continuously acquires images, composite images are also continuously generated in real time.
- The video generation module 33 is configured to capture the composite image, encode the captured composite image, and generate the encoded image data as a video file.
- The video generation module 33 can capture composite images continuously or intermittently; in this embodiment, continuous capture is preferred. That is, each time the image synthesis module 32 generates a composite image, the video generation module 33 captures it for encoding, so that all generated composite images serve as material for the composite video. Generating the composite image and capturing it for encoding are performed simultaneously by two threads.
- The video generation module 33 encodes the captured composite images with a common video codec such as MPEG-4, H.264, H.263, or VP8 for later generation of the video file; the method of encoding the composite image is the same as in the prior art and is not repeated here.
- The video generation module 33 may generate the encoded image data as a video file according to the video file format specified by the user; the video file format includes, but is not limited to, mp4, 3gp, avi, rmvb, and the like.
- Each module in the photographing apparatus may be implemented by a central processing unit (CPU), a digital signal processor (DSP), or a field-programmable gate array (FPGA).
- FIG. 4 shows a second embodiment of the photographing apparatus of the present invention.
- The difference between this embodiment and the first embodiment is that a special effect processing module 34 is added and connected to the video generation module 33: the video generation module 33 sends the captured composite image to the special effect processing module 34, the special effect processing module 34 performs special effect processing on it, and the processed composite image is returned to the video generation module 33 for encoding.
- The special effect processing includes basic effect processing, filter effect processing, and/or special scene effect processing, and the like:
- basic effect processing, including noise reduction, brightness, chromaticity, and other processing;
- filter effect processing, including sketch, negative, black-and-white, and other processing;
- special scene effect processing, including processing for common weather, starry sky, and so on.
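Two of the listed effects can be sketched in a few lines as array operations on an 8-bit image; these are illustrative stand-ins for the module's processing, not its actual algorithms.

```python
import numpy as np


def apply_negative(img):
    """Filter effect: photographic negative of an 8-bit image."""
    return 255 - img


def adjust_brightness(img, delta):
    """Basic effect: shift brightness by `delta`, clipping to the 8-bit range."""
    return np.clip(img.astype(np.int16) + delta, 0, 255).astype(np.uint8)
```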
- The video generation module 33 is further configured to: turn on the audio device, receive audio data through the audio device, and encode the audio data.
- There are two main sources of audio data: microphone capture or a custom audio file.
- If a custom audio file is used, the video generation module 33 first decodes the audio file to obtain the original audio data.
- The special effect processing module 34 may also perform special effect processing on the received audio data, including special-effect recording, voice changing, pitch changing, and/or shifting, and the like.
- According to the user's shooting-end instruction, the video generation module 33 generates a video file from the encoded image data and the encoded audio data in the video file format set by the user.
- The photographing apparatus of the embodiments of the present invention tracks the auxiliary target or the light-painting bright spot itself, identifies the light-painting area in the image according to the auxiliary target or bright spot, and finally extracts the light-painting area and superimposes it for synthesis to generate a composite image. Since only the light-painting area is superimposed, other bright-spot areas in the image do not appear in the composite image and the composite image is not contaminated, so a clear light-painting trajectory can be recorded in the final composite image, enabling light painting photography in a bright shooting environment.
- The light-painting images at different moments are encoded and finally combined into a video file, enabling the capture of light-painting video in a bright environment. This not only broadens the application scenes of light painting photography and satisfies users' need to create light paintings anytime and anywhere, but also lets users capture video showing the creative process of a light-painting work, or apply the method to similar scenarios, meeting users' diverse needs and improving the user experience.
- Each module in the photographing device can be implemented by a CPU, a DSP, or an FPGA.
- When the photographing apparatus provided in the above embodiments performs photographing, the division into the functional modules described above is only an example; in practical applications, the above functions may be assigned to different functional modules as needed.
- The photographing apparatus provided in the above embodiments and the photographing method embodiments belong to the same inventive concept; the specific implementation process is described in detail in the method embodiments and is not repeated here.
- Fig. 5 is a block diagram showing a main electrical configuration of an image pickup apparatus according to an embodiment of the present invention.
- The photographic lens 101 is composed of a plurality of optical lenses for forming a subject image, and is a single-focus lens or a zoom lens.
- The photographic lens 101 can be moved in the optical-axis direction by the lens driving unit 111, which controls the focus position of the photographic lens 101 based on a control signal from the lens drive control unit 112 and, in the case of a zoom lens, also controls the focal distance.
- The lens drive control unit 112 performs drive control of the lens driving unit 111 in accordance with control commands from the microcomputer 107.
- An imaging element 102 is disposed in the vicinity of a position where the subject image is formed by the photographing lens 101 on the optical axis of the photographing lens 101.
- The imaging element 102 functions as an imaging unit that captures the subject image and acquires captured image data.
- Photodiodes constituting the pixels are arranged two-dimensionally in a matrix on the imaging element 102. Each photodiode generates a photoelectric-conversion current corresponding to the amount of received light, and this current is accumulated as charge by a capacitor connected to each photodiode.
- The front surface of each pixel is provided with a Bayer-array RGB color filter.
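For illustration, a Bayer RGGB mosaic can be turned into an RGB image with a crude nearest-neighbor demosaic at half resolution. This is a sketch under the assumption of an RGGB layout; real imaging pipelines use far more sophisticated interpolation.

```python
import numpy as np


def debayer_nearest(raw):
    """Crude demosaic of an RGGB Bayer mosaic (assumed layout) into an RGB
    image at half resolution: each 2x2 cell yields one RGB pixel, with the
    two green samples averaged."""
    r = raw[0::2, 0::2]
    g1 = raw[0::2, 1::2].astype(np.uint16)
    g2 = raw[1::2, 0::2].astype(np.uint16)
    b = raw[1::2, 1::2]
    g = ((g1 + g2) // 2).astype(raw.dtype)
    return np.dstack([r, g, b])
```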
- The imaging element 102 is connected to an imaging circuit 103, which performs charge-accumulation control and image-signal readout control in the imaging element 102, reduces reset noise in the read image signal (an analog image signal), performs waveform shaping, and increases the gain to obtain an appropriate signal level.
- The imaging circuit 103 is connected to the A/D conversion unit 104, which performs analog-to-digital conversion on the analog image signal and outputs a digital image signal (hereinafter referred to as image data) to the bus 199.
- image data a digital image signal
- the bus 199 is a transmission path for transmitting various data read or generated inside the photographing apparatus.
- The bus 199 is connected to the A/D conversion unit 104, as well as to an image processor 105, a JPEG processor 106, a microcomputer 107, an SDRAM (Synchronous DRAM) 108, a memory interface (hereinafter referred to as memory I/F) 109, and an LCD (Liquid Crystal Display) driver 110.
- The image processor 105 performs various kinds of image processing on the image data based on the output of the imaging element 102, such as OB subtraction, white balance adjustment, color matrix calculation, gamma conversion, color difference signal processing, noise removal, synchronization (demosaicing) processing, and edge processing.
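One of the operations listed above, gamma conversion, is commonly implemented with a precomputed lookup table so that each pixel costs only one array access. The sketch below shows this for 8-bit values; the gamma value of 2.2 is an assumption for illustration, not a value stated in the document.

```python
# Minimal LUT-based gamma conversion sketch for 8-bit pixel data.
# Building the table once (256 entries) avoids computing a power
# function per pixel, mirroring how hardware image processors work.
def make_gamma_lut(gamma: float = 2.2) -> list[int]:
    """Precompute the encoding curve v -> 255 * (v/255)^(1/gamma)."""
    return [round(255 * (v / 255) ** (1.0 / gamma)) for v in range(256)]

def apply_gamma(pixels: list[int], lut: list[int]) -> list[int]:
    """Map each 8-bit pixel value through the lookup table."""
    return [lut[v] for v in pixels]
```

With gamma greater than 1 the encoding exponent 1/gamma is below 1, so mid-tones are brightened while 0 and 255 map to themselves.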
- The JPEG processor 106 compresses image data read from the SDRAM 108 in accordance with the JPEG compression method, and also decompresses JPEG image data for image reproduction and display. For decompression, a file recorded on the recording medium 115 is read, decompression processing is performed in the JPEG processor 106, and the decompressed image data is temporarily stored in the SDRAM 108 and displayed on the LCD 116. In the present embodiment, the JPEG method is adopted as the image compression/decompression method; however, the method is not limited thereto, and other compression/decompression methods such as MPEG, TIFF, and H.264 may be used.
- the microcomputer 107 functions as a control unit of the entire imaging device, and collectively controls various processing sequences of the imaging device.
- the microcomputer 107 is connected to the operation unit 113 and the flash memory 114.
- The operation unit 113 includes, but is not limited to, physical or virtual buttons. The physical or virtual buttons may include operation members such as a power button, a camera button, an edit button, a dynamic image button, a reproduction button, a menu button, a cross key, an OK button, a delete button, a zoom-in button, and other input buttons and keys, and the operation states of these operation members are detected.
- The detection results are output to the microcomputer 107.
- A touch panel is provided on the front surface of the LCD 116 serving as a display portion; the touch position of the user is detected and output to the microcomputer 107.
- The microcomputer 107 executes various processing sequences corresponding to the user's operation based on the detection results of the operation members from the operation unit 113; likewise, it executes various processing sequences corresponding to the user's operation based on the detection result of the touch panel on the front surface of the LCD 116.
- the flash memory 114 stores programs for executing various processing sequences of the microcomputer 107.
- the microcomputer 107 performs overall control of the imaging device in accordance with the program. Further, the flash memory 114 stores various adjustment values of the imaging device, and the microcomputer 107 reads the adjustment value, and controls the imaging device in accordance with the adjustment value.
- the SDRAM 108 is an electrically rewritable volatile memory for temporarily storing image data or the like.
- the SDRAM 108 temporarily stores image data output from the A/D conversion unit 104 and image data processed in the image processor 105, the JPEG processor 106, and the like.
- the memory interface 109 is connected to the recording medium 115, and performs control for writing image data and a file header attached to the image data to the recording medium 115 and reading from the recording medium 115.
- the recording medium 115 is, for example, a recording medium such as a memory card that can be detachably attached to the main body of the imaging device.
- the recording medium 115 is not limited thereto, and may be a hard disk or the like built in the main body of the imaging device.
- The LCD driver 110 is connected to the LCD 116. Image data processed by the image processor 105 is stored in the SDRAM 108, then read from the SDRAM 108 and displayed on the LCD 116; alternatively, image data compressed by the JPEG processor 106 is stored in the SDRAM 108, after which the JPEG processor 106 reads the compressed image data from the SDRAM 108, decompresses it, and the decompressed image data is displayed on the LCD 116.
- the LCD 116 is disposed on the back surface of the main body of the imaging device or the like to perform image display.
- the LCD 116 is provided with a touch panel that detects a user's touch operation.
- In the present embodiment, a liquid crystal display panel (LCD 116) is used as the display; however, the display is not limited thereto, and various other display panels such as an organic EL panel may be employed.
- The apparatus of the embodiments of the present invention may also be stored in a computer-readable storage medium if it is implemented in the form of a software function module and sold or used as a separate product.
- Based on such an understanding, the technical solution of the embodiments of the present invention may, in essence, be embodied in the form of a software product stored in a storage medium and including a plurality of instructions for causing a computer device (which may be a personal computer, a server, a network device, or the like) to perform all or part of the methods described in the various embodiments of the present invention.
- The foregoing storage medium includes various media that can store program code, such as a USB flash drive, a removable hard disk, a read-only memory (ROM), a magnetic disk, or an optical disk.
- An embodiment of the present invention further provides a computer storage medium in which a computer program is stored, the computer program being used to execute the photographing method of the embodiments of the present invention.
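The core loop of the claimed method — superimposing the light-drawing area of each newly captured frame onto a running base image, and feeding each composite back in as the next base — can be sketched as follows. This is a simplified, hypothetical illustration only: frames are flat lists of 8-bit grayscale values, and the brightness threshold of 200 used to stand in for light-drawing-area detection is an assumption, not the feature-matching identification described in the claims.

```python
# Simplified sketch of the light-painting synthesis loop: bright pixels
# of each new frame are superimposed onto the base image, and each
# resulting composite becomes the base image for the next synthesis.
def synthesize(frames: list[list[int]], threshold: int = 200) -> list[list[int]]:
    """Return the sequence of composite frames for a capture session.

    The first captured frame is used directly as the initial base image,
    mirroring the claimed handling of the first image.
    """
    base = list(frames[0])             # first frame becomes the base image
    composites = [list(base)]
    for frame in frames[1:]:
        for i, v in enumerate(frame):
            if v >= threshold:         # pixel assumed to lie in the light-drawing area
                base[i] = max(base[i], v)
        composites.append(list(base))  # this composite is the next base image
    return composites
```

In the claimed method, each composite in this sequence would then be grabbed, encoded, and assembled into a video file when shooting ends, so the video shows the light trail accumulating over time.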
Landscapes
- Engineering & Computer Science (AREA)
- Multimedia (AREA)
- Signal Processing (AREA)
- Studio Devices (AREA)
Abstract
Description
Claims (11)
- A shooting method, the method comprising: continuously capturing images; reading a captured image and identifying a light-drawing area in the currently read image; extracting the light-drawing area and superimposing it on the corresponding position of a base image to perform image synthesis, generating a composite image, and using the composite image as the base image for the next image synthesis; grabbing the composite image and performing encoding processing on the grabbed composite image; and when shooting ends, generating a video file from the encoded image data.
- The shooting method according to claim 1, wherein identifying the light-drawing area in the currently read image comprises: searching the currently read image for an auxiliary target matching pre-stored features; and identifying a bright-spot area at a preset position of the auxiliary target as the light-drawing area.
- The shooting method according to claim 1, wherein identifying the light-drawing area in the currently read image comprises: acquiring the position of the light-drawing area in the previously read image; and searching, within a preset range around the corresponding position in the currently read image, for a light-drawing bright spot matching pre-stored features, and identifying the area where the light-drawing bright spot is located as the light-drawing area.
- The shooting method according to any one of claims 1-3, wherein the method further comprises: if the currently read image is the first captured image, directly using that image as the base image for the next image synthesis, or using a preset background image as the base image for the current image synthesis.
- The shooting method according to any one of claims 1-3, wherein, before the step of performing encoding processing on the grabbed composite image, the method further comprises: performing special effect processing on the grabbed composite image, the special effect processing comprising basic effect processing, filter effect processing, and/or special scene effect processing.
- A shooting device, comprising an image capture module, an image synthesis module, and a video generation module, wherein: the image capture module is configured to continuously capture image data; the image synthesis module is configured to read a captured image, identify a light-drawing area in the currently read image, extract the light-drawing area and superimpose it on the corresponding position of a base image to perform image synthesis, generate a composite image, and use the composite image as the base image for the next image synthesis; and the video generation module is configured to grab the composite image, perform encoding processing on the grabbed composite image, and generate a video file from the encoded image data.
- The shooting device according to claim 6, wherein the image synthesis module is further configured to: search the currently read image for an auxiliary target matching preset features, and identify a bright-spot area at a preset position of the auxiliary target as the light-drawing area.
- The shooting device according to claim 6, wherein the image synthesis module is further configured to: acquire the position of the light-drawing area in the previously read image; and search, within a preset range around the corresponding position in the currently read image, for a light-drawing bright spot matching pre-stored features, and identify the area where the light-drawing bright spot is located as the light-drawing area.
- The shooting device according to any one of claims 6-8, wherein the image synthesis module is further configured to: if the currently read image is the first captured image, directly use that image as the base image for the next image synthesis, or use a preset background image as the base image for the current image synthesis.
- The shooting device according to any one of claims 6-8, wherein the shooting device further comprises a special effect processing module configured to: perform special effect processing on the grabbed composite image, the special effect processing comprising basic effect processing, filter effect processing, and/or special scene effect processing.
- A computer storage medium storing computer-executable instructions, the computer-executable instructions being configured to execute the shooting method according to any one of claims 1-5.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
US15/505,964 US10171753B2 (en) | 2014-08-28 | 2015-07-10 | Shooting method, shooting device and computer storage medium |
Applications Claiming Priority (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201410431341.4 | 2014-08-28 | ||
CN201410431341.4A CN104159040B (zh) | 2014-08-28 | 2014-08-28 | 拍摄方法和拍摄装置 |
Publications (1)
Publication Number | Publication Date |
---|---|
WO2016029746A1 true WO2016029746A1 (zh) | 2016-03-03 |
Family
ID=51884438
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
PCT/CN2015/083728 WO2016029746A1 (zh) | 2014-08-28 | 2015-07-10 | 拍摄方法、拍摄装置及计算机存储介质 |
Country Status (3)
Country | Link |
---|---|
US (1) | US10171753B2 (zh) |
CN (1) | CN104159040B (zh) |
WO (1) | WO2016029746A1 (zh) |
Families Citing this family (10)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN104202521B (zh) | 2014-08-28 | 2016-05-25 | 努比亚技术有限公司 | 拍摄方法及拍摄装置 |
CN104159040B (zh) | 2014-08-28 | 2019-07-05 | 努比亚技术有限公司 | 拍摄方法和拍摄装置 |
CN105100775B (zh) * | 2015-07-29 | 2017-12-05 | 努比亚技术有限公司 | 一种图像处理方法及装置、终端 |
CN106713777A (zh) * | 2016-11-28 | 2017-05-24 | 努比亚技术有限公司 | 一种实现光绘摄影的方法、装置及拍摄设备 |
CN107071259A (zh) * | 2016-11-28 | 2017-08-18 | 努比亚技术有限公司 | 一种实现光绘摄影的方法、装置及拍摄设备 |
WO2018119632A1 (zh) * | 2016-12-27 | 2018-07-05 | 深圳市大疆创新科技有限公司 | 图像处理的方法、装置和设备 |
CN109741242B (zh) * | 2018-12-25 | 2023-11-14 | 努比亚技术有限公司 | 光绘处理方法、终端和计算机可读存储介质 |
CN110536087A (zh) * | 2019-05-06 | 2019-12-03 | 珠海全志科技股份有限公司 | 电子设备及其运动轨迹照片合成方法、装置和嵌入式装置 |
CN111163264B (zh) * | 2019-12-31 | 2022-02-01 | 维沃移动通信有限公司 | 一种信息显示方法及电子设备 |
CN113628097A (zh) * | 2020-05-09 | 2021-11-09 | 北京字节跳动网络技术有限公司 | 图像特效配置方法、图像识别方法、装置及电子设备 |
Citations (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP2005323411A (ja) * | 2005-08-08 | 2005-11-17 | Casio Comput Co Ltd | 画像合成装置 |
CN102831439A (zh) * | 2012-08-15 | 2012-12-19 | 深圳先进技术研究院 | 手势跟踪方法及系统 |
CN103124325A (zh) * | 2011-11-18 | 2013-05-29 | 索尼公司 | 图像处理装置、图像处理方法和记录介质 |
CN103973984A (zh) * | 2014-05-29 | 2014-08-06 | 深圳市中兴移动通信有限公司 | 拍摄方法和装置 |
CN104159040A (zh) * | 2014-08-28 | 2014-11-19 | 深圳市中兴移动通信有限公司 | 拍摄方法和拍摄装置 |
CN104202521A (zh) * | 2014-08-28 | 2014-12-10 | 深圳市中兴移动通信有限公司 | 拍摄方法及拍摄装置 |
Family Cites Families (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US6658163B1 (en) * | 1998-04-02 | 2003-12-02 | Fuji Photo Film Co., Ltd. | Image processing method |
US9307212B2 (en) * | 2007-03-05 | 2016-04-05 | Fotonation Limited | Tone mapping for low-light video frame enhancement |
CN103595925A (zh) * | 2013-11-15 | 2014-02-19 | 深圳市中兴移动通信有限公司 | 照片合成视频的方法和装置 |
CN103888683B (zh) * | 2014-03-24 | 2015-05-27 | 深圳市中兴移动通信有限公司 | 移动终端及其拍摄方法 |
-
2014
- 2014-08-28 CN CN201410431341.4A patent/CN104159040B/zh active Active
-
2015
- 2015-07-10 US US15/505,964 patent/US10171753B2/en active Active
- 2015-07-10 WO PCT/CN2015/083728 patent/WO2016029746A1/zh active Application Filing
Patent Citations (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP2005323411A (ja) * | 2005-08-08 | 2005-11-17 | Casio Comput Co Ltd | 画像合成装置 |
CN103124325A (zh) * | 2011-11-18 | 2013-05-29 | 索尼公司 | 图像处理装置、图像处理方法和记录介质 |
CN102831439A (zh) * | 2012-08-15 | 2012-12-19 | 深圳先进技术研究院 | 手势跟踪方法及系统 |
CN103973984A (zh) * | 2014-05-29 | 2014-08-06 | 深圳市中兴移动通信有限公司 | 拍摄方法和装置 |
CN104159040A (zh) * | 2014-08-28 | 2014-11-19 | 深圳市中兴移动通信有限公司 | 拍摄方法和拍摄装置 |
CN104202521A (zh) * | 2014-08-28 | 2014-12-10 | 深圳市中兴移动通信有限公司 | 拍摄方法及拍摄装置 |
Also Published As
Publication number | Publication date |
---|---|
CN104159040B (zh) | 2019-07-05 |
US10171753B2 (en) | 2019-01-01 |
US20170280064A1 (en) | 2017-09-28 |
CN104159040A (zh) | 2014-11-19 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
WO2016029746A1 (zh) | 拍摄方法、拍摄装置及计算机存储介质 | |
US10419661B2 (en) | Shooting method and shooting device | |
WO2016023406A1 (zh) | 物体运动轨迹的拍摄方法、移动终端和计算机存储介质 | |
WO2016000515A1 (zh) | 拍摄星轨视频的方法、装置和计算机存储介质 | |
KR101913837B1 (ko) | 파노라마 영상 생성 방법 및 이를 적용한 영상기기 | |
JP5623915B2 (ja) | 撮像装置 | |
WO2016008359A1 (zh) | 物体运动轨迹图像的合成方法、装置及计算机存储介质 | |
US10129488B2 (en) | Method for shooting light-painting video, mobile terminal and computer storage medium | |
CN108259757A (zh) | 摄像装置、图像处理装置、以及记录方法 | |
WO2016004819A1 (zh) | 一种拍摄方法、拍摄装置和计算机存储介质 | |
JP6011569B2 (ja) | 撮像装置、被写体追尾方法及びプログラム | |
JP2012222495A (ja) | 画像処理装置、画像処理方法、及びプログラム | |
WO2016000514A1 (zh) | 拍摄星云视频的方法和装置和计算机存储介质 | |
US8654204B2 (en) | Digtal photographing apparatus and method of controlling the same | |
JP2006339784A (ja) | 撮像装置、画像処理方法及びプログラム | |
US8189055B2 (en) | Digital photographing apparatus and method of controlling the same | |
US8208042B2 (en) | Method of controlling digital photographing apparatus, digital photographing apparatus, and medium having recorded thereon a program for executing the method | |
JP2011217275A (ja) | 電子機器 | |
JP2008042382A (ja) | 撮像装置および表示方法、並びにプログラム | |
JP2011239267A (ja) | 撮像装置及び画像処理装置 | |
JP2010141609A (ja) | 撮像装置 | |
US9866796B2 (en) | Imaging apparatus, imaging method, and computer-readable recording medium | |
WO2016019786A1 (zh) | 物体运动轨迹拍摄方法、系统及计算机存储介质 | |
CN105357447B (zh) | 图片处理方法及装置 | |
JP2009253925A (ja) | 撮像装置及び撮像方法と、撮影制御プログラム |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
121 | Ep: the epo has been informed by wipo that ep was designated in this application |
Ref document number: 15836139 Country of ref document: EP Kind code of ref document: A1 |
|
WWE | Wipo information: entry into national phase |
Ref document number: 15505964 Country of ref document: US |
|
NENP | Non-entry into the national phase |
Ref country code: DE |
|
32PN | Ep: public notification in the ep bulletin as address of the adressee cannot be established |
Free format text: NOTING OF LOSS OF RIGHTS PURSUANT TO RULE 112(1) EPC (EPO FORM 1205N DATED 20/07/2017) |
|
122 | Ep: pct application non-entry in european phase |
Ref document number: 15836139 Country of ref document: EP Kind code of ref document: A1 |