WO2017080348A2 - A scene-based photographing apparatus, method, and computer storage medium - Google Patents
A scene-based photographing apparatus, method, and computer storage medium
- Publication number
- WO2017080348A2 (PCT/CN2016/102555)
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- scene
- feature
- image
- images
- color
- Prior art date
Links
- 238000000034 method Methods 0.000 title claims abstract description 64
- 238000003384 imaging method Methods 0.000 claims abstract description 50
- 238000000605 extraction Methods 0.000 claims abstract description 14
- 238000012545 processing Methods 0.000 claims description 17
- 230000008859 change Effects 0.000 claims description 16
- 238000009499 grossing Methods 0.000 claims description 16
- 238000007781 pre-processing Methods 0.000 claims description 7
- 230000006870 function Effects 0.000 description 21
- 238000004364 calculation method Methods 0.000 description 10
- 230000035945 sensitivity Effects 0.000 description 10
- 238000010586 diagram Methods 0.000 description 8
- 238000006243 chemical reaction Methods 0.000 description 7
- 239000003086 colorant Substances 0.000 description 6
- 238000013507 mapping Methods 0.000 description 6
- 230000006835 compression Effects 0.000 description 5
- 238000007906 compression Methods 0.000 description 5
- 230000006837 decompression Effects 0.000 description 5
- 238000001514 detection method Methods 0.000 description 4
- 230000003287 optical effect Effects 0.000 description 4
- 230000008569 process Effects 0.000 description 4
- 238000004590 computer program Methods 0.000 description 3
- 238000011156 evaluation Methods 0.000 description 3
- 238000013209 evaluation strategy Methods 0.000 description 3
- 230000006872 improvement Effects 0.000 description 2
- 239000004973 liquid crystal related substance Substances 0.000 description 2
- 239000011159 matrix material Substances 0.000 description 2
- 238000009825 accumulation Methods 0.000 description 1
- 230000005540 biological transmission Effects 0.000 description 1
- 239000003990 capacitor Substances 0.000 description 1
- 238000005516 engineering process Methods 0.000 description 1
- 238000012986 modification Methods 0.000 description 1
- 230000004048 modification Effects 0.000 description 1
- 238000007493 shaping process Methods 0.000 description 1
- 230000011664 signaling Effects 0.000 description 1
- 238000006467 substitution reaction Methods 0.000 description 1
- 230000001360 synchronised effect Effects 0.000 description 1
Images
Classifications
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N23/00—Cameras or camera modules comprising electronic image sensors; Control thereof
- H04N23/80—Camera processing pipelines; Components thereof
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N23/00—Cameras or camera modules comprising electronic image sensors; Control thereof
- H04N23/60—Control of cameras or camera modules
- H04N23/62—Control of parameters via user interfaces
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N23/00—Cameras or camera modules comprising electronic image sensors; Control thereof
- H04N23/60—Control of cameras or camera modules
- H04N23/68—Control of cameras or camera modules for stable pick-up of the scene, e.g. compensating for camera body vibrations
- H04N23/681—Motion detection
- H04N23/6811—Motion detection based on the image signal
Definitions
- the present invention relates to camera technology, and in particular, to a scene-based photographing apparatus, method, and computer storage medium.
- the photographing function on the terminal gradually becomes specialized and diversified, and has become a commonly used function in daily life.
- most terminal camera functions offer two photographing modes: a professional photographing mode and an entertainment photographing mode.
- the professional photographing mode allows the user to adjust the parameters of the camera to meet the needs of the scene.
- the process requires the user to have relevant photographic knowledge, which is difficult for ordinary users to master.
- the entertainment photo mode cannot dynamically adjust the parameters of the image acquisition unit as the scene changes, resulting in poor quality of the captured image and poor user experience.
- an embodiment of the present invention provides a scene-based photographing apparatus, method, and computer storage medium.
- An image acquisition unit configured to collect at least two images of the scene
- a feature extraction unit configured to extract two or more image features from the at least two images respectively
- a determining unit configured to determine an image feature with the highest priority from the two or more image features respectively extracted, and determine a corresponding imaging parameter of the scene according to the image feature with the highest priority
- the image acquisition unit is further configured to take a photo of the target object in the scene based on the imaging parameter.
- the image collecting unit is further configured to collect at least two consecutive images of the scene.
- the two or more image features respectively extracted include a brightness feature, a color feature, and a motion feature.
- the feature extraction unit is further configured to: perform a color space conversion on the two images; calculate the average brightness of each of the two images and average the two results to obtain the final brightness feature, which is the brightness feature of the current scene; calculate the color distribution information of the two images to obtain the color feature of the current scene; and calculate the motion characteristics of the two consecutive images by the frame difference method and average them to obtain the motion feature of the current scene.
- the feature extraction unit is further configured to perform statistics on color distribution information by using a histogram, and calculate a probability distribution of each color component in the two images to obtain a color feature of the current scene.
- the determining unit includes:
- a first determining subunit configured to determine that the brightness feature is the image feature with the highest priority when the brightness feature is less than or equal to the first threshold
- a second determining subunit configured to determine that the color feature is the image feature with the highest priority when the brightness feature is greater than the first threshold and the color feature is greater than or equal to a second threshold;
- a third determining subunit configured to: when the brightness feature is greater than the first threshold, the color feature is less than the second threshold, and the motion feature is less than or equal to a third threshold, determine that the motion feature is the image feature with the highest priority.
- the determining unit includes: a fourth determining subunit configured to set the photographing mode of the scene to the default scene photographing mode when the brightness feature is greater than the first threshold, the color feature is less than the second threshold, and the motion feature is greater than the third threshold.
- the device further includes:
- a pre-processing unit configured to perform smoothing processing on the at least two images respectively.
- the pre-processing unit is further configured to filter the image data of the at least two images by using a Gaussian kernel.
- the collecting at least two images of the scene includes:
- the two or more image features respectively extracted include a brightness feature, a color feature, and a motion feature.
- the extracting two or more image features from the at least two images respectively includes:
- the two images are subjected to a color space conversion to the Lab color model
- the motion characteristics of two consecutive images are calculated by the frame difference method, and the average motion feature of the current scene is obtained.
- the calculating color distribution information of the two images includes:
- a histogram is used to calculate the color distribution information; the probability distribution of each color component in the two images is calculated to obtain the color feature of the current scene.
- the image feature having the highest priority is determined from the two or more image features respectively extracted, including:
- the method further includes:
- before the extracting of two or more image features from the at least two images, the method further includes:
- the at least two images are separately smoothed.
- the smoothing processing on the at least two images respectively includes:
- the image data of the at least two images is separately filtered by using a Gaussian kernel.
- the determining the corresponding imaging parameter of the scene according to the image feature with the highest priority includes:
- the aperture parameter is calculated based on the image feature with the highest priority.
- the computer storage medium provided by the embodiment of the present invention stores a computer program for executing the above-described scene-based photographing method.
- the technical solution of the embodiment of the present invention determines the photographing mode based on the detection of the scene.
- at least two images of the scene are first collected; then, two or more image features are respectively extracted from the at least two images; an image feature with the highest priority is determined from the extracted image features; corresponding imaging parameters of the scene are determined according to the image feature with the highest priority; and the image acquisition unit takes a photo of the target object in the scene based on the imaging parameters.
- an ordinary user can use the scene-based photographing device of the embodiment of the present invention to adjust the imaging parameters of the image acquisition unit as the scene changes and capture a high-quality image; in addition, this avoids the problem in the entertainment photographing mode that the photographing parameters cannot be corrected automatically in different scenes.
- the scene-based photographing method of the embodiment of the invention can help the user optimize the camera parameters in real time according to the change of the scene, so that the user's photographing experience is more humanized and intelligent.
- FIG. 1 is a schematic flowchart of a scene-based photographing method according to Embodiment 1 of the present invention.
- FIG. 2 is a schematic flowchart of a scene-based photographing method according to Embodiment 2 of the present invention.
- FIG. 3 is a schematic flowchart of a scene-based photographing method according to Embodiment 3 of the present invention.
- FIG. 4 is a schematic structural diagram of a scene-based photographing apparatus according to Embodiment 1 of the present invention.
- FIG. 5 is a schematic structural diagram of a scene-based photographing apparatus according to Embodiment 2 of the present invention.
- FIG. 6 is a schematic structural diagram of a scene-based photographing apparatus according to Embodiment 3 of the present invention.
- Fig. 7 is a block diagram showing main electrical configurations of a scene-based photographing apparatus according to an embodiment of the present invention.
- FIG. 1 is a schematic flowchart of a scene-based photographing method according to a first embodiment of the present invention.
- the scene-based photographing method in the present example is applied to a scene-based photographing apparatus.
- the scene-based photographing method includes the following steps:
- Step 101 Collect at least two images of the scene.
- the scene-based photographing device is disposed in the terminal, and the terminal may be any form of terminal, such as a mobile phone, a tablet computer, or the like.
- the scene-based photographing device has an image capturing unit.
- the image capturing unit is a camera.
- An image of the scene can be acquired by the image acquisition unit, where the scene refers to the environment in which the image acquisition unit of the photographing device is located.
- at least two images need to be acquired because averaging over several images yields a more representative image feature.
- at least two images of the collected scene are continuous images, and these consecutive images represent the scene currently being prepared for shooting.
- Step 102 Extract two or more image features from the at least two images.
- the at least two images need to be smoothed first; the smoothing process mainly removes noise from the images to improve the accuracy of the subsequent image feature calculation.
- the image data is filtered with a Gaussian kernel; the filtered image is smoother, with fewer noise points, thereby achieving the smoothing processing.
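The Gaussian filtering described above can be sketched as follows. This is a minimal numpy-only illustration; the kernel size and sigma are illustrative choices, not values disclosed in the patent:

```python
import numpy as np

def gaussian_kernel(size=5, sigma=1.0):
    """Build a normalized 1-D Gaussian kernel."""
    x = np.arange(size) - size // 2
    k = np.exp(-x**2 / (2 * sigma**2))
    return k / k.sum()

def smooth(image, sigma=1.0):
    """Separable Gaussian filtering of a 2-D grayscale image."""
    k = gaussian_kernel(sigma=sigma)
    pad = len(k) // 2
    padded = np.pad(image, pad, mode='edge')
    # filter along rows, then along columns (separability of the Gaussian)
    rows = np.apply_along_axis(lambda r: np.convolve(r, k, mode='valid'), 1, padded)
    return np.apply_along_axis(lambda c: np.convolve(c, k, mode='valid'), 0, rows)
```

In practice a library routine such as `scipy.ndimage.gaussian_filter` would serve the same purpose; the separable form above is shown only to make the operation explicit.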
- two or more image features are respectively extracted from the at least two images.
- the following image features are separately extracted from the at least two images: a brightness feature, a color feature, and a motion feature.
- the two images are subjected to a color space conversion to the Lab color model; the average brightness of each of the two images is calculated separately, and the two averages are averaged again to obtain the final brightness feature, which is the brightness feature of the current scene.
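The brightness feature computation can be sketched as below. The patent converts the images to the Lab color model; as a simplifying assumption, this sketch approximates the L channel with Rec. 601 luma weights on RGB data:

```python
import numpy as np

def brightness_feature(img1, img2):
    """Average the per-image mean luminance of two RGB frames.

    Assumption: a full RGB->Lab conversion is replaced here by the
    Rec. 601 luma approximation of the lightness channel.
    """
    def mean_luma(img):
        r, g, b = img[..., 0], img[..., 1], img[..., 2]
        return float(np.mean(0.299 * r + 0.587 * g + 0.114 * b))
    # average of the two per-image averages, as described in the text
    return (mean_luma(img1) + mean_luma(img2)) / 2.0
```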
- the color distribution information of the two images is calculated.
- a histogram is used to collect the color distribution information; the probability distribution of each color component in the two images is calculated to obtain the color feature of the current scene.
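The histogram-based color feature might be computed as in this sketch; the bin count is an illustrative assumption, and the result is one probability distribution per color channel:

```python
import numpy as np

def color_feature(img1, img2, bins=16):
    """Per-channel probability distribution of color values over two frames."""
    hists = []
    for ch in range(3):
        vals = np.concatenate([img1[..., ch].ravel(), img2[..., ch].ravel()])
        h, _ = np.histogram(vals, bins=bins, range=(0, 256))
        hists.append(h / h.sum())   # normalize counts to probabilities
    return np.stack(hists)          # shape (3, bins); each row sums to 1
```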
- the motion characteristics of the two adjacent (i.e., consecutive) images are calculated by the frame difference method, and the average motion feature of the current scene is obtained.
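The frame-difference motion measure can be sketched as a mean absolute difference between consecutive grayscale frames; the exact normalization used in the patent is not disclosed, so this is one plausible form:

```python
import numpy as np

def motion_feature(frame1, frame2):
    """Frame-difference motion measure: mean absolute change between
    two consecutive grayscale frames."""
    diff = np.abs(frame1.astype(np.float64) - frame2.astype(np.float64))
    return float(diff.mean())
```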
- the color feature is a global feature that describes the surface properties of the scene corresponding to the image or image region.
- a general color feature is based on pixel characteristics, to which all pixels belonging to the image or image region contribute.
- a commonly used color feature extraction method is the histogram, which can simply describe the global distribution of colors in an image, that is, the proportion of different colors in the whole image; it is especially suitable for describing images that are difficult to segment automatically and in which the spatial location of objects need not be considered.
- Step 103 Determine an image feature with the highest priority from the two or more image features respectively extracted.
- the highest priority evaluation of the image features is performed according to the image features obtained in step 102.
- the embodiment of the present invention adopts different evaluation strategies:
- when the luminance feature is less than or equal to the first threshold (which may be, for example, 50% of a set value), the user is considered to be in a dark region, and the luminance feature takes priority over the other two image features.
- otherwise, the photographing mode of the scene is set to the default scene photographing mode.
- Step 104 Determine corresponding imaging parameters of the scene according to the image feature with the highest priority.
- if the brightness feature has the highest priority, the scene is considered to belong to the night view photographing mode; if the motion feature has the highest priority, the scene is considered to belong to the still photographing mode; if the color feature has the highest priority, the scene is considered to belong to the outdoor landscape photographing mode.
- in the night scene photographing mode, it is necessary to reacquire an image of the current scene, recalculate the luma feature of the scene, and then calculate a sensitivity (ISO) suitable for the current scene by Gaussian interpolation, for example setting the sensitivity to ISO 150; finally, the brightness feature is mapped to the aperture parameter according to an exponential function, for example yielding an aperture of f/20 after calculation.
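The two mappings just described (Gaussian interpolation of the ISO, exponential mapping of brightness to aperture) could be sketched as follows. The patent does not disclose the actual calibration samples or mapping constants, so `SAMPLES`, `sigma`, `f_min`, `f_max`, and `scale` below are purely hypothetical:

```python
import math

# Hypothetical calibration: known (brightness, iso) sample points.
SAMPLES = [(20.0, 800), (50.0, 400), (100.0, 200), (200.0, 100)]

def interpolate_iso(brightness, sigma=40.0):
    """Gaussian-weighted interpolation of ISO from calibration samples."""
    weights = [math.exp(-((brightness - b) ** 2) / (2 * sigma ** 2))
               for b, _ in SAMPLES]
    total = sum(weights)
    return sum(w * iso for w, (_, iso) in zip(weights, SAMPLES)) / total

def aperture_from_brightness(brightness, f_min=2.0, f_max=22.0, scale=100.0):
    """Exponential mapping of brightness to an f-number, clamped to the lens range."""
    f = f_min * math.exp(brightness / scale)
    return min(max(f, f_min), f_max)
```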
- in the still photographing mode, the sensitivity is set, for example, to ISO 100; finally, the aperture parameter of the current image acquisition unit is calculated from the motion feature according to a mapping function, for example f/5.
- in the outdoor landscape photographing mode, it is necessary to reacquire an image of the current scene, recalculate the color and brightness features of the scene, and then calculate an ISO suitable for the current scene by Gaussian interpolation, for example setting the sensitivity to ISO 100; finally, the current aperture parameter is calculated from the color distribution information according to a mapping function, for example f/6.
- Step 105 Take a photo of the target object in the scene based on the imaging parameter.
- the above imaging parameter setting is only a simple example, and the setting of the specific imaging parameter is dynamically adjusted according to real-time changes of each scene.
- an ordinary user can use the scene-based photographing device of the embodiment of the present invention to adjust the imaging parameters of the image acquisition unit as the scene changes and capture a high-quality image; in addition, this avoids the problem in the entertainment photographing mode that the photographing parameters cannot be corrected automatically in different scenes.
- the scene-based photographing method of the embodiment of the invention can help the user optimize the camera parameters in real time as the scene changes, making the user's photographing experience more humanized and intelligent.
- FIG. 2 is a schematic flowchart of a scene-based photographing method according to a second embodiment of the present invention.
- the scene-based photographing method in the present example is applied to a scene-based photographing apparatus.
- the scene-based photographing method includes the following steps:
- Step 201 Collect at least two images of the scene.
- the scene-based photographing device is disposed in the terminal, and the terminal may be any form of terminal, such as a mobile phone, a tablet computer, or the like.
- the scene-based photographing device has an image capturing unit.
- the image capturing unit is a camera.
- An image of the scene can be acquired by the image acquisition unit, where the scene refers to the environment in which the image acquisition unit of the photographing device is located.
- at least two images need to be acquired because averaging over several images yields a more representative image feature.
- at least two images of the collected scene are continuous images, and these consecutive images represent the scene currently being prepared for shooting.
- Step 202 Smooth the at least two images respectively.
- the at least two images need to be smoothed first; the smoothing process mainly removes noise from the images to improve the accuracy of the subsequent image feature calculation.
- the image data is filtered with a Gaussian kernel; the filtered image is smoother, with fewer noise points, thereby achieving the smoothing processing.
- Step 203 Extract two or more image features from the at least two images.
- two or more image features are respectively extracted from the at least two images.
- the following image features are separately extracted from the at least two images: a brightness feature, a color feature, and a motion feature.
- the two images are subjected to a color space conversion to the Lab color model; the average brightness of each of the two images is calculated separately, and the two averages are averaged again to obtain the final brightness feature, which is the brightness feature of the current scene.
- in the embodiment of the present invention, a histogram is used to collect the color distribution information, and the probability distribution of each color component in the two images is calculated to obtain the color feature of the current scene.
- the motion characteristics of the two adjacent (i.e., consecutive) images are calculated by the frame difference method, and the average motion feature of the current scene is obtained.
- the color feature is a global feature that describes the surface properties of the scene corresponding to the image or image region.
- a general color feature is based on pixel characteristics, to which all pixels belonging to the image or image region contribute.
- a commonly used color feature extraction method is the histogram, which can simply describe the global distribution of colors in an image, that is, the proportion of different colors in the whole image; it is especially suitable for describing images that are difficult to segment automatically and in which the spatial location of objects need not be considered.
- Step 204 Determine an image feature with the highest priority from the two or more image features respectively extracted.
- the highest priority evaluation of the image features is performed according to the image features obtained in step 203.
- the embodiment of the present invention adopts different evaluation strategies:
- when the luminance feature is less than or equal to the first threshold (which may be, for example, 50% of a set value), the user is considered to be in a dark region, and the luminance feature takes priority over the other two image features.
- otherwise, the photographing mode of the scene is set to the default scene photographing mode.
- Step 205 Determine corresponding imaging parameters of the scene according to the image feature with the highest priority.
- if the brightness feature has the highest priority, the scene is considered to belong to the night view photographing mode; if the motion feature has the highest priority, the scene is considered to belong to the still photographing mode; if the color feature has the highest priority, the scene is considered to belong to the outdoor landscape photographing mode.
- in the night scene photographing mode, it is necessary to reacquire an image of the current scene, recalculate the luma feature of the scene, and then calculate an ISO suitable for the current scene by Gaussian interpolation, for example setting the sensitivity to ISO 150; finally, the brightness feature is mapped to the aperture parameter according to an exponential function, for example yielding an aperture of f/20 after calculation.
- in the still photographing mode, the sensitivity is set, for example, to ISO 200; finally, the aperture parameter of the current image acquisition unit is calculated from the motion feature according to a mapping function, for example f/5.
- in the outdoor landscape photographing mode, it is necessary to reacquire an image of the current scene, recalculate the color and brightness features of the scene, and then calculate an ISO suitable for the current scene by Gaussian interpolation, for example setting the sensitivity to ISO 200; finally, the current aperture parameter is calculated from the color distribution information according to a mapping function, for example f/6.
- Step 206 Take a photo of the target object in the scene based on the imaging parameter.
- the above imaging parameter setting is only a simple example, and the setting of the specific imaging parameter is dynamically adjusted according to real-time changes of each scene.
- an ordinary user can use the scene-based photographing device of the embodiment of the present invention to adjust the imaging parameters of the image acquisition unit as the scene changes and capture a high-quality image; in addition, this avoids the problem in the entertainment photographing mode that the photographing parameters cannot be corrected automatically in different scenes.
- the scene-based photographing method of the embodiment of the invention can help the user optimize the camera parameters in real time as the scene changes, making the user's photographing experience more humanized and intelligent.
- FIG. 3 is a schematic flowchart of a scene-based photographing method according to a third embodiment of the present invention.
- the scene-based photographing method in the present example is applied to a scene-based photographing apparatus.
- the scene-based photographing method includes the following steps:
- Step 301 Collect at least two images of the scene.
- the scene-based photographing device is disposed in the terminal, and the terminal may be any form of terminal, such as a mobile phone, a tablet computer, or the like.
- the scene-based photographing device has an image capturing unit.
- the image capturing unit is a camera.
- An image of the scene can be acquired by the image acquisition unit, where the scene refers to the environment in which the image acquisition unit of the photographing device is located.
- at least two images need to be acquired because averaging over several images yields a more representative image feature.
- at least two images of the collected scene are continuous images, and these consecutive images represent the scene currently being prepared for shooting.
- Step 302 Smooth the at least two images respectively.
- the at least two images need to be smoothed first; the smoothing process mainly removes noise from the images to improve the accuracy of the subsequent image feature calculation.
- the image data is filtered with a Gaussian kernel; the filtered image is smoother, with fewer noise points, thereby achieving the smoothing processing.
- Step 303 Extract the following image features from the at least two images: a brightness feature, a color feature, and a motion feature.
- the two images are subjected to a color space conversion to the Lab color model; the average brightness of each of the two images is calculated separately, and the two averages are averaged again to obtain the final brightness feature, which is the brightness feature of the current scene.
- the color distribution information of the two images is calculated.
- a histogram is used to collect the color distribution information; the probability distribution of each color component in the two images is calculated to obtain the color feature of the current scene.
- the motion characteristics of the two adjacent (i.e., consecutive) images are calculated by the frame difference method, and the average motion feature of the current scene is obtained.
- the color feature is a global feature that describes the surface properties of the scene corresponding to the image or image region.
- a general color feature is based on pixel characteristics, to which all pixels belonging to the image or image region contribute.
- a commonly used color feature extraction method is the histogram, which can simply describe the global distribution of colors in an image, that is, the proportion of different colors in the whole image; it is especially suitable for describing images that are difficult to segment automatically and in which the spatial location of objects need not be considered.
- Step 304 Determine whether the brightness feature is less than or equal to a first threshold; and when the brightness feature is less than or equal to the first threshold, determine that the brightness feature is the image feature with the highest priority.
- the highest priority evaluation of the image features is performed according to the image features obtained in step 303.
- the embodiment of the present invention adopts different evaluation strategies:
- when the brightness feature is less than or equal to the first threshold, the user is considered to be in a dark region, and the brightness feature takes priority over the other two image features.
- Step 305 When the brightness feature is greater than the first threshold, determine whether the color feature is greater than or equal to a second threshold; and when the color feature is greater than or equal to the second threshold, determine that the color feature is a priority The highest image feature.
- when the brightness feature does not have the highest priority, if the color histogram is relatively dispersed and the color feature is greater than or equal to the second threshold, the user is considered to be shooting a landscape, and the color feature takes priority over the other two image features.
- Step 306 When the color feature is smaller than the second threshold, determine whether the motion feature is less than or equal to a third threshold; and when the motion feature is less than or equal to the third threshold, determine that the motion feature is a priority The highest image feature.
- when the motion feature is less than or equal to the third threshold, the user is considered to be shooting a still scene, and the motion feature takes priority over the other two image features.
- Step 307 Set the photographing mode of the scene to a default scene photographing mode when the motion feature is greater than the third threshold.
- in this case, the default scene photographing mode is activated, and the photographing mode of the scene is set to the default scene photographing mode.
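The priority evaluation of steps 304 to 307 can be sketched as a simple decision chain. The threshold values below are illustrative placeholders for the patent's first, second, and third thresholds, which the text does not fix numerically:

```python
def highest_priority_feature(brightness, color, motion,
                             t1=50.0, t2=0.6, t3=5.0):
    """Priority evaluation mirroring steps 304-307.

    t1, t2, t3 are hypothetical stand-ins for the first, second,
    and third thresholds.
    """
    if brightness <= t1:
        return 'brightness'   # dark scene -> night view mode
    if color >= t2:
        return 'color'        # dispersed colors -> outdoor landscape mode
    if motion <= t3:
        return 'motion'       # little motion -> still photographing mode
    return 'default'          # fall back to the default scene mode
```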
- Step 308 Determine a photographing mode of the scene according to the image feature with the highest priority.
- if the brightness feature has the highest priority, the scene is considered to belong to the night view photographing mode; if the motion feature has the highest priority, the scene is considered to belong to the still photographing mode; if the color feature has the highest priority, the scene is considered to belong to the outdoor landscape photographing mode.
- Step 309 Acquire imaging parameters corresponding to the photographing mode of the scene.
- in the night scene photographing mode, it is necessary to reacquire an image of the current scene, recalculate the luma feature of the scene, and then calculate an ISO suitable for the current scene by Gaussian interpolation, for example setting the sensitivity to ISO 150; finally, the brightness feature is mapped to the aperture parameter according to an exponential function, for example yielding an aperture of f/30 after calculation.
- the sensitivity is set to: Iso300, finally calculating the aperture parameter of the current image acquisition unit from the motion feature according to the mapping function, for example, set to f5.
- the outdoor landscape photographing mode it is necessary to re-acquire an image of the current scene, recalculate the color features and brightness features in the scene, and then use Gaussian interpolation to calculate the iso suitable for the current scene, for example, the sensitivity is set to :iso300, finally calculate the current aperture parameter from the color distribution information according to the mapping function, for example, set to f6 and so on.
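The parameter calculations above can be illustrated with a small sketch. The patent names Gaussian interpolation for the ISO and an exponential mapping for the aperture but gives no formulas, so the curves, value ranges, and function names below are assumptions for demonstration only.

```python
import math

# Illustrative sketch (hypothetical formulas): mapping a normalized
# brightness feature in [0, 1] to a sensitivity and an aperture value.

def iso_from_brightness(brightness, iso_min=100, iso_max=1600, sigma=0.35):
    """Darker scenes get a higher ISO via a Gaussian-shaped weight."""
    weight = math.exp(-(brightness ** 2) / (2 * sigma ** 2))
    return round(iso_min + (iso_max - iso_min) * weight)

def aperture_from_brightness(brightness, f_min=2.0, f_max=30.0, k=3.0):
    """Map brightness to an f-number along a normalized exponential curve."""
    ratio = (math.exp(k * brightness) - 1) / (math.exp(k) - 1)
    return round(f_min + (f_max - f_min) * ratio, 1)
```

The same pattern would apply to the still and landscape modes, substituting the motion feature or the color distribution information as the input to the mapping function.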
- Step 310: Adjust the image acquisition unit according to the imaging parameters, and use the image acquisition unit to photograph the target object in the scene based on the adjusted imaging parameters.
- The above imaging parameter settings are only simple examples; the specific imaging parameter settings are adjusted dynamically according to real-time changes of each scene.
- In this way, an ordinary user can use the scene-based photographing apparatus of the embodiment of the present invention to adjust the imaging parameters of the image acquisition unit as the scene changes and capture high-quality images; in addition, the problem that the entertainment photographing mode cannot automatically correct imaging parameters in different scenes is avoided.
- The scene-based photographing method of the embodiment of the present invention can help the user optimize imaging parameters in real time according to changes of the scene, making the user's photographing experience more user-friendly and intelligent.
- FIG. 4 is a schematic structural diagram of a scene-based photographing apparatus according to Embodiment 1 of the present invention. As shown in FIG. 4, the apparatus includes:
- the image collecting unit 41 is configured to collect at least two images of the scene
- the feature extraction unit 42 is configured to extract two or more image features from the at least two images respectively;
- the determining unit 43 is configured to determine an image feature with the highest priority from the two or more image features respectively extracted; and determine a corresponding imaging parameter of the scene according to the image feature with the highest priority;
- the image collection unit 41 is further configured to take a photo of the target object in the scene based on the imaging parameter.
- each unit in the scene-based photographing apparatus shown in FIG. 4 can be understood by referring to the related description of the foregoing scene-based photographing method.
- the functions of each unit in the scene-based photographing apparatus shown in FIG. 4 can be realized by a program running on a processor, or can be realized by a specific logic circuit.
- FIG. 5 is a schematic structural diagram of a scene-based photographing apparatus according to Embodiment 2 of the present invention. As shown in FIG. 5, the apparatus includes:
- the image collecting unit 51 is configured to collect at least two images of the scene
- the feature extraction unit 52 is configured to extract two or more image features from the at least two images respectively;
- the determining unit 53 is configured to determine an image feature with the highest priority from the two or more image features respectively extracted; and determine a corresponding imaging parameter of the scene according to the image feature with the highest priority;
- the image collecting unit 51 is further configured to take a photo of the target object in the scene based on the imaging parameter.
- the image collecting unit 51 is further configured to collect at least two consecutive images of the scene.
- the feature extraction unit 52 is further configured to: convert the two images into the color space of a color model; calculate the average brightness of each of the two images, and average the two results again to obtain the final brightness feature, which is the brightness feature of the current scene; calculate the color distribution information of the two images to obtain the color feature of the current scene; and calculate the motion of the two consecutive images by the frame-difference method to obtain the average motion feature of the current scene.
- the feature extraction unit 52 is further configured to compute statistics of the color distribution information using a histogram, obtaining the probability distribution of each color component in the two images as the color feature of the current scene.
- the apparatus further includes a pre-processing unit 54 configured to perform smoothing on the at least two images, respectively.
- the pre-processing unit 54 is further configured to filter the image data of the at least two images by using a Gaussian kernel.
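A minimal sketch of the Gaussian-kernel filtering performed by the pre-processing unit might look like the following. The kernel radius and sigma are hypothetical choices, and a real implementation would apply the kernel along both image axes (a separable 2-D filter); here a 1-D signal is used for brevity.

```python
import math

# Illustrative sketch: build a normalized 1-D Gaussian kernel and
# convolve it across a signal, clamping indices at the borders.

def gaussian_kernel(radius=2, sigma=1.0):
    kernel = [math.exp(-(x * x) / (2 * sigma * sigma))
              for x in range(-radius, radius + 1)]
    total = sum(kernel)
    return [k / total for k in kernel]   # normalize so weights sum to 1

def smooth(signal, radius=2, sigma=1.0):
    kernel = gaussian_kernel(radius, sigma)
    out = []
    for i in range(len(signal)):
        acc = 0.0
        for j, w in enumerate(kernel):
            idx = min(max(i + j - radius, 0), len(signal) - 1)  # clamp edges
            acc += w * signal[idx]
        out.append(acc)
    return out
```

Applied to image rows and then columns, this spreads isolated noise spikes over their neighborhood, which is the noise-removal effect the embodiment relies on before feature calculation.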
- the implementation functions of the units in the scene-based photographing apparatus shown in FIG. 5 can be understood by referring to the related description of the aforementioned scene-based photographing method.
- the functions of the units in the scene-based photographing apparatus shown in FIG. 5 can be realized by a program running on a processor, or can be realized by a specific logic circuit.
- FIG. 6 is a schematic structural diagram of a scene-based photographing apparatus according to Embodiment 3 of the present invention. As shown in FIG. 6, the apparatus includes:
- the image collecting unit 61 is configured to collect at least two images of the scene
- the feature extraction unit 62 is configured to extract two or more image features from the at least two images respectively;
- a determining unit 63 configured to determine an image feature with the highest priority from the two or more image features respectively extracted; and determine a corresponding imaging parameter of the scene according to the image feature with the highest priority;
- the image collecting unit 61 is further configured to take a photo of the target object in the scene based on the imaging parameter.
- the apparatus further includes a pre-processing unit 64 configured to perform smoothing processing on the at least two images, respectively.
- the feature extraction unit 62 is further configured to extract the following image features from the at least two images: a brightness feature, a color feature, and a motion feature.
- the determining unit 63 includes:
- the first determining sub-unit 631 is configured to determine that the brightness feature is the image feature with the highest priority when the brightness feature is less than or equal to the first threshold;
- the second determining sub-unit 632 is configured to: when the brightness feature is greater than the first threshold, and the color feature is greater than or equal to a second threshold, determine that the color feature is the image feature with the highest priority;
- a third determining sub-unit 633 configured to determine that the motion feature is the image feature with the highest priority when the brightness feature is greater than the first threshold, the color feature is less than the second threshold, and the motion feature is less than or equal to a third threshold.
- the determining unit 63 further includes: a fourth determining sub-unit 634 configured to set the photographing mode of the scene to a default scene photographing mode when the brightness feature is greater than the first threshold, the color feature is less than the second threshold, and the motion feature is greater than the third threshold.
- the implementation functions of the units in the scene-based photographing apparatus shown in FIG. 6 can be understood by referring to the related description of the aforementioned scene-based photographing method.
- the functions of the units in the scene-based photographing apparatus shown in FIG. 6 can be realized by a program running on a processor, or can be realized by a specific logic circuit.
- FIG. 7 is a block diagram of a main electrical structure of a scene-based photographing apparatus according to an embodiment of the present invention.
- the photographic lens 101 is composed of a plurality of optical lenses for forming a subject image, and is a single focus lens or a zoom lens.
- the photographic lens 101 can be moved in the optical-axis direction by the lens driving unit 111; the focus position of the photographic lens 101 is controlled according to a control signal from the lens drive control unit 112, and in the case of a zoom lens the focal length is also controlled.
- the lens drive control circuit 112 performs drive control of the lens drive unit 111 in accordance with a control command from the microcomputer 107.
- An imaging element 102 is disposed in the vicinity of a position where the subject image is formed by the photographing lens 101 on the optical axis of the photographing lens 101.
- the imaging element 102 functions as an imaging unit that captures a subject image and acquires captured image data.
- Photodiodes constituting the pixels are arranged two-dimensionally in a matrix on the imaging element 102. Each photodiode generates a photoelectric conversion current corresponding to the amount of received light, and the charge of this current is accumulated by a capacitor connected to each photodiode.
- the front surface of each pixel is provided with a Bayer array of RGB color filters.
- the imaging element 102 is connected to an imaging circuit 103, which performs charge accumulation control and image-signal readout control in the imaging element 102, reduces the reset noise of the read-out image signal (an analog image signal), performs waveform shaping, and further applies gain adjustment and the like to obtain an appropriate signal level.
- the imaging circuit 103 is connected to the A/D conversion unit 104, which performs analog-to-digital conversion on the analog image signal, and outputs a digital image signal (hereinafter referred to as image data) to the bus 199.
- the bus 199 is a transmission path for transmitting various data read or generated inside the photographing apparatus.
- Connected to the bus 199 are the A/D conversion unit 104, as well as an image processor 105, a JPEG processor 106, a microcomputer 107, an SDRAM (Synchronous DRAM) 108, a memory interface (memory I/F) 109, and an LCD (Liquid Crystal Display) driver 110.
- the image processor 105 performs various kinds of image processing on the image data output from the imaging element 102, such as OB subtraction, white balance adjustment, color matrix calculation, gamma conversion, color difference signal processing, noise removal, synchronization (demosaicing) processing, and edge processing.
- When recording image data on the recording medium 115, the JPEG processor 106 compresses the image data read out from the SDRAM 108 in accordance with the JPEG compression method. In addition, the JPEG processor 106 decompresses JPEG image data for image playback display: the file recorded on the recording medium 115 is read out and decompressed in the JPEG processor 106, and the decompressed image data is temporarily stored in the SDRAM 108 and displayed on the LCD 116. In the present embodiment, the JPEG method is adopted as the image compression/decompression method; however, the method is not limited thereto, and other compression/decompression methods such as MPEG, TIFF, and H.264 may of course be used.
- the microcomputer 107 functions as a control unit of the entire imaging device, and collectively controls various processing sequences of the imaging device.
- the microcomputer 107 is connected to the operation unit 113 and the flash memory 114.
- the operation unit 113 includes, but is not limited to, physical or virtual buttons; these may be operation members such as a power button, a photographing key, an edit key, a movie button, a playback button, a menu button, a cross key, an OK button, a delete button, a magnify button, and various other input buttons and input keys. The operation unit 113 detects the operation states of these operation members and outputs the detection results to the microcomputer 107.
- In addition, a touch panel is provided on the front surface of the LCD 116 serving as the display portion; the user's touch position is detected and output to the microcomputer 107. The microcomputer 107 executes the various processing sequences corresponding to the user's operation based on the detection results of the operation members from the operation unit 113. (Similarly, this may be changed so that the microcomputer 107 executes the various processing sequences corresponding to the user's operation based on the detection results of the touch panel on the front of the LCD 116.)
- the flash memory 114 stores programs for executing various processing sequences of the microcomputer 107.
- the microcomputer 107 performs overall control of the imaging device in accordance with the program. Further, the flash memory 114 stores various adjustment values of the imaging device, and the microcomputer 107 reads the adjustment value, and controls the imaging device in accordance with the adjustment value.
- the SDRAM 108 is an electrically rewritable volatile memory for temporarily storing image data or the like.
- the SDRAM 108 temporarily stores image data output from the A/D conversion unit 104 and image data processed in the image processor 105, the JPEG processor 106, and the like.
- the memory interface 109 is connected to the recording medium 115, and performs control for writing image data and a file header attached to the image data to the recording medium 115 and reading from the recording medium 115.
- the recording medium 115 is, for example, a recording medium such as a memory card that can be detachably attached to the main body of the imaging device.
- the recording medium 115 is not limited thereto, and may be a hard disk or the like built in the main body of the imaging device.
- the LCD driver 110 is connected to the LCD 116. Image data processed by the image processor 105 is stored in the SDRAM; when display is required, it is read out and displayed on the LCD 116. Alternatively, image data compressed by the JPEG processor 106 is stored in the SDRAM; when display is required, the JPEG processor 106 reads out the compressed image data from the SDRAM, decompresses it, and the decompressed image data is displayed on the LCD 116.
- the LCD 116 is disposed on the back surface of the main body of the imaging device or the like to perform image display.
- the LCD 116 is provided with a touch panel that detects a user's touch operation.
- In the present embodiment, a liquid-crystal display panel (LCD 116) is used as the display portion; however, the present invention is not limited thereto, and various display panels such as an organic EL panel may be employed.
- If the apparatus of the embodiments of the present invention is implemented in the form of software function modules and sold or used as a standalone product, it may also be stored in a computer-readable storage medium. Based on this understanding, the technical solution of the embodiments of the present invention, in essence or in the part that contributes over the prior art, may be embodied in the form of a software product stored in a storage medium, including several instructions for causing a computer device (which may be a personal computer, a server, a network device, or the like) to execute all or part of the methods described in the various embodiments of the present invention.
- The aforementioned storage medium includes various media that can store program code, such as a USB flash drive, a removable hard disk, a read-only memory (ROM), a magnetic disk, or an optical disk.
- an embodiment of the present invention further provides a computer storage medium, wherein a computer program is stored, and the computer program is used to execute the photographing method of the embodiment of the present invention.
Landscapes
- Engineering & Computer Science (AREA)
- Multimedia (AREA)
- Signal Processing (AREA)
- Human Computer Interaction (AREA)
- Studio Devices (AREA)
- Image Analysis (AREA)
Abstract
The present invention discloses a scene-based photographing apparatus and method and a computer storage medium, including: an image acquisition unit configured to collect at least two images of a scene; a feature extraction unit configured to extract two or more image features from the at least two images respectively; and a determining unit configured to determine, from the two or more image features respectively extracted, the image feature with the highest priority, and to determine corresponding imaging parameters of the scene according to the image feature with the highest priority; the image acquisition unit is further configured to photograph a target object in the scene based on the imaging parameters.
Description
The present invention relates to imaging technology, and in particular to a scene-based photographing apparatus and method and a computer storage medium.
At present, most terminals have a photographing function, which is becoming increasingly professional and diversified and has become a commonly used function in daily life. The photographing functions of most terminals currently on the market offer two photographing modes: a professional mode and an entertainment mode. The professional mode allows the user to adjust the camera parameters to meet the needs of the scene; however, this requires the user to have photographic expertise and is difficult for ordinary users to master. The entertainment mode cannot dynamically adjust the parameters of the image acquisition unit as the scene changes, resulting in poor image quality and a poor user experience.
Summary of the Invention
To solve the above technical problem, embodiments of the present invention provide a scene-based photographing apparatus and method and a computer storage medium.
The scene-based photographing apparatus provided by an embodiment of the present invention includes:
an image acquisition unit configured to collect at least two images of a scene;
a feature extraction unit configured to extract two or more image features from the at least two images respectively;
a determining unit configured to determine, from the two or more image features respectively extracted, the image feature with the highest priority, and to determine corresponding imaging parameters of the scene according to the image feature with the highest priority;
the image acquisition unit being further configured to photograph a target object in the scene based on the imaging parameters.
In an embodiment of the present invention, the image acquisition unit is further configured to collect at least two consecutive images of the scene.
In an embodiment of the present invention, the two or more image features respectively extracted include a brightness feature, a color feature, and a motion feature.
In an embodiment of the present invention, the feature extraction unit is further configured to: convert the two images into the color space of a color model; calculate the average brightness of each of the two images, and average the two results again to obtain the final brightness feature, which is the brightness feature of the current scene; calculate the color distribution information of the two images to obtain the color feature of the current scene; and calculate the motion of the two consecutive images by the frame-difference method to obtain the average motion feature of the current scene.
In an embodiment of the present invention, the feature extraction unit is further configured to compute statistics of the color distribution information using a histogram, obtaining the probability distribution of each color component in the two images as the color feature of the current scene.
In an embodiment of the present invention, the determining unit includes:
a first determining subunit configured to determine that the brightness feature is the image feature with the highest priority when the brightness feature is less than or equal to a first threshold;
a second determining subunit configured to determine that the color feature is the image feature with the highest priority when the brightness feature is greater than the first threshold and the color feature is greater than or equal to a second threshold;
a third determining subunit configured to determine that the motion feature is the image feature with the highest priority when the brightness feature is greater than the first threshold, the color feature is less than the second threshold, and the motion feature is less than or equal to a third threshold.
In an embodiment of the present invention, the determining unit includes: a fourth determining subunit configured to set the photographing mode of the scene to a default scene photographing mode when the brightness feature is greater than the first threshold, the color feature is less than the second threshold, and the motion feature is greater than the third threshold.
In an embodiment of the present invention, the apparatus further includes:
a pre-processing unit configured to smooth the at least two images respectively.
In an embodiment of the present invention, the pre-processing unit is further configured to filter the image data of the at least two images using a Gaussian kernel.
The scene-based photographing method provided by an embodiment of the present invention includes:
collecting at least two images of a scene;
extracting two or more image features from the at least two images respectively;
determining, from the two or more image features respectively extracted, the image feature with the highest priority;
determining corresponding imaging parameters of the scene according to the image feature with the highest priority; and
photographing a target object in the scene based on the imaging parameters.
In an embodiment of the present invention, collecting at least two images of the scene includes:
collecting at least two consecutive images of the scene.
In an embodiment of the present invention, the two or more image features respectively extracted include a brightness feature, a color feature, and a motion feature.
In an embodiment of the present invention, extracting two or more image features from the at least two images respectively includes:
converting the two images into the color space of a color model;
calculating the average brightness of each of the two images, and averaging the two results again to obtain the final brightness feature, which is the brightness feature of the current scene;
calculating the color distribution information of the two images to obtain the color feature of the current scene; and
calculating the motion of the two consecutive images by the frame-difference method to obtain the average motion feature of the current scene.
In an embodiment of the present invention, calculating the color distribution information of the two images includes:
computing statistics of the color distribution information using a histogram, obtaining the probability distribution of each color component in the two images as the color feature of the current scene.
In an embodiment of the present invention, determining the image feature with the highest priority from the two or more image features respectively extracted includes:
when the brightness feature is less than or equal to a first threshold, determining that the brightness feature is the image feature with the highest priority;
when the brightness feature is greater than the first threshold and the color feature is greater than or equal to a second threshold, determining that the color feature is the image feature with the highest priority;
when the brightness feature is greater than the first threshold, the color feature is less than the second threshold, and the motion feature is less than or equal to a third threshold, determining that the motion feature is the image feature with the highest priority.
In an embodiment of the present invention, the method further includes:
when the brightness feature is greater than the first threshold, the color feature is less than the second threshold, and the motion feature is greater than the third threshold, setting the photographing mode of the scene to a default scene photographing mode.
In an embodiment of the present invention, before extracting two or more image features from the at least two images respectively, the method further includes:
smoothing the at least two images respectively.
In an embodiment of the present invention, smoothing the at least two images respectively includes:
filtering the image data of the at least two images using a Gaussian kernel.
In an embodiment of the present invention, determining corresponding imaging parameters of the scene according to the image feature with the highest priority includes:
reacquiring an image of the current scene;
calculating the image feature with the highest priority of the image; and
calculating an aperture parameter according to the image feature with the highest priority.
The computer storage medium provided by an embodiment of the present invention stores a computer program, which is used to execute the above scene-based photographing method.
The technical solution of the embodiments of the present invention determines the photographing mode based on detection of the scene. To this end, at least two images of the scene are first collected; two or more image features are then extracted from the at least two images respectively; the image feature with the highest priority is determined from the two or more image features respectively extracted; the corresponding imaging parameters of the scene are determined according to the image feature with the highest priority; and the image acquisition unit photographs the target object in the scene based on the imaging parameters. In this way, an ordinary user can use the scene-based photographing apparatus of the embodiment of the present invention to adjust the imaging parameters of the image acquisition unit as the scene changes and capture high-quality images; in addition, the problem that the entertainment photographing mode cannot automatically correct imaging parameters in different scenes is avoided. The scene-based photographing method of the embodiment of the present invention can help the user optimize imaging parameters in real time according to changes of the scene, making the user's photographing experience more user-friendly and intelligent.
FIG. 1 is a schematic flowchart of a scene-based photographing method according to Embodiment 1 of the present invention;
FIG. 2 is a schematic flowchart of a scene-based photographing method according to Embodiment 2 of the present invention;
FIG. 3 is a schematic flowchart of a scene-based photographing method according to Embodiment 3 of the present invention;
FIG. 4 is a schematic structural diagram of a scene-based photographing apparatus according to Embodiment 1 of the present invention;
FIG. 5 is a schematic structural diagram of a scene-based photographing apparatus according to Embodiment 2 of the present invention;
FIG. 6 is a schematic structural diagram of a scene-based photographing apparatus according to Embodiment 3 of the present invention;
FIG. 7 is a block diagram showing the main electrical structure of a scene-based photographing apparatus according to an embodiment of the present invention.
To enable a more detailed understanding of the features and technical content of the embodiments of the present invention, their implementation is described in detail below with reference to the accompanying drawings, which are provided for reference and illustration only and are not intended to limit the embodiments of the present invention.
FIG. 1 is a schematic flowchart of a scene-based photographing method according to Embodiment 1 of the present invention. The scene-based photographing method in this example is applied to a scene-based photographing apparatus. As shown in FIG. 1, the method includes the following steps:
Step 101: Collect at least two images of a scene.
In an embodiment of the present invention, the scene-based photographing apparatus is provided in a terminal, which may be a terminal of any form, such as a mobile phone or a tablet computer. The scene-based photographing apparatus has an image acquisition unit; in a specific implementation, the image acquisition unit is a camera.
Images of the scene can be collected by the image acquisition unit; here, the scene refers to the environment in the shooting area of the image acquisition unit. When collecting images of the scene, at least two images need to be collected, because this yields better-averaged image features. In a specific implementation, the at least two images of the scene are consecutive images, all of which represent the scene currently about to be photographed.
Step 102: Extract two or more image features from the at least two images respectively.
In an embodiment of the present invention, the at least two images need to be smoothed first; smoothing mainly removes noise from the images and improves the accuracy of subsequent image-feature calculation. Specifically, the image data is filtered with a Gaussian kernel; the filtered images are smooth, with fewer noise points, thereby achieving the smoothing.
In an embodiment of the present invention, after the at least two images are smoothed, two or more image features are extracted from them respectively.
For example, the following image features are extracted from the at least two images respectively: a brightness feature, a color feature, and a motion feature. Taking two images as an example, the image features are calculated on the two smoothed images. First, the two images are converted into the color space of the color model (Lab); the average brightness of each of the two images is calculated, and the two averages are averaged again to obtain the final brightness feature, which is the brightness feature of the current scene. Then the color distribution information of the two images is calculated; this embodiment of the present invention computes statistics of the color distribution information using a histogram, obtaining the probability distribution of each color component in the two images as the color feature of the current scene. Finally, the frame-difference method is applied to the two adjacent (that is, consecutive) images to calculate their motion, and the average motion feature of the current scene is obtained.
In an embodiment of the present invention, the color feature is a global feature that describes the surface properties of the scene corresponding to an image or an image region. Color features are generally pixel-based, so every pixel belonging to the image or image region makes its own contribution. A commonly used color-feature extraction method is the histogram, which can simply describe the global distribution of colors in an image, that is, the proportion of each color in the whole image; it is particularly suitable for describing images that are difficult to segment automatically and images for which the spatial position of objects need not be considered.
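The brightness, color, and motion features described above can be sketched as follows. This is an illustrative sketch with hypothetical helper names, operating on single-channel frames for simplicity rather than the Lab channels used in the embodiment; each frame is a list of rows of pixel values in [0, 255].

```python
# Illustrative sketch (hypothetical helpers): the three scene features
# computed from two consecutive single-channel frames.

def brightness_feature(frame_a, frame_b):
    """Average the per-frame mean brightness, then average the two means."""
    def mean(frame):
        pixels = [p for row in frame for p in row]
        return sum(pixels) / len(pixels)
    return (mean(frame_a) + mean(frame_b)) / 2

def color_feature(frame, bins=8):
    """Histogram of one component: probability of each bin across the frame."""
    pixels = [p for row in frame for p in row]
    hist = [0] * bins
    for p in pixels:
        hist[min(p * bins // 256, bins - 1)] += 1
    return [h / len(pixels) for h in hist]

def motion_feature(frame_a, frame_b):
    """Frame-difference method: mean absolute difference between frames."""
    diffs = [abs(a - b)
             for ra, rb in zip(frame_a, frame_b)
             for a, b in zip(ra, rb)]
    return sum(diffs) / len(diffs)
```

A dispersed histogram from `color_feature` corresponds to the "relatively dispersed" color distribution the embodiment associates with landscape scenes, and a small `motion_feature` value corresponds to still shooting.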
Step 103: Determine, from the two or more image features respectively extracted, the image feature with the highest priority.
In an embodiment of the present invention, the highest-priority evaluation of the image features is performed on the image features obtained in step 102; this embodiment of the present invention uses a different evaluation strategy for the highest priority of each image feature:
(1) For the brightness feature: if the brightness feature is less than or equal to a first threshold (which may be 50% of some set value), the user is considered to be in a dim-light area, and the intensity information of the brightness feature is higher than that of the other two image features.
(2) When the intensity information of the brightness feature is not the highest: if the color histogram is relatively dispersed and the color feature is greater than or equal to a second threshold, the user is shooting a landscape, and the intensity information of the color feature is higher than that of the other two image features.
(3) When neither the brightness feature nor the color feature has the highest intensity information: if the motion feature is less than or equal to the third threshold, the user is considered to be shooting a still scene, and the intensity information of the motion feature is higher than that of the other two image features.
When none of the brightness, color, and motion features has the highest priority, the default scene photographing mode is activated, and the photographing mode of the scene is set to the default scene photographing mode.
Step 104: Determine corresponding imaging parameters of the scene according to the image feature with the highest priority.
Specifically, if the brightness feature has the highest priority, the scene is considered to belong to the night-scene photographing mode; if the motion feature has the highest priority, the scene is considered to belong to the still photographing mode; if the color feature has the highest priority, the scene is considered to belong to the outdoor-landscape photographing mode.
Specifically, for the night-scene photographing mode, it is also necessary to reacquire an image of the current scene and recalculate the brightness feature of the scene, then use Gaussian interpolation to calculate a sensitivity (ISO) suitable for the current scene, for example, ISO 150 after calculation; finally, the brightness feature is mapped to an aperture parameter according to an exponential function, for example, F20 after calculation.
For the still photographing mode, it is necessary to reacquire an image of the current scene and recalculate the brightness and motion features of the scene, then use Gaussian interpolation to calculate an ISO suitable for the current scene, for example, ISO 100 after calculation; finally, the aperture parameter of the current image acquisition unit is calculated from the motion feature according to a mapping function, for example, F5.
For the outdoor-landscape photographing mode, it is necessary to reacquire an image of the current scene and recalculate the color and brightness features of the scene, then use Gaussian interpolation to calculate an ISO suitable for the current scene, for example, ISO 100 after calculation; finally, the current aperture parameter is calculated from the color distribution information according to a mapping function, for example, F6, and so on.
Step 105: Photograph the target object in the scene based on the imaging parameters.
In an embodiment of the present invention, the above imaging parameter settings are only simple examples; the specific imaging parameter settings are adjusted dynamically according to real-time changes of each scene. In this way, an ordinary user can use the scene-based photographing apparatus of the embodiment of the present invention to adjust the imaging parameters of the image acquisition unit as the scene changes and capture high-quality images; in addition, the problem that the entertainment photographing mode cannot automatically correct imaging parameters in different scenes is avoided. The scene-based photographing method of the embodiment of the present invention can help the user optimize imaging parameters in real time according to changes of the scene, making the user's photographing experience more user-friendly and intelligent.
FIG. 2 is a schematic flowchart of a scene-based photographing method according to Embodiment 2 of the present invention. The scene-based photographing method in this example is applied to a scene-based photographing apparatus. As shown in FIG. 2, the method includes the following steps:
Step 201: Collect at least two images of a scene.
In an embodiment of the present invention, the scene-based photographing apparatus is provided in a terminal, which may be a terminal of any form, such as a mobile phone or a tablet computer. The scene-based photographing apparatus has an image acquisition unit; in a specific implementation, the image acquisition unit is a camera.
Images of the scene can be collected by the image acquisition unit; here, the scene refers to the environment in the shooting area of the image acquisition unit. When collecting images of the scene, at least two images need to be collected, because this yields better-averaged image features. In a specific implementation, the at least two images of the scene are consecutive images, all of which represent the scene currently about to be photographed.
Step 202: Smooth the at least two images respectively.
In an embodiment of the present invention, the at least two images need to be smoothed first; smoothing mainly removes noise from the images and improves the accuracy of subsequent image-feature calculation. Specifically, the image data is filtered with a Gaussian kernel; the filtered images are smooth, with fewer noise points, thereby achieving the smoothing.
Step 203: Extract two or more image features from the at least two images respectively.
In an embodiment of the present invention, after the at least two images are smoothed, two or more image features are extracted from them respectively.
For example, the following image features are extracted from the at least two images respectively: a brightness feature, a color feature, and a motion feature. Taking two images as an example, the image features are calculated on the two smoothed images. First, the two images are converted into the Lab color space; the average brightness of each of the two images is calculated, and the two averages are averaged again to obtain the final brightness feature, which is the brightness feature of the current scene. Then the color distribution information of the two images is calculated; this embodiment of the present invention computes statistics of the color distribution information using a histogram, obtaining the probability distribution of each color component in the two images as the color feature of the current scene. Finally, the frame-difference method is applied to the two adjacent (that is, consecutive) images to calculate their motion, and the average motion feature of the current scene is obtained.
In an embodiment of the present invention, the color feature is a global feature that describes the surface properties of the scene corresponding to an image or an image region. Color features are generally pixel-based, so every pixel belonging to the image or image region makes its own contribution. A commonly used color-feature extraction method is the histogram, which can simply describe the global distribution of colors in an image, that is, the proportion of each color in the whole image; it is particularly suitable for describing images that are difficult to segment automatically and images for which the spatial position of objects need not be considered.
Step 204: Determine, from the two or more image features respectively extracted, the image feature with the highest priority.
In an embodiment of the present invention, the highest-priority evaluation of the image features is performed on the image features obtained in step 203; this embodiment of the present invention uses a different evaluation strategy for the highest priority of each image feature:
(1) For the brightness feature: if the brightness feature is less than or equal to a first threshold (which may be 50% of some set value), the user is considered to be in a dim-light area, and the intensity information of the brightness feature is higher than that of the other two image features.
(2) When the intensity information of the brightness feature is not the highest: if the color histogram is relatively dispersed and the color feature is greater than or equal to a second threshold, the user is shooting a landscape, and the intensity information of the color feature is higher than that of the other two image features.
(3) When neither the brightness feature nor the color feature has the highest priority: if the motion feature is less than or equal to the third threshold, the user is considered to be shooting a still scene, and the intensity information of the motion feature is higher than that of the other two image features.
When none of the brightness, color, and motion features has the highest priority, the default scene photographing mode is activated, and the photographing mode of the scene is set to the default scene photographing mode.
Step 205: Determine corresponding imaging parameters of the scene according to the image feature with the highest priority.
Specifically, if the brightness feature has the highest priority, the scene is considered to belong to the night-scene photographing mode; if the motion feature has the highest priority, the scene is considered to belong to the still photographing mode; if the color feature has the highest priority, the scene is considered to belong to the outdoor-landscape photographing mode.
Specifically, for the night-scene photographing mode, it is also necessary to reacquire an image of the current scene and recalculate the brightness feature of the scene, then use Gaussian interpolation to calculate an ISO suitable for the current scene, for example, ISO 150 after calculation; finally, the brightness feature is mapped to an aperture parameter according to an exponential function, for example, F20 after calculation.
For the still photographing mode, it is necessary to reacquire an image of the current scene and recalculate the brightness and motion features of the scene, then use Gaussian interpolation to calculate an ISO suitable for the current scene, for example, ISO 200 after calculation; finally, the aperture parameter of the current image acquisition unit is calculated from the motion feature according to a mapping function, for example, F5.
For the outdoor-landscape photographing mode, it is necessary to reacquire an image of the current scene and recalculate the color and brightness features of the scene, then use Gaussian interpolation to calculate an ISO suitable for the current scene, for example, ISO 200 after calculation; finally, the current aperture parameter is calculated from the color distribution information according to a mapping function, for example, F6, and so on.
Step 206: Photograph the target object in the scene based on the imaging parameters.
In an embodiment of the present invention, the above imaging parameter settings are only simple examples; the specific imaging parameter settings are adjusted dynamically according to real-time changes of each scene. In this way, an ordinary user can use the scene-based photographing apparatus of the embodiment of the present invention to adjust the imaging parameters of the image acquisition unit as the scene changes and capture high-quality images; in addition, the problem that the entertainment photographing mode cannot automatically correct imaging parameters in different scenes is avoided. The scene-based photographing method of the embodiment of the present invention can help the user optimize imaging parameters in real time according to changes of the scene, making the user's photographing experience more user-friendly and intelligent.
FIG. 3 is a schematic flowchart of a scene-based photographing method according to Embodiment 3 of the present invention. The scene-based photographing method in this example is applied to a scene-based photographing apparatus. As shown in FIG. 3, the method includes the following steps:
Step 301: Collect at least two images of a scene.
In an embodiment of the present invention, the scene-based photographing apparatus is provided in a terminal, which may be a terminal of any form, such as a mobile phone or a tablet computer. The scene-based photographing apparatus has an image acquisition unit; in a specific implementation, the image acquisition unit is a camera.
Images of the scene can be collected by the image acquisition unit; here, the scene refers to the environment in the shooting area of the image acquisition unit. When collecting images of the scene, at least two images need to be collected, because this yields better-averaged image features. In a specific implementation, the at least two images of the scene are consecutive images, all of which represent the scene currently about to be photographed.
Step 302: Smooth the at least two images respectively.
In an embodiment of the present invention, the at least two images need to be smoothed first; smoothing mainly removes noise from the images and improves the accuracy of subsequent image-feature calculation. Specifically, the image data is filtered with a Gaussian kernel; the filtered images are smooth, with fewer noise points, thereby achieving the smoothing.
Step 303: Extract the following image features from the at least two images respectively: a brightness feature, a color feature, and a motion feature.
Taking two images as an example, the image features are calculated on the two smoothed images. First, the two images are converted into the Lab color space; the average brightness of each of the two images is calculated, and the two averages are averaged again to obtain the final brightness feature, which is the brightness feature of the current scene. Then the color distribution information of the two images is calculated; this embodiment of the present invention computes statistics of the color distribution information using a histogram, obtaining the probability distribution of each color component in the two images as the color feature of the current scene. Finally, the frame-difference method is applied to the two adjacent (that is, consecutive) images to calculate their motion, and the average motion feature of the current scene is obtained.
In an embodiment of the present invention, the color feature is a global feature that describes the surface properties of the scene corresponding to an image or an image region. Color features are generally pixel-based, so every pixel belonging to the image or image region makes its own contribution. A commonly used color-feature extraction method is the histogram, which can simply describe the global distribution of colors in an image, that is, the proportion of each color in the whole image; it is particularly suitable for describing images that are difficult to segment automatically and images for which the spatial position of objects need not be considered.
Step 304: Determine whether the brightness feature is less than or equal to a first threshold; when the brightness feature is less than or equal to the first threshold, determine that the brightness feature is the image feature with the highest priority.
In an embodiment of the present invention, the highest-priority evaluation of the image features is performed on the image features obtained in step 303; this embodiment of the present invention uses a different evaluation strategy for the highest priority of each image feature:
For the brightness feature: if the brightness feature is less than or equal to the first threshold (which may be 50% of some set value), the user is considered to be in a dim-light area, and the intensity information of the brightness feature is higher than that of the other two image features.
Step 305: When the brightness feature is greater than the first threshold, determine whether the color feature is greater than or equal to a second threshold; when the color feature is greater than or equal to the second threshold, determine that the color feature is the image feature with the highest priority.
When the intensity information of the brightness feature is not the highest, if the color histogram is relatively dispersed and the color feature is greater than or equal to the second threshold, the user is shooting a landscape, and the intensity information of the color feature is higher than that of the other two image features.
Step 306: When the color feature is less than the second threshold, determine whether the motion feature is less than or equal to a third threshold; when the motion feature is less than or equal to the third threshold, determine that the motion feature is the image feature with the highest priority.
When neither the brightness feature nor the color feature has the highest priority, if the motion feature is less than or equal to the third threshold, the user is considered to be shooting a still scene, and the intensity information of the motion feature is higher than that of the other two image features.
Step 307: When the motion feature is greater than the third threshold, set the photographing mode of the scene to a default scene photographing mode.
When none of the brightness, color, and motion features has the highest priority, the default scene photographing mode is activated, and the photographing mode of the scene is set to the default scene photographing mode.
Step 308: Determine the photographing mode of the scene according to the image feature with the highest priority.
Specifically, if the brightness feature has the highest priority, the scene is considered to belong to the night-scene photographing mode; if the motion feature has the highest priority, the scene is considered to belong to the still photographing mode; if the color feature has the highest priority, the scene is considered to belong to the outdoor-landscape photographing mode.
Step 309: Acquire the imaging parameters corresponding to the photographing mode of the scene.
Specifically, for the night-scene photographing mode, it is also necessary to reacquire an image of the current scene and recalculate the brightness feature of the scene, then use Gaussian interpolation to calculate an ISO suitable for the current scene, for example, ISO 150 after calculation; finally, the brightness feature is mapped to an aperture parameter according to an exponential function, for example, F30 after calculation.
For the still photographing mode, it is necessary to reacquire an image of the current scene and recalculate the brightness and motion features of the scene, then use Gaussian interpolation to calculate an ISO suitable for the current scene, for example, ISO 300 after calculation; finally, the aperture parameter of the current image acquisition unit is calculated from the motion feature according to a mapping function, for example, F5.
For the outdoor-landscape photographing mode, it is necessary to reacquire an image of the current scene and recalculate the color and brightness features of the scene, then use Gaussian interpolation to calculate an ISO suitable for the current scene, for example, ISO 300 after calculation; finally, the current aperture parameter is calculated from the color distribution information according to a mapping function, for example, F6, and so on.
Step 310: Adjust the image acquisition unit according to the imaging parameters, and use the image acquisition unit to photograph the target object in the scene based on the adjusted imaging parameters.
In an embodiment of the present invention, the above imaging parameter settings are only simple examples; the specific imaging parameter settings are adjusted dynamically according to real-time changes of each scene. In this way, an ordinary user can use the scene-based photographing apparatus of the embodiment of the present invention to adjust the imaging parameters of the image acquisition unit as the scene changes and capture high-quality images; in addition, the problem that the entertainment photographing mode cannot automatically correct imaging parameters in different scenes is avoided. The scene-based photographing method of the embodiment of the present invention can help the user optimize imaging parameters in real time according to changes of the scene, making the user's photographing experience more user-friendly and intelligent.
FIG. 4 is a schematic structural diagram of a scene-based photographing apparatus according to Embodiment 1 of the present invention. As shown in FIG. 4, the apparatus includes:
an image acquisition unit 41 configured to collect at least two images of a scene;
a feature extraction unit 42 configured to extract two or more image features from the at least two images respectively;
a determining unit 43 configured to determine, from the two or more image features respectively extracted, the image feature with the highest priority, and to determine corresponding imaging parameters of the scene according to the image feature with the highest priority;
the image acquisition unit 41 being further configured to photograph a target object in the scene based on the imaging parameters.
Those skilled in the art should understand that the functions implemented by the units of the scene-based photographing apparatus shown in FIG. 4 can be understood with reference to the related description of the foregoing scene-based photographing method. The functions of the units of the scene-based photographing apparatus shown in FIG. 4 can be realized by a program running on a processor, or by a specific logic circuit.
FIG. 5 is a schematic structural diagram of a scene-based photographing apparatus according to Embodiment 2 of the present invention. As shown in FIG. 5, the apparatus includes:
an image acquisition unit 51 configured to collect at least two images of a scene;
a feature extraction unit 52 configured to extract two or more image features from the at least two images respectively;
a determining unit 53 configured to determine, from the two or more image features respectively extracted, the image feature with the highest priority, and to determine corresponding imaging parameters of the scene according to the image feature with the highest priority;
the image acquisition unit 51 being further configured to photograph a target object in the scene based on the imaging parameters.
In an embodiment of the present invention, the image acquisition unit 51 is further configured to collect at least two consecutive images of the scene.
In an embodiment of the present invention, the feature extraction unit 52 is further configured to: convert the two images into the color space of a color model; calculate the average brightness of each of the two images, and average the two results again to obtain the final brightness feature, which is the brightness feature of the current scene; calculate the color distribution information of the two images to obtain the color feature of the current scene; and calculate the motion of the two consecutive images by the frame-difference method to obtain the average motion feature of the current scene.
In an embodiment of the present invention, the feature extraction unit 52 is further configured to compute statistics of the color distribution information using a histogram, obtaining the probability distribution of each color component in the two images as the color feature of the current scene.
The apparatus further includes: a pre-processing unit 54 configured to smooth the at least two images respectively.
In an embodiment of the present invention, the pre-processing unit 54 is further configured to filter the image data of the at least two images using a Gaussian kernel.
Those skilled in the art should understand that the functions implemented by the units of the scene-based photographing apparatus shown in FIG. 5 can be understood with reference to the related description of the foregoing scene-based photographing method. The functions of the units of the scene-based photographing apparatus shown in FIG. 5 can be realized by a program running on a processor, or by a specific logic circuit.
FIG. 6 is a schematic structural diagram of a scene-based photographing apparatus according to Embodiment 3 of the present invention. As shown in FIG. 6, the apparatus includes:
an image acquisition unit 61 configured to collect at least two images of a scene;
a feature extraction unit 62 configured to extract two or more image features from the at least two images respectively;
a determining unit 63 configured to determine, from the two or more image features respectively extracted, the image feature with the highest priority, and to determine corresponding imaging parameters of the scene according to the image feature with the highest priority;
the image acquisition unit 61 being further configured to photograph a target object in the scene based on the imaging parameters.
The apparatus further includes: a pre-processing unit 64 configured to smooth the at least two images respectively.
The feature extraction unit 62 is further configured to extract the following image features from the at least two images respectively: a brightness feature, a color feature, and a motion feature.
The determining unit 63 includes:
a first determining subunit 631 configured to determine that the brightness feature is the image feature with the highest priority when the brightness feature is less than or equal to a first threshold;
a second determining subunit 632 configured to determine that the color feature is the image feature with the highest priority when the brightness feature is greater than the first threshold and the color feature is greater than or equal to a second threshold;
a third determining subunit 633 configured to determine that the motion feature is the image feature with the highest priority when the brightness feature is greater than the first threshold, the color feature is less than the second threshold, and the motion feature is less than or equal to a third threshold.
The determining unit 63 includes: a fourth determining subunit 634 configured to set the photographing mode of the scene to a default scene photographing mode when the brightness feature is greater than the first threshold, the color feature is less than the second threshold, and the motion feature is greater than the third threshold.
Those skilled in the art should understand that the functions implemented by the units of the scene-based photographing apparatus shown in FIG. 6 can be understood with reference to the related description of the foregoing scene-based photographing method. The functions of the units of the scene-based photographing apparatus shown in FIG. 6 can be realized by a program running on a processor, or by a specific logic circuit.
FIG. 7 is a block diagram showing the main electrical structure of a scene-based photographing apparatus according to an embodiment of the present invention. The photographic lens 101 is composed of a plurality of optical lenses for forming a subject image and is a single-focus lens or a zoom lens. The photographic lens 101 can be moved in the optical-axis direction by the lens driving unit 111; the focus position of the photographic lens 101 is controlled according to a control signal from the lens drive control unit 112, and in the case of a zoom lens the focal length is also controlled. The lens drive control circuit 112 performs drive control of the lens driving unit 111 in accordance with control commands from the microcomputer 107.
An imaging element 102 is disposed on the optical axis of the photographic lens 101, near the position where the subject image is formed by the lens. The imaging element 102 functions as an imaging unit that captures the subject image and acquires captured image data. Photodiodes constituting the pixels are arranged two-dimensionally in a matrix on the imaging element 102; each photodiode generates a photoelectric conversion current corresponding to the amount of received light, and the charge of this current is accumulated by a capacitor connected to each photodiode. A Bayer-array RGB color filter is disposed on the front surface of each pixel.
The imaging element 102 is connected to an imaging circuit 103, which performs charge accumulation control and image-signal readout control in the imaging element 102, reduces the reset noise of the read-out image signal (an analog image signal), performs waveform shaping, and further applies gain adjustment and the like to obtain an appropriate signal level.
The imaging circuit 103 is connected to an A/D conversion unit 104, which performs analog-to-digital conversion on the analog image signal and outputs a digital image signal (hereinafter referred to as image data) to a bus 199.
The bus 199 is a transmission path for transferring various data read out or generated inside the photographing apparatus. Connected to the bus 199 are the above A/D conversion unit 104, as well as an image processor 105, a JPEG processor 106, a microcomputer 107, an SDRAM (Synchronous DRAM) 108, a memory interface (hereinafter memory I/F) 109, and an LCD (Liquid Crystal Display) driver 110.
The image processor 105 performs various kinds of image processing on the image data output from the imaging element 102, such as OB subtraction, white balance adjustment, color matrix calculation, gamma conversion, color difference signal processing, noise removal, synchronization (demosaicing) processing, and edge processing.
When recording image data on the recording medium 115, the JPEG processor 106 compresses the image data read out from the SDRAM 108 in accordance with the JPEG compression method. In addition, the JPEG processor 106 decompresses JPEG image data for image playback display: the file recorded on the recording medium 115 is read out and decompressed in the JPEG processor 106, and the decompressed image data is temporarily stored in the SDRAM 108 and displayed on the LCD 116. In the present embodiment, the JPEG method is adopted as the image compression/decompression method; however, the method is not limited thereto, and other compression/decompression methods such as MPEG, TIFF, and H.264 may of course be used.
The microcomputer 107 functions as the control unit of the entire photographing apparatus and collectively controls its various processing sequences. The microcomputer 107 is connected to an operation unit 113 and a flash memory 114.
The operation unit 113 includes, but is not limited to, physical or virtual buttons; these may be operation members such as a power button, a photographing key, an edit key, a movie button, a playback button, a menu button, a cross key, an OK button, a delete button, a magnify button, and various other input buttons and input keys, and the operation states of these operation members are detected.
The detection results are output to the microcomputer 107. In addition, a touch panel is provided on the front surface of the LCD 116 serving as the display portion; the user's touch position is detected and output to the microcomputer 107. The microcomputer 107 executes the various processing sequences corresponding to the user's operation based on the detection results of the operation members from the operation unit 113. (Similarly, this may be changed so that the microcomputer 107 executes the various processing sequences corresponding to the user's operation based on the detection results of the touch panel on the front of the LCD 116.)
The flash memory 114 stores programs for executing the various processing sequences of the microcomputer 107, and the microcomputer 107 controls the entire photographing apparatus according to these programs. In addition, the flash memory 114 stores various adjustment values of the photographing apparatus; the microcomputer 107 reads out the adjustment values and controls the photographing apparatus in accordance with them.
The SDRAM 108 is an electrically rewritable volatile memory for temporarily storing image data and the like. The SDRAM 108 temporarily stores image data output from the A/D conversion unit 104 and image data processed in the image processor 105, the JPEG processor 106, and the like.
The memory interface 109 is connected to the recording medium 115 and performs control for writing image data, and the file headers attached to the image data, to the recording medium 115 and reading them out from it. The recording medium 115 is, for example, a recording medium such as a memory card detachably attached to the main body of the photographing apparatus; however, it is not limited thereto and may be a hard disk or the like built into the main body of the photographing apparatus.
The LCD driver 110 is connected to the LCD 116. Image data processed by the image processor 105 is stored in the SDRAM; when display is required, it is read out and displayed on the LCD 116. Alternatively, image data compressed by the JPEG processor 106 is stored in the SDRAM; when display is required, the JPEG processor 106 reads out the compressed image data from the SDRAM, decompresses it, and the decompressed image data is displayed on the LCD 116.
The LCD 116 is disposed on the back surface of the main body of the photographing apparatus or the like and performs image display. The LCD 116 is provided with a touch panel that detects the user's touch operations. In the present embodiment, a liquid-crystal display panel (LCD 116) is used as the display portion; however, the invention is not limited thereto, and various display panels such as an organic EL panel may also be employed.
If the apparatus of the embodiments of the present invention is implemented in the form of software function modules and sold or used as a standalone product, it may also be stored in a computer-readable storage medium. Based on this understanding, the technical solution of the embodiments of the present invention, in essence or in the part that contributes over the prior art, may be embodied in the form of a software product stored in a storage medium, including several instructions for causing a computer device (which may be a personal computer, a server, a network device, or the like) to execute all or part of the methods described in the various embodiments of the present invention. The aforementioned storage medium includes various media that can store program code, such as a USB flash drive, a removable hard disk, a read-only memory (ROM), a magnetic disk, or an optical disk. Thus, the embodiments of the present invention are not limited to any specific combination of hardware and software.
Correspondingly, an embodiment of the present invention further provides a computer storage medium in which a computer program is stored, the computer program being used to execute the photographing method of the embodiments of the present invention.
The preferred embodiments of the present invention have been described above with reference to the accompanying drawings, which does not limit the scope of the rights of the present invention. Those skilled in the art may implement the present invention in various variant schemes without departing from its scope and essence; for example, a feature of one embodiment may be used in another embodiment to obtain a further embodiment. Any modification, equivalent replacement, or improvement made within the technical concept of the present invention shall fall within the scope of the rights of the present invention.
Claims (20)
- A scene-based photographing apparatus, the apparatus comprising: an image acquisition unit configured to collect at least two images of a scene; a feature extraction unit configured to extract two or more image features from the at least two images respectively; a determining unit configured to determine, from the two or more image features respectively extracted, the image feature with the highest priority, and to determine corresponding imaging parameters of the scene according to the image feature with the highest priority; the image acquisition unit being further configured to photograph a target object in the scene based on the imaging parameters.
- The scene-based photographing apparatus according to claim 1, wherein the image acquisition unit is further configured to collect at least two consecutive images of the scene.
- The scene-based photographing apparatus according to claim 1, wherein the two or more image features respectively extracted comprise a brightness feature, a color feature, and a motion feature.
- The scene-based photographing apparatus according to claim 3, wherein the feature extraction unit is further configured to: convert the two images into the color space of a color model; calculate the average brightness of each of the two images, and average the two results again to obtain the final brightness feature, which is the brightness feature of the current scene; calculate the color distribution information of the two images to obtain the color feature of the current scene; and calculate the motion of the two consecutive images by the frame-difference method to obtain the average motion feature of the current scene.
- The scene-based photographing apparatus according to claim 4, wherein the feature extraction unit is further configured to compute statistics of the color distribution information using a histogram, obtaining the probability distribution of each color component in the two images as the color feature of the current scene.
- The scene-based photographing apparatus according to claim 3, wherein the determining unit comprises: a first determining subunit configured to determine that the brightness feature is the image feature with the highest priority when the brightness feature is less than or equal to a first threshold; a second determining subunit configured to determine that the color feature is the image feature with the highest priority when the brightness feature is greater than the first threshold and the color feature is greater than or equal to a second threshold; and a third determining subunit configured to determine that the motion feature is the image feature with the highest priority when the brightness feature is greater than the first threshold, the color feature is less than the second threshold, and the motion feature is less than or equal to a third threshold.
- The scene-based photographing apparatus according to claim 6, wherein the determining unit comprises: a fourth determining subunit configured to set the photographing mode of the scene to a default scene photographing mode when the brightness feature is greater than the first threshold, the color feature is less than the second threshold, and the motion feature is greater than the third threshold.
- The scene-based photographing apparatus according to any one of claims 1 to 7, wherein the apparatus further comprises: a pre-processing unit configured to smooth the at least two images respectively.
- The scene-based photographing apparatus according to claim 8, wherein the pre-processing unit is further configured to filter the image data of the at least two images using a Gaussian kernel.
- A scene-based photographing method, the method comprising: collecting at least two images of a scene; extracting two or more image features from the at least two images respectively; determining, from the two or more image features respectively extracted, the image feature with the highest priority; determining corresponding imaging parameters of the scene according to the image feature with the highest priority; and photographing a target object in the scene based on the imaging parameters.
- The scene-based photographing method according to claim 10, wherein collecting at least two images of the scene comprises: collecting at least two consecutive images of the scene.
- The scene-based photographing method according to claim 10, wherein the two or more image features respectively extracted comprise a brightness feature, a color feature, and a motion feature.
- The scene-based photographing method according to claim 12, wherein extracting two or more image features from the at least two images respectively comprises: converting the two images into the color space of a color model; calculating the average brightness of each of the two images, and averaging the two results again to obtain the final brightness feature, which is the brightness feature of the current scene; calculating the color distribution information of the two images to obtain the color feature of the current scene; and calculating the motion of the two consecutive images by the frame-difference method to obtain the average motion feature of the current scene.
- The scene-based photographing method according to claim 13, wherein calculating the color distribution information of the two images comprises: computing statistics of the color distribution information using a histogram, obtaining the probability distribution of each color component in the two images as the color feature of the current scene.
- The scene-based photographing method according to claim 12, wherein determining, from the two or more image features respectively extracted, the image feature with the highest priority comprises: when the brightness feature is less than or equal to a first threshold, determining that the brightness feature is the image feature with the highest priority; when the brightness feature is greater than the first threshold and the color feature is greater than or equal to a second threshold, determining that the color feature is the image feature with the highest priority; and when the brightness feature is greater than the first threshold, the color feature is less than the second threshold, and the motion feature is less than or equal to a third threshold, determining that the motion feature is the image feature with the highest priority.
- The scene-based photographing method according to claim 15, wherein the method further comprises: when the brightness feature is greater than the first threshold, the color feature is less than the second threshold, and the motion feature is greater than the third threshold, setting the photographing mode of the scene to a default scene photographing mode.
- The scene-based photographing method according to any one of claims 10 to 16, wherein before extracting two or more image features from the at least two images respectively, the method further comprises: smoothing the at least two images respectively.
- The scene-based photographing method according to claim 17, wherein smoothing the at least two images respectively comprises: filtering the image data of the at least two images using a Gaussian kernel.
- The scene-based photographing method according to claim 10, wherein determining corresponding imaging parameters of the scene according to the image feature with the highest priority comprises: reacquiring an image of the current scene; calculating the image feature with the highest priority of the image; and calculating an aperture parameter according to the image feature with the highest priority.
- A computer storage medium storing computer-executable instructions, the computer-executable instructions being configured to execute the scene-based photographing method according to any one of claims 10 to 19.
Applications Claiming Priority (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201510776502.8 | 2015-11-13 | ||
CN201510776502.8A CN105407281A (zh) | 2015-11-13 | 2015-11-13 | Scene-based photographing device and method
Publications (2)
Publication Number | Publication Date |
---|---|
WO2017080348A2 true WO2017080348A2 (zh) | 2017-05-18 |
WO2017080348A3 WO2017080348A3 (zh) | 2017-06-15 |
Family
ID=55472501
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
PCT/CN2016/102555 WO2017080348A2 (zh) | 2015-11-13 | 2016-10-19 | Scene-based photographing device and method, and computer storage medium
Country Status (2)
Country | Link |
---|---|
CN (1) | CN105407281A (zh) |
WO (1) | WO2017080348A2 (zh) |
Families Citing this family (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN105407281A (zh) * | 2015-11-13 | 2016-03-16 | Nubia Technology Co., Ltd. | Scene-based photographing device and method |
CN109964478A (zh) * | 2017-10-14 | 2019-07-02 | Huawei Technologies Co., Ltd. | Photographing method and electronic device |
CN108024105A (zh) * | 2017-12-14 | 2018-05-11 | Zhuhai Juntian Electronic Technology Co., Ltd. | Image color adjustment method and device, electronic device, and storage medium |
CN108322648B (zh) * | 2018-02-02 | 2020-06-30 | Guangdong Oppo Mobile Telecommunications Corp., Ltd. | Image processing method and device, electronic device, and computer-readable storage medium |
CN110881101A (zh) * | 2018-09-06 | 2020-03-13 | Qiku Internet Network Scientific (Shenzhen) Co., Ltd. | Photographing method, mobile terminal, and device with storage function |
Family Cites Families (12)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP4311457B2 (ja) * | 2007-02-15 | 2009-08-12 | Sony Corporation | Motion detection device, motion detection method, imaging device, and monitoring system |
CN101478639B (zh) * | 2008-01-04 | 2011-07-20 | Altek Corporation | Automatic scene mode selection method |
JP5520037B2 (ja) * | 2009-01-30 | 2014-06-11 | Canon Inc. | Imaging apparatus, control method therefor, and program |
CN101778220A (zh) * | 2010-03-01 | 2010-07-14 | Huawei Device Co., Ltd. | Method and imaging device for automatically switching to night scene mode |
JP5629564B2 (ja) * | 2010-12-07 | 2014-11-19 | Canon Inc. | Image processing apparatus and control method therefor |
US9143679B2 (en) * | 2012-01-26 | 2015-09-22 | Canon Kabushiki Kaisha | Electronic apparatus, electronic apparatus control method, and storage medium |
JP2014146989A (ja) * | 2013-01-29 | 2014-08-14 | Sony Corp | Imaging device, imaging method, and imaging program |
JP5859061B2 (ja) * | 2013-06-11 | 2016-02-10 | Canon Inc. | Imaging apparatus, image processing apparatus, and control methods therefor |
CN103617432B (zh) * | 2013-11-12 | 2017-10-03 | Huawei Technologies Co., Ltd. | Scene recognition method and device |
CN103841324A (zh) * | 2014-02-20 | 2014-06-04 | Xiaomi Inc. | Shooting processing method and device, and terminal equipment |
CN104811609A (zh) * | 2015-03-03 | 2015-07-29 | Xiaomi Inc. | Shooting parameter adjustment method and device |
CN105407281A (zh) * | 2015-11-13 | 2016-03-16 | Nubia Technology Co., Ltd. | Scene-based photographing device and method |
- 2015-11-13: CN application CN201510776502.8A filed, published as CN105407281A (status: active, Pending)
- 2016-10-19: PCT application PCT/CN2016/102555 filed, published as WO2017080348A2 (status: active, Application Filing)
Cited By (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
WO2022052944A1 (en) * | 2020-09-11 | 2022-03-17 | International Business Machines Corporation | Recommending location and content aware filters for digital photographs |
GB2614483A (en) * | 2020-09-11 | 2023-07-05 | Ibm | Recommending location and content aware filters for digital photographs |
US11778309B2 (en) | 2020-09-11 | 2023-10-03 | International Business Machines Corporation | Recommending location and content aware filters for digital photographs |
CN118088963A (zh) * | 2024-03-07 | 2024-05-28 | 广东艾罗智能光电股份有限公司 | Intelligent lighting control method and device capable of automatically tracking light |
Also Published As
Publication number | Publication date |
---|---|
WO2017080348A3 (zh) | 2017-06-15 |
CN105407281A (zh) | 2016-03-16 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
WO2017080348A2 (zh) | Scene-based photographing device and method, and computer storage medium | |
US8395694B2 (en) | Apparatus and method for blurring image background in digital image processing device | |
WO2017045558A1 (zh) | Depth-of-field adjustment method, device, and terminal | |
US20140176789A1 (en) | Image capturing apparatus and control method thereof | |
WO2016011859A1 (zh) | Method for shooting a light-painting video, mobile terminal, and computer storage medium | |
US20170302848A1 (en) | Photographing method, device and computer storage medium | |
WO2017050125A1 (zh) | Shooting angle prompting method and device, terminal, and computer storage medium | |
CN110198418B (zh) | Image processing method and device, storage medium, and electronic device | |
CN108093158B (zh) | Image blurring method and device, mobile device, and computer-readable medium | |
KR20110043162A (ko) | Multiple exposure control apparatus and method | |
US20130222635A1 (en) | Digital photographing apparatus and method of controlling the same | |
US10362207B2 (en) | Image capturing apparatus capable of intermittent image capturing, and control method and storage medium thereof | |
WO2016008359A1 (zh) | Method and device for synthesizing object motion trajectory images, and computer storage medium | |
US9055212B2 (en) | Imaging system, image processing method, and image processing program recording medium using framing information to capture image actually intended by user | |
US10127455B2 (en) | Apparatus and method of providing thumbnail image of moving picture | |
WO2016004819A1 (zh) | Photographing method, photographing device, and computer storage medium | |
WO2016011872A1 (zh) | Image photographing method and device, and computer storage medium | |
KR20150078275A (ko) | Apparatus and method for photographing a moving subject | |
JP5073602B2 (ja) | Imaging apparatus and control method for the imaging apparatus | |
CN110266967B (zh) | Image processing method and device, storage medium, and electronic device | |
WO2017128914A1 (zh) | Photographing method and device | |
KR102368625B1 (ko) | Digital photographing apparatus and method thereof | |
JP2014103643A (ja) | Imaging device and subject recognition method | |
KR101613617B1 (ko) | Digital image photographing apparatus and control method therefor | |
CN110266965B (zh) | Image processing method and device, storage medium, and electronic device | |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
| 121 | Ep: the epo has been informed by wipo that ep was designated in this application | Ref document number: 16863525; Country of ref document: EP; Kind code of ref document: A2 |
| NENP | Non-entry into the national phase | Ref country code: DE |
| 122 | Ep: pct application non-entry in european phase | Ref document number: 16863525; Country of ref document: EP; Kind code of ref document: A2 |