WO2019174544A1 - Method, apparatus, computer storage medium and electronic device for image synthesis - Google Patents

Method, apparatus, computer storage medium and electronic device for image synthesis

Info

Publication number
WO2019174544A1
WO2019174544A1 · PCT/CN2019/077659
Authority
WO
WIPO (PCT)
Prior art keywords
image
target object
sub
historical
attribute parameter
Prior art date
Application number
PCT/CN2019/077659
Other languages
English (en)
French (fr)
Inventor
刘银华
孙剑波
Original Assignee
Guangdong Oppo Mobile Telecommunications Corp., Ltd.
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Guangdong Oppo Mobile Telecommunications Corp., Ltd.
Publication of WO2019174544A1 publication Critical patent/WO2019174544A1/zh

Classifications

    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 5/00 Details of television systems
    • H04N 5/222 Studio circuitry; Studio devices; Studio equipment
    • H04N 5/262 Studio circuits, e.g. for mixing, switching-over, change of character of image, other special effects; Cameras specially adapted for the electronic generation of special effects
    • H04N 5/265 Mixing
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 23/00 Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N 23/80 Camera processing pipelines; Components thereof
    • G PHYSICS
    • G01 MEASURING; TESTING
    • G01B MEASURING LENGTH, THICKNESS OR SIMILAR LINEAR DIMENSIONS; MEASURING ANGLES; MEASURING AREAS; MEASURING IRREGULARITIES OF SURFACES OR CONTOURS
    • G01B 11/00 Measuring arrangements characterised by the use of optical techniques
    • G01B 11/24 Measuring arrangements characterised by the use of optical techniques for measuring contours or curvatures
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 5/00 Image enhancement or restoration
    • G06T 5/50 Image enhancement or restoration using two or more images, e.g. averaging or subtraction
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 5/00 Image enhancement or restoration
    • G06T 5/73 Deblurring; Sharpening
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 17/00 Diagnosis, testing or measuring for television systems or their details
    • H04N 17/002 Diagnosis, testing or measuring for television systems or their details for television cameras
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 23/00 Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N 23/10 Cameras or camera modules comprising electronic image sensors; Control thereof for generating image signals from different wavelengths
    • G PHYSICS
    • G01 MEASURING; TESTING
    • G01B MEASURING LENGTH, THICKNESS OR SIMILAR LINEAR DIMENSIONS; MEASURING ANGLES; MEASURING AREAS; MEASURING IRREGULARITIES OF SURFACES OR CONTOURS
    • G01B 2210/00 Aspects not specifically covered by any group under G01B, e.g. of wheel alignment, caliper-like sensors
    • G01B 2210/52 Combining or merging partially overlapping images to an overall image

Definitions

  • the present application relates to the field of image processing technologies, and in particular, to a method, an apparatus, a computer storage medium, and an electronic device for image synthesis.
  • Currently, more and more smart terminals are equipped with front-facing capture devices, such as front-facing cameras. Because of the size constraints of the smart terminal, the lens of a front-facing camera cannot meet the requirements of a professional digital camera, so the quality of the images it can capture is bottlenecked.
  • Multi-frame synthesis is usually used to improve the quality of the captured image, alleviating problems such as heavy noise and lack of sharpness, thereby improving the display effect of the image.
  • As the image processing performance of smart terminals improves, 4 to 6 frames can be synthesized. However, in certain scenes, for example under dim ambient light, the quality of each captured frame is limited by the external environment, so even if 6 quality-limited frames are synthesized, the room for quality improvement is limited.
  • the embodiments of the present application are intended to provide a method, an apparatus, a computer storage medium, and an electronic device for image synthesis, which can significantly improve image quality and improve display performance of an image.
  • an embodiment of the present application provides a method for image synthesis, where the method includes:
  • capturing at least one frame of a captured image of the target object; evaluating the image quality of the captured image, and extracting a target object sub-image from the captured image when the image quality evaluation value is lower than a set first quality evaluation threshold; querying a preset correspondence between historical sub-images and attribute parameters based on the attribute parameters of the target object sub-image to obtain a historical sub-image for synthesis, the image quality of which is higher than that of the target object sub-image; and synthesizing the historical sub-image for synthesis with the captured image to obtain a synthesized image.
  • an embodiment of the present application further provides an image synthesizing apparatus, where the apparatus includes: a photographing part, an evaluation part, a query part, and a synthesizing part; wherein
  • the photographing portion is configured to capture at least one frame of the captured image for the target object
  • the evaluation portion is configured to evaluate the image quality of the captured image, and to extract the target object sub-image from the captured image when the image quality evaluation value of the captured image is lower than a set first quality evaluation threshold;
  • the querying portion is configured to query a preset correspondence between historical sub-images and attribute parameters based on the attribute parameters of the target object sub-image to obtain a historical sub-image for synthesis, wherein the image quality of the historical sub-image for synthesis is higher than the image quality of the target object sub-image;
  • the synthesizing portion is configured to synthesize the historical sub-image for synthesis and the acquired image to obtain a synthesized image.
  • The embodiment of the present application further provides a computer storage medium storing an image synthesis program which, when executed by at least one processor, implements the method steps of image synthesis according to the first aspect.
  • an embodiment of the present application provides an electronic device, including: a camera, a memory, and a processor, where
  • the photographing device is configured to capture at least one frame of the captured image for the target object
  • the memory stores an image synthesis program
  • the processor is configured to execute the image synthesis program to implement the method steps of image synthesis as described in the first aspect.
  • The embodiment of the present application provides a method, an apparatus, a computer storage medium, and an electronic device for image synthesis. After the captured image of the target object is obtained, a historical sub-image whose attribute parameters are similar to those of the target object and whose image quality is higher than that of the target object sub-image is synthesized with the captured image, so that the resulting synthesized image shows a more obvious quality improvement over the original captured image, improving its display effect. This avoids the situation in which, because the quality of the original captured image is low, a composite obtained only from the originally captured frames cannot significantly improve image quality.
  • FIG. 1 is a schematic flowchart of a method for image synthesis according to an embodiment of the present application
  • FIG. 2 is a schematic diagram of a terminal according to an embodiment of the present application.
  • FIG. 3 is a schematic diagram of an angle of a target object according to an embodiment of the present application.
  • FIG. 4 is a schematic diagram of a device for synthesizing an image according to an embodiment of the present application.
  • FIG. 5 is a schematic diagram of another apparatus for synthesizing an image according to an embodiment of the present application.
  • FIG. 6 is a schematic structural diagram of a specific hardware of an electronic device according to an embodiment of the present disclosure.
  • A method for image synthesis provided by an embodiment of the present application may be applied to a terminal having a camera, and may include:
  • S101: Capture at least one frame of a captured image of the target object;
  • S102: Evaluate the image quality of the captured image; when the image quality evaluation value of the captured image is lower than a set first quality evaluation threshold, extract the target object sub-image from the captured image;
  • S103: Query a preset correspondence between historical sub-images and attribute parameters based on the attribute parameters of the target object sub-image to obtain a historical sub-image for synthesis, wherein the image quality of the historical sub-image for synthesis is higher than the image quality of the target object sub-image;
  • S104: Synthesize the historical sub-image for synthesis with the captured image to obtain a synthesized image.
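To make steps S101 to S104 concrete, the following Python sketch wires them together. All function bodies are illustrative assumptions (the patent does not prescribe a quality metric, a target detector, or a lookup strategy); only the control flow mirrors the claimed steps.

```python
import numpy as np

class HistoryDB:
    """Toy stand-in for the preset historical sub-image store (S103)."""
    def __init__(self, sub_images):
        self.sub_images = sub_images

    def lookup(self, target):
        # Crude placeholder for the attribute-parameter match: pick the
        # stored sub-image closest in mean intensity to the target.
        return min(self.sub_images,
                   key=lambda s: abs(float(s.mean()) - float(target.mean())))

def evaluate_quality(image):
    # Assumed metric: global contrast (standard deviation) as quality score.
    return float(image.std())

def extract_target_sub_image(image):
    # Assume the target occupies a fixed central region for this sketch.
    h, w = image.shape[:2]
    return image[h // 4: 3 * h // 4, w // 4: 3 * w // 4]

def synthesize(history_sub, image):
    # Replace the central target region with the historical sub-image (S104).
    out = image.copy()
    h, w = image.shape[:2]
    out[h // 4: 3 * h // 4, w // 4: 3 * w // 4] = history_sub
    return out

def image_synthesis(image, history_db, first_threshold):
    # S101 is the capture itself; `image` is the captured frame.
    if evaluate_quality(image) >= first_threshold:      # S102: quality fine
        return image
    target = extract_target_sub_image(image)            # S102: extract sub-image
    history_sub = history_db.lookup(target)             # S103: correspondence query
    return synthesize(history_sub, image)               # S104: synthesize
```

The pipeline only falls back to the historical store when the quality check fails, matching the conditional wording of S102.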
  • In this way, the historical sub-image whose attribute parameters are similar to those of the target object and whose image quality is higher than that of the target object sub-image is synthesized with the captured image, so that the synthesized image shows a more obvious quality improvement over the original captured image and enhances its display effect. This avoids the situation in which, because the quality of the original captured image is low, a composite obtained only from the originally captured frames cannot significantly improve image quality.
  • Taking the terminal shown in FIG. 2 as an example, the front camera of the terminal is generally used for user selfies. Therefore, the target object being shot is usually the user's face, or the faces of the user and others when taking a group selfie.
  • Normally, when the user clicks the shooting button of the terminal (a physical or virtual button), the shooting device captures multiple frames based on the current shooting environment and scene, and synthesizes the multiple frames to improve image quality and enhance the display of the image.
  • In contrast, the embodiment of the present application synthesizes a high-quality image saved during historical shooting with the currently captured image, so that image quality can be significantly improved even when the external shooting environment and scene are very poor.
  • It should be noted that image quality may have corresponding evaluation criteria depending on the shooting scene and environment. For example, when the camera is shaking or the subject is moving, image blur may occur; in this case, image quality can be evaluated by the degree of image blur. When the shooting environment is dark, for example at sunset, heavy noise and an overly low signal-to-noise ratio may occur; in this case, image quality can be evaluated by the signal-to-noise ratio of the image.
  • Accordingly, evaluating the image quality of the captured image includes evaluating the degree of blur or the signal-to-noise ratio of the captured image.
  • When the degree of blur is evaluated, the image quality evaluation value may include a sharpness evaluation value of the captured image, and the first quality evaluation threshold may include a sharpness threshold.
  • When the signal-to-noise ratio is evaluated, the image quality evaluation value may include a signal-to-noise ratio value of the captured image, and the first quality evaluation threshold may include a signal-to-noise ratio threshold.
  • It can be understood that a corresponding first quality evaluation threshold may be set for each image quality evaluation criterion, so that the quality of the captured image can be quantified: when the image quality evaluation value of the captured image is lower than the first quality evaluation threshold, the quality of the captured image can be characterized as "poor".
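As a concrete instantiation of the two criteria, the sketch below scores blur via the variance of a discrete Laplacian and signal-to-noise ratio in decibels against a known noise level. The patent names the criteria but not the exact formulas, so both metrics here are assumptions.

```python
import numpy as np

def laplacian_variance(img):
    """Sharpness score: variance of the discrete 4-neighbour Laplacian.

    Assumed metric; higher values indicate a sharper image.
    """
    lap = (-4.0 * img[1:-1, 1:-1]
           + img[:-2, 1:-1] + img[2:, 1:-1]
           + img[1:-1, :-2] + img[1:-1, 2:])
    return float(lap.var())

def snr_db(img, noise_sigma):
    """Signal-to-noise ratio in dB, given an (assumed known) noise level."""
    signal_power = float(np.mean(np.square(img, dtype=np.float64)))
    noise_power = noise_sigma ** 2
    return 10.0 * np.log10(signal_power / noise_power)

def needs_enhancement(img, noise_sigma, sharpness_thr, snr_thr):
    # The captured frame is flagged "poor" when either evaluation value
    # falls below its first quality evaluation threshold.
    return (laplacian_variance(img) < sharpness_thr
            or snr_db(img, noise_sigma) < snr_thr)
```

A flat (featureless) image scores zero sharpness, while a high-contrast pattern scores high, which is the ordering the blur criterion relies on.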
  • In a possible implementation, extracting the target object sub-image from the captured image may include extracting the target object sub-image based on a set target detection algorithm. Specifically, the target object may be identified in the captured image by edge detection, feature extraction and the like, and the target object sub-image may then be extracted.
  • After the target object sub-image is extracted, its attribute parameters may be acquired.
  • The attribute parameters of the target object sub-image may include at least one of the following: an identifier of the current target object, three-dimensional (3D) information of the current target object, the depth of field of the current target object, color temperature information of the current target object sub-image, and expression information of the current target object. Correspondingly, the attribute parameters of a historical sub-image may also include at least one of the following: an identifier of the historical target object, 3D information of the historical target object, the depth of field of the historical target object, color temperature information of the historical target object sub-image, and expression information of the historical target object.
  • It should be noted that the image quality of the historical sub-image is higher than that of the target object sub-image, so that when the historical sub-image is used for synthesis, the image quality of the currently "poor" captured image can be significantly improved.
  • In a possible implementation, querying the preset correspondence between historical sub-images and attribute parameters based on the attribute parameters of the target object sub-image to obtain a historical sub-image for synthesis includes: obtaining the absolute value of the difference between each attribute parameter value of the target object sub-image and the corresponding attribute parameter value of each preset historical sub-image; performing a weighted summation of these absolute differences using the weights set for the attribute parameter values, obtaining a similarity evaluation value that characterizes the degree of similarity between each historical sub-image and the target object sub-image; and taking the historical sub-image with the smallest similarity evaluation value as the historical sub-image for synthesis.
  • It can be understood that the terminal may save high-quality historical sub-images of historical target objects during historical shooting, together with their corresponding attribute parameter values. After the target object sub-image is obtained, the historical sub-image closest to it can then be selected for image synthesis, which avoids the synthesized image differing too much from the original captured image.
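The weighted-sum selection just described can be sketched as follows. The attribute names and weight values are assumptions for illustration; the patent only specifies the weighted sum of absolute differences and the choice of the smallest similarity evaluation value.

```python
def similarity_score(target_attrs, hist_attrs, weights):
    """Weighted sum of absolute attribute differences (lower = more similar)."""
    return sum(weights[k] * abs(target_attrs[k] - hist_attrs[k])
               for k in weights)

def pick_history_sub_image(target_attrs, history, weights):
    """Return the stored entry with the smallest similarity evaluation value."""
    return min(history,
               key=lambda entry: similarity_score(target_attrs,
                                                  entry["attrs"], weights))
```

The weights let a cheap attribute (e.g. color temperature, measured in kelvin) be rescaled so it does not dominate attributes with smaller numeric ranges.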
  • In a possible implementation, a corresponding database may be established for the historical sub-images and attribute parameters. That is, before step S101 captures at least one frame of the image, the method further includes: when a historical target object is photographed and the image quality evaluation value of the historical sub-image is higher than a set second quality evaluation threshold, acquiring the attribute parameter values corresponding to the historical sub-image.
  • The second quality evaluation threshold serves the same purpose as the first quality evaluation threshold, namely evaluating image quality, but the second quality evaluation threshold is higher than the first: whereas the first threshold characterizes image quality as "poor", the second can be used to characterize image quality as "excellent". The second quality evaluation threshold may be of the same specific type as the first, differing only in its specific value.
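The offline database-building step might look like the following sketch, where the quality metric and the value of the second threshold are assumptions; the patent only requires that a sub-image is recorded when its quality evaluation value exceeds the second (stricter) threshold.

```python
class HistorySubImageStore:
    """Toy stand-in for the historical sub-image / attribute database."""

    def __init__(self, second_quality_threshold):
        self.second_quality_threshold = second_quality_threshold
        self.entries = []

    def maybe_record(self, sub_image, quality_value, attrs):
        """Keep only sub-images whose quality is rated "excellent"."""
        if quality_value > self.second_quality_threshold:
            self.entries.append({"sub_image": sub_image, "attrs": attrs})
            return True
        return False
```

Only entries that pass this stricter gate are later candidates for the S103 similarity query, which is what guarantees that the chosen historical sub-image outclasses the current capture.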
  • Based on the items included in the foregoing attribute parameters, acquiring the attribute parameter values corresponding to the historical sub-image specifically includes at least one of the following:
  • For the identifier, a plurality of feature points of a human face can be obtained by a face recognition algorithm; because the feature points of different faces differ, the feature points of each face can serve as identifiers that distinguish different faces.
  • For 3D information, angle information is a common form. As shown in FIG. 3, the target object presented at different shooting angles differs, and the angle can be obtained in a variety of ways.
  • For example, structured light technology can project a grating or line light source onto the measured object and demodulate its three-dimensional information from the distortion the object produces. Time-of-flight (ToF) technology can also be used: a sensor emits modulated near-infrared light, and after it is reflected by the object, the time difference or phase difference between emission and reflection is converted into the distance of the photographed object, thereby generating depth information that is combined with a conventional camera image to present the object's three-dimensional contours at different distances. Traditional algorithms such as the Scale-Invariant Feature Transform (SIFT) can also be used to compute feature-point description vectors that are stable across scale, orientation, and illumination changes.
  • The color temperature information is also an attribute parameter to be considered when selecting a historical sub-image, since it affects the overall tone of the image.
  • The expression of the face can likewise be recorded, so that a historical sub-image closer to the current target object's expression can be selected, which also avoids the synthesized image differing too much from the original captured image.
  • In a possible implementation, synthesizing the historical sub-image for synthesis with the captured image to obtain the synthesized image may include: fusing the historical sub-image with the target object region in the captured image and then combining the result with the non-target-object region of the captured image to obtain the synthesized image; or replacing the target object region in the captured image with the historical sub-image to obtain the synthesized image.
  • It can be understood that because the quality of the historical sub-image is much higher than that of the captured image, fusing the two can greatly improve the quality of the captured image. When the degree of blur of the captured image is too high, however, the historical sub-image may directly replace the target object region in the captured image, which also improves image quality while avoiding an excessive display difference.
  • In addition, when multiple frames are captured, the captured frames may first be fused with each other and then synthesized with the historical sub-image in the above manner.
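Both synthesis options above (weighted fusion versus direct replacement of the target object region) can be sketched with NumPy as follows. The fusion weight `alpha` and the rectangular region representation are illustrative assumptions; the patent does not fix a blending formula.

```python
import numpy as np

def replace_region(image, history_sub, box):
    """Direct replacement: used when the captured frame is too blurred
    for fusion to help, per the description above."""
    y0, y1, x0, x1 = box
    out = image.astype(np.float64).copy()
    out[y0:y1, x0:x1] = history_sub
    return out

def fuse_region(image, history_sub, box, alpha=0.7):
    """Weighted fusion of the historical sub-image with the target region;
    alpha is the (assumed) weight given to the historical sub-image."""
    y0, y1, x0, x1 = box
    out = image.astype(np.float64).copy()
    out[y0:y1, x0:x1] = alpha * history_sub + (1.0 - alpha) * out[y0:y1, x0:x1]
    return out
```

In both cases the non-target region of the captured image is left untouched, which corresponds to combining the synthesized target region with the original non-target-object region.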
  • In summary, with the image synthesis method above, after the captured image of the target object is obtained, a historical sub-image whose attribute parameters are similar to those of the target object and whose image quality is higher than that of the target object sub-image is synthesized with the captured image. The synthesized image thus shows a more obvious quality improvement over the original captured image and a better display effect, avoiding the situation in which, because the quality of the original captured image is low, a composite obtained only from the originally captured frames cannot significantly improve image quality.
  • an image synthesizing device 40 including: a photographing portion 401, an evaluation portion 402, a query portion 403, and a synthesizing portion 404;
  • the photographing portion 401 is configured to capture at least one frame of the captured image for the target object
  • the evaluation portion 402 is configured to evaluate the image quality of the captured image, and to extract the target object sub-image from the captured image when the image quality evaluation value of the captured image is lower than a set first quality evaluation threshold;
  • the querying portion 403 is configured to query a preset correspondence between historical sub-images and attribute parameters based on the attribute parameters of the target object sub-image to obtain a historical sub-image for synthesis, wherein the image quality of the historical sub-image for synthesis is higher than the image quality of the target object sub-image;
  • the synthesizing portion 404 is configured to synthesize the historical sub-image for synthesis and the acquired image to obtain a synthesized image.
  • the evaluation portion 402 is configured to evaluate the degree of blur or the signal to noise ratio of the acquired image
  • the image quality evaluation value comprising: a sharpness evaluation value of the collected image, the first quality evaluation threshold comprising a sharpness threshold;
  • the image quality evaluation value includes a signal to noise ratio value of the acquired image
  • the first quality evaluation threshold includes a signal to noise ratio threshold
  • the evaluation portion 402 is configured to extract the target object sub-image from the acquired image based on a set target detection algorithm.
  • the querying portion 403 is configured to obtain the absolute value of the difference between each attribute parameter value of the target object sub-image and the corresponding attribute parameter value of each preset historical sub-image; to perform a weighted summation of these absolute differences using the weights set for the attribute parameter values, obtaining a similarity evaluation value that characterizes the degree of similarity between each historical sub-image and the target object sub-image; and to take the historical sub-image with the smallest similarity evaluation value as the historical sub-image for synthesis.
  • The attribute parameters of the target object sub-image include at least one of the following: an identifier of the current target object, three-dimensional (3D) information of the current target object, the depth of field of the current target object, color temperature information of the current target object sub-image, and expression information of the current target object.
  • The attribute parameters corresponding to the historical sub-image include at least one of the following: an identifier of the historical target object, 3D information of the historical target object, the depth of field of the historical target object, color temperature information of the historical target object sub-image, and expression information of the historical target object.
  • the apparatus 40 further includes: an obtaining portion 405 configured to: when the historical target object is photographed, when the image quality evaluation value of the historical sub-image is higher than the set second quality evaluation threshold And acquiring an attribute parameter value corresponding to the historical sub-image; wherein the second quality evaluation threshold is higher than the first quality evaluation threshold.
  • the obtaining part 405 is configured as at least one of the following:
  • the synthesizing portion 404 is configured to synthesize the historical sub-image with the target object region in the acquired image, and then combine the non-target object region in the acquired image to obtain the synthesized An image; or, replacing the historical sub-image with the target object region in the acquired image to obtain the synthesized image.
  • In this embodiment, a "part" may be a partial circuit, a partial processor, a partial program or software, and so on; it may of course also be a unit, and may be a module or be non-modular.
  • each component in this embodiment may be integrated into one processing unit, or each unit may exist physically separately, or two or more units may be integrated into one unit.
  • the above integrated unit can be implemented in the form of hardware or in the form of a software function module.
  • the integrated unit may be stored in a computer readable storage medium if it is implemented in the form of a software function module and is not sold or used as a stand-alone product.
  • Based on such an understanding, the technical solution of this embodiment, in essence, or the part contributing to the prior art, or all or part of the technical solution, can be embodied in the form of a software product stored in a storage medium, including several instructions for causing a computer device (which may be a personal computer, a server, a network device, or the like) or a processor to perform all or part of the steps of the method described in this embodiment.
  • the foregoing storage medium includes: a U disk, a mobile hard disk, a read only memory (ROM), a random access memory (RAM), a magnetic disk, or an optical disk, and the like, which can store program codes.
  • the embodiment provides a computer storage medium storing an image synthesis program, and the image synthesis program is implemented by at least one processor to implement the steps of the method described in the first embodiment.
  • Based on the image synthesizing device 40 and the computer storage medium described above, an electronic device may include the image synthesizing device 40. The electronic device may include various handheld devices with wireless communication functions, in-vehicle devices, wearable devices, computing devices, or other processing devices connected to a wireless modem, as well as various forms of User Equipment (UE), Mobile Stations (MS), terminal devices, and the like.
  • FIG. 6 shows a specific hardware structure of an electronic device 60, which may include a photographing device 601, a memory 602, and a processor 603, the components being coupled together by a bus system 604.
  • It can be understood that the bus system 604 is used to implement connection and communication between these components; in addition to a data bus, it includes a power bus, a control bus, and a status signal bus. For clarity, however, the various buses are all labeled as the bus system 604 in FIG. 6. Among them:
  • the photographing device 601 is configured to capture at least one frame of the captured image for the target object
  • a memory 602 configured to store a computer program capable of running on the processor 603;
  • the processor 603 is configured, when running the computer program, to evaluate the image quality of the captured image, and extract the target object sub-image from the captured image when the image quality evaluation value of the captured image is lower than a set first quality evaluation threshold;
  • to query a preset correspondence between historical sub-images and attribute parameters based on the attribute parameters of the target object sub-image to obtain a historical sub-image for synthesis; and to synthesize the historical sub-image for synthesis with the captured image to obtain a synthesized image.
  • the memory 602 in the embodiments of the present application may be a volatile memory or a non-volatile memory, or may include both volatile and non-volatile memory.
  • The non-volatile memory may be a read-only memory (ROM), a programmable read-only memory (PROM), an erasable programmable read-only memory (EPROM), an electrically erasable programmable read-only memory (EEPROM), or a flash memory.
  • the volatile memory can be a Random Access Memory (RAM) that acts as an external cache.
  • By way of example but not limitation, many forms of RAM are available, such as static random access memory (SRAM), dynamic random access memory (DRAM), synchronous dynamic random access memory (SDRAM), double data rate synchronous dynamic random access memory (DDR SDRAM), enhanced synchronous dynamic random access memory (ESDRAM), synchlink dynamic random access memory (SLDRAM), and direct Rambus random access memory (DR RAM).
  • the processor 603 may be an integrated circuit chip with signal processing capabilities. In the implementation process, each step of the foregoing method may be completed by an integrated logic circuit of hardware in the processor 603 or an instruction in a form of software.
  • The processor 603 may be a general-purpose processor, a digital signal processor (DSP), an application-specific integrated circuit (ASIC), a field-programmable gate array (FPGA) or another programmable logic device, a discrete gate or transistor logic device, or a discrete hardware component.
  • the methods, steps, and logical block diagrams disclosed in the embodiments of the present application can be implemented or executed.
  • The general-purpose processor may be a microprocessor, or the processor may be any conventional processor or the like.
  • the steps of the method disclosed in the embodiments of the present application may be directly implemented by the hardware decoding processor, or may be performed by a combination of hardware and software modules in the decoding processor.
  • the software module can be located in a conventional storage medium such as random access memory, flash memory, read only memory, programmable read only memory or electrically erasable programmable memory, registers, and the like.
  • the storage medium is located in the memory 602, and the processor 603 reads the information in the memory 602 and completes the steps of the above method in combination with its hardware.
  • the embodiments described herein can be implemented in hardware, software, firmware, middleware, microcode, or a combination thereof.
  • The processing unit can be implemented in one or more application-specific integrated circuits (ASICs), digital signal processors (DSPs), digital signal processing devices (DSPDs), programmable logic devices (PLDs), field-programmable gate arrays (FPGAs), general-purpose processors, controllers, microcontrollers, microprocessors, other electronic units for performing the functions described herein, or a combination thereof.
  • the techniques described herein can be implemented by modules (eg, procedures, functions, and so on) that perform the functions described herein.
  • the software code can be stored in memory and executed by the processor.
  • the memory can be implemented in the processor or external to the processor.
  • the processor 603 is further configured to perform the steps of the method in the foregoing embodiment when the computer program is executed, and details are not described herein again.
  • embodiments of the present application can be provided as a method, system, or computer program product. Accordingly, the application can take the form of a hardware embodiment, a software embodiment, or an embodiment in combination with software and hardware. Moreover, the application can take the form of a computer program product embodied on one or more computer-usable storage media (including but not limited to disk storage and optical storage, etc.) including computer usable program code.
  • the computer program instructions can also be stored in a computer readable memory that can direct a computer or other programmable data processing device to operate in a particular manner, such that the instructions stored in the computer readable memory produce an article of manufacture comprising the instruction device.
  • the instruction apparatus implements the functions specified in one or more flows of the flowchart and/or one or more blocks of the block diagram.
  • these computer program instructions can also be loaded onto a computer or other programmable data processing device, such that a series of operational steps are performed on the computer or other programmable device to produce computer-implemented processing.
  • the instructions executed on the computer or other programmable device thereby provide steps for implementing the functions specified in one or more flows of the flowchart and/or one or more blocks of the block diagram.

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Health & Medical Sciences (AREA)
  • Biomedical Technology (AREA)
  • General Health & Medical Sciences (AREA)
  • Image Processing (AREA)
  • Processing Or Creating Images (AREA)

Abstract

Embodiments of the present application disclose an image synthesis method and apparatus, a computer storage medium, and an electronic device. The method may include: capturing at least one frame of a captured image of a target object; evaluating the image quality of the captured image, and when the image quality evaluation value of the captured image is below a set first quality evaluation threshold, extracting a target object sub-image from the captured image; querying a preset correspondence between historical sub-images and attribute parameters based on the attribute parameters of the target object sub-image to obtain a historical sub-image for synthesis; and synthesizing the historical sub-image for synthesis with the captured image to obtain a synthesized image.

Description

Image synthesis method and apparatus, computer storage medium, and electronic device
Cross-reference to related applications
This application is based on, and claims priority to, Chinese patent application No. 201810210884.1, filed on March 14, 2018, the entire contents of which are incorporated herein by reference.
Technical field
The present application relates to the field of image processing technology, and in particular to an image synthesis method and apparatus, a computer storage medium, and an electronic device.
Background
Currently, more and more smart terminals are equipped with front-facing capture devices, such as front-facing cameras. Due to the size constraints of smart terminals, the lens of a front-facing camera cannot meet the requirements of a professional digital camera, so the quality of the images it can capture is bottlenecked. To improve the quality of captured images, multi-frame synthesis is usually used to alleviate problems such as heavy noise and lack of sharpness, thereby improving the display effect of the image.
With improvements in the image processing performance of smart terminals, 4 to 6 image frames can be synthesized. However, in certain scenarios, for example in dim ambient light, the quality of the captured frames is limited by the environment; therefore, even if 6 frames of limited quality are synthesized, the room for improvement in the resulting image quality is limited.
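As a sketch of the multi-frame synthesis mentioned above (background context only, not the claimed method): averaging N frames with independent noise reduces the noise standard deviation by roughly a factor of √N, which is why synthesizing a handful of frames helps but soon runs out of headroom. The frame count, scene value, and noise level below are illustrative placeholders:

```python
import numpy as np

rng = np.random.default_rng(0)
truth = np.full((64, 64), 128.0)  # idealized noise-free scene (placeholder values)

# Six captured frames, each corrupted with independent Gaussian noise (std = 10).
frames = [truth + rng.normal(0.0, 10.0, truth.shape) for _ in range(6)]

single_noise = float(np.std(frames[0] - truth))  # roughly the per-frame noise std

# Simple multi-frame fusion: pixel-wise average of the frames.
fused = np.mean(frames, axis=0)
fused_noise = float(np.std(fused - truth))       # roughly 10 / sqrt(6)
```

With 6 frames the residual noise is only cut to about 41% of the single-frame level, illustrating why the patent turns to higher-quality historical sub-images instead of more same-quality frames.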
Summary
Embodiments of the present application are expected to provide an image synthesis method and apparatus, a computer storage medium, and an electronic device that can noticeably improve image quality and enhance the display effect of an image.
The technical solutions of the embodiments of the present application are implemented as follows:
In a first aspect, an embodiment of the present application provides an image synthesis method, the method including:
capturing at least one frame of a captured image of a target object;
evaluating the image quality of the captured image, and when the image quality evaluation value of the captured image is below a set first quality evaluation threshold, extracting a target object sub-image from the captured image;
querying a preset correspondence between historical sub-images and attribute parameters based on the attribute parameters of the target object sub-image to obtain a historical sub-image for synthesis, wherein the image quality of the historical sub-image for synthesis is higher than the image quality of the target object sub-image;
synthesizing the historical sub-image for synthesis with the captured image to obtain a synthesized image.
In a second aspect, an embodiment of the present application further provides an image synthesis apparatus, the apparatus including: a capture part, an evaluation part, a query part, and a synthesis part; wherein,
the capture part is configured to capture at least one frame of a captured image of a target object;
the evaluation part is configured to evaluate the image quality of the captured image and, when the image quality evaluation value of the captured image is below a set first quality evaluation threshold, extract a target object sub-image from the captured image;
the query part is configured to query a preset correspondence between historical sub-images and attribute parameters based on the attribute parameters of the target object sub-image to obtain a historical sub-image for synthesis, wherein the image quality of the historical sub-image for synthesis is higher than the image quality of the target object sub-image;
the synthesis part is configured to synthesize the historical sub-image for synthesis with the captured image to obtain a synthesized image.
In a third aspect, an embodiment of the present application further provides a computer storage medium storing an image synthesis program which, when executed by at least one processor, implements the steps of the image synthesis method of the first aspect.
In a fourth aspect, an embodiment of the present application provides an electronic device, including: a capture device, a memory, and a processor, wherein,
the capture device is configured to capture at least one frame of a captured image of a target object;
the memory stores an image synthesis program;
the processor is configured to execute the image synthesis program to implement the steps of the image synthesis method of the first aspect.
Embodiments of the present application provide an image synthesis method and apparatus, a computer storage medium, and an electronic device. After the captured image of the target object is obtained, a historical sub-image whose attribute parameters are similar to those of the target object and whose image quality is higher than that of the target object sub-image is synthesized with the captured image, so that the synthesized image shows a noticeable improvement in image quality over the original captured image and the display effect of the image is enhanced. This avoids the situation where, when the original captured image is of low quality, a synthesized image obtained only from the original captured image cannot noticeably improve the image quality.
Brief description of the drawings
FIG. 1 is a schematic flowchart of an image synthesis method provided by an embodiment of the present application;
FIG. 2 is a schematic diagram of a terminal provided by an embodiment of the present application;
FIG. 3 is a schematic diagram of target object angles provided by an embodiment of the present application;
FIG. 4 is a schematic composition diagram of an image synthesis apparatus provided by an embodiment of the present application;
FIG. 5 is a schematic composition diagram of another image synthesis apparatus provided by an embodiment of the present application;
FIG. 6 is a schematic diagram of a specific hardware structure of an electronic device provided by an embodiment of the present application.
Detailed description
The technical solutions in the embodiments of the present application will be described clearly and completely below with reference to the accompanying drawings.
Referring to FIG. 1, which shows an image synthesis method provided by an embodiment of the present application, the method may be applied to a terminal having a capture device and may include:
S101: capturing at least one frame of a captured image of a target object;
S102: evaluating the image quality of the captured image, and when the image quality evaluation value of the captured image is below a set first quality evaluation threshold, extracting a target object sub-image from the captured image;
S103: querying a preset correspondence between historical sub-images and attribute parameters based on the attribute parameters of the target object sub-image to obtain a historical sub-image for synthesis, wherein the image quality of the historical sub-image for synthesis is higher than the image quality of the target object sub-image;
S104: synthesizing the historical sub-image for synthesis with the captured image to obtain a synthesized image.
Through the technical solution shown in FIG. 1, after the captured image of the target object is obtained, a historical sub-image whose attribute parameters are similar to those of the target object and whose image quality is higher than that of the target object sub-image is synthesized with the captured image, so that the synthesized image shows a noticeable improvement in image quality over the original captured image and the display effect of the image is enhanced. This avoids the situation where, when the original captured image is of low quality, a synthesized image obtained only from the original captured image cannot noticeably improve the image quality.
It should be noted that, taking the terminal shown in FIG. 2 and its front-facing capture device as an example, the front-facing camera is usually used for selfies; therefore, the target object being photographed is usually the user's face during a selfie, or the faces of the user and others when taking a selfie together. Usually, when the user shoots with the terminal's capture device, after the user presses the terminal's shutter button (a physical or virtual button), the capture device collects multiple image frames based on the current shooting environment and scene and synthesizes the collected frames, which can improve image quality and enhance the display effect. However, if the current shooting environment and scene cannot provide multiple high-quality captured frames, synthesizing them cannot noticeably improve image quality, and the improvement in display effect is also limited. Based on this, embodiments of the present application are expected to synthesize high-quality images from historical shooting sessions with the currently captured image, so that a captured image with noticeably improved quality can still be synthesized even when the external shooting environment and scene are very poor.
For the technical solution shown in FIG. 1, image quality may have corresponding evaluation criteria depending on the shooting scene and environment. For example, when the current shooting scene is shaking or in motion, the image will be blurred; in this case, image quality may be the degree of image blur evaluated from parameters such as image sharpness. When the current shooting environment is dim, for example at sunset, the signal-to-noise ratio will be too low, with heavy noise; in this case, image quality may be the image signal-to-noise ratio evaluated from the image's signal-to-noise value. Based on the above, in a possible implementation, evaluating the image quality of the captured image includes: evaluating the degree of blur or the signal-to-noise ratio of the captured image. Correspondingly, for evaluating the degree of blur of the captured image, the image quality evaluation value may include a sharpness evaluation value of the captured image, and the first quality evaluation threshold may include a sharpness threshold; for evaluating the signal-to-noise ratio of the captured image, the image quality evaluation value may include the signal-to-noise value of the captured image, and the first quality evaluation threshold may include a signal-to-noise threshold.
Optionally, in the above implementation, a corresponding first quality evaluation threshold may be set for each image quality evaluation criterion, so that the quality of the captured image can be concretely quantified. For example, when the image quality evaluation value of the captured image is below the first quality evaluation threshold, the image quality of the captured image can be characterized as "poor".
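The blur-based branch of this quantification can be sketched as follows. The variance-of-Laplacian focus measure is one common sharpness proxy; it is an assumption here, since the patent does not fix a specific sharpness metric, and the threshold value is a placeholder:

```python
import numpy as np

def sharpness_score(image: np.ndarray) -> float:
    """Variance of the discrete 4-neighbour Laplacian: higher means sharper edges."""
    img = image.astype(np.float64)
    # Laplacian over the interior pixels, computed with array shifts.
    lap = (-4.0 * img[1:-1, 1:-1]
           + img[:-2, 1:-1] + img[2:, 1:-1]
           + img[1:-1, :-2] + img[1:-1, 2:])
    return float(lap.var())

def is_low_quality(image: np.ndarray, sharpness_threshold: float) -> bool:
    """True when the evaluation value falls below the first quality evaluation threshold."""
    return sharpness_score(image) < sharpness_threshold
```

A flat (featureless, blurred) image scores near zero while a high-contrast one scores high, so comparing the score against a calibrated first threshold separates "poor" frames from acceptable ones.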
After concluding via the above solution that the image quality of the captured image is "poor", synthesizing with "poor"-quality captured images still cannot noticeably improve image quality. To address this, the present embodiment fuses a higher-quality historical sub-image with the target object sub-image in the captured image, which can noticeably improve the quality of the synthesized image. Therefore, extracting the target object sub-image from the captured image when its image quality evaluation value is below the set first quality evaluation threshold may include: extracting the target object sub-image from the captured image based on a set target detection algorithm. Specifically, the target object may be identified in the captured image by means such as edge detection and feature extraction, and the target object sub-images extracted in turn.
After extraction, the attribute parameters of the target object sub-image may be acquired. In an embodiment of the present application, the attribute parameters of the target object sub-image may include at least one of the following: an identifier of the current target object, three-dimensional (3D) information of the current target object, the depth of field of the current target object, color temperature information of the current target object sub-image, and expression information of the current target object. Correspondingly, the attribute parameters corresponding to a historical sub-image may likewise include at least one of the following: an identifier of the historical target object, three-dimensional (3D) information of the historical target object, the depth of field of the historical target object, color temperature information of the historical target object sub-image, and expression information of the historical target object. It can be understood that the image quality of the historical sub-image is higher than that of the target object sub-image, so that using the historical sub-image for synthesis can noticeably improve the image quality of the currently "poor"-quality captured image.
Based on the above attribute parameters, in a possible implementation, querying the preset correspondence between historical sub-images and attribute parameters based on the attribute parameters of the target object sub-image to obtain a historical sub-image for synthesis includes: acquiring the absolute differences between the attribute parameter values of the target object sub-image and the corresponding attribute parameter values of the preset historical sub-images; performing a weighted sum of the absolute differences of the corresponding attribute parameters based on set weight values corresponding to the attribute parameter values, to obtain a similarity evaluation value characterizing the degree of similarity between the historical sub-image and the target object sub-image; and taking the historical sub-image with the smallest similarity evaluation value as the historical sub-image for synthesis.
In a specific implementation, during historical shooting the terminal may save higher-quality historical sub-images of historical target objects together with the attribute parameter values corresponding to each historical sub-image, so that after the target object sub-image is obtained, the historical sub-image closest to it can be selected for image synthesis in the above manner. This avoids an excessive display difference between the synthesized image and the original captured image.
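The lookup described above can be sketched as a weighted nearest-neighbor search over stored attribute records; the attribute names and weight values below are hypothetical, since the patent leaves the concrete parameters and weights unspecified:

```python
def similarity_score(current: dict, historical: dict, weights: dict) -> float:
    """Weighted sum of absolute attribute differences; smaller means more similar."""
    return sum(w * abs(current[k] - historical[k]) for k, w in weights.items())

def pick_historical(current: dict, database: list, weights: dict) -> dict:
    """Return the stored record whose similarity evaluation value is smallest."""
    return min(database, key=lambda rec: similarity_score(current, rec["attrs"], weights))
```

For example, with hypothetical attributes such as depth of field, color temperature, and shooting angle, the record with the minimum weighted difference is chosen for synthesis.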
Therefore, in a possible implementation, before the technical solution shown in FIG. 1, a database of historical sub-images and their attribute parameters may also be established; that is, before capturing at least one frame of a captured image of the target object as described in step S101, the method further includes:
during the shooting of a historical target object, when the image quality evaluation value of a historical sub-image is above a set second quality evaluation threshold, acquiring the attribute parameter values corresponding to the historical sub-image.
It can be understood that the second quality evaluation threshold serves the same purpose as the first quality evaluation threshold, namely a quantified evaluation of image quality, but the second quality evaluation threshold is higher than the first quality evaluation threshold. This means that the second quality evaluation threshold can be used to characterize "excellent" image quality. Specifically, the second quality evaluation threshold may be of the same type as the first quality evaluation threshold, possibly differing only in its specific value.
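A minimal sketch of this gated database entry, assuming a normalized quality score and a placeholder second threshold (the patent fixes neither):

```python
def maybe_store(record_db: list, sub_image_attrs: dict, quality: float,
                second_threshold: float) -> bool:
    """Store attribute values only for sub-images whose quality evaluation value
    exceeds the second quality evaluation threshold ("excellent" frames)."""
    if quality > second_threshold:
        record_db.append(dict(sub_image_attrs, quality=quality))
        return True
    return False
```

Frames below the gate are simply discarded, so the database accumulates only sub-images good enough to improve a later "poor" capture.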
On this basis, in terms of the items included in the aforementioned attribute parameters, acquiring the attribute parameter values corresponding to the historical sub-image specifically includes at least one of the following:
determining the identifier of the historical target object based on feature point information of the historical target object in the historical sub-image; it can be understood that, taking a face as the historical target object, multiple feature points on the face may be obtained with a face recognition algorithm, and since different faces correspond to different feature points, the feature points of each face can serve as an identifier distinguishing different faces;
determining the 3D information of the historical target object based on orientation information of the historical target object in the historical sub-image; specifically, angle information is a common kind of 3D information, and referring to FIG. 3, it can be seen that the target object appears differently at different shooting angles; this information can be acquired in various ways. For example, structured light technology may be used, projecting a grating or line light source onto the measured object and demodulating its three-dimensional information from the resulting distortion. Time-of-flight technology may also be used, in which a sensor emits modulated near-infrared light and, after it is reflected by an object, the distance to the photographed scene is computed from the time or phase difference between emission and reflection, producing depth information that, combined with conventional camera capture, presents the object's three-dimensional contour as a topographic map in which different colors represent different distances. Traditional algorithms such as the Scale Invariant Feature Transform (SIFT) may also be used to compute descriptor vectors for feature points under different scales, orientations, and lighting conditions;
determining the depth-of-field information of the historical target object based on the size of the historical target object in the historical sub-image; it can be understood that when the same capture device shoots the same target at the same depth of field, the target image should in theory be the same size, and synthesizing with a historical target object of the same depth of field reduces differences in image appearance and improves the display effect;
acquiring the color temperature information of the historical target object in the historical sub-image; it can be understood that, in image synthesis, if the two images being synthesized were shot under very different color temperatures, the color transitions in the final synthesized image will look unnatural; therefore, color temperature information is also an attribute parameter to consider when selecting historical sub-images;
acquiring the expression information of the historical target object in the historical sub-image; taking a face as the historical target object, facial expressions may be recorded so that, when selecting a historical sub-image, one with an expression close to that of the current target object is used, which also avoids an excessive display difference between the synthesized image and the original captured image.
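The size-based depth estimate in the third item can be sketched with a pinhole-camera approximation, under which apparent width is inversely proportional to distance; the reference width and reference distance are assumed calibration values for illustration, not values from the patent:

```python
def estimate_depth(face_width_px: float,
                   ref_width_px: float = 200.0,  # face width at the reference distance (assumed calibration)
                   ref_depth_m: float = 0.5) -> float:
    """Pinhole-camera approximation: apparent width is proportional to 1 / depth,
    so depth scales as ref_depth * ref_width / observed_width."""
    return ref_depth_m * ref_width_px / face_width_px
```

A face half as wide as the calibrated reference is then estimated to be twice as far away, giving a comparable depth attribute for both current and historical sub-images.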
It can be understood that establishing the historical sub-image database through the above implementation is a dynamic process: as the number of the user's shots grows, the database becomes more complete, and historical sub-images increasingly close to the current target object appear in it.
For the technical solution shown in FIG. 1, in a possible implementation, synthesizing the historical sub-image for synthesis with the captured image in S104 to obtain the synthesized image may include: synthesizing the historical sub-image with the target object region in the captured image and then combining the result with the non-target-object region in the captured image to obtain the synthesized image; or replacing the target object region in the captured image with the historical sub-image to obtain the synthesized image.
It can be understood that, since the quality of the historical sub-image is much higher than that of the captured image, fusing the historical sub-image with the captured image can greatly improve the quality of the captured image. However, when the captured image is excessively blurred and the historical target object in the historical sub-image is highly similar to the current target object, the historical image may also directly replace the target object region in the captured image, which also improves image quality while avoiding an excessive display difference.
In addition, for the non-target-object region of the captured image, multiple captured frames may be fused and then synthesized with the historical sub-image in the above manner.
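The two synthesis modes (blend the target object region, or replace it outright) can be sketched as one function, where alpha = 1.0 corresponds to outright replacement; the blending weight is an illustrative choice, not specified by the patent:

```python
import numpy as np

def synthesize(captured: np.ndarray, historical_patch: np.ndarray,
               region: tuple, alpha: float = 0.7) -> np.ndarray:
    """Blend the historical sub-image into the target object region of the
    captured image; alpha = 1.0 replaces the region entirely."""
    top, left = region
    h, w = historical_patch.shape[:2]
    out = captured.astype(np.float64).copy()
    out[top:top + h, left:left + w] = (
        alpha * historical_patch
        + (1.0 - alpha) * out[top:top + h, left:left + w])
    return out.astype(captured.dtype)
```

The non-target-object region is left untouched here; in practice it would first be produced by fusing the multiple captured frames as described above.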
Through the image synthesis method provided by this embodiment, after the captured image of the target object is obtained, a historical sub-image whose attribute parameters are similar to those of the target object and whose image quality is higher than that of the target object sub-image is synthesized with the captured image, so that the synthesized image shows a noticeable improvement in image quality over the original captured image and the display effect of the image is enhanced. This avoids the situation where, when the original captured image is of low quality, a synthesized image obtained only from the original captured image cannot noticeably improve the image quality.
Based on the same concept as the foregoing embodiments, referring to FIG. 4, which shows an image synthesis apparatus 40 provided by an embodiment of the present application, including: a capture part 401, an evaluation part 402, a query part 403, and a synthesis part 404; wherein,
the capture part 401 is configured to capture at least one frame of a captured image of a target object;
the evaluation part 402 is configured to evaluate the image quality of the captured image and, when the image quality evaluation value of the captured image is below a set first quality evaluation threshold, extract a target object sub-image from the captured image;
the query part 403 is configured to query a preset correspondence between historical sub-images and attribute parameters based on the attribute parameters of the target object sub-image to obtain a historical sub-image for synthesis, wherein the image quality of the historical sub-image for synthesis is higher than the image quality of the target object sub-image;
the synthesis part 404 is configured to synthesize the historical sub-image for synthesis with the captured image to obtain a synthesized image.
In the above solution, the evaluation part 402 is configured to evaluate the degree of blur or the signal-to-noise ratio of the captured image;
correspondingly, for evaluating the degree of blur of the captured image, the image quality evaluation value includes a sharpness evaluation value of the captured image, and the first quality evaluation threshold includes a sharpness threshold;
correspondingly, for evaluating the signal-to-noise ratio of the captured image, the image quality evaluation value includes the signal-to-noise value of the captured image, and the first quality evaluation threshold includes a signal-to-noise threshold.
In the above solution, the evaluation part 402 is configured to extract the target object sub-image from the captured image based on a set target detection algorithm.
In the above solution, the query part 403 is configured to acquire the absolute differences between the attribute parameter values of the target object sub-image and the corresponding attribute parameter values of the preset historical sub-images; perform a weighted sum of the absolute differences of the corresponding attribute parameters based on set weight values corresponding to the attribute parameter values, to obtain a similarity evaluation value characterizing the degree of similarity between the historical sub-image and the target object sub-image; and take the historical sub-image with the smallest similarity evaluation value as the historical sub-image for synthesis.
In the above solution, the attribute parameters of the target object sub-image include at least one of the following: an identifier of the current target object, three-dimensional (3D) information of the current target object, the depth of field of the current target object, color temperature information of the current target object sub-image, and expression information of the current target object;
the attribute parameters corresponding to the historical sub-image include at least one of the following: an identifier of the historical target object, three-dimensional (3D) information of the historical target object, the depth of field of the historical target object, color temperature information of the historical target object sub-image, and expression information of the historical target object.
In the above solution, referring to FIG. 5, the apparatus 40 further includes: an acquisition part 405, configured to, during the shooting of a historical target object, acquire the attribute parameter values corresponding to a historical sub-image when its image quality evaluation value is above a set second quality evaluation threshold; wherein the second quality evaluation threshold is higher than the first quality evaluation threshold.
In the above solution, the acquisition part 405 is configured for at least one of the following:
determining the identifier of the historical target object based on feature point information of the historical target object in the historical sub-image;
determining the 3D information of the historical target object based on orientation information of the historical target object in the historical sub-image;
determining the depth-of-field information of the historical target object based on the size of the historical target object in the historical sub-image;
acquiring the color temperature information of the historical target object in the historical sub-image;
acquiring the expression information of the historical target object in the historical sub-image.
In the above solution, the synthesis part 404 is configured to synthesize the historical sub-image with the target object region in the captured image and then combine the result with the non-target-object region in the captured image to obtain the synthesized image; or to replace the target object region in the captured image with the historical sub-image to obtain the synthesized image.
It can be understood that, in this embodiment, a "part" may be part of a circuit, part of a processor, part of a program or software, and so on; it may of course also be a unit, and may be modular or non-modular.
In addition, the components in this embodiment may be integrated in one processing unit, or each unit may exist physically on its own, or two or more units may be integrated in one unit. The integrated unit may be implemented in the form of hardware or in the form of a software functional module.
If the integrated unit is implemented in the form of a software functional module and is not sold or used as an independent product, it may be stored in a computer-readable storage medium. Based on this understanding, the technical solution of this embodiment in essence, or the part contributing to the prior art, or all or part of the technical solution, may be embodied in the form of a software product. The computer software product is stored in a storage medium and includes several instructions for causing a computer device (which may be a personal computer, a server, a network device, or the like) or a processor to perform all or part of the steps of the method described in this embodiment. The aforementioned storage medium includes various media that can store program code, such as a USB flash drive, a removable hard disk, a read-only memory (ROM), a random access memory (RAM), a magnetic disk, or an optical disc.
Therefore, this embodiment provides a computer storage medium storing an image synthesis program which, when executed by at least one processor, implements the steps of the method described in the first embodiment above.
Based on the above image synthesis apparatus 40 and computer storage medium, an embodiment of the present application provides an electronic device. The electronic device may include the aforementioned image synthesis apparatus 40 and may include various handheld devices with wireless communication capability, vehicle-mounted devices, wearable devices, computing devices or other processing devices connected to a wireless modem, as well as various forms of user equipment (UE), mobile stations (MS), terminal devices, and so on.
Based on this, referring to FIG. 6, which shows a specific hardware structure of an electronic device 60 provided by an embodiment of the present application, it may include: a capture device 601, a memory 602, and a processor 603, the components being coupled together via a bus system 604. It can be understood that the bus system 604 is used to implement connection and communication between these components. Besides a data bus, the bus system 604 includes a power bus, a control bus, and a status signal bus. For clarity of illustration, however, the various buses are all labeled as the bus system 604 in FIG. 6. Wherein,
the capture device 601 is configured to capture at least one frame of a captured image of a target object;
the memory 602 is configured to store a computer program executable on the processor 603;
the processor 603 is configured to, when running the computer program, perform: evaluating the image quality of the captured image and, when the image quality evaluation value of the captured image is below a set first quality evaluation threshold, extracting a target object sub-image from the captured image;
querying a preset correspondence between historical sub-images and attribute parameters based on the attribute parameters of the target object sub-image to obtain a historical sub-image for synthesis, wherein the image quality of the historical sub-image for synthesis is higher than the image quality of the target object sub-image;
synthesizing the historical sub-image for synthesis with the captured image to obtain a synthesized image.
It can be understood that the memory 602 in the embodiments of the present application may be a volatile memory or a non-volatile memory, or may include both. The non-volatile memory may be a read-only memory (ROM), a programmable ROM (PROM), an erasable PROM (EPROM), an electrically erasable PROM (EEPROM), or flash memory. The volatile memory may be a random access memory (RAM), which is used as an external cache. By way of example and not limitation, many forms of RAM are available, such as static RAM (SRAM), dynamic RAM (DRAM), synchronous DRAM (SDRAM), double data rate SDRAM (DDR SDRAM), enhanced SDRAM (ESDRAM), synchlink DRAM (SLDRAM), and direct Rambus RAM (DRRAM). The memory 602 of the systems and methods described herein is intended to include, without limitation, these and any other suitable types of memory.
The processor 603 may be an integrated circuit chip with signal processing capability. In implementation, the steps of the above method may be completed by integrated logic circuits of hardware in the processor 603 or by instructions in the form of software. The above processor 603 may be a general-purpose processor, a digital signal processor (DSP), an application-specific integrated circuit (ASIC), a field-programmable gate array (FPGA) or other programmable logic device, a discrete gate or transistor logic device, or a discrete hardware component, and may implement or perform the methods, steps, and logical block diagrams disclosed in the embodiments of the present application. A general-purpose processor may be a microprocessor, or the processor may be any conventional processor or the like. The steps of the methods disclosed in the embodiments of the present application may be directly embodied as being executed by a hardware decoding processor, or executed by a combination of hardware and software modules in a decoding processor. The software module may be located in a storage medium mature in the art, such as a random access memory, flash memory, read-only memory, programmable read-only memory, electrically erasable programmable memory, or a register. The storage medium is located in the memory 602, and the processor 603 reads the information in the memory 602 and completes the steps of the above method in combination with its hardware.
It can be understood that the embodiments described herein may be implemented in hardware, software, firmware, middleware, microcode, or a combination thereof. For a hardware implementation, the processing unit may be implemented in one or more application-specific integrated circuits (ASICs), digital signal processors (DSPs), digital signal processing devices (DSPDs), programmable logic devices (PLDs), field-programmable gate arrays (FPGAs), general-purpose processors, controllers, microcontrollers, microprocessors, other electronic units for performing the functions described herein, or a combination thereof.
For a software implementation, the techniques described herein may be implemented by modules (e.g., procedures, functions, and so on) that perform the functions described herein. The software code may be stored in memory and executed by a processor. The memory may be implemented within the processor or external to the processor.
Optionally, as another embodiment, the processor 603 is further configured to perform the steps of the method in the foregoing embodiments when running the computer program, and details are not described herein again.
Those skilled in the art should understand that the embodiments of the present application may be provided as a method, a system, or a computer program product. Accordingly, the present application may take the form of a hardware embodiment, a software embodiment, or an embodiment combining software and hardware. Moreover, the present application may take the form of a computer program product embodied on one or more computer-usable storage media (including but not limited to disk storage, optical storage, and the like) containing computer-usable program code.
The present application is described with reference to flowcharts and/or block diagrams of the method, device (system), and computer program product according to the embodiments of the present application. It should be understood that each flow and/or block in the flowcharts and/or block diagrams, and combinations of flows and/or blocks in the flowcharts and/or block diagrams, may be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general-purpose computer, a special-purpose computer, an embedded processor, or another programmable data processing device to produce a machine, so that the instructions executed by the processor of the computer or other programmable data processing device produce an apparatus for implementing the functions specified in one or more flows of the flowchart and/or one or more blocks of the block diagram.
These computer program instructions may also be stored in a computer-readable memory capable of directing a computer or another programmable data processing device to operate in a particular manner, so that the instructions stored in the computer-readable memory produce an article of manufacture including an instruction apparatus that implements the functions specified in one or more flows of the flowchart and/or one or more blocks of the block diagram.
These computer program instructions may also be loaded onto a computer or another programmable data processing device, so that a series of operational steps are performed on the computer or other programmable device to produce computer-implemented processing, whereby the instructions executed on the computer or other programmable device provide steps for implementing the functions specified in one or more flows of the flowchart and/or one or more blocks of the block diagram.
The above are merely preferred embodiments of the present application and are not intended to limit the protection scope of the present application.

Claims (18)

  1. An image synthesis method, the method comprising:
    capturing at least one frame of a captured image of a target object;
    evaluating the image quality of the captured image, and when the image quality evaluation value of the captured image is below a set first quality evaluation threshold, extracting a target object sub-image from the captured image;
    querying a preset correspondence between historical sub-images and attribute parameters based on attribute parameters of the target object sub-image to obtain a historical sub-image for synthesis, wherein the image quality of the historical sub-image for synthesis is higher than the image quality of the target object sub-image;
    synthesizing the historical sub-image for synthesis with the captured image to obtain a synthesized image.
  2. The method according to claim 1, wherein evaluating the image quality of the captured image comprises: evaluating a degree of blur or a signal-to-noise ratio of the captured image;
    correspondingly, for evaluating the degree of blur of the captured image, the image quality evaluation value comprises a sharpness evaluation value of the captured image, and the first quality evaluation threshold comprises a sharpness threshold;
    correspondingly, for evaluating the signal-to-noise ratio of the captured image, the image quality evaluation value comprises a signal-to-noise value of the captured image, and the first quality evaluation threshold comprises a signal-to-noise threshold.
  3. The method according to claim 1, wherein extracting the target object sub-image from the captured image when the image quality evaluation value of the captured image is below the set first quality evaluation threshold comprises: extracting the target object sub-image from the captured image based on a set target detection algorithm.
  4. The method according to claim 1, wherein querying the preset correspondence between historical sub-images and attribute parameters based on the attribute parameters of the target object sub-image to obtain the historical sub-image for synthesis comprises:
    acquiring absolute differences between attribute parameter values of the target object sub-image and corresponding attribute parameter values of preset historical sub-images;
    performing a weighted sum of the absolute differences of the corresponding attribute parameters based on set weight values corresponding to the attribute parameter values, to obtain a similarity evaluation value characterizing a degree of similarity between the historical sub-image and the target object sub-image;
    taking the historical sub-image with the smallest similarity evaluation value as the historical sub-image for synthesis.
  5. The method according to claim 1, wherein the attribute parameters of the target object sub-image comprise at least one of the following: an identifier of the current target object, three-dimensional (3D) information of the current target object, a depth of field of the current target object, color temperature information of the current target object sub-image, and expression information of the current target object;
    the attribute parameters corresponding to the historical sub-image comprise at least one of the following: an identifier of the historical target object, three-dimensional information of the historical target object, a depth of field of the historical target object, color temperature information of the historical target object sub-image, and expression information of the historical target object.
  6. The method according to claim 5, wherein before capturing the at least one frame of the captured image of the target object, the method further comprises:
    during shooting of a historical target object, when an image quality evaluation value of a historical sub-image is above a set second quality evaluation threshold, acquiring attribute parameter values corresponding to the historical sub-image; wherein the second quality evaluation threshold is higher than the first quality evaluation threshold.
  7. The method according to claim 6, wherein acquiring the attribute parameter values corresponding to the historical sub-image specifically comprises at least one of the following:
    determining the identifier of the historical target object based on feature point information of the historical target object in the historical sub-image;
    determining the 3D information of the historical target object based on orientation information of the historical target object in the historical sub-image;
    determining depth-of-field information of the historical target object based on a size of the historical target object in the historical sub-image;
    acquiring the color temperature information of the historical target object in the historical sub-image;
    acquiring the expression information of the historical target object in the historical sub-image.
  8. The method according to claim 1, wherein synthesizing the historical sub-image for synthesis with the captured image to obtain the synthesized image comprises:
    synthesizing the historical sub-image with a target object region in the captured image and then combining the result with a non-target-object region in the captured image to obtain the synthesized image;
    or, replacing the target object region in the captured image with the historical sub-image to obtain the synthesized image.
  9. An image synthesis apparatus, the apparatus comprising: a capture part, an evaluation part, a query part, and a synthesis part; wherein,
    the capture part is configured to capture at least one frame of a captured image of a target object;
    the evaluation part is configured to evaluate the image quality of the captured image and, when the image quality evaluation value of the captured image is below a set first quality evaluation threshold, extract a target object sub-image from the captured image;
    the query part is configured to query a preset correspondence between historical sub-images and attribute parameters based on attribute parameters of the target object sub-image to obtain a historical sub-image for synthesis, wherein the image quality of the historical sub-image for synthesis is higher than the image quality of the target object sub-image;
    the synthesis part is configured to synthesize the historical sub-image for synthesis with the captured image to obtain a synthesized image.
  10. The apparatus according to claim 9, wherein the evaluation part is configured to evaluate a degree of blur or a signal-to-noise ratio of the captured image;
    correspondingly, for evaluating the degree of blur of the captured image, the image quality evaluation value comprises a sharpness evaluation value of the captured image, and the first quality evaluation threshold comprises a sharpness threshold;
    correspondingly, for evaluating the signal-to-noise ratio of the captured image, the image quality evaluation value comprises a signal-to-noise value of the captured image, and the first quality evaluation threshold comprises a signal-to-noise threshold.
  11. The apparatus according to claim 9, wherein the evaluation part is configured to extract the target object sub-image from the captured image based on a set target detection algorithm.
  12. The apparatus according to claim 9, wherein the query part is configured to acquire absolute differences between attribute parameter values of the target object sub-image and corresponding attribute parameter values of preset historical sub-images; perform a weighted sum of the absolute differences of the corresponding attribute parameters based on set weight values corresponding to the attribute parameter values, to obtain a similarity evaluation value characterizing a degree of similarity between the historical sub-image and the target object sub-image; and take the historical sub-image with the smallest similarity evaluation value as the historical sub-image for synthesis.
  13. The apparatus according to claim 9, wherein the attribute parameters of the target object sub-image comprise at least one of the following: an identifier of the current target object, three-dimensional (3D) information of the current target object, a depth of field of the current target object, color temperature information of the current target object sub-image, and expression information of the current target object;
    the attribute parameters corresponding to the historical sub-image comprise at least one of the following: an identifier of the historical target object, three-dimensional information of the historical target object, a depth of field of the historical target object, color temperature information of the historical target object sub-image, and expression information of the historical target object.
  14. The apparatus according to claim 13, wherein the apparatus further comprises: an acquisition part, configured to, during shooting of a historical target object, acquire attribute parameter values corresponding to a historical sub-image when an image quality evaluation value of the historical sub-image is above a set second quality evaluation threshold; wherein the second quality evaluation threshold is higher than the first quality evaluation threshold.
  15. The apparatus according to claim 14, wherein the acquisition part is configured for at least one of the following:
    determining the identifier of the historical target object based on feature point information of the historical target object in the historical sub-image;
    determining the 3D information of the historical target object based on orientation information of the historical target object in the historical sub-image;
    determining depth-of-field information of the historical target object based on a size of the historical target object in the historical sub-image;
    acquiring the color temperature information of the historical target object in the historical sub-image;
    acquiring the expression information of the historical target object in the historical sub-image.
  16. The apparatus according to claim 9, wherein the synthesis part is configured to synthesize the historical sub-image with a target object region in the captured image and then combine the result with a non-target-object region in the captured image to obtain the synthesized image; or to replace the target object region in the captured image with the historical sub-image to obtain the synthesized image.
  17. A computer storage medium, the computer storage medium storing an image synthesis program which, when executed by at least one processor, implements the steps of the image synthesis method according to any one of claims 1 to 8.
  18. An electronic device, comprising: a capture device, a memory, and a processor, wherein,
    the capture device is configured to capture at least one frame of a captured image of a target object;
    the memory stores an image synthesis program;
    the processor is configured to execute the image synthesis program to implement the steps of the image synthesis method according to any one of claims 1 to 8.
PCT/CN2019/077659 2018-03-14 2019-03-11 Image synthesis method and apparatus, computer storage medium, and electronic device WO2019174544A1 (zh)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN201810210884.1 2018-03-14
CN201810210884.1A CN108449543A (zh) 2018-03-14 2018-03-14 Image synthesis method and apparatus, computer storage medium, and electronic device

Publications (1)

Publication Number Publication Date
WO2019174544A1 true WO2019174544A1 (zh) 2019-09-19

Family

ID=63195111

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2019/077659 WO2019174544A1 (zh) 2019-03-11 2018-03-14 Image synthesis method and apparatus, computer storage medium, and electronic device

Country Status (2)

Country Link
CN (1) CN108449543A (zh)
WO (1) WO2019174544A1 (zh)

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112019739A (zh) * 2020-08-03 2020-12-01 RealMe重庆移动通信有限公司 Shooting control method and apparatus, electronic device, and storage medium
CN113781499A (zh) * 2021-08-27 2021-12-10 上海微创医疗机器人(集团)股份有限公司 Medical endoscope state detection method, image processing method, robot control method, and system
CN114723672A (zh) * 2022-03-09 2022-07-08 杭州易现先进科技有限公司 Method, system, apparatus, and medium for 3D reconstruction data acquisition verification
CN116188440A (zh) * 2023-02-28 2023-05-30 聊城市红日机械配件厂 Production analysis and optimization method, device, and medium for bearing cages

Families Citing this family (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108449543A (zh) * 2018-03-14 2018-08-24 广东欧珀移动通信有限公司 Image synthesis method and apparatus, computer storage medium, and electronic device
CN109993737A (zh) * 2019-03-29 2019-07-09 联想(北京)有限公司 Processing method, device, and computer-readable storage medium
CN110072057B (zh) * 2019-05-14 2021-03-09 Oppo广东移动通信有限公司 Image processing method and related product
CN111842922A (zh) * 2020-06-04 2020-10-30 深圳市人工智能与机器人研究院 Material synthesis parameter adjustment method and apparatus, computer device, and storage medium
CN112269853B (zh) * 2020-11-16 2023-06-13 Oppo广东移动通信有限公司 Retrieval processing method, apparatus, and storage medium
CN116347220B (zh) * 2023-05-29 2023-07-21 合肥工业大学 Portrait shooting method and related device

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104662889A (zh) * 2012-09-25 2015-05-27 三星电子株式会社 Method and device for photographing in a portable terminal
CN104867112A (zh) * 2015-03-31 2015-08-26 小米科技有限责任公司 Photo processing method and apparatus
JP2015173407A (ja) * 2014-03-12 2015-10-01 有限会社デザインオフィス・シィ Subject composite image creation method, computer-readable recording medium storing a program for creating the subject composite image, and composite image provision method
CN106161933A (zh) * 2016-06-30 2016-11-23 维沃移动通信有限公司 Image processing method and mobile terminal
CN108449543A (zh) * 2018-03-14 2018-08-24 广东欧珀移动通信有限公司 Image synthesis method and apparatus, computer storage medium, and electronic device

Family Cites Families (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR100716977B1 (ko) * 2004-07-23 2007-05-10 삼성전자주식회사 Digital imaging device
CN103187083B (zh) * 2011-12-29 2016-04-13 深圳中兴力维技术有限公司 Storage method and system based on time-domain video fusion
CN104581386B (zh) * 2014-12-23 2017-11-07 深圳市九洲电器有限公司 Television program playback method and system
CN107231522A (zh) * 2017-05-04 2017-10-03 广东欧珀移动通信有限公司 Mobile terminal, photographing method thereof, and computer-readable storage medium
CN107155067B (zh) * 2017-07-10 2019-03-22 珠海市魅族科技有限公司 Photographing control method and apparatus, terminal, and storage medium
CN107610075A (zh) * 2017-08-29 2018-01-19 维沃移动通信有限公司 Image synthesis method and mobile terminal
CN107589963B (zh) * 2017-09-26 2019-05-17 维沃移动通信有限公司 Picture processing method, mobile terminal, and computer-readable storage medium

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104662889A (zh) * 2012-09-25 2015-05-27 三星电子株式会社 Method and device for photographing in a portable terminal
JP2015173407A (ja) * 2014-03-12 2015-10-01 有限会社デザインオフィス・シィ Subject composite image creation method, computer-readable recording medium storing a program for creating the subject composite image, and composite image provision method
CN104867112A (zh) * 2015-03-31 2015-08-26 小米科技有限责任公司 Photo processing method and apparatus
CN106161933A (zh) * 2016-06-30 2016-11-23 维沃移动通信有限公司 Image processing method and mobile terminal
CN108449543A (zh) * 2018-03-14 2018-08-24 广东欧珀移动通信有限公司 Image synthesis method and apparatus, computer storage medium, and electronic device

Cited By (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112019739A (zh) * 2020-08-03 2020-12-01 RealMe重庆移动通信有限公司 Shooting control method and apparatus, electronic device, and storage medium
CN113781499A (zh) * 2021-08-27 2021-12-10 上海微创医疗机器人(集团)股份有限公司 Medical endoscope state detection method, image processing method, robot control method, and system
CN114723672A (zh) * 2022-03-09 2022-07-08 杭州易现先进科技有限公司 Method, system, apparatus, and medium for 3D reconstruction data acquisition verification
CN116188440A (zh) * 2023-02-28 2023-05-30 聊城市红日机械配件厂 Production analysis and optimization method, device, and medium for bearing cages
CN116188440B (zh) * 2023-02-28 2023-08-29 聊城市红日机械配件厂 Production analysis and optimization method, device, and medium for bearing cages

Also Published As

Publication number Publication date
CN108449543A (zh) 2018-08-24

Similar Documents

Publication Publication Date Title
WO2019174544A1 (zh) Image synthesis method and apparatus, computer storage medium, and electronic device
US10540806B2 (en) Systems and methods for depth-assisted perspective distortion correction
CN106899781B (zh) Image processing method and electronic device
JP6961797B2 (ja) Method and apparatus for blurring a preview photo, and storage medium
US10303983B2 (en) Image recognition apparatus, image recognition method, and recording medium
JP4772839B2 (ja) Image identification method and imaging device
CN110300264B (zh) Image processing method and apparatus, mobile terminal, and storage medium
JP5725953B2 (ja) Imaging device, control method therefor, and information processing device
KR20170008638A (ko) Three-dimensional content generation apparatus and three-dimensional content generation method thereof
KR101524548B1 (ko) Image registration apparatus and method
CN109064504B (zh) Image processing method, apparatus, and computer storage medium
WO2022160857A1 (zh) Image processing method and apparatus, computer-readable storage medium, and electronic device
WO2021136078A1 (zh) Image processing method, image processing system, computer-readable medium, and electronic device
WO2021008205A1 (zh) Image processing
CN112261292B (zh) Image acquisition method, terminal, chip, and storage medium
CN106034203A (zh) Image processing method and apparatus for a shooting terminal
CN109559353A (zh) Camera module calibration method and apparatus, electronic device, and computer-readable storage medium
JP2023540273A (ja) Image processing method, apparatus, and storage medium based on eye state detection
US9947106B2 (en) Method and electronic device for object tracking in a light-field capture
CN110365897B (zh) Image correction method and apparatus, electronic device, and computer-readable storage medium
WO2020227945A1 (en) Photographing method and apparatus
CN113610865A (zh) Image processing method and apparatus, electronic device, and computer-readable storage medium
Kınlı et al. Modeling the lighting in scenes as style for auto white-balance correction
WO2017096859A1 (zh) Photo processing method and apparatus
CN107578006B (zh) Photo processing method and mobile terminal

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 19766895

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 19766895

Country of ref document: EP

Kind code of ref document: A1