CN109447929B - Image synthesis method and device - Google Patents


Info

Publication number
CN109447929B
CN109447929B (application CN201811215475.7A)
Authority
CN
China
Prior art keywords
image, motion, unit, determining, synthesis
Prior art date
Legal status
Active
Application number
CN201811215475.7A
Other languages
Chinese (zh)
Other versions
CN109447929A (en)
Inventor
杨冬东 (Yang Dongdong)
Current Assignee
Beijing Xiaomi Mobile Software Co Ltd
Original Assignee
Beijing Xiaomi Mobile Software Co Ltd
Priority date
Filing date
Publication date
Application filed by Beijing Xiaomi Mobile Software Co Ltd
Priority to CN201811215475.7A
Publication of CN109447929A
Application granted
Publication of CN109447929B

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T5/00 Image enhancement or restoration
    • G06T5/50 Image enhancement or restoration by the use of more than one image, e.g. averaging, subtraction
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/90 Determination of colour characteristics

Abstract

The present disclosure relates to an image synthesis method and apparatus. The method includes: acquiring continuously captured first images; determining motion units and non-motion units in the first images according to depth information and color information; synthesizing the motion units and the non-motion units separately to obtain a synthesis result of the motion units and a synthesis result of the non-motion units; and synthesizing the two results to obtain a synthesized image. By performing multi-frame synthesis on the motion units and the non-motion units separately, the method and apparatus of the embodiments of the present disclosure expand the color gamut of non-moving-object regions, avoid smear in moving-object regions, and improve image quality.

Description

Image synthesis method and device
Technical Field
The present disclosure relates to the field of image processing technologies, and in particular, to an image synthesis method and apparatus.
Background
To expand the color gamut, especially in shooting scenes with high contrast, multiple image frames (i.e., multiple photographs) captured under different exposure conditions may be combined. However, when a moving object is present in the shooting target, the combined result is prone to smear, so wide-color-gamut shooting of moving objects gives poor results.
In the related art, moving-object detection can be performed based on motion vectors and on inter-frame differences, and the two kinds of detection results can be combined, so that fast-moving objects are detected while detection accuracy for slow-moving objects is maintained. However, because this approach infers displacement from a computed velocity, the synthesis may still produce smear when the moving object moves irregularly.
Disclosure of Invention
To overcome the problems in the related art, the present disclosure provides an image synthesis method and apparatus.
According to a first aspect of embodiments of the present disclosure, there is provided an image synthesis method, including: acquiring continuously captured first images; determining motion units and non-motion units in the first images according to depth information and color information; synthesizing the motion units and the non-motion units separately to obtain a synthesis result of the motion units and a synthesis result of the non-motion units; and synthesizing the synthesis result of the motion units and the synthesis result of the non-motion units to obtain a synthesized image.
In one possible implementation manner, synthesizing a motion unit to obtain a synthesis result of the motion unit includes: selecting a first image as a target image; and determining a motion unit in the target image as a synthesis result of the motion unit.
In one possible implementation manner, synthesizing a motion unit to obtain a synthesis result of the motion unit includes: selecting a first image as a second image; for each first image, if the displacement of the first image relative to the second image is within a first threshold value and the depth of field change is within a second threshold value, determining that the first image is a target image; and synthesizing the motion units in the target images to obtain a synthesis result of the motion units.
In one possible implementation manner, synthesizing a motion unit to obtain a synthesis result of the motion unit includes: determining the motion units with consistent color information in each target image as the motion units corresponding to the same motion object; synthesizing the motion units corresponding to the same motion object in each target image to obtain a synthesis result of each motion object; and combining the synthesis result of each moving object to obtain the synthesis result of the moving unit.
In one possible implementation, determining the moving units and the non-moving units in the first image according to the depth information and the color information includes: dividing each first image into independent units according to the depth information and the color information, wherein each independent unit represents an object; determining the independent units with consistent color information in each first image as independent units corresponding to the same object; for each object, if an independent unit with different position information or depth information exists in an independent unit corresponding to the object, determining that the object is a moving object, and determining that a unit corresponding to the object is a moving unit; determining an independent unit other than the motion unit as a non-motion unit.
According to a second aspect of the embodiments of the present disclosure, there is provided an image synthesizing apparatus including: an acquisition module for acquiring continuously captured first images; a determining module for determining motion units and non-motion units in the first images according to depth information and color information; a first synthesis module for synthesizing the motion units and the non-motion units separately to obtain a synthesis result of the motion units and a synthesis result of the non-motion units; and a second synthesis module for synthesizing the synthesis result of the motion units and the synthesis result of the non-motion units to obtain a synthesized image.
In one possible implementation, the first synthesizing module includes: the first selection submodule is used for selecting a first image as a target image; and the first determining submodule is used for determining a motion unit in the target image as a synthetic result of the motion unit.
In one possible implementation, the first synthesizing module includes: the second selection submodule is used for selecting a first image as a second image; the second determining submodule is used for determining the first image as a target image if the displacement of the first image relative to the second image is within a first threshold value and the depth change is within a second threshold value for each first image; and the first synthesis submodule is used for synthesizing the motion units in the target images to obtain the synthesis result of the motion units.
In one possible implementation, the first synthesizing module includes: the third determining submodule is used for determining the motion units with consistent color information in each target image as the motion units corresponding to the same motion object; the second synthesis submodule is used for synthesizing the motion units corresponding to the same motion object in each target image to obtain the synthesis result of each motion object; and the merging submodule is used for merging the synthesis result of each moving object to obtain the synthesis result of the moving unit.
In one possible implementation, the determining module includes: the dividing submodule is used for dividing each first image into independent units according to the depth information and the color information, and each independent unit represents an object; the fourth determining submodule is used for determining the independent units with consistent color information in each first image as the independent units corresponding to the same object; a fifth determining submodule, configured to determine, for each object, if there is an independent unit with different position information or depth information in an independent unit corresponding to the object, that the object is a moving object, and a unit corresponding to the object is a moving unit; a sixth determining sub-module for determining an independent unit other than the moving unit as a non-moving unit.
According to a third aspect of the embodiments of the present disclosure, there is provided an image synthesizing apparatus comprising a processor; a memory for storing processor-executable instructions; wherein the processor is configured to perform the method of the first aspect.
According to a fourth aspect of embodiments of the present disclosure, there is provided a non-transitory computer-readable storage medium, wherein instructions, when executed by a processor, enable the processor to perform the method of the first aspect described above.
The technical solutions provided by the embodiments of the present disclosure may have the following beneficial effects: the first images are divided into motion units and non-motion units, and multi-frame synthesis is performed on the two kinds of units separately, so that the color gamut of the non-moving-object region is expanded, smear in the moving-object region is avoided, and picture quality is improved.
It is to be understood that both the foregoing general description and the following detailed description are exemplary and explanatory only and are not restrictive of the disclosure.
Drawings
The accompanying drawings, which are incorporated in and constitute a part of this specification, illustrate embodiments consistent with the present disclosure and together with the description, serve to explain the principles of the disclosure.
FIG. 1 is a flow diagram illustrating an image synthesis method according to an exemplary embodiment.
Fig. 2a illustrates one example of a first image of the present disclosure.
Fig. 2b illustrates one example of a first image of the present disclosure.
FIG. 3 is a flow diagram illustrating an image synthesis method according to an exemplary embodiment.
FIG. 4 is a flow diagram illustrating an image synthesis method according to an exemplary embodiment.
FIG. 5 is a flow diagram illustrating an image synthesis method according to an exemplary embodiment.
Fig. 6 is a block diagram illustrating an image synthesizing apparatus according to an exemplary embodiment.
Fig. 7 is a block diagram illustrating an image synthesizing apparatus according to an exemplary embodiment.
Fig. 8 is a block diagram illustrating an apparatus for image synthesis according to an exemplary embodiment.
Detailed Description
Reference will now be made in detail to the exemplary embodiments, examples of which are illustrated in the accompanying drawings. When the following description refers to the accompanying drawings, like numbers in different drawings represent the same or similar elements unless otherwise indicated. The implementations described in the exemplary embodiments below are not intended to represent all implementations consistent with the present disclosure. Rather, they are merely examples of apparatus and methods consistent with certain aspects of the present disclosure, as detailed in the appended claims.
FIG. 1 is a flow diagram illustrating an image synthesis method according to an exemplary embodiment. The method can be applied to a terminal such as a mobile phone, a tablet computer or a computer. As shown in fig. 1, the method may include steps S11 through S14.
In step S11, continuously captured first images are acquired.
In step S12, the motion units and the non-motion units in the first images are determined based on the depth information and the color information.
In step S13, the motion units and the non-motion units are synthesized separately to obtain a synthesis result of the motion units and a synthesis result of the non-motion units.
In step S14, the synthesis result of the motion units and the synthesis result of the non-motion units are synthesized to obtain a synthesized image.
In this method, the first images are divided into motion units and non-motion units, and multi-frame synthesis is performed on the two kinds of units separately, so that the color gamut of the non-moving-object region is expanded, smear in the moving-object region is avoided, and picture quality is improved.
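For illustration, a minimal Python sketch of steps S11 to S14 follows. The helpers segment_units, synthesize_motion, and synthesize_static are hypothetical stand-ins for the operations detailed in the embodiments below, not functions defined by this disclosure.

    import numpy as np

    def synthesize(frames, depth_maps):
        """Minimal sketch of steps S11 to S14 (assumed helper functions).

        frames:     list of continuously captured first images (H x W x 3, uint8)
        depth_maps: per-frame depth information aligned with `frames`
        """
        # S12: split the frames into motion / non-motion units (boolean masks).
        motion_mask, static_mask = segment_units(frames, depth_maps)

        # S13: synthesize the two kinds of units separately.
        motion_result = synthesize_motion(frames, motion_mask)  # no smear
        static_result = synthesize_static(frames, static_mask)  # wide gamut

        # S14: composite the two partial results into the final image.
        return np.where(motion_mask[..., None], motion_result, static_result)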
Here, "first image" denotes any of the continuously captured images: the terminal may continuously capture a series of images at a predetermined frame rate, each serving as a first image. In one possible implementation, after the shooting device of the terminal focuses on a certain object (a person, an animal, an item, or the like), it can automatically track that object and keep it in focus even as it moves. In this way, the first images that the terminal continuously captures through the shooting device may be a series of moving images focused on a moving object. Fig. 2a and Fig. 2b each show an example of a first image of the present disclosure; the imaging time of the first image shown in Fig. 2a is earlier than that of the first image shown in Fig. 2b.
The depth information of an object represents the object's distance relative to the photographing apparatus. The terminal can partition the first image into regions according to the magnitude of the depth information; for example, the terminal can separate the regions corresponding to the tree and the person in Fig. 2a in this way.
The color information of different objects differs, so the terminal can also partition the first image according to color information; for example, the terminal can separate the regions corresponding to the tree and the person in Fig. 2a by color.
In one possible implementation, step S12 may include: dividing each first image into independent units according to the depth information and the color information, wherein each independent unit represents an object; determining the independent units with consistent color information in each first image as independent units corresponding to the same object; for each object, if an independent unit with different position information or depth information exists in an independent unit corresponding to the object, determining that the object is a moving object, and determining that a unit corresponding to the object is a moving unit; determining an independent unit other than the motion unit as a non-motion unit.
An independent unit represents one object; specifically, it is the region that the object occupies in the first image. A motion unit represents a moving object, and a non-motion unit represents a non-moving object.
Objects with consistent depth information are not necessarily the same object: the ball and the person shown in Fig. 2a have consistent depth information but are two different objects. Similarly, objects with consistent color information are not necessarily the same object: the two trees shown in Fig. 2a have consistent color information but are two different objects. Therefore, the terminal determines the regions corresponding to different objects in the first image from the depth information and the color information jointly. That is, the terminal may divide each first image into independent units according to the depth information and the color information, each independent unit representing one object.
In one example, the terminal may divide the first image into different regions according to the color information, and then determine the front-back positional relationship of the regions according to the depth information of each region, thereby dividing the regions into independent units.
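As an illustration of this example, the sketch below clusters pixels by color (k-means is one possible choice, assumed here for illustration, as is the number of regions) and then attaches the mean depth to each region, giving the front-back ordering of the resulting independent units.

    import cv2
    import numpy as np

    def divide_into_units(image, depth_map, n_regions=8):
        """Sketch: partition a frame into color regions, then order them by depth."""
        h, w = image.shape[:2]
        pixels = image.reshape(-1, 3).astype(np.float32)

        # Cluster pixels by color information.
        criteria = (cv2.TERM_CRITERIA_EPS + cv2.TERM_CRITERIA_MAX_ITER, 10, 1.0)
        _, labels, _ = cv2.kmeans(pixels, n_regions, None, criteria, 3,
                                  cv2.KMEANS_PP_CENTERS)
        labels = labels.reshape(h, w)

        # Use each region's mean depth as its depth information and sort
        # near-to-far to obtain the front-back positional relationship.
        units = []
        for k in range(n_regions):
            mask = labels == k
            if mask.any():
                units.append({"mask": mask, "depth": float(depth_map[mask].mean())})
        units.sort(key=lambda u: u["depth"])
        return units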
For example, the terminal may use the average depth of the pixels contained in each region as the region's depth information. The depth information of each pixel can be determined by methods such as triangulation. For instance, the terminal may capture two images simultaneously with two cameras and compute the depth of each pixel from the disparity between corresponding pixels of the same scene point in the two images, in combination with the triangulation principle. Alternatively, the terminal may determine the depth information of an object from structured-light coding information. Specifically, the terminal may project light spots, light slits, gratings, grids, or stripes onto the moving object with a structured-light projector, acquire the structured-light coding information of the moving object with a structured-light sensor, decode the acquired coding information, compare it with preset structured-light coding information to obtain the matching relationship between the two, and determine the depth information of each pixel in combination with the triangulation principle. The present disclosure does not limit how the depth information of pixels is determined.
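For the dual-camera case, the sketch below recovers per-pixel depth from a rectified stereo pair using the triangulation relation depth = focal length x baseline / disparity. The focal length and baseline values are illustrative placeholders; real values come from camera calibration.

    import cv2
    import numpy as np

    def depth_from_stereo(left, right, focal_px=700.0, baseline_m=0.012):
        """Sketch: per-pixel depth from two simultaneously captured images."""
        gray_l = cv2.cvtColor(left, cv2.COLOR_BGR2GRAY)
        gray_r = cv2.cvtColor(right, cv2.COLOR_BGR2GRAY)

        # Block matching returns disparity in 1/16-pixel fixed point.
        matcher = cv2.StereoBM_create(numDisparities=64, blockSize=15)
        disparity = matcher.compute(gray_l, gray_r).astype(np.float32) / 16.0

        # Triangulation: depth = f * B / disparity (undefined where disparity <= 0).
        depth = np.full(disparity.shape, np.inf, dtype=np.float32)
        valid = disparity > 0
        depth[valid] = focal_px * baseline_m / disparity[valid]
        return depth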
In different first images, the color information of the same object is consistent. The terminal may therefore determine independent units with consistent color information across the first images as independent units corresponding to the same object. For example, the color information of the region corresponding to the person in Fig. 2a matches that of the region corresponding to the person in Fig. 2b, so the terminal may determine that the two regions correspond to the same object.
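One simple way to test whether two independent units have consistent color information is to compare their color histograms, as sketched below; the HSV histogram, the correlation metric, and the 0.9 threshold are illustrative assumptions rather than values prescribed by the disclosure.

    import cv2
    import numpy as np

    def same_object(image_a, mask_a, image_b, mask_b, threshold=0.9):
        """Sketch: match units across frames by color-histogram similarity."""
        def hsv_hist(image, mask):
            hsv = cv2.cvtColor(image, cv2.COLOR_BGR2HSV)
            hist = cv2.calcHist([hsv], [0, 1], mask.astype(np.uint8) * 255,
                                [30, 32], [0, 180, 0, 256])
            return cv2.normalize(hist, hist)

        similarity = cv2.compareHist(hsv_hist(image_a, mask_a),
                                     hsv_hist(image_b, mask_b),
                                     cv2.HISTCMP_CORREL)
        return similarity >= threshold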
With the position of the photographing apparatus kept unchanged, the movement of a moving object changes its front-back distance and/or left-right position relative to the apparatus. The terminal can judge whether an object's front-back distance has changed from whether its depth information has changed, and whether its left-right position has changed from whether its position information has changed. When the terminal determines that the depth information or the position information of an object has changed, the object's position has changed: the object is a moving object, and the independent unit corresponding to it is a motion unit. In one possible implementation, the position information of an independent unit may be the coordinates of its feature points; for each object, points with the same features in the independent units of different first images are corresponding feature points. The terminal can determine whether the position information of an independent unit has changed by comparing the coordinates of its feature points with the coordinates of the corresponding feature points in the independent units of the other first images.
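A sketch of this decision rule follows: each unit's feature-point position (approximated here by the mask centroid, one simple choice of feature point) and mean depth are compared across frames, and the unit is classified as a motion unit if either has changed; the tolerance values are illustrative assumptions.

    import numpy as np

    def is_motion_unit(masks, depth_maps, pos_tol=5.0, depth_tol=0.05):
        """Sketch: classify one object's independent unit as moving or not.

        masks:      per-frame boolean masks of the unit for one object
        depth_maps: per-frame depth maps aligned with the masks
        pos_tol, depth_tol: illustrative tolerances for deciding that the
        position information or depth information "has changed".
        """
        def centroid(mask):
            ys, xs = np.nonzero(mask)
            return np.array([xs.mean(), ys.mean()])

        ref_pos = centroid(masks[0])
        ref_depth = depth_maps[0][masks[0]].mean()

        for mask, depth in zip(masks[1:], depth_maps[1:]):
            moved = np.linalg.norm(centroid(mask) - ref_pos) > pos_tol
            depth_changed = abs(depth[mask].mean() - ref_depth) > depth_tol
            if moved or depth_changed:
                return True   # position or depth information changed
        return False          # otherwise it is a non-motion unit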
The independent units in the first images other than the motion units are non-motion units. Taking a set of first images consisting of Fig. 2a and Fig. 2b as an example, the terminal can determine from the two figures that the depth information of both the person and the ball has changed; it therefore determines that the independent units corresponding to the person and the ball are motion units, while the independent units corresponding to the two trees and the remaining background areas are non-motion units.
The terminal can synthesize the motion units and the non-motion units separately, each according to the characteristics of moving and non-moving objects, so that the synthesis result of the motion units has no smear and the synthesis result of the non-motion units has an expanded color gamut. On this basis, the terminal can synthesize the two results to obtain a picture that has a wide color gamut, high definition, and rich tonal levels, with no smear on moving objects.
FIG. 3 is a flow diagram illustrating an image synthesis method according to an exemplary embodiment. As shown in fig. 3, synthesizing a motion unit to obtain a synthesis result of the motion unit may include:
In step S131, a first image is selected as the target image.
In step S132, the motion unit in the target image is determined as the synthesis result of the motion unit.
The terminal may select one first image from the plurality of acquired first images as the target image. In one possible implementation, the terminal may use the first-acquired first image as the target image, or use the first image whose motion unit has the highest definition (or the highest brightness, the highest contrast, or the like) as the target image; the present disclosure does not limit this.
The terminal may directly take the motion unit in the target image as the synthesis result of the motion unit. Using a single motion unit directly as the synthesis result avoids smear in the synthesis result of the motion unit.
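For the highest-definition criterion, one common sharpness proxy is the variance of the Laplacian inside the unit's region, as sketched below; this particular metric is an assumption for illustration.

    import cv2

    def pick_target_image(frames, masks):
        """Sketch: choose the frame whose motion unit is sharpest."""
        def sharpness(frame, mask):
            gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
            lap = cv2.Laplacian(gray, cv2.CV_64F)
            return lap[mask].var()   # Laplacian variance inside the unit

        scores = [sharpness(f, m) for f, m in zip(frames, masks)]
        return frames[scores.index(max(scores))]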
FIG. 4 is a flow diagram illustrating an image synthesis method according to an exemplary embodiment. As shown in fig. 4, synthesizing a motion unit to obtain a synthesis result of the motion unit may include:
step S133 selects a first image as a second image.
Step S134, for each first image, if the displacement of the first image relative to the second image is within the first threshold and the depth change is within the second threshold, determining that the first image is the target image.
Step S135, synthesizing the motion units in each target image to obtain a synthesis result of the motion units.
The terminal may select one image from the plurality of acquired first images as the second image. The second image serves as the reference image for selecting the target images. In one possible implementation, the terminal may use the first-acquired first image as the second image, or use the first image whose motion unit has the highest definition (or the highest brightness, the highest contrast, or the like) as the second image; the present disclosure does not limit this.
The larger the displacement of a first image relative to the second image, and the larger its depth-of-field change relative to the second image, the greater the difference between the positions of the motion unit in the first image and in the second image. The first threshold limits the displacement to a magnitude that produces no smear, and the second threshold limits the depth-of-field change to a magnitude that produces no smear. The two thresholds may be determined according to actual conditions; the present disclosure does not limit them.
The terminal can synthesize the motion units in the target images to obtain the synthesis result of the motion units. In this way, the terminal uses the first and second thresholds to keep the amount of movement of the motion units used in the synthesis within a range that produces no smear, so smear can be prevented in the synthesis result of the motion units and hence in the synthesized image.
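A sketch of this selection rule follows. The helpers displacement and depth_change are hypothetical: they would measure a frame's motion unit against the reference (second) image, returning the shift and the depth-of-field change that are compared against the two thresholds chosen as described above.

    def select_target_images(frames, units, ref_index, first_thr, second_thr):
        """Sketch: keep frames whose motion unit stays close to the reference."""
        targets = []
        for i, frame in enumerate(frames):
            shift = displacement(units[i], units[ref_index])   # hypothetical
            dz = depth_change(units[i], units[ref_index])      # hypothetical
            # Both conditions must hold for the frame to join the synthesis.
            if shift <= first_thr and dz <= second_thr:
                targets.append(frame)
        return targets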
FIG. 5 is a flow diagram illustrating an image synthesis method according to an exemplary embodiment. As shown in fig. 5, synthesizing the motion units to obtain the synthesis result of the motion units may include steps S136 to S138.
In step S136, motion units whose color information is consistent across the target images are determined to be motion units corresponding to the same moving object.
In step S137, the motion units corresponding to the same moving object in the target images are synthesized to obtain a synthesis result for each moving object.
In step S138, the synthesis results of the moving objects are combined to obtain the synthesis result of the motion units.
When multiple motion units exist in the first images, the terminal synthesizes the motion units of each moving object separately. Because different moving objects move at different speeds, synthesizing their motion units separately avoids interference between the synthesis results of the individual motion units and improves the quality of the synthesized picture.
When synthesizing the motion units, either the method of steps S131 and S132 or the method of steps S133 to S135 may be adopted; the present disclosure does not limit this.
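A sketch of steps S136 to S138 follows, assuming the motion units have already been grouped per moving object (for example, by the histogram test sketched earlier). Averaging each object's units across its target frames is one illustrative way to synthesize; as noted above, taking a single target frame is equally permitted.

    import numpy as np

    def synthesize_motion_units(frames, objects):
        """Sketch of steps S136 to S138.

        objects: mapping object_id -> list of (frame_index, mask) pairs,
        i.e. the motion units already grouped by consistent color information.
        """
        out = np.zeros_like(frames[0], dtype=np.float64)
        coverage = np.zeros(frames[0].shape[:2], dtype=bool)

        for units in objects.values():
            # S137: synthesize this object's units across its target frames.
            acc = np.zeros_like(out)
            weight = np.zeros(coverage.shape, dtype=np.float64)
            for frame_idx, mask in units:
                acc[mask] += frames[frame_idx][mask]
                weight[mask] += 1.0
            region = weight > 0
            acc[region] /= weight[region][:, None]

            # S138: merge this object's result into the motion-unit result.
            out[region] = acc[region]
            coverage |= region
        return out.astype(np.uint8), coverage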
In one possible implementation, synthesizing the non-motion units may include: synthesizing the non-motion units of all the first images to obtain the synthesis result of the non-motion units.
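The sketch below gives one possible body for the synthesize_static helper assumed earlier: it fuses all first images in the static region. Mertens exposure fusion is used as one concrete multi-frame operator for expanding the gamut; it is an assumption for illustration, not an algorithm mandated by the disclosure.

    import cv2
    import numpy as np

    def synthesize_static(frames, static_mask):
        """Sketch: fuse the non-motion units of all first images."""
        # Exposure fusion of the whole stack; output is float32 in roughly [0, 1].
        fused = cv2.createMergeMertens().process(frames)
        fused = np.clip(fused * 255.0, 0, 255).astype(np.uint8)
        # Keep only the non-motion (static) region of the fused result.
        return np.where(static_mask[..., None], fused, 0)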
Fig. 6 is a block diagram illustrating an image synthesizing apparatus according to an exemplary embodiment. As shown in fig. 6, the apparatus 60 may include: an acquisition module 61, a determination module 62, a first synthesis module 63 and a second synthesis module 64.
The acquisition module 61 is configured to acquire continuously captured first images;
the determination module 62 is configured to determine moving units and non-moving units in the first image based on the depth information and the color information;
the first synthesis module 63 is configured to synthesize a motion unit and a non-motion unit respectively, and obtain a synthesis result of the motion unit and a synthesis result of the non-motion unit;
the second synthesis module 64 is configured to synthesize the synthesis result of the motion unit and the synthesis result of the non-motion unit to obtain a synthesized image.
With this apparatus, the first images are divided into motion units and non-motion units, and multi-frame synthesis is performed on the two kinds of units separately, so that the color gamut of the non-moving-object region is expanded, smear in the moving-object region is avoided, and picture quality is improved.
Fig. 7 is a block diagram illustrating an image synthesizing apparatus according to an exemplary embodiment. As shown in fig. 7, in one possible implementation, the first synthesis module 63 may include a first selection submodule 631 and a first determination submodule 632.
The first selection sub-module 631 is configured to select a first image as a target image;
the first determination submodule 632 is configured to determine a motion unit in the target image as a synthesis result of the motion unit.
In one possible implementation, the first synthesis module 63 may include a second selection submodule 633, a second determination submodule 634, and a first synthesis submodule 635.
The second selecting submodule 633 is configured to select a first image as a second image;
the second determination sub-module 634 is configured to determine, for each first image, that the first image is a target image if the displacement of the first image relative to the second image is within a first threshold and the depth variation is within a second threshold;
the first synthesis submodule 635 is configured to synthesize the motion unit in each target image, and obtain a synthesis result of the motion unit.
In one possible implementation, the first synthesis module 63 may include a third determination submodule 636, a second synthesis submodule 637, and a merge submodule 638.
The third determining submodule 636 is configured to determine a moving unit in which color information in each target image is identical as a moving unit corresponding to the same moving object;
the second synthesis sub-module 637 is configured to synthesize motion units corresponding to the same moving object in each target image, to obtain a synthesis result of each moving object;
the merge sub-module 638 is configured to merge the composite results for each moving object, resulting in a composite result for the moving unit.
In one possible implementation, the determination module 62 may include a dividing sub-module 621, a fourth determination sub-module 622, a fifth determination sub-module 623, and a sixth determination sub-module 624.
The dividing submodule 621 is configured to divide each first image into independent units according to the depth information and the color information, each independent unit representing one object;
the fourth determining submodule 622 is configured to determine the independent units with consistent color information in each first image as the independent units corresponding to the same object;
the fifth determining sub-module 623 is configured to determine, for each object, that the object is a moving object if there is an independent unit having different displacement information or depth information among the independent units corresponding to the object, and the unit corresponding to the object is a moving unit;
the sixth determination submodule 624 is configured to determine an independent unit other than the moving unit as a non-moving unit.
With regard to the apparatus in the above-described embodiment, the specific manner in which each module performs the operation has been described in detail in the embodiment related to the method, and will not be elaborated here.
Fig. 8 is a block diagram illustrating an apparatus 800 for image synthesis according to an example embodiment. For example, the apparatus 800 may be a mobile phone, a computer, a digital broadcast terminal, a messaging device, a game console, a tablet device, a medical device, an exercise device, a personal digital assistant, and the like.
Referring to fig. 8, the apparatus 800 may include one or more of the following components: processing component 802, memory 804, power component 806, multimedia component 808, audio component 810, input/output (I/O) interface 812, sensor component 814, and communication component 816.
The processing component 802 generally controls overall operation of the device 800, such as operations associated with display, telephone calls, data communications, camera operations, and recording operations. The processing components 802 may include one or more processors 820 to execute instructions to perform all or a portion of the steps of the methods described above. Further, the processing component 802 can include one or more modules that facilitate interaction between the processing component 802 and other components. For example, the processing component 802 can include a multimedia module to facilitate interaction between the multimedia component 808 and the processing component 802.
The memory 804 is configured to store various types of data to support operations at the apparatus 800. Examples of such data include instructions for any application or method operating on device 800, contact data, phonebook data, messages, pictures, videos, and so forth. The memory 804 may be implemented by any type or combination of volatile or non-volatile memory devices such as Static Random Access Memory (SRAM), electrically erasable programmable read-only memory (EEPROM), erasable programmable read-only memory (EPROM), programmable read-only memory (PROM), read-only memory (ROM), magnetic memory, flash memory, magnetic or optical disks.
Power components 806 provide power to the various components of device 800. The power components 806 may include a power management system, one or more power supplies, and other components associated with generating, managing, and distributing power for the apparatus 800.
The multimedia component 808 includes a screen that provides an output interface between the device 800 and a user. In some embodiments, the screen may include a Liquid Crystal Display (LCD) and a Touch Panel (TP). If the screen includes a touch panel, the screen may be implemented as a touch screen to receive an input signal from a user. The touch panel includes one or more touch sensors to sense touch, slide, and gestures on the touch panel. The touch sensor may not only sense the boundary of a touch or slide action, but also detect the duration and pressure associated with the touch or slide operation. In some embodiments, the multimedia component 808 includes a front facing camera and/or a rear facing camera. The front camera and/or the rear camera may receive external multimedia data when the device 800 is in an operating mode, such as a shooting mode or a video mode. Each front camera and rear camera may be a fixed optical lens system or have a focal length and optical zoom capability.
The audio component 810 is configured to output and/or input audio signals. For example, the audio component 810 includes a Microphone (MIC) configured to receive external audio signals when the apparatus 800 is in an operational mode, such as a call mode, a recording mode, and a voice recognition mode. The received audio signals may further be stored in the memory 804 or transmitted via the communication component 816. In some embodiments, audio component 810 also includes a speaker for outputting audio signals.
The I/O interface 812 provides an interface between the processing component 802 and peripheral interface modules, which may be keyboards, click wheels, buttons, etc. These buttons may include, but are not limited to: a home button, a volume button, a start button, and a lock button.
The sensor assembly 814 includes one or more sensors for providing various aspects of state assessment for the device 800. For example, the sensor assembly 814 may detect the open/closed status of the device 800 and the relative positioning of components (such as the display and keypad of the device 800); it may also detect a change in the position of the device 800 or of one of its components, the presence or absence of user contact with the device 800, the orientation or acceleration/deceleration of the device 800, and changes in its temperature. The sensor assembly 814 may include a proximity sensor configured to detect the presence of nearby objects without any physical contact. The sensor assembly 814 may also include a light sensor, such as a CMOS or CCD image sensor, for use in imaging applications. In some embodiments, the sensor assembly 814 may also include an acceleration sensor, a gyroscope sensor, a magnetic sensor, a pressure sensor, or a temperature sensor.
The communication component 816 is configured to facilitate communications between the apparatus 800 and other devices in a wired or wireless manner. The device 800 may access a wireless network based on a communication standard, such as WiFi, 2G or 3G, or a combination thereof. In an exemplary embodiment, the communication component 816 receives a broadcast signal or broadcast related information from an external broadcast management system via a broadcast channel. In an exemplary embodiment, the communication component 816 further includes a Near Field Communication (NFC) module to facilitate short-range communications. For example, the NFC module may be implemented based on Radio Frequency Identification (RFID) technology, infrared data association (IrDA) technology, Ultra Wideband (UWB) technology, Bluetooth (BT) technology, and other technologies.
In an exemplary embodiment, the apparatus 800 may be implemented by one or more Application Specific Integrated Circuits (ASICs), Digital Signal Processors (DSPs), Digital Signal Processing Devices (DSPDs), Programmable Logic Devices (PLDs), Field Programmable Gate Arrays (FPGAs), controllers, micro-controllers, microprocessors or other electronic components for performing the above-described methods.
In an exemplary embodiment, a non-transitory computer-readable storage medium comprising instructions, such as the memory 804 comprising instructions, executable by the processor 820 of the device 800 to perform the above-described method is also provided. For example, the non-transitory computer readable storage medium may be a ROM, a Random Access Memory (RAM), a CD-ROM, a magnetic tape, a floppy disk, an optical data storage device, and the like.
Other embodiments of the disclosure will be apparent to those skilled in the art from consideration of the specification and practice of the disclosure disclosed herein. This application is intended to cover any variations, uses, or adaptations of the disclosure following, in general, the principles of the disclosure and including such departures from the present disclosure as come within known or customary practice within the art to which the disclosure pertains. It is intended that the specification and examples be considered as exemplary only, with a true scope and spirit of the disclosure being indicated by the following claims.
It will be understood that the present disclosure is not limited to the precise arrangements described above and shown in the drawings and that various modifications and changes may be made without departing from the scope thereof. The scope of the present disclosure is limited only by the appended claims.

Claims (10)

1. An image synthesis method, characterized in that the method comprises:
acquiring continuously shot first images;
determining a moving unit and a non-moving unit in the first image according to the depth information and the color information;
respectively synthesizing a motion unit and a non-motion unit to obtain a synthesis result of the motion unit and a synthesis result of the non-motion unit;
synthesizing the synthesis result of the motion unit and the synthesis result of the non-motion unit to obtain a synthesized image;
the determining the moving unit and the non-moving unit in the first image according to the depth information and the color information comprises:
dividing each first image into independent units according to the depth information and the color information, wherein each independent unit represents one object, and wherein the first image is divided into different regions according to the color information and the front-back positional relationship of each region is determined according to the depth information of each region, so that each region is divided into independent units;
determining the independent units with consistent color information in each first image as independent units corresponding to the same object;
for each object, if an independent unit with different position information or depth information exists in an independent unit corresponding to the object, determining that the object is a moving object, and determining that a unit corresponding to the object is a moving unit;
determining an independent unit other than the motion unit as a non-motion unit.
2. The method of claim 1, wherein synthesizing a motion unit to obtain a synthesis result of the motion unit comprises:
selecting a first image as a target image;
and determining a motion unit in the target image as a synthesis result of the motion unit.
3. The method of claim 1, wherein synthesizing a motion unit to obtain a synthesis result of the motion unit comprises:
selecting a first image as a second image;
for each first image, if the displacement of the first image relative to the second image is within a first threshold value and the depth of field change is within a second threshold value, determining that the first image is a target image;
and synthesizing the motion units in the target images to obtain a synthesis result of the motion units.
4. The method according to any one of claims 1 to 3, wherein synthesizing a motion unit to obtain a synthesis result of the motion unit comprises:
determining the motion units with consistent color information in each target image as the motion units corresponding to the same motion object;
synthesizing the motion units corresponding to the same motion object in each target image to obtain a synthesis result of each motion object;
and combining the synthesis result of each moving object to obtain the synthesis result of the moving unit.
5. An image synthesizing apparatus, characterized in that the apparatus comprises:
the acquisition module is used for acquiring continuously shot first images;
the determining module is used for determining a moving unit and a non-moving unit in the first image according to the depth information and the color information;
the first synthesis module is used for synthesizing a motion unit and a non-motion unit respectively to obtain a synthesis result of the motion unit and a synthesis result of the non-motion unit;
the second synthesis module is used for synthesizing the synthesis result of the motion unit and the synthesis result of the non-motion unit to obtain a synthesized image;
the determining module comprises:
the dividing submodule is used for dividing each first image into independent units according to the depth information and the color information, wherein each independent unit represents one object, the first image being divided into different regions according to the color information and the front-back positional relationship of each region being determined according to the depth information of each region, so that each region is divided into independent units;
the fourth determining submodule is used for determining the independent units with consistent color information in each first image as the independent units corresponding to the same object;
a fifth determining submodule, configured to determine, for each object, that the object is a moving object if an independent unit with different position information or depth information exists among the independent units corresponding to the object, the unit corresponding to the object being a moving unit;
a sixth determining sub-module for determining an independent unit other than the moving unit as a non-moving unit.
6. The apparatus of claim 5, wherein the first synthesis module comprises:
the first selection submodule is used for selecting a first image as a target image;
and the first determining submodule is used for determining a motion unit in the target image as a synthetic result of the motion unit.
7. The apparatus of claim 5, wherein the first synthesis module comprises:
the second selection submodule is used for selecting a first image as a second image;
the second determining submodule is used for determining the first image as a target image if the displacement of the first image relative to the second image is within a first threshold value and the depth change is within a second threshold value for each first image;
and the first synthesis submodule is used for synthesizing the motion units in the target images to obtain the synthesis result of the motion units.
8. The apparatus of any one of claims 5 to 7, wherein the first synthesis module comprises:
the third determining submodule is used for determining the motion units with consistent color information in each target image as the motion units corresponding to the same motion object;
the second synthesis submodule is used for synthesizing the motion units corresponding to the same motion object in each target image to obtain the synthesis result of each motion object;
and the merging submodule is used for merging the synthesis result of each moving object to obtain the synthesis result of the moving unit.
9. An image synthesizing apparatus, comprising:
a processor;
a memory for storing processor-executable instructions;
wherein the processor is configured to perform the method of any one of claims 1 to 4.
10. A non-transitory computer readable storage medium, wherein instructions in the storage medium, when executed by a processor, enable the processor to perform the method of any of claims 1 to 4.
CN201811215475.7A 2018-10-18 2018-10-18 Image synthesis method and device Active CN109447929B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201811215475.7A CN109447929B (en) 2018-10-18 2018-10-18 Image synthesis method and device


Publications (2)

Publication Number Publication Date
CN109447929A CN109447929A (en) 2019-03-08
CN109447929B (en) 2020-12-04

Family

ID=65546792

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201811215475.7A Active CN109447929B (en) 2018-10-18 2018-10-18 Image synthesis method and device

Country Status (1)

Country Link
CN (1) CN109447929B (en)

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104243819A (en) * 2014-08-29 2014-12-24 小米科技有限责任公司 Photo acquiring method and device
CN104424649A (en) * 2013-08-21 2015-03-18 株式会社理光 Method and system for detecting moving object
CN107948519A (en) * 2017-11-30 2018-04-20 广东欧珀移动通信有限公司 Image processing method, device and equipment

Family Cites Families (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP5458865B2 (en) * 2009-09-18 2014-04-02 ソニー株式会社 Image processing apparatus, imaging apparatus, image processing method, and program
CN105451010A (en) * 2014-08-18 2016-03-30 惠州友华微电子科技有限公司 Depth of field acquisition device and acquisition method
TWI537875B (en) * 2015-04-08 2016-06-11 大同大學 Image fusion method and image processing apparatus


Also Published As

Publication number Publication date
CN109447929A (en) 2019-03-08


Legal Events

PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant