WO2017140182A1 - Image synthesis method and apparatus, and storage medium - Google Patents

Image synthesis method and apparatus, and storage medium

Info

Publication number
WO2017140182A1
Authority
WIPO (PCT)
Application number
PCT/CN2016/112498
Other languages
French (fr)
Chinese (zh)
Inventor
戴向东
Original Assignee
努比亚技术有限公司
Application filed by 努比亚技术有限公司
Publication of WO2017140182A1

Classifications

    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 23/00 Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N 23/80 Camera processing pipelines; Components thereof
    • H04N 23/10 Cameras or camera modules for generating image signals from different wavelengths
    • H04N 23/60 Control of cameras or camera modules
    • H04N 23/68 Control for stable pick-up of the scene, e.g. compensating for camera body vibrations
    • H04N 23/682 Vibration or motion blur correction
    • H04N 23/684 Vibration or motion blur correction performed by controlling the image sensor readout, e.g. by controlling the integration time
    • H04N 23/6845 Vibration or motion blur correction performed by controlling the image sensor readout by combination of a plurality of images sequentially taken
    • H04N 23/70 Circuitry for compensating brightness variation in the scene
    • H04N 23/76 Circuitry for compensating brightness variation in the scene by influencing the image signals

Definitions

  • the present invention relates to image processing technologies in the field of signal processing, and in particular, to an image synthesis method and apparatus, and a storage medium.
  • at present, a high dynamic range (HDR, High-Dynamic Range) image can provide a wider dynamic range and more image detail than an ordinary image; from low dynamic range (LDR, Low-Dynamic Range) images taken at different exposure times, the LDR image with the best detail at each exposure time is used to synthesize the final HDR image, which better reflects the visual effect of the real environment.
  • current HDR synthesis algorithms mainly follow two strategies. The first captures images at different exposures, estimates the camera's illumination response curve to map the brightness range of the image from a low dynamic range to a high dynamic range, and then uses a tone mapping algorithm to map the image to a bit depth suitable for viewing on an image display. The second captures a single image and, through contrast and brightness adjustment, enhances the contrast of underexposed areas and suppresses the contrast of overexposed areas.
  • the first method, based on the principle of the physical imaging camera response curve, can obtain a relatively natural HDR image, but the process is complicated and the algorithm complexity is high; the second method is more direct and less complex, and can repair underexposed areas, but it is difficult to restore overexposed areas to the actual scene brightness.
  • the main purpose of the embodiments of the present invention is to provide an image synthesizing method and apparatus, and a storage medium, which can solve at least the above problems in the prior art.
  • a first aspect of the embodiments of the present invention provides an image synthesizing method, including:
  • acquiring at least two initial images, and converting the at least two initial images into at least two to-be-processed images, respectively; wherein the initial images are images based on a first color space, and the to-be-processed images are images based on a second color space;
  • calculating feature weights for the at least two to-be-processed images, wherein a feature weight is a set of weights, one for each pixel of the image to be processed;
  • determining, based on the at least two to-be-processed images, a high-frequency image portion and a low-frequency image portion corresponding to each of the to-be-processed images;
  • fusing the at least two to-be-processed images based on the high-frequency image portions, the low-frequency image portions, and the feature weights of the to-be-processed images, to obtain a fused image corresponding to the at least two initial images.
  • a second aspect of the embodiments of the present invention provides an image synthesizing apparatus, where the apparatus includes:
  • an acquiring unit configured to acquire at least two initial images and convert the at least two initial images into at least two to-be-processed images, respectively; wherein the initial images are images based on the first color space, and the to-be-processed images are images based on the second color space;
  • a calculating unit configured to determine, based on the at least two to-be-processed images, a high-frequency image portion and a low-frequency image portion corresponding to each of the to-be-processed images, and to calculate feature weights for the at least two to-be-processed images; wherein a feature weight is a set of weights, one for each pixel of the image to be processed;
  • a merging unit configured to fuse the at least two to-be-processed images based on the high-frequency image portions, the low-frequency image portions, and the feature weights of the to-be-processed images, to obtain a fused image corresponding to the at least two initial images.
  • a third aspect of the embodiments of the present invention provides a computer storage medium, wherein the computer storage medium stores a computer program for executing the image synthesis method described above.
  • with the image synthesis method and apparatus and the storage medium provided by the embodiments of the present invention, at least two initial images are acquired and respectively converted into at least two to-be-processed images; feature weights for the at least two to-be-processed images are calculated; and, based on the high-frequency image portion and the low-frequency image portion of each image to be processed and its feature weights, the at least two to-be-processed images are fused to obtain a fused image corresponding to the at least two initial images.
  • in this way, the fusion of the multiple images is performed based on the feature weights corresponding to the different pixels in each image, so that the finally fused image preserves image quality in detail.
  • FIG. 1 is a schematic structural diagram of hardware of a mobile terminal that implements various embodiments of the present invention
  • FIG. 2 is a schematic diagram of a wireless communication system of the mobile terminal shown in FIG. 1;
  • FIG. 3 is a schematic flowchart 1 of an image synthesizing method according to an embodiment of the present invention.
  • FIG. 4 is a schematic diagram of two color spaces according to an embodiment of the present invention.
  • FIG. 5a is a schematic diagram of two initial images according to an embodiment of the present invention.
  • FIG. 5b is a schematic diagram of three initial images according to an embodiment of the present invention.
  • FIG. 6 is a second schematic flowchart of an image synthesizing method according to an embodiment of the present invention.
  • FIG. 7 is a diagram showing an example of wavelet decomposition for an image according to an embodiment of the present invention.
  • FIG. 8 is a schematic diagram of processing logic according to an embodiment of the present invention.
  • FIG. 9 is a schematic diagram of a synthesis result according to an embodiment of the present invention.
  • FIG. 10 is a schematic diagram comparing the effects of an embodiment of the present invention and another synthesis scheme;
  • FIG. 11 is a schematic structural diagram of an image synthesizing apparatus according to an embodiment of the present invention.
  • it should be noted that the image processing apparatus in this embodiment may be a mobile terminal, a server, or a terminal device such as a personal computer, a laptop, or a camera.
  • in the following, the image processing apparatus is described taking a mobile terminal as an example; the mobile terminal may be implemented in various forms.
  • the terminals described in the present invention may include mobile terminals such as mobile phones, smart phones, notebook computers, digital broadcast receivers, personal digital assistants (PDAs), tablet computers (PADs), portable multimedia players (PMPs), and navigation devices, as well as fixed terminals such as digital TVs and desktop computers.
  • in the following, it is assumed that the terminal is a mobile terminal.
  • those skilled in the art will appreciate that configurations in accordance with embodiments of the present invention can be applied to fixed type terminals in addition to components that are specifically for mobile purposes.
  • FIG. 1 is a schematic diagram showing the hardware structure of a mobile terminal embodying various embodiments of the present invention.
  • the mobile terminal 100 may include an audio/video (A/V) input unit 120, an output unit 150, a memory 160, an interface unit 170, a controller 180, a power supply unit 190, a sensing unit, and the like.
  • Figure 1 illustrates a mobile terminal having various components, but it should be understood that not all illustrated components are required to be implemented. More or fewer components can be implemented instead. The elements of the mobile terminal will be described in detail below.
  • the A/V input unit 120 is for receiving an audio or video signal.
  • the A/V input unit 120 may include a camera 121 that processes image data of still pictures or video obtained by an image capturing device in a video capturing mode or an image capturing mode.
  • the processed image frame can be displayed on the display unit 151.
  • the image frames processed by the camera 121 may be stored in the memory 160 (or other storage medium) or transmitted via the wireless communication unit, and two or more cameras 121 may be provided according to the configuration of the mobile terminal.
  • the interface unit 170 serves as an interface through which at least one external device can connect with the mobile terminal 100.
  • the external device may include a wired or wireless headset port, an external power (or battery charger) port, a wired or wireless data port, a memory card port, a port for connecting a device having an identification module, and an audio input/output. (I/O) port, video I/O port, headphone port, and more.
  • the identification module may store various kinds of information for verifying that a user is authorized to use the mobile terminal 100, and may include a User Identity Module (UIM), a Subscriber Identity Module (SIM), a Universal Subscriber Identity Module (USIM), and the like.
  • the device having the identification module may take the form of a smart card, and thus the identification device may be connected to the mobile terminal 100 via a port or other connection device.
  • the interface unit 170 can be configured to receive input (e.g., data information, power, etc.) from an external device and transmit the received input to one or more components within the mobile terminal 100, or can be used to transfer data between the mobile terminal and an external device.
  • in addition, the interface unit 170 may serve as a path through which power is supplied from a base to the mobile terminal 100, or as a path through which various command signals input from the base are transmitted to the mobile terminal.
  • Output unit 150 is configured to provide an output signal (eg, an audio signal, a video signal, an alarm signal, a vibration signal, etc.) in a visual, audio, and/or tactile manner.
  • the output unit 150 may include a display unit 151, an audio output module, an alarm unit, and the like.
  • the display unit 151 can display information processed in the mobile terminal 100. For example, when the mobile terminal 100 is in a phone call mode, the display unit 151 can display a user interface (UI) or a graphical user interface (GUI) related to a call or other communication (eg, text messaging, multimedia file download, etc.). When the mobile terminal 100 is in a video call mode or an image capturing mode, the display unit 151 may display a captured image and/or a received image, a UI or GUI showing a video or image and related functions, and the like.
  • the display unit 151 can function as an input device and an output device.
  • the display unit 151 may include at least one of a liquid crystal display (LCD), a thin film transistor LCD (TFT-LCD), an organic light emitting diode (OLED) display, a flexible display, a three-dimensional (3D) display, and the like.
  • some of these displays may be configured to be transparent to allow the exterior to be viewed through them; such displays may be referred to as transparent displays, and a typical transparent display is, for example, a TOLED (Transparent Organic Light Emitting Diode) display.
  • depending on the particular desired implementation, the mobile terminal 100 may include two or more display units (or other display devices); for example, the mobile terminal may include an external display unit (not shown) and an internal display unit (not shown).
  • the touch screen can be used to detect touch input pressure as well as touch input position and touch input area.
  • the memory 160 may store a software program or the like that performs processing and control operations performed by the controller 180, or may temporarily store data (for example, a phone book, a message, a still image, a video, and the like) that has been output or is to be output. Moreover, the memory 160 can store data regarding vibrations and audio signals of various manners that are output when a touch is applied to the touch screen.
  • the memory 160 may include at least one type of storage medium including a flash memory, a hard disk, a multimedia card, a card type memory (eg, SD or DX memory, etc.), a random access memory (RAM), a static random access memory ( SRAM), read only memory (ROM), electrically erasable programmable read only memory (EEPROM), programmable read only memory (PROM), magnetic memory, magnetic disk, optical disk, and the like.
  • the mobile terminal 100 can cooperate with a network storage device that performs a storage function of the memory 160 through a network connection.
  • the controller 180 typically controls the overall operation of the mobile terminal. For example, the controller 180 performs the control and processing associated with voice calls, data communications, video calls, and the like.
  • the controller 180 may perform a pattern recognition process to recognize a handwriting input or a picture drawing input performed on the touch screen as a character or an image.
  • the power supply unit 190 receives external power or internal power under the control of the controller 180 and provides appropriate power required to operate the various components and components.
  • the various embodiments described herein can be implemented in a computer readable medium using, for example, computer software, hardware, or any combination thereof.
  • the embodiments described herein may be implemented using at least one of application specific integrated circuits (ASICs), digital signal processors (DSPs), digital signal processing devices (DSPDs), programmable logic devices (PLDs), field programmable gate arrays (FPGAs), processors, controllers, microcontrollers, microprocessors, and electronic units designed to perform the functions described herein; in some cases, such embodiments may be implemented in the controller 180.
  • implementations such as procedures or functions may be implemented with separate software modules that permit the execution of at least one function or operation.
  • the software code can be implemented by a software application written in any suitable programming language, and may be stored in the memory 160 and executed by the controller 180.
  • the mobile terminal has been described in terms of its function.
  • hereinafter, a slide type mobile terminal among various types of mobile terminals, such as folding type, bar type, swing type, and slide type mobile terminals, will be described as an example. However, the present invention can be applied to any type of mobile terminal and is not limited to the slide type mobile terminal.
  • the mobile terminal 100 as shown in FIG. 1 may be configured to operate using a communication system such as a wired and wireless communication system and a satellite-based communication system that transmits data via frames or packets.
  • Such communication systems may use different air interfaces and/or physical layers.
  • examples of air interfaces used by communication systems include Frequency Division Multiple Access (FDMA), Time Division Multiple Access (TDMA), Code Division Multiple Access (CDMA), the Universal Mobile Telecommunications System (UMTS) (in particular, Long Term Evolution (LTE)), the Global System for Mobile Communications (GSM), and so on.
  • the following description relates to a CDMA communication system, but such teachings are equally applicable to other types of systems.
  • a CDMA wireless communication system can include a plurality of mobile terminals 100, a plurality of base stations (BS) 270, a base station controller (BSC) 275, and a mobile switching center (MSC) 280.
  • the MSC 280 is configured to interface with a public switched telephone network (PSTN) 290.
  • the MSC 280 is also configured to interface with a BSC 275 that can be coupled to the BS 270 via a backhaul line.
  • the backhaul line can be constructed in accordance with any of a number of known interfaces including, for example, E1/T1, ATM, IP, PPP, Frame Relay, HDSL, ADSL, or xDSL. It will be appreciated that the system as shown in FIG. 2 can include multiple BSCs 275.
  • each BS 270 can serve one or more partitions (or regions), each of which is covered by an omnidirectional antenna or an antenna pointing in a particular direction radially away from the BS 270; alternatively, each partition can be covered by two or more antennas for diversity reception.
  • Each BS 270 can be configured to support multiple frequency allocations, and each frequency allocation has a particular frequency spectrum (eg, 1.25 MHz, 5 MHz, etc.).
  • BS 270 may also be referred to as a Base Transceiver Subsystem (BTS) or other equivalent terminology.
  • the term "base station” can be used to generally refer to a single BSC 275 and at least one BS 270.
  • a base station may also be referred to as a "cell site."
  • alternatively, each partition of a particular BS 270 may be referred to as a cell site.
  • a broadcast transmitter (BT) 295 transmits a broadcast signal to the mobile terminal 100 operating within the system.
  • a broadcast receiving module may be provided at the mobile terminal 100 to receive a broadcast signal transmitted by the BT 295.
  • a Global Positioning System (GPS) satellite 300 helps locate at least one of the plurality of mobile terminals 100.
  • a plurality of satellites 300 are depicted, but it is understood that useful positioning information can be obtained using any number of satellites.
  • a GPS module located in the mobile terminal is typically configured to cooperate with the satellite 300 to obtain desired positioning information. Instead of GPS tracking technology or in addition to GPS tracking technology, other techniques that can track the location of the mobile terminal can be used. Additionally, at least one GPS satellite 300 can selectively or additionally process satellite DMB transmissions.
  • BS 270 receives reverse link signals from various mobile terminals 100.
  • Mobile terminal 100 typically participates in calls, messaging, and other types of communications.
  • Each reverse link signal received by a particular BS 270 is processed within a particular BS 270.
  • the obtained data is forwarded to the relevant BSC 275.
  • the BSC provides call resource allocation and mobility management functions, including coordination of soft handoff procedures between BSs 270.
  • the BSC 275 also routes the received data to the MSC 280, which provides additional routing services for interfacing with the PSTN 290.
  • PSTN 290 interfaces with MSC 280, which forms an interface with BSC 275, and BSC 275 controls BS 270 accordingly to transmit forward link signals to mobile terminal 100.
  • based on the above hardware structure of the mobile terminal and the above communication system, various embodiments of the method and apparatus of the present invention are proposed below.
  • An embodiment of the present invention provides an image synthesizing method, as shown in FIG. 3, including:
  • Step 301: acquire at least two initial images, and convert the at least two initial images into at least two to-be-processed images, respectively; wherein the initial images are images based on a first color space, and the to-be-processed images are images based on a second color space;
  • Step 302: calculate feature weights for the at least two to-be-processed images; wherein a feature weight is a set of weights, one for each pixel of the image to be processed;
  • Step 303: determine, based on the at least two to-be-processed images, a high-frequency image portion and a low-frequency image portion corresponding to each of the to-be-processed images;
  • Step 304: fuse the at least two to-be-processed images based on the high-frequency image portions, the low-frequency image portions, and the feature weights of the to-be-processed images, to obtain a fused image corresponding to the at least two initial images.
  • the acquiring the at least two initial images includes: acquiring at least two initial images having different exposure amounts for the target object.
  • the at least two initial images may be two initial images or three initial images. It can be understood that the target object may be for the same scene or for the same person, which is not limited in this embodiment.
  • the first color space may be a red (R), green (G), blue (B) color space; the second color space may be a hue (H), saturation (S), brightness (V) color space.
  • the HSV space separates the color and brightness of the image, which is more in line with the human eye's visual experience than the RGB color space.
  • the parameters of the color in the HSV model are: hue (H), saturation (S), and brightness (V).
  • the left side of the figure represents the model of the RGB color space
  • the right side of the figure represents the model of the HSV color space
  • an image can be converted from the RGB color space to the HSV color space by the following formulas (with R, G, B normalized to [0, 1], and max and min denoting the largest and smallest of R, G, B): V = max; S = (max − min) / max (S = 0 when max = 0); H = 60 × (G − B) / (max − min) if max = R; H = 120 + 60 × (B − R) / (max − min) if max = G; H = 240 + 60 × (R − G) / (max − min) if max = B.
  • if H < 0, H is replaced by H + 360. On output, 0 ≤ V ≤ 1, 0 ≤ S ≤ 1, and 0 ≤ H ≤ 360.
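  • as an illustration only (not part of the original patent text), a minimal Python sketch of this conversion using OpenCV follows; with float32 input scaled to [0, 1], cv2.cvtColor returns H in [0, 360] and S, V in [0, 1], matching the output ranges above. The file name is hypothetical.

```python
import cv2
import numpy as np

# Load one exposure (hypothetical file name) and convert RGB -> HSV.
bgr = cv2.imread("exposure_0.jpg")
rgb = cv2.cvtColor(bgr, cv2.COLOR_BGR2RGB).astype(np.float32) / 255.0
hsv = cv2.cvtColor(rgb, cv2.COLOR_RGB2HSV)  # H in [0, 360], S and V in [0, 1]
h, s, v = cv2.split(hsv)
```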
  • Scenario 1: a scenario in which two initial images are used for subsequent processing.
  • the two initial images may be a first initial image and a second initial image, where the exposure amounts corresponding to the first initial image and the second initial image are different; it is assumed that the exposure amount of the first initial image is larger than that of the second initial image.
  • the two images have been registered, and the pixels are aligned.
  • the image is as shown in FIG. 5a, wherein the first initial image is a and the second initial image is b.
  • the feature weights in this embodiment may be normalized feature weights, so that the fused image does not exceed the original range of values.
  • specifically, for each pixel of the first image to be processed, the high-frequency image portion and the low-frequency image portion are combined with the feature weight of the corresponding pixel, to obtain the part of the first to-be-processed image's features that is retained in the fused image and contributes to the final fused image;
  • likewise, for each pixel of the second image to be processed, the high-frequency image portion and the low-frequency image portion are combined with the feature weight of the corresponding pixel, to obtain the part of the second to-be-processed image's features that is retained in the final fused image.
  • Scenario 2: a scenario in which three initial images are used for subsequent processing.
  • the three initial images may be a first initial image, a second initial image, and a third initial image, where the exposure amounts corresponding to the first, second, and third initial images are different; it is assumed that the exposure amount of the first initial image is larger than that of the second initial image, and the exposure amount of the second initial image is larger than that of the third initial image.
  • the image is as shown in FIG. 5b, wherein the first initial image is a, and the second initial image is b.
  • the third initial image is image c.
  • the feature weights in this embodiment may be normalized feature weights, so that the fused image does not exceed the original range of values.
  • obtaining the fused image may include: for each pixel of the first, second, and third images to be processed, combining the high-frequency image portion and the low-frequency image portion with the feature weight of the corresponding pixel, respectively;
  • each image to be processed thus retains, in the fused image, the part of its features that contributes to the final fused image.
  • in this way, at least two initial images are acquired and respectively converted into at least two to-be-processed images; feature weights for the at least two to-be-processed images are calculated; and, based on the high-frequency image portion and the low-frequency image portion of each image to be processed and its feature weights, the at least two to-be-processed images are fused to obtain a fused image corresponding to the at least two initial images.
  • the fusion of the multiple images is performed based on the feature weights corresponding to the different pixels in each image, so that the finally fused image preserves image quality in detail.
  • An embodiment of the present invention provides an image synthesizing method, as shown in FIG. 3, including:
  • Step 301: acquire at least two initial images, and convert the at least two initial images into at least two to-be-processed images, respectively; wherein the initial images are images based on a first color space, and the to-be-processed images are images based on a second color space;
  • Step 302: calculate feature weights for the at least two to-be-processed images; wherein a feature weight is a set of weights, one for each pixel of the image to be processed;
  • Step 303: determine, based on the at least two to-be-processed images, a high-frequency image portion and a low-frequency image portion corresponding to each of the to-be-processed images;
  • Step 304: fuse the at least two to-be-processed images based on the high-frequency image portions, the low-frequency image portions, and the feature weights of the to-be-processed images, to obtain a fused image corresponding to the at least two initial images.
  • the acquiring the at least two initial images includes: acquiring at least two initial images having different exposure amounts for the target object.
  • the at least two initial images may be two initial images or three initial images. It can be understood that the target object may be the same scene or the same person, which is not limited in this embodiment.
  • the first color space may be a red (R), green (G), blue (B) color space; the second color space may be a hue (H), saturation (S), brightness (V) color space.
  • the HSV space separates the color and brightness of the image, which is more in line with the human eye's visual experience than the RGB color space.
  • the parameters of the color in the HSV model are: hue (H), saturation (S), and brightness (V).
  • the left side of the figure represents the model of the RGB color space
  • the right side of the figure represents the model of the HSV color space
  • an image of the RGB color space can be converted into the HSV color space by the formulas given above.
  • further, the method comprises: determining, based on the at least two to-be-processed images, a high-frequency image portion and a low-frequency image portion corresponding to each of the to-be-processed images.
  • the high-frequency image portion and the low-frequency image portion corresponding to each image to be processed may be obtained by wavelet decomposition over the pixels of the image to be processed; for example, the coefficients Iwave_k(i, j) = wave(I_k)(i, j) may be used, where I_k denotes the kth image to be processed, wave() is the wavelet decomposition function, and (i, j) are the horizontal and vertical coordinates of a pixel.
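  • as one possible realization of this decomposition (a sketch, not the patent's reference code; the wavelet family is not specified in the text, so 'haar' is an assumption), the PyWavelets library can compute a single-level 2-D decomposition into one low-frequency and three high-frequency sub-bands:

```python
import numpy as np
import pywt

def wavelet_decompose(channel: np.ndarray):
    """One-level 2-D wavelet decomposition of an image channel.

    Returns the low-frequency approximation cA and the high-frequency
    detail sub-bands (cH, cV, cD). 'haar' is an assumed wavelet family.
    """
    cA, (cH, cV, cD) = pywt.dwt2(channel.astype(np.float32), "haar")
    return cA, (cH, cV, cD)

def wavelet_reconstruct(cA, details):
    """Inverse transform, used after the sub-bands have been fused."""
    return pywt.idwt2((cA, details), "haar")
```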
  • calculating the feature weights for the at least two to-be-processed images may include: calculating the region contrast and the gradient value of each pixel in each image to be processed; determining the feature weights of each image to be processed based on the region contrast and the gradient value of each pixel; and determining a normalized feature weight for each image to be processed based on the feature weights of the at least two images to be processed.
  • the region contrast of each pixel can be calculated, for example, as CL(i, j) = |p(i, j) − m(i, j)| / m(i, j), where p(i, j) is the pixel value of the pixel and m(i, j) is the local region average around it.
  • the Sobel operator is used to calculate the gradients of the image in the horizontal and vertical directions.
  • the operator consists of two 3x3 kernels, one horizontal and one vertical, which are convolved with the image in the plane to obtain approximations of the lateral and longitudinal luminance differences. If I represents the original image and Gx and Gy represent the images detected by the lateral and longitudinal edges, respectively, the gradient of each pixel of the image is GL(i, j) = sqrt(Gx(i, j)^2 + Gy(i, j)^2), with Gx = [[-1, 0, +1], [-2, 0, +2], [-1, 0, +1]] * I and Gy = [[-1, -2, -1], [0, 0, 0], [+1, +2, +1]] * I (* denotes 2-D convolution).
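  • a short sketch of this weight computation (assuming, as suggested above, that local contrast and gradient magnitude are combined by multiplication; the window size and epsilon guard are illustrative choices, not values from the patent):

```python
import cv2
import numpy as np

def feature_weight(channel: np.ndarray, win: int = 7) -> np.ndarray:
    """Per-pixel weight from local region contrast and Sobel gradient."""
    ch = channel.astype(np.float32)
    m = cv2.blur(ch, (win, win))                   # local region average m(i, j)
    cl = np.abs(ch - m) / (m + 1e-6)               # local contrast CL(i, j)
    gx = cv2.Sobel(ch, cv2.CV_32F, 1, 0, ksize=3)  # horizontal Sobel response Gx
    gy = cv2.Sobel(ch, cv2.CV_32F, 0, 1, ksize=3)  # vertical Sobel response Gy
    gl = np.sqrt(gx ** 2 + gy ** 2)                # gradient magnitude GL(i, j)
    return cl * gl                                 # joint feature weight WM(i, j)
```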
  • fusing the at least two to-be-processed images based on the high-frequency image portions, the low-frequency image portions, and the feature weights of the to-be-processed images includes: multiplying the high-frequency image portion and the low-frequency image portion of each of the at least two to-be-processed images by the normalized feature weight of that image, to obtain normalized to-be-processed images; and summing the normalized to-be-processed images to complete the fusion, for example as F(i, j) = Σ_{k=1..n} WM*_k(i, j) × Iwave_k(i, j), where n is the number of initial images, WM*_k is the normalized feature weight of the kth image, and Iwave_k(i, j) is the wavelet decomposition coefficient of pixel (i, j) in the kth image.
  • in this way, at least two initial images are acquired and respectively converted into at least two to-be-processed images; feature weights for the at least two to-be-processed images are calculated; and, based on the high-frequency image portion and the low-frequency image portion of each image to be processed and its feature weights, the at least two to-be-processed images are fused to obtain a fused image corresponding to the at least two initial images.
  • the fusion of the multiple images is performed based on the feature weights corresponding to the different pixels in each image, so that the finally fused image preserves image quality in detail.
  • in addition, the high-frequency image portion and the low-frequency image portion of each pixel are obtained by wavelet transform, and the HDR image is synthesized by selecting, using the joint region-contrast and gradient features, the pixels that satisfy the HDR requirement; the generated HDR image can effectively highlight the dark details of the scene and suppress the overexposed areas of the image.
  • the embodiment of the invention further describes a computer storage medium, wherein the computer storage medium stores a computer program for executing the image synthesis method described in the above embodiments.
  • based on the schemes given in the above two embodiments, this embodiment describes in detail an image synthesis method using three initial images with different exposure amounts; as shown in FIG. 6, the method includes the following steps:
  • S100: acquire three images of different exposures.
  • S200: convert the images from the RGB color space to the HSV space.
  • S300: use wavelet transform to decompose each HSV image into high-frequency and low-frequency parts.
  • S100: acquire three images of different exposures.
  • the three images are a low-exposure, a normal-exposure, and an over-exposure image. It should be noted that the three images are assumed to have been registered, with pixels aligned. The images are shown in Figure 5b.
  • S200: convert the images from the RGB color space to the HSV space. Since the HSV space separates the color and brightness of an image, it conforms to the visual perception of the human eye better than the RGB color space does.
  • the parameters of a color in the HSV model are: hue (H), saturation (S), and brightness (V), as shown in Figure 4.
  • S300: use wavelet transform to decompose each HSV image into high-frequency and low-frequency parts.
  • the wavelet transform is a multi-scale, multi-resolution decomposition of an image that can focus on any detail of the image; it has been called a mathematical microscope.
  • wavelet multiresolution decomposition has been used for pixel-level image fusion.
  • the inherent characteristics of the wavelet transform give it the following advantages in image processing: 1. perfect reconstruction capability, ensuring no information loss and no redundant information in the decomposition process; 2. decomposition of the image into a combination of an average image and detail images, which represent different structures of the image, so that it is easy to extract the structural information and detail information of the original image.
  • FIG. 7 shows a schematic diagram of wavelet decomposition of an image: (a) is the original image, and (b), (c), and (d) are the wavelet coefficient images after one, two, and three levels of decomposition, respectively.
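  • the perfect-reconstruction property (advantage 1 above) is easy to check with a small, non-authoritative sketch ('haar' and the three-level depth mirror FIG. 7(d) but are otherwise arbitrary choices):

```python
import numpy as np
import pywt

# Decompose an image three times and invert; reconstruction is lossless.
img = np.random.rand(128, 128)
coeffs = pywt.wavedec2(img, "haar", level=3)
rec = pywt.waverec2(coeffs, "haar")
assert np.allclose(img, rec)  # no information loss in the decomposition
```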
  • the wavelet decomposition coefficients of three different exposure images in HSV space are obtained.
  • the underexposed image has better contrast in highlight-detail areas, where image details are clear, such as the clouds in the sky; the overexposed image is clear in dark areas, for example the green grass under the city wall, where image details are clear; the normally exposed image performs moderately in both dark details and highlight details, and its overall visual effect is average.
  • an HDR image needs to preserve the dark details and highlight details of the scene, enhancing the overall brightness range of the image. Therefore, as shown in Figure 8, after wavelet decomposition the coefficients carrying these relatively clear details need to be preserved, and the choice of fusion rules is the key to the fusion algorithm.
  • local area contrast and global gradient image features are computed on the decomposed wavelet coefficients, and weight maps of the fusion coefficients for the three different exposure images are generated.
  • the calculation process is as follows: for any pixel p with coordinates (i, j) in the image, the initialization weight of the pixel participating in the fusion algorithm may be taken, for example, as the joint feature WM(i, j) = CL(i, j) × GL(i, j) (1), where CL(i, j) is the local area contrast of the pixel and GL(i, j) is the gradient value of the pixel.
  • the local area contrast can be calculated, for example, as CL(i, j) = |p(i, j) − m(i, j)| / m(i, j) (2), where p(i, j) is the pixel value of the pixel and m(i, j) is the local region average.
  • the Sobel operator is used to calculate the gradients of the image in the horizontal and vertical directions.
  • the operator consists of two 3x3 kernels, one horizontal and one vertical, which are convolved with the image in the plane to obtain approximations of the lateral and longitudinal luminance differences. If I represents the original image and Gx and Gy represent the images detected by the lateral and longitudinal edges, respectively, the gradient G of each pixel of the image is given by formula (3): G(i, j) = sqrt(Gx(i, j)^2 + Gy(i, j)^2), with Gx = [[-1, 0, +1], [-2, 0, +2], [-1, 0, +1]] * I and Gy = [[-1, -2, -1], [0, 0, 0], [+1, +2, +1]] * I.
  • with formulas (1) to (3), the fusion weight maps of the three different exposure images can be calculated respectively.
  • let WM_k(i, j) denote the fusion weight coefficient of the kth image; the weights of the fusion coefficients of the different exposure images are normalized, for example as WM*_k(i, j) = WM_k(i, j) / Σ_{t=1..n} WM_t(i, j) (4), so that Σ_k WM*_k(i, j) = 1, which ensures that pixel values will not exceed the original range after the images are fused.
  • the wavelet coefficients of the three decomposed images can then be fused; the high-frequency coefficients and the low-frequency coefficients are fused in the same manner, each being multiplied by its fusion weight coefficient and summed, for example as F(i, j) = Σ_{k=1..n} WM*_k(i, j) × Iwave_k(i, j) (5), where Iwave_k(i, j) is the wavelet decomposition coefficient of pixel (i, j) in the kth image. It can be seen from the above formulas that the larger the region contrast and the gradient feature, the more pronounced the regional characteristics of the pixel and the clearer the image details; such pixels need to be preserved in the HDR image, so their fusion weight is also large.
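  • a compact sketch of the fusion rule of formulas (4) and (5) (a non-authoritative illustration; the weight maps are assumed to have been resampled to the resolution of each wavelet sub-band):

```python
import numpy as np

def fuse_subband(coeffs, weights):
    """Fuse one wavelet sub-band across n exposures.

    coeffs:  list of n arrays Iwave_k(i, j), one per exposure.
    weights: list of n weight maps WM_k(i, j) at the same resolution.
    """
    w = np.stack(weights).astype(np.float64)
    w /= w.sum(axis=0) + 1e-12   # formula (4): per-pixel normalization
    c = np.stack(coeffs)
    return (w * c).sum(axis=0)   # formula (5): weighted sum of coefficients
```

  • the same function applies to the low-frequency approximation and to each high-frequency detail sub-band; the inverse wavelet transform then yields the fused channel.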
  • Figure 9 shows two sets of HDR images synthesized from different exposures.
  • the first set of images is based on the three images of Figure 5b.
  • the composite images are obtained as shown; it can be seen that in the HDR composite image, the sky retains the clear areas of the underexposed image, and the grass on the wall retains the dark-part details of the overexposed image area; see the elliptically marked regions. Figure 10 compares this with the effect of another HDR algorithm. Since the other HDR algorithm is not disclosed, two points need to be clarified first: (1) it cannot be determined whether the Qualcomm result is synthesized from three images of different exposures; (2) if it is produced from three frames, the synthesis algorithm is unknown. Subject to the above two points, this patent is compared with the performance of the Qualcomm HDR algorithm.
  • an embodiment of the present invention provides an image synthesizing apparatus, as shown in FIG. 11, including:
  • the acquiring unit 1101, configured to acquire at least two initial images and convert the at least two initial images into at least two to-be-processed images, respectively; wherein the initial images are images based on the first color space, and the to-be-processed images are images based on the second color space;
  • the calculating unit 1102, configured to determine, based on the at least two to-be-processed images, a high-frequency image portion and a low-frequency image portion corresponding to each of the to-be-processed images, and to calculate feature weights for the at least two to-be-processed images; wherein a feature weight is a set of weights, one for each pixel of the image to be processed;
  • the merging unit 1103, configured to fuse the at least two to-be-processed images based on the high-frequency image portions, the low-frequency image portions, and the feature weights of the to-be-processed images, to obtain a fused image corresponding to the at least two initial images.
  • the obtaining unit 1101 is further configured to acquire at least two initial images having different exposure amounts for the target object.
  • the first color space may be a red (R), green (G), blue (B) color space; the second color space may be a hue (H), saturation (S), brightness (V) color space.
  • the HSV space separates the color and brightness of the image, which is more in line with the human eye's visual experience than the RGB color space.
  • the parameters of the color in the HSV model are: hue (H), saturation (S), and brightness (V).
  • the left side of the figure represents the model of the RGB color space
  • the right side of the figure represents the model of the HSV color space
  • an image of the RGB color space can be converted into the HSV color space by the formulas given above.
  • the calculating unit is configured to determine, based on the at least two to-be-processed images, a high-frequency image portion and a low-frequency image portion corresponding to each of the to-be-processed images.
  • the high-frequency image portion and the low-frequency image portion corresponding to each image to be processed may be obtained by wavelet decomposition over the pixels of the image to be processed; for example, the coefficients Iwave_k(i, j) = wave(I_k)(i, j) may be used, where I_k denotes the kth image to be processed, wave() is the wavelet decomposition function, and (i, j) are the horizontal and vertical coordinates of a pixel.
  • the calculating unit is configured to calculate the region contrast and the gradient value of each pixel in each image to be processed; to determine the feature weights of each image to be processed based on the region contrast and the gradient value of each pixel; and to determine normalized feature weights for each image to be processed based on the feature weights of the at least two images to be processed.
  • the region contrast of each pixel can be calculated, for example, as CL(i, j) = |p(i, j) − m(i, j)| / m(i, j), where p(i, j) is the pixel value of the pixel and m(i, j) is the local region average around it.
  • the Sobel operator is used to calculate the gradients of the image in the horizontal and vertical directions.
  • the operator consists of two 3x3 kernels, one horizontal and one vertical, which are convolved with the image in the plane to obtain approximations of the lateral and longitudinal luminance differences. If I represents the original image and Gx and Gy represent the images detected by the lateral and longitudinal edges, respectively, the gradient of each pixel of the image is GL(i, j) = sqrt(Gx(i, j)^2 + Gy(i, j)^2), with Gx = [[-1, 0, +1], [-2, 0, +2], [-1, 0, +1]] * I and Gy = [[-1, -2, -1], [0, 0, 0], [+1, +2, +1]] * I.
  • the merging unit is configured to multiply the high-frequency image portion and the low-frequency image portion of each of the at least two to-be-processed images by the normalized feature weight of that image, to obtain normalized to-be-processed images, and to sum the normalized to-be-processed images according to formula (5) above to complete the fusion, where n is the number of initial images and Iwave_k(i, j) is the wavelet decomposition coefficient of pixel (i, j) in the kth image.
  • further, the merging unit is configured to convert the fused image to obtain a fused image based on the first color space. It can be understood that the conversion described here may be the inverse of the first-to-second color space conversion provided in this embodiment, and the resulting image is in the RGB color space.
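  • a minimal sketch of this final step (the variable names h_fused, s_fused, and v_fused are hypothetical placeholders for the fused HSV channels produced above):

```python
import cv2
import numpy as np

# Merge the fused channels and invert the earlier RGB -> HSV conversion.
fused_hsv = cv2.merge([h_fused, s_fused, v_fused]).astype(np.float32)
fused_rgb = np.clip(cv2.cvtColor(fused_hsv, cv2.COLOR_HSV2RGB), 0.0, 1.0)
bgr8 = cv2.cvtColor((fused_rgb * 255).astype(np.uint8), cv2.COLOR_RGB2BGR)
cv2.imwrite("hdr_result.jpg", bgr8)  # hypothetical output file name
```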
  • in this way, at least two initial images are acquired and respectively converted into at least two to-be-processed images; feature weights for the at least two to-be-processed images are calculated; and, based on the high-frequency image portion and the low-frequency image portion of each image to be processed and its feature weights, the at least two to-be-processed images are fused to obtain a fused image corresponding to the at least two initial images.
  • the fusion of the multiple images is performed based on the feature weights corresponding to the different pixels in each image, so that the finally fused image preserves image quality in detail.
  • further, the high-frequency image portion and the low-frequency image portion of each pixel are obtained by wavelet transform, and the HDR image is synthesized by selecting, using the joint region-contrast and gradient features, the pixels that satisfy the HDR requirement; the generated HDR image can effectively highlight the dark details of the scene and suppress the overexposed areas of the image.
  • in practical applications, the obtaining unit 1101, the calculating unit 1102, and the merging unit 1103 can all run on a computer, and can be implemented by a central processing unit (CPU), a microprocessor (MPU), a digital signal processor (DSP), or a field-programmable gate array (FPGA) located on the computer.
  • the disclosed apparatus and method may be implemented in other manners.
  • the device embodiments described above are merely illustrative; for example, the division into units is only a logical functional division, and in actual implementation there may be other divisions: multiple units or components may be combined or integrated into another system, or some features may be ignored or not performed.
  • in addition, the coupling, direct coupling, or communication connection between the components shown or discussed may be indirect coupling or communication connection through some interfaces, devices, or units, and may be electrical, mechanical, or in other forms.
  • the units described above as separate components may or may not be physically separated, and the components displayed as the unit may or may not be physical units, that is, may be located in one place or distributed to multiple network units; Some or all of the units may be selected according to actual needs to achieve the purpose of the solution of the embodiment.
  • each functional unit in each embodiment of the present invention may be integrated into one processing unit, or each unit may exist separately, or two or more units may be integrated into one unit;
  • the unit can be implemented in the form of hardware or in the form of hardware plus software functional units.
  • the foregoing program may be stored in a computer readable storage medium; when the program is executed, the steps of the above method embodiments are performed.
  • the foregoing storage medium includes various media that can store program code, such as a removable storage device, a read-only memory (ROM), a random access memory (RAM), a magnetic disk, or an optical disk.
  • alternatively, if the above-described integrated unit of the present invention is implemented in the form of a software function module and sold or used as a standalone product, it may be stored in a computer readable storage medium.
  • based on such understanding, the technical solutions of the embodiments of the present invention, in essence or in the part contributing to the prior art, may be embodied in the form of a software product; the computer software product is stored in a storage medium and includes a number of instructions for enabling a computer device (which may be a personal computer, a server, a network device, or the like) to perform all or part of the methods of the various embodiments of the present invention.
  • the foregoing storage medium includes various media that can store program codes, such as a mobile storage device, a ROM, a RAM, a magnetic disk, or an optical disk.
  • the embodiments of the invention perform the fusion of multiple images based on the feature weights corresponding to the different pixels in each image, thereby ensuring that the finally fused image preserves image quality in detail.

Abstract

Disclosed are an image synthesis method and apparatus, and a storage medium. The method comprises: obtaining at least two initial images, and converting the at least two initial images into at least two to-be-processed images, respectively, wherein the initial images are images based on a first color space, and the to-be-processed images are images based on a second color space; determining a high-frequency image portion and a low-frequency image portion corresponding to each of the to-be-processed images on the basis of the at least two to-be-processed images; and synthesizing the at least two to-be-processed images on the basis of the high-frequency image portions and the low-frequency image portions of the to-be-processed images and feature weights of the to-be-processed images, to obtain the synthesized image corresponding to the at least two initial images.

Description

Image synthesis method and apparatus, and storage medium
Technical Field
The present invention relates to image processing technologies in the field of signal processing, and in particular, to an image synthesis method and apparatus, and a storage medium.
Background Art
At present, a high dynamic range (HDR, High-Dynamic Range) image can provide a wider dynamic range and more image detail than an ordinary image. From low dynamic range (LDR, Low-Dynamic Range) images taken at different exposure times, the LDR image with the best detail at each exposure time is used to synthesize the final HDR image, which better reflects the visual effect of the real environment. Current HDR synthesis algorithms mainly follow two strategies. The first captures images at different exposures, estimates the camera's illumination response curve to map the brightness range of the image from a low dynamic range to a high dynamic range, and then uses a tone mapping algorithm to map the image to a bit depth suitable for viewing on an image display. The second captures a single image and, through contrast and brightness adjustment, enhances the contrast of underexposed areas of the image and suppresses the contrast of overexposed areas. The first method, based on the principle of the physical imaging camera response curve, can obtain a relatively natural HDR image, but the process is complicated and the algorithm complexity is high; the second method is more direct and less complex, and can repair underexposed areas, but it is difficult to restore overexposed areas to the actual scene brightness.
Summary of the Invention
In view of this, the main purpose of the embodiments of the present invention is to provide an image synthesis method and apparatus, and a storage medium, which can solve at least the above problems in the prior art.
A first aspect of the embodiments of the present invention provides an image synthesis method, including:
acquiring at least two initial images, and converting the at least two initial images into at least two to-be-processed images, respectively; wherein the initial images are images based on a first color space, and the to-be-processed images are images based on a second color space;
calculating feature weights for the at least two to-be-processed images, wherein a feature weight is a set of weights, one for each pixel of the image to be processed;
determining, based on the at least two to-be-processed images, a high-frequency image portion and a low-frequency image portion corresponding to each of the to-be-processed images; and
fusing the at least two to-be-processed images based on the high-frequency image portions, the low-frequency image portions, and the feature weights of the to-be-processed images, to obtain a fused image corresponding to the at least two initial images.
A second aspect of the embodiments of the present invention provides an image synthesis apparatus, including:
an acquiring unit configured to acquire at least two initial images and convert the at least two initial images into at least two to-be-processed images, respectively; wherein the initial images are images based on the first color space, and the to-be-processed images are images based on the second color space;
a calculating unit configured to determine, based on the at least two to-be-processed images, a high-frequency image portion and a low-frequency image portion corresponding to each of the to-be-processed images, and to calculate feature weights for the at least two to-be-processed images; wherein a feature weight is a set of weights, one for each pixel of the image to be processed;
a merging unit configured to fuse the at least two to-be-processed images based on the high-frequency image portions, the low-frequency image portions, and the feature weights of the to-be-processed images, to obtain a fused image corresponding to the at least two initial images.
A third aspect of the embodiments of the present invention provides a computer storage medium, wherein the computer storage medium stores a computer program for executing the image synthesis method described above.
With the image synthesis method and apparatus and the storage medium provided by the embodiments of the present invention, at least two initial images are acquired and respectively converted into at least two to-be-processed images; feature weights for the at least two to-be-processed images are calculated; and, based on the high-frequency image portion and the low-frequency image portion of each image to be processed and its feature weights, the at least two to-be-processed images are fused to obtain a fused image corresponding to the at least two initial images. In this way, the fusion of multiple images is performed based on the feature weights corresponding to the different pixels in each image, so that the finally fused image preserves image quality in detail.
BRIEF DESCRIPTION OF THE DRAWINGS
FIG. 1 is a schematic diagram of the hardware structure of a mobile terminal implementing various embodiments of the present invention;

FIG. 2 is a schematic diagram of a wireless communication system for the mobile terminal shown in FIG. 1;

FIG. 3 is a first schematic flowchart of an image synthesis method according to an embodiment of the present invention;

FIG. 4 is a schematic diagram of two color spaces according to an embodiment of the present invention;

FIG. 5a is a schematic diagram of two initial images according to an embodiment of the present invention;

FIG. 5b is a schematic diagram of three initial images according to an embodiment of the present invention;

FIG. 6 is a second schematic flowchart of an image synthesis method according to an embodiment of the present invention;

FIG. 7 is an example of wavelet decomposition applied to an image according to an embodiment of the present invention;

FIG. 8 is a schematic diagram of processing logic according to an embodiment of the present invention;

FIG. 9 is a schematic diagram of synthesis results according to an embodiment of the present invention;

FIG. 10 is a schematic comparison of the effect of an embodiment of the present invention with that of another synthesis scheme;

FIG. 11 is a schematic structural diagram of an image synthesis apparatus according to an embodiment of the present invention.
DETAILED DESCRIPTION
It should be understood that the specific embodiments described herein are merely illustrative of the invention and are not intended to limit it.

It should be noted that the image processing apparatus in this embodiment may be a mobile terminal, a server, or a terminal device such as a personal computer, a notebook computer, or a camera.

In the following, the image processing apparatus is described using a mobile terminal as an example; the mobile terminal may be implemented in various forms. For example, the terminals described in the present invention may include mobile terminals such as mobile phones, smartphones, notebook computers, digital broadcast receivers, personal digital assistants (PDAs), tablet computers (PADs), portable multimedia players (PMPs), and navigation devices, as well as fixed terminals such as digital TVs and desktop computers. It is assumed below that the terminal is a mobile terminal; however, those skilled in the art will appreciate that, apart from elements used specifically for mobile purposes, the configurations according to the embodiments of the present invention can also be applied to fixed-type terminals.
FIG. 1 is a schematic diagram of the hardware structure of a mobile terminal implementing various embodiments of the present invention.

The mobile terminal 100 may include an audio/video (A/V) input unit 120, an output unit 150, a memory 160, an interface unit 170, a controller 180, a power supply unit 190, a sensing unit, and so on. FIG. 1 shows a mobile terminal having various components, but it should be understood that not all of the illustrated components are required; more or fewer components may be implemented instead. The elements of the mobile terminal are described in detail below.

The A/V input unit 120 receives audio or video signals. The A/V input unit 120 may include a camera 121, which processes image data of still pictures or video obtained by an image capture device in a video capture mode or an image capture mode. The processed image frames may be displayed on a display unit 151. Image frames processed by the camera 121 may be stored in the memory 160 (or another storage medium) or transmitted via a wireless communication unit, and two or more cameras 121 may be provided depending on the configuration of the mobile terminal.
The interface unit 170 serves as an interface through which at least one external device can connect to the mobile terminal 100. For example, the external devices may include wired or wireless headset ports, an external power (or battery charger) port, wired or wireless data ports, memory card ports, ports for connecting a device having an identification module, audio input/output (I/O) ports, video I/O ports, earphone ports, and so on. The identification module may store various information for verifying that a user is authorized to use the mobile terminal 100, and may include a User Identity Module (UIM), a Subscriber Identity Module (SIM), a Universal Subscriber Identity Module (USIM), and the like. In addition, a device having an identification module (hereinafter, an "identification device") may take the form of a smart card, so the identification device may be connected to the mobile terminal 100 via a port or other connection means. The interface unit 170 may receive input (for example, data, power, and so on) from an external device and transmit it to one or more elements within the mobile terminal 100, or may transfer data between the mobile terminal and an external device. In addition, when the mobile terminal 100 is connected to an external cradle, the interface unit 170 may serve as a path through which power is supplied from the cradle to the mobile terminal 100, or as a path through which various command signals input from the cradle are transmitted to the mobile terminal. Various command signals or power input from the cradle may serve as signals for recognizing whether the mobile terminal is correctly mounted on the cradle. The output unit 150 is configured to provide output signals (for example, audio signals, video signals, alarm signals, vibration signals, and so on) in a visual, audible, and/or tactile manner.

The output unit 150 may include the display unit 151, an audio output module, an alarm unit, and so on. The display unit 151 may display information processed in the mobile terminal 100. For example, when the mobile terminal 100 is in a phone call mode, the display unit 151 may display a user interface (UI) or graphical user interface (GUI) associated with the call or other communication (for example, text messaging, multimedia file downloading, and so on). When the mobile terminal 100 is in a video call mode or an image capture mode, the display unit 151 may display captured and/or received images, a UI or GUI showing the video or images and related functions, and so on. Meanwhile, when the display unit 151 and a touch pad are overlaid in layers to form a touch screen, the display unit 151 may serve as both an input device and an output device. The display unit 151 may include at least one of a liquid crystal display (LCD), a thin-film-transistor LCD (TFT-LCD), an organic light-emitting diode (OLED) display, a flexible display, a three-dimensional (3D) display, and the like. Some of these displays may be configured to be transparent to allow viewing from the outside; these may be called transparent displays, a typical example being a TOLED (transparent organic light-emitting diode) display. Depending on the particular desired implementation, the mobile terminal 100 may include two or more display units (or other display means); for example, the mobile terminal may include an external display unit (not shown) and an internal display unit (not shown). The touch screen may be used to detect touch input pressure as well as touch input position and touch input area.
The memory 160 may store software programs for the processing and control operations performed by the controller 180, and may temporarily store data that has been or will be output (for example, a phone book, messages, still images, video, and so on). The memory 160 may also store data concerning the various vibration and audio signals output when a touch is applied to the touch screen. The memory 160 may include at least one type of storage medium, including flash memory, hard disk, multimedia card, card-type memory (for example, SD or DX memory), random access memory (RAM), static random access memory (SRAM), read-only memory (ROM), electrically erasable programmable read-only memory (EEPROM), programmable read-only memory (PROM), magnetic memory, magnetic disk, optical disk, and so on. Moreover, the mobile terminal 100 may cooperate with a network storage device that performs the storage function of the memory 160 over a network connection.

The controller 180 typically controls the overall operation of the mobile terminal. For example, the controller 180 performs control and processing associated with voice calls, data communication, video calls, and so on. The controller 180 may perform pattern recognition processing to recognize handwriting input or picture-drawing input performed on the touch screen as characters or images.

The power supply unit 190 receives external power or internal power under the control of the controller 180 and provides the appropriate power required to operate the elements and components.
The various embodiments described herein may be implemented in a computer-readable medium using, for example, computer software, hardware, or any combination thereof. For a hardware implementation, the embodiments described herein may be implemented using at least one of application-specific integrated circuits (ASICs), digital signal processors (DSPs), digital signal processing devices (DSPDs), programmable logic devices (PLDs), field-programmable gate arrays (FPGAs), processors, controllers, microcontrollers, microprocessors, and electronic units designed to perform the functions described herein; in some cases, such an embodiment may be implemented in the controller 180. For a software implementation, an embodiment such as a procedure or function may be implemented with separate software modules that allow at least one function or operation to be performed. The software code may be implemented by a software application (or program) written in any suitable programming language, and may be stored in the memory 160 and executed by the controller 180.

So far, the mobile terminal has been described in terms of its functions. Hereinafter, for brevity, a slide-type mobile terminal will be described as an example among various types of mobile terminals such as folding, bar, swing, and slide types; the present invention is, however, applicable to any type of mobile terminal and is not limited to slide-type terminals.

The mobile terminal 100 shown in FIG. 1 may be configured to operate with wired and wireless communication systems, as well as satellite-based communication systems, that transmit data via frames or packets.
A communication system in which a mobile terminal according to the present invention can operate will now be described with reference to FIG. 2.

Such communication systems may use different air interfaces and/or physical layers. For example, air interfaces used by communication systems include Frequency Division Multiple Access (FDMA), Time Division Multiple Access (TDMA), Code Division Multiple Access (CDMA), and the Universal Mobile Telecommunications System (UMTS) (in particular, Long Term Evolution (LTE)), the Global System for Mobile Communications (GSM), and so on. As a non-limiting example, the following description concerns a CDMA communication system, but the teachings apply equally to other types of systems.

Referring to FIG. 2, a CDMA wireless communication system may include a plurality of mobile terminals 100, a plurality of base stations (BS) 270, base station controllers (BSC) 275, and a mobile switching center (MSC) 280. The MSC 280 is configured to interface with a public switched telephone network (PSTN) 290. The MSC 280 is also configured to interface with the BSCs 275, which can be coupled to the BSs 270 via backhaul lines. The backhaul lines may be configured according to any of several known interfaces including, for example, E1/T1, ATM, IP, PPP, Frame Relay, HDSL, ADSL, or xDSL. It will be understood that a system as shown in FIG. 2 may include a plurality of BSCs 275.

Each BS 270 may serve one or more sectors (or regions), each sector covered by an omnidirectional antenna or an antenna pointed in a particular direction radially away from the BS 270. Alternatively, each sector may be covered by two or more antennas for diversity reception. Each BS 270 may be configured to support a plurality of frequency assignments, each frequency assignment having a particular spectrum (for example, 1.25 MHz, 5 MHz, and so on).
The intersection of a sector and a frequency assignment may be referred to as a CDMA channel. The BS 270 may also be referred to as a Base Transceiver Subsystem (BTS) or by other equivalent terms. In such a case, the term "base station" may be used to refer collectively to a single BSC 275 and at least one BS 270. A base station may also be referred to as a "cell site". Alternatively, the individual sectors of a particular BS 270 may be referred to as a plurality of cell sites.

As shown in FIG. 2, a broadcast transmitter (BT) 295 transmits broadcast signals to the mobile terminals 100 operating within the system. A broadcast receiving module may be provided in the mobile terminal 100 to receive the broadcast signals transmitted by the BT 295. In FIG. 2, several Global Positioning System (GPS) satellites 300 are shown. The satellites 300 help locate at least one of the plurality of mobile terminals 100.

In FIG. 2, a plurality of satellites 300 are depicted, but it is understood that useful positioning information may be obtained with any number of satellites. A GPS module located in the mobile terminal is typically configured to cooperate with the satellites 300 to obtain the desired positioning information. Instead of or in addition to GPS tracking technology, other techniques that can track the location of the mobile terminal may be used. In addition, at least one GPS satellite 300 may selectively or additionally handle satellite DMB transmission.

As one typical operation of the wireless communication system, the BSs 270 receive reverse-link signals from various mobile terminals 100. The mobile terminals 100 typically participate in calls, messaging, and other types of communication. Each reverse-link signal received by a particular BS 270 is processed within that BS 270, and the resulting data is forwarded to the associated BSC 275. The BSC provides call resource allocation and mobility management functions, including the coordination of soft handoff procedures between BSs 270. The BSC 275 also routes the received data to the MSC 280, which provides additional routing services for interfacing with the PSTN 290. Similarly, the PSTN 290 interfaces with the MSC 280, the MSC interfaces with the BSCs 275, and the BSCs 275 in turn control the BSs 270 to transmit forward-link signals to the mobile terminals 100.
Based on the above mobile terminal hardware structure and communication system, the various embodiments of the method and apparatus of the present invention are proposed.
Embodiment 1
An embodiment of the present invention provides an image synthesis method which, as shown in FIG. 3, includes:

Step 301: acquiring at least two initial images and converting the at least two initial images into at least two to-be-processed images, where each initial image is an image in a first color space and each to-be-processed image is an image in a second color space;

Step 302: calculating feature weights for the at least two to-be-processed images, where a feature weight is the set of per-pixel weights for an image to be processed;

Step 303: determining, based on the at least two to-be-processed images, the high-frequency image portion and the low-frequency image portion corresponding to each to-be-processed image;

Step 304: fusing the at least two to-be-processed images based on the high-frequency image portions, the low-frequency image portions, and the feature weights of the to-be-processed images, to obtain the fused image corresponding to the at least two initial images.
Here, acquiring at least two initial images includes acquiring at least two initial images of a target object taken with different exposures. The at least two initial images may be two initial images or three initial images. It can be understood that the target object may be the same scene or the same person; this embodiment does not limit it.

The first color space may be the red (R), green (G), blue (B) color space; the second color space may be the hue (H), saturation (S), value (V) color space. The HSV space separates the color and the brightness of an image and, compared with the RGB color space, better matches human visual perception. The parameters of a color in the HSV model are hue (H), saturation (S), and value (V). As shown in FIG. 4, the left side of the figure depicts a model of the RGB color space and the right side a model of the HSV color space. An image in the RGB color space can be converted to an image in the HSV color space using the following formulas:
$$V = \max(R, G, B)$$

$$S = \begin{cases} \dfrac{V - \min(R, G, B)}{V}, & V \neq 0 \\[4pt] 0, & V = 0 \end{cases}$$

$$H = \begin{cases} 60 \cdot \dfrac{G - B}{V - \min(R, G, B)}, & V = R \\[4pt] 120 + 60 \cdot \dfrac{B - R}{V - \min(R, G, B)}, & V = G \\[4pt] 240 + 60 \cdot \dfrac{R - G}{V - \min(R, G, B)}, & V = B \end{cases}$$

If H < 0 then H ← H + 360. On output, 0 ≤ V ≤ 1, 0 ≤ S ≤ 1, and 0 ≤ H ≤ 360. That is, when H is less than zero it is replaced by the value H + 360, and the final outputs satisfy V in [0, 1], S in [0, 1], and H in [0, 360].
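For illustration only, the conversion above can be written as a short NumPy routine. This is a sketch under the assumption that the input is an H×W×3 float array with R, G, B in [0, 1]; the function name and the small epsilon guards are ours, not the patent's.

```python
import numpy as np

def rgb_to_hsv(rgb):
    """Convert an H x W x 3 float image in [0, 1] to HSV per the formulas above.

    Output ranges: 0 <= H <= 360, 0 <= S <= 1, 0 <= V <= 1.
    """
    r, g, b = rgb[..., 0], rgb[..., 1], rgb[..., 2]
    v = rgb.max(axis=-1)                      # V = max(R, G, B)
    c = v - rgb.min(axis=-1)                  # max - min
    s = np.where(v > 0, c / np.maximum(v, 1e-12), 0.0)

    safe_c = np.maximum(c, 1e-12)             # hue is undefined when c == 0
    h = np.select(
        [c == 0, v == r, v == g],
        [0.0,
         60.0 * (g - b) / safe_c,             # V = R branch
         120.0 + 60.0 * (b - r) / safe_c],    # V = G branch
        default=240.0 + 60.0 * (r - g) / safe_c,  # V = B branch
    )
    h = np.where(h < 0, h + 360.0, h)         # if H < 0 then H <- H + 360
    return np.stack([h, s, v], axis=-1)
```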
This embodiment describes two scenarios separately.

Scenario 1: two initial images are used for the subsequent processing. In this scenario, the two initial images may be a first initial image and a second initial image with different exposures; assume the exposure of the first initial image is greater than that of the second.

It should be noted that in this embodiment the two images are assumed to be already registered, with their pixels aligned; for example, the images are as shown in FIG. 5a, where the first initial image is a and the second initial image is b.

Calculating the feature weights for the at least two to-be-processed images may consist of calculating, for each to-be-processed image, the feature weight corresponding to each of its pixels; in other words, each pixel of each to-be-processed image is given its own adjustment value when the fusion is performed.

Preferably, the feature weights in this embodiment may be normalized feature weights, which guarantees that the fused image does not exceed the original value range.

Fusing the at least two to-be-processed images based on their high-frequency image portions, low-frequency image portions, and feature weights to obtain the fused image corresponding to the at least two initial images may proceed as follows: for each pixel of the first to-be-processed image, its high-frequency and low-frequency image portions are combined with the feature weight of that pixel, yielding the partial features of that pixel that are retained in the fused image and influence the final fused image; likewise, for each pixel of the second to-be-processed image, its high-frequency and low-frequency image portions are combined with the feature weight of that pixel, yielding the partial features of that pixel that are retained in the final fused image and influence it.
Scenario 2: three initial images are used for the subsequent processing. In this scenario, the three initial images may be a first initial image, a second initial image, and a third initial image with different exposures; assume the exposure of the first initial image is greater than that of the second, and that of the second greater than that of the third.

It should be noted that in this embodiment the three images are assumed to be already registered, with their pixels aligned; for example, the images are as shown in FIG. 5b, where the first initial image is a, the second initial image is b, and the third initial image is c.

Calculating the feature weights for the at least two to-be-processed images may consist of calculating, for each to-be-processed image, the feature weight corresponding to each of its pixels; in other words, each pixel of each to-be-processed image is given its own adjustment value when the fusion is performed.

Preferably, the feature weights in this embodiment may be normalized feature weights, which guarantees that the fused image does not exceed the original value range.

Fusing the at least two to-be-processed images based on their high-frequency image portions, low-frequency image portions, and feature weights to obtain the fused image corresponding to the at least two initial images may proceed as follows: for each pixel of each of the first, second, and third to-be-processed images, its high-frequency and low-frequency image portions are combined with the feature weight of that pixel, yielding the partial features of that pixel that are retained in the fused image and influence the final fused image.
It can be seen that, with the above scheme, at least two initial images are acquired and converted into at least two to-be-processed images; feature weights are calculated for the at least two to-be-processed images; and the at least two to-be-processed images are fused based on their high-frequency image portions, low-frequency image portions, and feature weights to obtain the fused image corresponding to the at least two initial images. Because the fusion of the multiple images is driven by the feature weights of the individual pixels of each image, the quality of the final fused image is guaranteed down to the level of image detail.
Embodiment 2
An embodiment of the present invention provides an image synthesis method which, as shown in FIG. 3, includes:

Step 301: acquiring at least two initial images and converting the at least two initial images into at least two to-be-processed images, where each initial image is an image in a first color space and each to-be-processed image is an image in a second color space;

Step 302: calculating feature weights for the at least two to-be-processed images, where a feature weight is the set of per-pixel weights for an image to be processed;

Step 303: determining, based on the at least two to-be-processed images, the high-frequency image portion and the low-frequency image portion corresponding to each to-be-processed image;

Step 304: fusing the at least two to-be-processed images based on the high-frequency image portions, the low-frequency image portions, and the feature weights of the to-be-processed images, to obtain the fused image corresponding to the at least two initial images.

Here, acquiring at least two initial images includes acquiring at least two initial images of a target object taken with different exposures. The at least two initial images may be two initial images or three initial images. It can be understood that the target object may be the same scene or the same person; this embodiment does not limit it.
The first color space may be the red (R), green (G), blue (B) color space; the second color space may be the hue (H), saturation (S), value (V) color space. The HSV space separates the color and the brightness of an image and, compared with the RGB color space, better matches human visual perception. The parameters of a color in the HSV model are hue (H), saturation (S), and value (V). As shown in FIG. 4, the left side of the figure depicts a model of the RGB color space and the right side a model of the HSV color space. An image in the RGB color space can be converted to an image in the HSV color space using the following formulas:
$$V = \max(R, G, B)$$

$$S = \begin{cases} \dfrac{V - \min(R, G, B)}{V}, & V \neq 0 \\[4pt] 0, & V = 0 \end{cases}$$

$$H = \begin{cases} 60 \cdot \dfrac{G - B}{V - \min(R, G, B)}, & V = R \\[4pt] 120 + 60 \cdot \dfrac{B - R}{V - \min(R, G, B)}, & V = G \\[4pt] 240 + 60 \cdot \dfrac{R - G}{V - \min(R, G, B)}, & V = B \end{cases}$$

If H < 0 then H ← H + 360. On output, 0 ≤ V ≤ 1, 0 ≤ S ≤ 1, and 0 ≤ H ≤ 360.
Before the feature weights for the at least two to-be-processed images are calculated, the method further includes: determining, based on the at least two to-be-processed images, the high-frequency image portion and the low-frequency image portion corresponding to each to-be-processed image.

The high-frequency image portion and the low-frequency image portion corresponding to each to-be-processed image may be obtained by a wavelet decomposition applied to the pixels of the to-be-processed image; for example, the coefficients Iwave_k(i, j) may be computed, where I denotes the to-be-processed image, wave() is the wavelet decomposition function, and (i, j) are the horizontal and vertical coordinates of a pixel.
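As a minimal sketch of this step using the PyWavelets package: the text only names a generic wave() function, so the choice of the Haar wavelet and of a single decomposition level below is an assumption.

```python
import pywt

def wavelet_split(channel, wavelet="haar", level=1):
    """Split one image plane into a low-frequency approximation and
    high-frequency detail coefficients via a 2-D wavelet decomposition."""
    coeffs = pywt.wavedec2(channel, wavelet=wavelet, level=level)
    low = coeffs[0]    # approximation coefficients: the low-frequency part
    high = coeffs[1:]  # (horizontal, vertical, diagonal) details per level
    return low, high
```

pywt.waverec2 can reassemble the image from these parts, which is what makes the coefficient-domain fusion described below invertible.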
Calculating the feature weights for the at least two to-be-processed images may include:

calculating, for each to-be-processed image, the region contrast of each pixel and the gradient value of each pixel;

determining the feature weight of each to-be-processed image based on the region contrast and the gradient value of each pixel; and

determining, based on the feature weights of the at least two to-be-processed images, the normalized feature weight for each to-be-processed image.
The region contrast of each pixel can be calculated with the following formula:

$$CL_{i,j} = \frac{\lvert p(i,j) - m(i,j) \rvert}{m(i,j)}, \qquad m(i,j) = \frac{1}{M \cdot N} \sum_{(u,v) \in \Omega_{i,j}} p(u,v)$$

where p(i, j) is the pixel value at the pixel, m(i, j) is the average over the local region, and M and N give the extent (the maximal pixel indices) of the selected region Ω around the pixel.
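A sketch of this computation is given below. The exact window size is not specified here, so the 7×7 square window is an assumption, as is the Weber-style ratio implementing the reconstructed formula above.

```python
import numpy as np
from scipy.ndimage import uniform_filter

def local_contrast(p, size=7):
    """Region contrast CL(i,j) = |p(i,j) - m(i,j)| / m(i,j), where m(i,j) is
    the mean over a local window (here a square size x size neighbourhood)."""
    p = p.astype(np.float64)
    m = uniform_filter(p, size=size)             # local region average m(i, j)
    return np.abs(p - m) / np.maximum(m, 1e-12)  # guard against division by zero
```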
The Sobel operator is used to compute the gradient magnitude of the image in the horizontal and vertical directions. The operator consists of two 3×3 kernels, one horizontal and one vertical; convolving each with the image in the spatial plane yields approximations of the horizontal and vertical brightness differences. If I denotes the original image and G_x and G_y denote the images produced by vertical- and horizontal-edge detection respectively, the gradient magnitude GL_{i,j} at a pixel of the image is given by:

$$G_x = \begin{bmatrix} -1 & 0 & +1 \\ -2 & 0 & +2 \\ -1 & 0 & +1 \end{bmatrix} * I, \qquad G_y = \begin{bmatrix} -1 & -2 & -1 \\ 0 & 0 & 0 \\ +1 & +2 & +1 \end{bmatrix} * I$$

$$GL_{i,j} = \sqrt{G_x(i,j)^2 + G_y(i,j)^2}$$
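For illustration, the gradient magnitude can be computed as follows; the kernels are the standard Sobel pair named above.

```python
import numpy as np
from scipy.ndimage import convolve

SOBEL_X = np.array([[-1, 0, 1],
                    [-2, 0, 2],
                    [-1, 0, 1]], dtype=np.float64)
SOBEL_Y = SOBEL_X.T  # the transpose gives the vertical-difference kernel

def gradient_magnitude(img):
    """GL(i,j) = sqrt(Gx(i,j)^2 + Gy(i,j)^2) from the two Sobel responses."""
    img = img.astype(np.float64)
    gx = convolve(img, SOBEL_X)   # horizontal brightness differences
    gy = convolve(img, SOBEL_Y)   # vertical brightness differences
    return np.sqrt(gx * gx + gy * gy)
```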
Further, determining the feature weight of each to-be-processed image based on the region contrast and the gradient value of each pixel may be done by multiplying the region contrast of each pixel by its gradient value, giving the feature weight corresponding to each pixel of each to-be-processed image; specifically, this can be expressed as WM(i,j) = CL_{i,j} * GL_{i,j}, where WM(i,j) denotes the feature weight.
On the basis of the above scheme, the normalized feature weight for each to-be-processed image can be computed with the formula

$$\overline{WM}_k(i,j) = \frac{WM_k(i,j)}{\sum_{k=1}^{n} WM_k(i,j)}$$

where n is the number of initial images (for example, n = 2 indicates two initial images and n = 3 indicates three initial images) and WM_k is the fusion weight coefficient of the k-th image. In this way the fusion weight coefficients of the differently exposed images are normalized so that

$$\sum_{k=1}^{n} \overline{WM}_k(i,j) = 1,$$

which guarantees that after image fusion the pixels do not exceed the original value range.
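Putting the two features together, here is a sketch of the per-image weight WM = CL · GL and its normalization across the n exposures, reusing local_contrast and gradient_magnitude from the sketches above:

```python
import numpy as np

def normalized_weights(planes, size=7):
    """WM_k(i,j) = CL_k(i,j) * GL_k(i,j) per exposure, then divided by the sum
    over k so that the n weights at each pixel sum to 1."""
    wms = [local_contrast(p, size) * gradient_magnitude(p) for p in planes]
    total = np.maximum(np.sum(wms, axis=0), 1e-12)  # avoid division by zero
    return [wm / total for wm in wms]
```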
Fusing the at least two to-be-processed images based on their high-frequency image portions, low-frequency image portions, and feature weights includes:

multiplying the high-frequency image portion and the low-frequency image portion of each of the at least two to-be-processed images by the normalized feature weight of that image, yielding normalized to-be-processed images; and

summing the normalized to-be-processed images, thereby fusing the at least two to-be-processed images.
Specifically, this can be expressed by the following formula:

$$\mathrm{Iwave}_{\mathrm{fused}}(i,j) = \sum_{k=1}^{n} \overline{WM}_k(i,j) \cdot \mathrm{Iwave}_k(i,j)$$

where n is the number of initial images and Iwave_k(i, j) is the wavelet decomposition of image I at any pixel (i, j). As the above formulas show, the larger the region contrast and the larger the gradient feature, the more pronounced the regional features of the pixel and the clearer the image detail; such pixels are the ones that must be retained in an HDR image, so their fusion weights are correspondingly large.
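A sketch of the wavelet-domain fusion for one channel follows. It uses a single-level Haar decomposition; rescaling the full-resolution weight map onto the half-resolution coefficient grid with bilinear interpolation is our assumption, since the text does not spell out how the weights are sampled at coefficient positions.

```python
import pywt
from scipy.ndimage import zoom

def fuse_channel(planes, weights, wavelet="haar"):
    """Scale every wavelet coefficient of exposure k by that exposure's
    normalized weight and sum over k; the same rule is applied to the
    low- and high-frequency coefficients, then the result is inverted."""
    acc = None
    for plane, w in zip(planes, weights):
        low, (h, v, d) = pywt.wavedec2(plane, wavelet=wavelet, level=1)
        wl = zoom(w, (low.shape[0] / w.shape[0],
                      low.shape[1] / w.shape[1]), order=1)
        parts = [low * wl, h * wl, v * wl, d * wl]
        acc = parts if acc is None else [a + p for a, p in zip(acc, parts)]
    return pywt.waverec2([acc[0], tuple(acc[1:])], wavelet=wavelet)
```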
Obtaining the fused image corresponding to the at least two initial images includes:

converting the fused image to obtain a fused image in the first color space. It can be understood that this conversion is the inverse of the first-to-second color space conversion provided in this embodiment, and finally yields an image in the RGB color space.
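The inverse conversion can be sketched as the standard HSV-to-RGB mapping, assuming H in [0, 360] and S, V in [0, 1] as produced by the forward conversion above.

```python
import numpy as np

def hsv_to_rgb(hsv):
    """Invert the RGB -> HSV conversion used above."""
    h, s, v = hsv[..., 0], hsv[..., 1], hsv[..., 2]
    hp = (h / 60.0) % 6.0
    c = v * s                                  # chroma
    x = c * (1.0 - np.abs(hp % 2.0 - 1.0))
    m = v - c
    i = np.floor(hp).astype(int)
    sel = [i == 0, i == 1, i == 2, i == 3, i == 4]
    r = np.select(sel, [c, x, 0.0, 0.0, x], default=c)
    g = np.select(sel, [x, c, c, x, 0.0], default=0.0)
    b = np.select(sel, [0.0, 0.0, x, c, c], default=x)
    return np.stack([r + m, g + m, b + m], axis=-1)
```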
It can be seen that, with the above scheme, at least two initial images are acquired and converted into at least two to-be-processed images; feature weights are calculated for the at least two to-be-processed images; and the at least two to-be-processed images are fused based on their high-frequency image portions, low-frequency image portions, and feature weights to obtain the fused image corresponding to the at least two initial images. Because the fusion of the multiple images is driven by the feature weights of the individual pixels of each image, the quality of the final fused image is guaranteed down to the level of image detail.

Further, the high-frequency and low-frequency image portions of the pixels are obtained by the wavelet transform, and the joint region-contrast and gradient features are then used to select the pixels satisfying the HDR requirement for HDR image synthesis; the generated HDR image can effectively bring out the dark-area detail of the scene while suppressing overexposed detail.
Embodiment 3

An embodiment of the present invention further describes a computer storage medium storing a computer program for executing the image synthesis method described in the above embodiments.
Embodiment 4

Based on the schemes given in the above two embodiments, this embodiment describes the image synthesis method concretely using three initial images with different exposures; as shown in FIG. 6, it includes the following steps:

S100: acquire three images with different exposures.

S200: convert the images from the RGB color space to the HSV space.

S300: decompose the HSV images into high-frequency and low-frequency parts using the wavelet transform.

S400: fuse with joint region-contrast and gradient feature weights.

The specific implementation is as follows:
In S100, three images with different exposures are acquired: an under-exposed image, a normally exposed image, and an over-exposed image. It should be noted that the three images are assumed to be already registered, with their pixels aligned. The images are shown in FIG. 5b.

In S200, the images are converted from the RGB color space to the HSV space. Since the HSV space separates the color and the brightness of an image, it matches human visual perception better than the RGB color space. The parameters of a color in the HSV model are hue (H), saturation (S), and value (V), as shown in FIG. 4.
In S300, the wavelet transform decomposes the HSV images into high-frequency and low-frequency parts. The wavelet transform is a multi-scale, multi-resolution decomposition of an image; it can focus on arbitrary detail of the image and has been called a mathematical microscope. In recent years, with the development of wavelet theory and its applications, wavelet multi-resolution decomposition has been used for pixel-level image fusion. The inherent properties of the wavelet transform give it the following advantages in image processing: 1. perfect reconstruction capability, ensuring no information loss or redundancy in the decomposition; 2. the image is decomposed into a combination of an average image and detail images, which represent different structures of the image, so the structural information and detail information of the original image are easy to extract; 3. fast algorithms are available, playing a role for the wavelet transform comparable to that of the FFT for the Fourier transform and providing the necessary means for its application; 4. two-dimensional wavelet analysis offers a directional selectivity that matches the human visual system. FIG. 7 illustrates wavelet decomposition applied to an image: (a) is the original image, and (b), (c), (d) are the wavelet-coefficient images after one, two, and three levels of decomposition.
In S400, after the wavelet decomposition of S300, the wavelet decomposition coefficients of the three differently exposed images in the HSV space are obtained. Observing the three differently exposed images in FIG. 5b, one finds that the under-exposed image has better region contrast and clear image detail in the bright areas, for example the clouds in the sky; the over-exposed image has clearer detail in the dark areas, for example the green grass below the city wall; and the normally exposed image is mediocre in both its dark-area and bright-area detail, with an unremarkable overall visual effect.

An HDR image is precisely one that retains the dark-area and bright-area detail of the scene, enhancing detail across the overall brightness range of the image. Therefore, as shown in FIG. 8, after the wavelet decomposition the coefficients carrying these relatively clear details must be retained; the choice of the fusion rule is the key to the fusion algorithm.
Accordingly, local region contrast and global gradient image features are computed on the decomposed wavelet coefficients to generate the weight images (WeightMap) of the fusion coefficients of the three differently exposed images. The computation proceeds as follows:
WM(i,j) = CL_{i,j} * GL_{i,j}    (1)

In formula (1), (i, j) are the coordinates of any pixel p in the image, WM(i, j) is the initial weight with which that pixel participates in the fusion algorithm, CL_{i,j} is the local region contrast of the pixel, and GL_{i,j} is the gradient magnitude of the pixel.
$$CL_{i,j} = \frac{\lvert p(i,j) - m(i,j) \rvert}{m(i,j)}, \qquad m(i,j) = \frac{1}{M \cdot N} \sum_{(u,v) \in \Omega_{i,j}} p(u,v) \qquad (2)$$

In formula (2), p(i, j) is the pixel value at the pixel and m(i, j) is the average over the local region.
The Sobel operator is used to compute the gradient magnitude of the image in the horizontal and vertical directions. The operator consists of two 3×3 kernels, one horizontal and one vertical; convolving each with the image in the spatial plane yields approximations of the horizontal and vertical brightness differences. If I denotes the original image, G_x and G_y denote the images produced by vertical- and horizontal-edge detection, and G is the gradient magnitude at a pixel of the image, formula (3) is as follows:

$$G_x = \begin{bmatrix} -1 & 0 & +1 \\ -2 & 0 & +2 \\ -1 & 0 & +1 \end{bmatrix} * I, \qquad G_y = \begin{bmatrix} -1 & -2 & -1 \\ 0 & 0 & 0 \\ +1 & +2 & +1 \end{bmatrix} * I, \qquad G = \sqrt{G_x^2 + G_y^2} \qquad (3)$$
Following the above computation, the fusion weight maps of the three differently exposed images can be computed respectively as

$$\overline{WM}_k(i,j) = \frac{WM_k(i,j)}{\sum_{k=1}^{n} WM_k(i,j)} \qquad (4)$$

In formula (4), WM_k is the fusion weight coefficient of the k-th image; in this way the fusion weight coefficients of the differently exposed images are normalized so that $\sum_{k=1}^{n} \overline{WM}_k(i,j) = 1$, which guarantees that after the images are fused the pixels do not exceed the original value range. According to formula (5), the wavelet coefficients of the three decomposed images can then be fused; the fusion rule is the same for the high-frequency and the low-frequency coefficients, both being multiplied by the fusion weight coefficients.

$$\mathrm{Iwave}_{\mathrm{fused}}(i,j) = \sum_{k=1}^{3} \overline{WM}_k(i,j) \cdot \mathrm{Iwave}_k(i,j) \qquad (5)$$

Iwave_k(i, j) is the wavelet decomposition coefficient of any pixel (i, j) in the image. As the above formulas show, the larger the region contrast and the larger the gradient feature, the more pronounced the regional features of the pixel and the clearer the image detail; such pixels are the ones that must be retained in an HDR image, so their fusion weights are correspondingly large.
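Tying S100 through S400 together, a minimal end-to-end sketch is given below. It reuses the helper functions sketched in the earlier embodiment (rgb_to_hsv, normalized_weights, fuse_channel, hsv_to_rgb); computing the weights on the V plane and applying them to all three HSV planes is our reading of the per-channel rule, not a detail fixed by the text.

```python
import numpy as np

def synthesize_hdr(exposures):
    """S100-S400: registered RGB exposures in, one fused RGB image out."""
    hsv = [rgb_to_hsv(im) for im in exposures]             # S200
    v_planes = [x[..., 2] for x in hsv]
    weights = normalized_weights(v_planes)                 # S400 weight maps
    fused = np.stack(
        [fuse_channel([x[..., ch] for x in hsv], weights)  # S300 + fusion
         for ch in range(3)],
        axis=-1,
    )
    return np.clip(hsv_to_rgb(fused), 0.0, 1.0)            # back to RGB
```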
FIG. 9 shows two groups of HDR images synthesized from different exposures. The first group, obtained from the three images of FIG. 5b, shows that the sky in the HDR composite retains the clear sky region of the under-exposed image, while the grass on the city wall retains the dark-area detail of the over-exposed image; see the elliptically marked regions. The result is compared here with another HDR algorithm. Since that HDR algorithm is not public, two points must first be made clear: (1) it cannot be determined whether Qualcomm synthesizes from three differently exposed images; (2) if the synthesis does use three images, the synthesis algorithm is unknown. On this basis, the present scheme is compared with the effect of the Qualcomm HDR algorithm; no MTK model is currently available, so a comparison with MTK cannot be made for now. Three groups of test scenes are given. From the test scenes in FIG. 10 it can be seen that the HDR result of the present algorithm is similar to the Qualcomm result in preserving dark-area detail; as for highlight suppression in overexposed regions, the other HDR algorithm fails to restrain highlight detail well, causing those pixels to be overexposed, whereas the present algorithm effectively suppresses highlight overexposure; see the detail comparison in the red-marked regions. In terms of image sharpness and saturation, the present scheme has not yet undergone final tuning, and its overall saturation and sharpness are inferior to the other HDR result.
Embodiment 5

An embodiment of the present invention provides an image synthesis apparatus which, as shown in FIG. 11, includes:

an acquiring unit 1101 configured to acquire at least two initial images and convert the at least two initial images into at least two to-be-processed images, where each initial image is an image in a first color space and each to-be-processed image is an image in a second color space;

a calculating unit 1102 configured to determine, based on the at least two to-be-processed images, the high-frequency image portion and the low-frequency image portion corresponding to each to-be-processed image, and to calculate feature weights for the at least two to-be-processed images, where a feature weight is the set of per-pixel weights for an image to be processed; and

a fusion unit 1103 configured to fuse the at least two to-be-processed images based on the high-frequency image portions, the low-frequency image portions, and the feature weights of the to-be-processed images, to obtain the fused image corresponding to the at least two initial images.
Here, the acquiring unit 1101 is further configured to acquire at least two initial images of a target object taken with different exposures.

The first color space may be the red (R), green (G), blue (B) color space; the second color space may be the hue (H), saturation (S), value (V) color space. The HSV space separates the color and the brightness of an image and, compared with the RGB color space, better matches human visual perception. The parameters of a color in the HSV model are hue (H), saturation (S), and value (V). As shown in FIG. 4, the left side of the figure depicts a model of the RGB color space and the right side a model of the HSV color space. An image in the RGB color space can be converted to an image in the HSV color space using the following formulas:
$$V = \max(R, G, B)$$

$$S = \begin{cases} \dfrac{V - \min(R, G, B)}{V}, & V \neq 0 \\[4pt] 0, & V = 0 \end{cases}$$

$$H = \begin{cases} 60 \cdot \dfrac{G - B}{V - \min(R, G, B)}, & V = R \\[4pt] 120 + 60 \cdot \dfrac{B - R}{V - \min(R, G, B)}, & V = G \\[4pt] 240 + 60 \cdot \dfrac{R - G}{V - \min(R, G, B)}, & V = B \end{cases}$$

If H < 0 then H ← H + 360. On output, 0 ≤ V ≤ 1, 0 ≤ S ≤ 1, and 0 ≤ H ≤ 360.
Before the feature weights for the at least two to-be-processed images are calculated, the calculating unit is configured to determine, based on the at least two to-be-processed images, the high-frequency image portion and the low-frequency image portion corresponding to each to-be-processed image.

The high-frequency image portion and the low-frequency image portion corresponding to each to-be-processed image may be obtained by a wavelet decomposition applied to the pixels of the to-be-processed image; for example, the coefficients Iwave_k(i, j) may be computed, where I denotes the to-be-processed image, wave() is the wavelet decomposition function, and (i, j) are the horizontal and vertical coordinates of a pixel.

The calculating unit is configured to calculate, for each to-be-processed image, the region contrast of each pixel and the gradient value of each pixel; determine the feature weight of each to-be-processed image based on the region contrast and the gradient value of each pixel; and determine, based on the feature weights of the at least two to-be-processed images, the normalized feature weight for each to-be-processed image.
The region contrast of each pixel can be calculated with the following formula:

$$CL_{i,j} = \frac{\lvert p(i,j) - m(i,j) \rvert}{m(i,j)}, \qquad m(i,j) = \frac{1}{M \cdot N} \sum_{(u,v) \in \Omega_{i,j}} p(u,v)$$

where p(i, j) is the pixel value at the pixel, m(i, j) is the average over the local region, and M and N give the extent (the maximal pixel indices) of the selected region Ω around the pixel.
The Sobel operator is used to compute the gradient magnitude of the image in the horizontal and vertical directions. The operator consists of two 3×3 kernels, one horizontal and one vertical; convolving each with the image in the spatial plane yields approximations of the horizontal and vertical brightness differences. If I denotes the original image and G_x and G_y denote the images produced by vertical- and horizontal-edge detection respectively, the gradient magnitude GL_{i,j} at a pixel of the image is given by:

$$G_x = \begin{bmatrix} -1 & 0 & +1 \\ -2 & 0 & +2 \\ -1 & 0 & +1 \end{bmatrix} * I, \qquad G_y = \begin{bmatrix} -1 & -2 & -1 \\ 0 & 0 & 0 \\ +1 & +2 & +1 \end{bmatrix} * I$$

$$GL_{i,j} = \sqrt{G_x(i,j)^2 + G_y(i,j)^2}$$
Further, determining the feature weight of each to-be-processed image based on the region contrast and the gradient value of each pixel may be done by multiplying the region contrast of each pixel by its gradient value, giving the feature weight corresponding to each pixel of each to-be-processed image; specifically, this can be expressed as WM(i,j) = CL_{i,j} * GL_{i,j}, where WM(i,j) denotes the feature weight.
On the basis of the above solution, the normalized feature weight for each image to be processed may be computed as:

NWM_k(i, j) = WM_k(i, j) / Σ_{l=1..n} WM_l(i, j)

where n is the number of initial images (for example, n = 2 means there are two initial images and n = 3 means there are three), and WM_k(i, j) is the fusion weight coefficient of the k-th image. In this way the fusion weight coefficients of the differently exposed images are normalized, satisfying

Σ_{k=1..n} NWM_k(i, j) = 1

which guarantees that pixels will not exceed the original value range after image fusion.
The fusion unit is configured to multiply the high-frequency image portions and the low-frequency image portions of the at least two images to be processed by the normalized feature weight of each corresponding image, giving the normalized images to be processed, and then to sum the normalized images so as to fuse the at least two images to be processed.
Specifically, this can be expressed by the following formula:

F(i, j) = Σ_{k=1..n} NWM_k(i, j) · Iwave_k(i, j)

where n is the number of initial images and Iwave_k(i, j) is the wavelet decomposition at pixel (i, j) of image k. As the formulas above show, the larger the region contrast and the gradient feature, the more pronounced the regional characteristics of the pixel and the clearer the image detail; these are the pixels an HDR image needs to retain, so their fusion weights are correspondingly large.
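Putting the pieces together, the weighted fusion of the wavelet sub-bands might be sketched as follows; downsampling the full-resolution weights by striding to match the half-resolution sub-bands is an illustrative simplification, not necessarily what the original implementation does:

    import numpy as np
    import pywt

    def fuse_exposures(channels, weights):
        # channels: n aligned V channels; weights: (n, H, W) normalized weight maps
        coeffs = [pywt.dwt2(ch, 'haar') for ch in channels]
        w = weights[:, ::2, ::2]                 # assumed resampling to sub-band size
        low = sum(wk * c[0] for wk, c in zip(w, coeffs))
        high = tuple(sum(wk * c[1][band] for wk, c in zip(w, coeffs))
                     for band in range(3))
        return pywt.idwt2((low, high), 'haar')   # fused V channel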
The fusion unit is configured to convert the fused image to obtain a fused image based on the first color space. It will be appreciated that this conversion is simply the inverse of the first-to-second color space conversion provided in this embodiment, so that the final result is an image in the RGB color space.
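For this inverse step, OpenCV's converter can serve as a sketch: for float32 input it expects H in [0, 360] and S, V in [0, 1], which matches the ranges above (h and s here are the hypothetical maps from the earlier conversion sketch, and v_fused is the fused V channel):

    import cv2
    import numpy as np

    hsv = np.stack([h, s, v_fused], axis=-1).astype(np.float32)
    rgb = cv2.cvtColor(hsv, cv2.COLOR_HSV2RGB)   # back to the first (RGB) color space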
It can be seen that, with the above scheme, at least two initial images are acquired and converted into at least two images to be processed; feature weights are calculated for the at least two images to be processed; and the at least two images to be processed are fused, based on their high-frequency image portions, low-frequency image portions, and feature weights, to obtain the fused image corresponding to the at least two initial images. Because the fusion of multiple images is driven by the feature weight of each individual pixel of each image, the final fused image is guaranteed to preserve image quality down to the level of detail.
Further, the high-frequency image portion and the low-frequency image portion of each pixel are obtained by wavelet transform, and the joint region-contrast and gradient features then select the pixels that satisfy the HDR requirements for HDR image synthesis; the generated HDR image can effectively bring out the dark-region detail of the scene while suppressing overexposed detail.
In practical applications, the obtaining unit 1101, the calculating unit 1102, and the fusion unit 1103 can all run on a computer and can be implemented by a central processing unit (CPU), a microprocessor (MPU), a digital signal processor (DSP), or a field-programmable gate array (FPGA) located on the computer.
It should be noted that, as used herein, the terms "comprise", "include", or any other variant thereof are intended to cover a non-exclusive inclusion, so that a process, method, article, or device comprising a series of elements includes not only those elements but also other elements not explicitly listed, or elements inherent to such a process, method, article, or device. In the absence of further limitation, an element defined by the phrase "comprising a ..." does not exclude the presence of additional identical elements in the process, method, article, or device that comprises it.
The serial numbers of the above embodiments of the present invention are for description only and do not indicate the relative merits of the embodiments.
In the several embodiments provided in this application, it should be understood that the disclosed device and method may be implemented in other ways. The device embodiments described above are merely illustrative. For example, the division of the units is only a logical functional division, and other divisions are possible in actual implementation: multiple units or components may be combined or integrated into another system, or some features may be ignored or not executed. In addition, the coupling, direct coupling, or communication connection between the components shown or discussed may be an indirect coupling or communication connection through some interfaces, devices, or units, and may be electrical, mechanical, or of other forms.
The units described above as separate components may or may not be physically separate, and components shown as units may or may not be physical units; that is, they may be located in one place or distributed across multiple network units. Some or all of the units may be selected according to actual needs to achieve the purpose of the solution of this embodiment.
In addition, the functional units in the embodiments of the present invention may all be integrated into one processing unit, each unit may stand alone as one unit, or two or more units may be integrated into one unit. The integrated unit may be implemented in the form of hardware, or in the form of hardware plus software functional units.
A person of ordinary skill in the art will understand that all or part of the steps of the above method embodiments may be carried out by hardware related to program instructions. The aforementioned program may be stored in a computer-readable storage medium and, when executed, performs the steps of the above method embodiments. The aforementioned storage medium includes any medium that can store program code, such as a removable storage device, a read-only memory (ROM), a random access memory (RAM), a magnetic disk, or an optical disc.
Alternatively, if the above integrated unit of the present invention is implemented in the form of a software functional module and sold or used as a standalone product, it may also be stored in a computer-readable storage medium. Based on this understanding, the technical solution of the embodiments of the present invention, in essence, or the part contributing to the prior art, may be embodied in the form of a software product. The computer software product is stored in a storage medium and includes several instructions for causing a computer device (which may be a personal computer, a server, a network device, or the like) to perform all or part of the methods described in the embodiments of the present invention. The aforementioned storage medium includes any medium that can store program code, such as a removable storage device, a ROM, a RAM, a magnetic disk, or an optical disc.
The above are only specific embodiments of the present invention, but the scope of protection of the present invention is not limited thereto. Any change or substitution that a person skilled in the art can readily conceive within the technical scope disclosed by the present invention shall fall within the scope of protection of the present invention. Therefore, the scope of protection of the present invention shall be determined by the scope of the appended claims.
Industrial Applicability
The embodiments of the present invention can fuse multiple images based on the feature weights of the individual pixels of each image, thereby ensuring that the final fused image preserves image quality down to the level of detail.

Claims (20)

  1. An image synthesis method, the method comprising:
    acquiring at least two initial images and converting the at least two initial images into at least two images to be processed, respectively, wherein the initial images are images based on a first color space and the images to be processed are images based on a second color space;
    calculating feature weights for the at least two images to be processed, wherein a feature weight is the set composed of the weights of the individual pixels of an image to be processed;
    determining, based on the at least two images to be processed, a high-frequency image portion and a low-frequency image portion corresponding to each of the images to be processed; and
    fusing the at least two images to be processed based on the high-frequency image portions, the low-frequency image portions, and the feature weights of the images to be processed, to obtain a fused image corresponding to the at least two initial images.
  2. The method according to claim 1, wherein the first color space is the red (R), green (G), blue (B) color space.
  3. The method according to claim 1, wherein the second color space is the hue (H), saturation (S), value (V) color space.
  4. The method according to claim 1, wherein the method further comprises:
    converting the at least two initial images into the at least two images to be processed, respectively, according to the following formulas:
    max(R, G, B) → V
    S = ( V - min(R, G, B) ) / V  if V ≠ 0;  S = 0 otherwise
    H = 60·(G - B) / ( V - min(R, G, B) )  if V = R
    H = 120 + 60·(B - R) / ( V - min(R, G, B) )  if V = G
    H = 240 + 60·(R - G) / ( V - min(R, G, B) )  if V = B
    when H is less than zero, H is replaced by the value H + 360; and in the final output, V is less than or equal to 1 and greater than or equal to 0, S is less than or equal to 1 and greater than or equal to 0, and H is less than or equal to 360 and greater than or equal to 0.
  5. The method according to claim 1, wherein the calculating feature weights for the at least two images to be processed comprises:
    calculating a feature weight corresponding to each pixel in each image to be processed.
  6. The method according to claim 5, wherein the feature weights can be normalized feature weights.
  7. The method according to claim 1, 5, or 6, wherein the calculating feature weights for the at least two images to be processed comprises:
    calculating the region contrast of each pixel in each image to be processed, and the gradient value of each pixel;
    determining the feature weight of each image to be processed based on the region contrast and the gradient value of each pixel; and
    determining a normalized feature weight for each image to be processed based on the feature weights of the at least two images to be processed.
  8. The method according to claim 7, wherein the fusing the at least two images to be processed based on the high-frequency image portions, the low-frequency image portions, and the feature weights of the images to be processed comprises:
    multiplying the high-frequency image portions and the low-frequency image portions of the at least two images to be processed by the normalized feature weight of each corresponding image to be processed, to obtain normalized images to be processed; and
    summing the at least two normalized images to be processed, thereby fusing the at least two images to be processed.
  9. The method according to claim 1, wherein the obtaining the fused image corresponding to the at least two initial images comprises:
    converting the fused image to obtain a fused image based on the first color space.
  10. The method according to claim 1, wherein the acquiring at least two initial images comprises:
    acquiring at least two initial images of a target object having different exposure amounts.
  11. An image synthesis apparatus, the apparatus comprising:
    an obtaining unit configured to acquire at least two initial images and to convert the at least two initial images into at least two images to be processed, respectively, wherein the initial images are images based on a first color space and the images to be processed are images based on a second color space;
    a calculating unit configured to determine, based on the at least two images to be processed, a high-frequency image portion and a low-frequency image portion corresponding to each of the images to be processed, and to calculate feature weights for the at least two images to be processed, wherein a feature weight is the set composed of the weights of the individual pixels of an image to be processed; and
    a fusion unit configured to fuse the at least two images to be processed based on the high-frequency image portions, the low-frequency image portions, and the feature weights of the images to be processed, to obtain a fused image corresponding to the at least two initial images.
  12. The apparatus according to claim 11, wherein the first color space is the red (R), green (G), blue (B) color space; and/or the second color space is the hue (H), saturation (S), value (V) color space.
  13. The apparatus according to claim 11, wherein the obtaining unit is further configured to convert the at least two initial images into the at least two images to be processed, respectively, according to the following formulas:
    max(R, G, B) → V
    S = ( V - min(R, G, B) ) / V  if V ≠ 0;  S = 0 otherwise
    H = 60·(G - B) / ( V - min(R, G, B) )  if V = R
    H = 120 + 60·(B - R) / ( V - min(R, G, B) )  if V = G
    H = 240 + 60·(R - G) / ( V - min(R, G, B) )  if V = B
    when H is less than zero, H is replaced by the value H + 360; and in the final output, V is less than or equal to 1 and greater than or equal to 0, S is less than or equal to 1 and greater than or equal to 0, and H is less than or equal to 360 and greater than or equal to 0.
  14. The apparatus according to claim 11, wherein the calculating unit is further configured to calculate a feature weight corresponding to each pixel in each image to be processed.
  15. The apparatus according to claim 14, wherein the feature weights can be normalized feature weights.
  16. The apparatus according to claim 11, 14, or 15, wherein
    the calculating unit is further configured to calculate the region contrast of each pixel in each image to be processed and the gradient value of each pixel; to determine the feature weight of each image to be processed based on the region contrast and the gradient value of each pixel; and to determine a normalized feature weight for each image to be processed based on the feature weights of the at least two images to be processed.
  17. The apparatus according to claim 16, wherein
    the fusion unit is further configured to multiply the high-frequency image portions and the low-frequency image portions of the at least two images to be processed by the normalized feature weight of each corresponding image to be processed, to obtain normalized images to be processed, and to sum the at least two normalized images to be processed, thereby fusing the at least two images to be processed.
  18. The apparatus according to claim 11, wherein
    the fusion unit is further configured to convert the fused image to obtain a fused image based on the first color space.
  19. The apparatus according to claim 11, wherein
    the obtaining unit is further configured to acquire at least two initial images of a target object having different exposure amounts.
  20. A computer storage medium, the computer storage medium storing a computer program for performing the image synthesis method according to any one of claims 1 to 10.
PCT/CN2016/112498 2016-02-15 2016-12-27 Image synthesis method and apparatus, and storage medium WO2017140182A1 (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN201610086227.1 2016-02-15
CN201610086227.1A CN105744159B (en) 2016-02-15 2016-02-15 A kind of image composition method and device

Publications (1)

Publication Number Publication Date
WO2017140182A1 true WO2017140182A1 (en) 2017-08-24

Family

ID=56246002

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2016/112498 WO2017140182A1 (en) 2016-02-15 2016-12-27 Image synthesis method and apparatus, and storage medium

Country Status (2)

Country Link
CN (1) CN105744159B (en)
WO (1) WO2017140182A1 (en)

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111583168A (en) * 2020-06-18 2020-08-25 上海眼控科技股份有限公司 Image synthesis method, image synthesis device, computer equipment and storage medium
CN111714883A (en) * 2020-06-19 2020-09-29 网易(杭州)网络有限公司 Method and device for processing map and electronic equipment
CN112116102A (en) * 2020-09-27 2020-12-22 张洪铭 Method and system for expanding domain adaptive training set
CN113538304A (en) * 2020-12-14 2021-10-22 腾讯科技(深圳)有限公司 Training method and device of image enhancement model, and image enhancement method and device

Families Citing this family (16)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105744159B (en) * 2016-02-15 2019-05-24 努比亚技术有限公司 A kind of image composition method and device
CN106355569A (en) * 2016-08-29 2017-01-25 努比亚技术有限公司 Image generating device and method thereof
CN106447641A (en) * 2016-08-29 2017-02-22 努比亚技术有限公司 Image generation device and method
WO2018040751A1 (en) * 2016-08-29 2018-03-08 努比亚技术有限公司 Image generation apparatus and method therefor, and image processing device and storage medium
CN106920327B (en) * 2017-03-02 2019-04-05 浙江古伽智能科技有限公司 A kind of high efficiency recyclable device based on image recognition
CN107343140A (en) * 2017-06-14 2017-11-10 努比亚技术有限公司 A kind of image processing method and mobile terminal
CN108111778A (en) * 2017-12-25 2018-06-01 信利光电股份有限公司 A kind of photographic device and electronic equipment
CN109951634B (en) * 2019-03-14 2021-09-03 Oppo广东移动通信有限公司 Image synthesis method, device, terminal and storage medium
CN110599410B (en) * 2019-08-07 2022-06-10 北京达佳互联信息技术有限公司 Image processing method, device, terminal and storage medium
CN110503622B (en) * 2019-08-23 2022-07-01 上海圭目机器人有限公司 Image global positioning optimizing splicing method based on positioning data
WO2021195895A1 (en) * 2020-03-30 2021-10-07 深圳市大疆创新科技有限公司 Infrared image processing method and apparatus, device, and storage medium
CN112365493B (en) * 2020-11-30 2022-04-22 北京鹰瞳科技发展股份有限公司 Training data generation method and device for fundus image recognition model
CN112634187B (en) * 2021-01-05 2022-11-18 安徽大学 Wide dynamic fusion algorithm based on multiple weight mapping
CN113222869B (en) * 2021-05-06 2024-03-01 杭州海康威视数字技术股份有限公司 Image processing method
CN115797237A (en) * 2021-09-10 2023-03-14 北京字跳网络技术有限公司 Image processing method and device
CN116452437B (en) * 2023-03-20 2023-11-14 荣耀终端有限公司 High dynamic range image processing method and electronic equipment


Family Cites Families (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US8699821B2 (en) * 2010-07-05 2014-04-15 Apple Inc. Aligning images
CN103473749B (en) * 2013-01-09 2016-06-22 深圳信息职业技术学院 A kind of method based on full variation image co-registration and device
CN104853091B (en) * 2015-04-30 2017-11-24 广东欧珀移动通信有限公司 A kind of method taken pictures and mobile terminal

Patent Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102905058A (en) * 2011-07-28 2013-01-30 三星电子株式会社 Apparatus and method for generating high dynamic range image from which ghost blur is removed
CN102722864A (en) * 2012-05-18 2012-10-10 清华大学 Image enhancement method
US20130335596A1 (en) * 2012-06-15 2013-12-19 Microsoft Corporation Combining multiple images in bracketed photography
CN103973958A (en) * 2013-01-30 2014-08-06 阿里巴巴集团控股有限公司 Image processing method and image processing equipment
CN104881854A (en) * 2015-05-20 2015-09-02 天津大学 High-dynamic-range image fusion method based on gradient and brightness information
CN105227856A (en) * 2015-09-28 2016-01-06 广东欧珀移动通信有限公司 A kind of method of image procossing and mobile terminal
CN105744159A (en) * 2016-02-15 2016-07-06 努比亚技术有限公司 Image synthesizing method and device

Cited By (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111583168A (en) * 2020-06-18 2020-08-25 上海眼控科技股份有限公司 Image synthesis method, image synthesis device, computer equipment and storage medium
CN111714883A (en) * 2020-06-19 2020-09-29 网易(杭州)网络有限公司 Method and device for processing map and electronic equipment
CN112116102A (en) * 2020-09-27 2020-12-22 张洪铭 Method and system for expanding domain adaptive training set
CN113538304A (en) * 2020-12-14 2021-10-22 腾讯科技(深圳)有限公司 Training method and device of image enhancement model, and image enhancement method and device
CN113538304B (en) * 2020-12-14 2023-08-18 腾讯科技(深圳)有限公司 Training method and device for image enhancement model, and image enhancement method and device

Also Published As

Publication number Publication date
CN105744159B (en) 2019-05-24
CN105744159A (en) 2016-07-06

Similar Documents

Publication Publication Date Title
WO2017140182A1 (en) Image synthesis method and apparatus, and storage medium
CN110622497B (en) Device with cameras having different focal lengths and method of implementing a camera
WO2017045650A1 (en) Picture processing method and terminal
WO2017050115A1 (en) Image synthesis method
WO2018019124A1 (en) Image processing method and electronic device and storage medium
CN108629747B (en) Image enhancement method and device, electronic equipment and storage medium
US9692959B2 (en) Image processing apparatus and method
WO2017067526A1 (en) Image enhancement method and mobile terminal
US9600741B1 (en) Enhanced image generation based on multiple images
WO2017166886A1 (en) Image processing system and method
WO2017071559A1 (en) Image processing apparatus and method
WO2017067390A1 (en) Method and terminal for obtaining depth information of low-texture regions in image
WO2021036991A1 (en) High dynamic range video generation method and device
WO2016180325A1 (en) Image processing method and device
WO2017016511A1 (en) Image processing method and device, and terminal
CN106131450B (en) Image processing method and device and terminal
WO2018176925A1 (en) Hdr image generation method and apparatus
US9986171B2 (en) Method and apparatus for dual exposure settings using a pixel array
WO2017206656A1 (en) Image processing method, terminal, and computer storage medium
WO2017206657A1 (en) Image processing method and device, mobile terminal, and computer storage medium
WO2017071475A1 (en) Image processing method, and terminal and storage medium
WO2021036715A1 (en) Image-text fusion method and apparatus, and electronic device
WO2017071476A1 (en) Image synthesis method and device, and storage medium
WO2017071542A1 (en) Image processing method and apparatus
US20150063694A1 (en) Techniques for combining images with varying brightness degrees

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 16890406

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 16890406

Country of ref document: EP

Kind code of ref document: A1