WO2022166765A1 - Image Processing Method, Mobile Terminal and Storage Medium - Google Patents

Image Processing Method, Mobile Terminal and Storage Medium

Info

Publication number
WO2022166765A1
WO2022166765A1 (PCT/CN2022/074357)
Authority
WO
WIPO (PCT)
Prior art keywords
image
camera
area
matches
target
Prior art date
Application number
PCT/CN2022/074357
Other languages
English (en)
French (fr)
Inventor
彭叶斌
赵紫辉
代文慧
Original Assignee
深圳传音控股股份有限公司 (Shenzhen Transsion Holdings Co., Ltd.)
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 深圳传音控股股份有限公司 (Shenzhen Transsion Holdings Co., Ltd.)
Publication of WO2022166765A1 publication Critical patent/WO2022166765A1/zh

Links

Images

Classifications

    • H - ELECTRICITY
    • H04 - ELECTRIC COMMUNICATION TECHNIQUE
    • H04N - PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 23/00 - Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N 23/45 - for generating image signals from two or more image sensors being of different type or operating in different modes, e.g. with a CMOS sensor for moving images in combination with a charge-coupled device [CCD] for still images
    • H - ELECTRICITY
    • H04 - ELECTRIC COMMUNICATION TECHNIQUE
    • H04N - PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 23/00 - Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N 23/60 - Control of cameras or camera modules
    • H04N 23/69 - Control of means for changing angle of the field of view, e.g. optical zoom objectives or electronic zooming
    • H - ELECTRICITY
    • H04 - ELECTRIC COMMUNICATION TECHNIQUE
    • H04N - PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 23/00 - Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N 23/80 - Camera processing pipelines; Components thereof
    • H - ELECTRICITY
    • H04 - ELECTRIC COMMUNICATION TECHNIQUE
    • H04N - PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 5/00 - Details of television systems
    • H04N 5/222 - Studio circuitry; Studio devices; Studio equipment
    • H04N 5/262 - Studio circuits, e.g. for mixing, switching-over, change of character of image, other special effects; Cameras specially adapted for the electronic generation of special effects
    • H04N 5/265 - Mixing

Definitions

  • the present application relates to the technical field of image processing, and in particular, to an image processing method, a mobile terminal and a storage medium.
  • the present application provides an image processing method, a mobile terminal and a storage medium, which can enable users to obtain high-definition captured images and improve the user's look and feel when browsing images.
  • the present application provides an image processing method, which is applied to a shooting device, the shooting device includes a first camera and a second camera, and the field of view of the first camera is smaller than the field of view of the second camera; the method includes:
  • performing image fusion processing on the first image and the second image to obtain a target image including:
  • a target image is generated according to the enhanced second image.
  • generating the target image according to the enhanced second image includes:
  • the target image is obtained by fusing the chrominance value and/or luminance value of each segmented image.
  • before performing, according to the first image, image enhancement processing on an image area in the second image that matches the first image to obtain a processed second image, the method further includes:
  • Image registration processing is performed on the first image and the second image, and an image area in the second image that matches the first image is determined.
  • performing image registration processing on the first image and the second image to determine an image area in the second image that matches the first image includes:
  • the image area corresponding to the target feature point set in the second image is determined as the image area in the second image that matches the first image.
  • performing, according to the first image, image enhancement processing on an image area in the second image that matches the first image to obtain a processed second image comprises:
  • Color enhancement processing and/or texture enhancement processing is performed on an image area in the second image that matches the first image according to the first image, to obtain a processed second image.
  • the method further includes:
  • the target image is processed according to the image processing rule to obtain a processed target image.
  • performing image fusion processing on the first image and the second image to obtain a target image including:
  • a target image is generated according to the stitched first image.
  • the generating a target image according to the stitched first image includes:
  • the first image after the splicing process is divided into blocks to obtain at least one divided image;
  • the target image is obtained by fusing the chrominance value and/or luminance value of each segmented image.
  • performing splicing processing on the first image by using an image area in the second image other than an image area matching the first image, to obtain a first image after splicing processing comprises:
  • the first image is stitched by using the pre-processed image area to obtain a stitched first image.
  • performing splicing processing on the first image by using an image area in the second image other than an image area matching the first image, to obtain a first image after splicing processing comprises:
  • the pre-processed first image is stitched by using an image area in the second image other than the image area matched with the first image to obtain a stitched first image.
  • the present application provides another image processing method, which is applied to a shooting device, where the shooting device includes a first camera and a second camera, and the method includes:
  • the size of the angle of view of the first camera is smaller than the size of the angle of view of the second camera.
  • controlling the first camera and the second camera to photograph the target environment includes at least one of the following:
  • the second camera is controlled to photograph the target environment first, and then the first camera is controlled to photograph the target environment.
  • performing image enhancement processing on the first image according to an image area in the second image that matches the first image to obtain the target image includes: performing image enhancement processing on the first image according to the image area in the second image matching the first image, to obtain an enhanced first image, and generating a target image according to the enhanced first image.
  • performing image enhancement processing on the first image according to an image area in the second image that matches the first image to obtain the target image includes: performing image enhancement processing on the second image according to the image area in the second image matched with the first image, to obtain an enhanced second image, and generating a target image according to the enhanced second image.
  • performing image enhancement processing on the first image according to an image area in the second image that matches the first image to obtain the target image includes: performing image enhancement processing on the first image and the second image according to the image area in the second image matched with the first image, to obtain an enhanced first image and an enhanced second image;
  • a target image is generated from the enhanced first image and the enhanced second image.
  • the determining an image area in the second image that matches the first image includes:
  • performing image registration processing on the first image and the second image to obtain an image area in the second image that matches the first image includes:
  • the image area corresponding to the target feature point set in the second image is determined as the image area in the second image that matches the first image.
  • performing image enhancement processing on the first image according to an image area in the second image that matches the first image includes at least one of the following:
  • the first image as a reference image, perform color enhancement processing and/or texture enhancement processing on the first image according to an image area in the second image that matches the first image;
  • color enhancement processing and/or texture enhancement processing is performed on the first image and the second image according to an image area in the second image that matches the first image.
  • the present application provides an image processing apparatus, which is applied to a photographing device, the photographing device includes a first camera and a second camera, and the field of view of the first camera is smaller than that of the second camera; the apparatus includes:
  • an acquisition unit configured to control the first camera and the second camera to photograph the target environment at the same time, and acquire a first image photographed by the first camera and a second image photographed by the second camera;
  • the processing unit is configured to perform image fusion processing on the first image and the second image to obtain a target image.
  • the present application provides a mobile terminal, including a processor and a memory, wherein the memory is used to store a computer program, the computer program includes program instructions, and the processor is configured to call the program instructions to execute the method according to the first aspect or the second aspect.
  • the present application provides a computer-readable storage medium, wherein the computer-readable storage medium stores one or more instructions, and the one or more instructions are suitable for being loaded by a processor to perform the method as described in the first aspect or the second aspect.
  • the image processing method, device, mobile terminal and storage medium of the present application are applied to photographing equipment, wherein the photographing equipment includes a first camera and a second camera, and the field of view of the first camera is smaller than the field of view of the second camera.
  • The first camera and the second camera are controlled to photograph the target environment at the same time, a first image photographed by the first camera and a second image photographed by the second camera are obtained, and image fusion processing is performed on the first image and the second image to obtain a target image.
  • In this way, two cameras can be controlled to shoot at the same time, and a high-quality, high-definition captured image can then be obtained through fusion processing, which improves the user's look and feel when browsing the image.
  • FIG. 1 is a schematic diagram of a hardware structure of a mobile terminal for implementing various embodiments of the present application provided by an embodiment of the present application;
  • FIG. 2 is an architecture diagram of a communication network system provided by an embodiment of the present application.
  • FIG. 3 is a schematic flowchart of a first image processing method provided by an embodiment of the present application.
  • FIG. 4a is a schematic diagram of partially overlapping fields of view of a first camera and a second camera provided by an embodiment of the present application;
  • FIG. 4b is a schematic diagram of the field of view of the first camera being completely within the field of view of the second camera, provided by an embodiment of the present application;
  • FIG. 5 is a schematic flowchart of a second image processing method provided by an embodiment of the present application;
  • FIG. 6a is a schematic diagram of a first image provided by an embodiment of the present application;
  • FIG. 6b is a schematic diagram of a second image provided by an embodiment of the present application;
  • FIG. 7 is a schematic flowchart of a third image processing method provided by an embodiment of the present application;
  • FIG. 8 is a schematic diagram of generating a target image according to a first image after stitching processing, provided by an embodiment of the present application;
  • FIG. 9 is a schematic flowchart of a fourth image processing method provided by an embodiment of the present application.
  • FIG. 10 is a schematic diagram of an enhanced first image provided by an embodiment of the present application.
  • FIG. 11 is a schematic structural diagram of an image processing apparatus provided by an embodiment of the present application.
  • FIG. 12 is a schematic structural diagram of another mobile terminal provided by an embodiment of the present application.
  • FIG. 13 is a schematic diagram of the hardware structure of a controller 140 provided by an embodiment of the present application.
  • FIG. 14 is a schematic diagram of a hardware structure of a network node 150 provided by an embodiment of the present application.
  • FIG. 15 is a schematic diagram of the hardware structure of another network node 160 provided by an embodiment of the present application.
  • FIG. 16 is a schematic diagram of the hardware structure of another controller 170 provided by an embodiment of the present application.
  • FIG. 17 is a schematic diagram of a hardware structure of another network node 180 provided by an embodiment of the present application.
  • Although the terms first, second, third, etc. may be used herein to describe various information, such information should not be limited by these terms. These terms are only used to distinguish the same type of information from each other.
  • first information may also be referred to as second information, and similarly, second information may also be referred to as first information, without departing from the scope of this document.
  • the word “if” as used herein may be interpreted as “at the time of” or “when” or “in response to determining”.
  • the singular forms "a,” “an,” and “the” are intended to include the plural forms as well, unless the context dictates otherwise.
  • “A, B, C” means “any of the following: A; B; C; A and B; A and C; B and C; A and B and C”.
  • “A, B or C” or “A, B and/or C” means “any of the following: A; B; C; A and B; A and C; B and C; A and B and C”. Exceptions to this definition arise only when combinations of elements, functions, steps, or operations are inherently mutually exclusive in some way.
  • the word “if” as used herein may also be interpreted as “at” or “when” or “in response to determining” or “in response to detecting”.
  • the phrases “if determined” or “if (the stated condition or event) is detected” may be interpreted, depending on the context, as “when determined” or “in response to determining” or “when (the stated condition or event) is detected” or “in response to detecting (the stated condition or event)”.
  • step codes such as S301 and S302 are used, the purpose of which is to express the corresponding content more clearly and briefly; they do not constitute a substantive restriction on the sequence. For example, a specific implementation may execute S302 first and then S301, but this should all fall within the protection scope of this application.
  • the photographing apparatus may be implemented in various forms.
  • the photographing equipment described in this application may include mobile terminals with cameras, such as cell phones, tablet computers, notebook computers, personal digital assistants (PDAs), portable media players (PMPs), navigation devices, wearable devices, smart bracelets and pedometers, as well as stationary terminals with cameras, such as digital TVs and desktop computers.
  • a mobile terminal will be used as an example, and those skilled in the art will understand that, in addition to elements specially used for mobile purposes, the configurations according to the embodiments of the present application can also be applied to stationary type terminals.
  • FIG. 1 is a schematic diagram of the hardware structure of a mobile terminal that implements various embodiments of the present application provided by an embodiment of the present application.
  • the mobile terminal 100 may include: an RF (Radio Frequency, radio frequency) unit 101, a WiFi module 102, Audio output unit 103, A/V (audio/video) input unit 104, sensor 105, display unit 106, user input unit 107, interface unit 108, memory 109, processor 110, and power supply 111 and other components.
  • the radio frequency unit 101 can be used to receive and send signals during the sending and receiving of information or during a call. Specifically, downlink information from the base station is received and then handed to the processor 110 for processing; in addition, uplink data is sent to the base station.
  • the radio frequency unit 101 includes, but is not limited to, an antenna, at least one amplifier, a transceiver, a coupler, a low noise amplifier, a duplexer, and the like.
  • the radio frequency unit 101 can also communicate with the network and other devices through wireless communication.
  • the above-mentioned wireless communication can use any communication standard or protocol, including but not limited to GSM (Global System for Mobile Communication), GPRS (General Packet Radio Service), CDMA2000 (Code Division Multiple Access 2000), WCDMA (Wideband Code Division Multiple Access), TD-SCDMA (Time Division-Synchronous Code Division Multiple Access), FDD-LTE (Frequency Division Duplexing-Long Term Evolution) and TDD-LTE (Time Division Duplexing-Long Term Evolution), and so on.
  • WiFi is a short-distance wireless transmission technology
  • the mobile terminal can help users to send and receive emails, browse web pages, access streaming media, etc. through the WiFi module 102, which provides users with wireless broadband Internet access.
  • Although FIG. 1 shows the WiFi module 102, it can be understood that it is not a necessary component of the mobile terminal and can be omitted as required, within the scope of not changing the essence of the application.
  • the audio output unit 103 can convert audio data received by the radio frequency unit 101 or the WiFi module 102, or stored in the memory 109, into an audio signal and output it as sound.
  • the audio output unit 103 may also provide audio output related to a specific function performed by the mobile terminal 100 (eg, call signal reception sound, message reception sound, etc.).
  • the audio output unit 103 may include a speaker, a buzzer, and the like.
  • the A/V input unit 104 is used to receive audio or video signals.
  • the A/V input unit 104 may include a graphics processor (Graphics Processing Unit, GPU) 1041 and a microphone 1042, and the graphics processor 1041 processes the image data of still pictures or video obtained by an image capture device (such as a camera) in a video capture mode or an image capture mode.
  • the processed image frames may be displayed on the display unit 106 .
  • the image frames processed by the graphics processor 1041 may be stored in the memory 109 (or other storage medium) or transmitted via the radio frequency unit 101 or the WiFi module 102 .
  • the microphone 1042 can receive sound (audio data) in a telephone call mode, a recording mode, a voice recognition mode, and the like, and can process such sound into audio data.
  • in the case of a telephone call mode, the processed audio (voice) data can be converted into a format that can be transmitted to a mobile communication base station via the radio frequency unit 101 for output.
  • the microphone 1042 may implement various types of noise cancellation (or suppression) algorithms to remove (or suppress) noise or interference generated in the process of receiving and transmitting audio signals.
  • the mobile terminal 100 also includes at least one sensor 105, such as a light sensor, a motion sensor, and other sensors.
  • the light sensor includes an ambient light sensor and a proximity sensor.
  • the ambient light sensor can adjust the brightness of the display panel 1061 according to the brightness of the ambient light, and the proximity sensor can turn off the display panel 1061 and/or the backlight when the mobile terminal 100 is moved to the ear.
  • the accelerometer sensor can detect the magnitude of acceleration in all directions (usually three axes), and can detect the magnitude and direction of gravity when it is stationary.
  • the display unit 106 is used to display information input by the user or information provided to the user.
  • the display unit 106 may include a display panel 1061, and the display panel 1061 may be configured in the form of a Liquid Crystal Display (LCD), an Organic Light-Emitting Diode (OLED), or the like.
  • the user input unit 107 may be used to receive input numerical or character information, and generate key signal input related to user settings and function control of the mobile terminal.
  • the user input unit 107 may include a touch panel 1071 and other input devices 1072 .
  • the touch panel 1071, also referred to as a touch screen, can collect the user's touch operations on or near it (such as operations performed on or near the touch panel 1071 by the user's finger, a stylus, or any suitable object or attachment), and drive the corresponding connection device according to a preset program.
  • the touch panel 1071 may include two parts, a touch detection device and a touch controller.
  • the touch detection device detects the user's touch position, detects the signal brought by the touch operation, and transmits the signal to the touch controller; the touch controller receives the touch information from the touch detection device, converts it into contact coordinates, and then sends them to the processor 110, and can receive and execute commands sent by the processor 110.
  • the touch panel 1071 can be implemented in various types, such as resistive, capacitive, infrared, and surface acoustic wave.
  • the user input unit 107 may also include other input devices 1072 .
  • other input devices 1072 may include, but are not limited to, one or more of physical keyboards, function keys (such as volume control keys, switch keys, etc.), trackballs, mice, joysticks, etc., which are not specifically limited here.
  • the touch panel 1071 may cover the display panel 1061.
  • when the touch panel 1071 detects a touch operation on or near it, it transmits the operation to the processor 110 to determine the type of the touch event, and the processor 110 then provides a corresponding visual output on the display panel 1061 according to the type of the touch event.
  • the touch panel 1071 and the display panel 1061 are used as two independent components to realize the input and output functions of the mobile terminal, but in some embodiments, the touch panel 1071 and the display panel 1061 may be integrated to realize the input and output functions of the mobile terminal, which is not specifically limited here.
  • the interface unit 108 serves as an interface through which at least one external device can be connected to the mobile terminal 100 .
  • external devices may include wired or wireless headset ports, external power (or battery charger) ports, wired or wireless data ports, memory card ports, ports for connecting devices with identification modules, audio input/output (I/O) ports, video I/O ports, headphone ports, and more.
  • the interface unit 108 may be used to receive input (e.g., data information, power, etc.) from an external device and transmit the received input to one or more elements within the mobile terminal 100, or may be used to transfer data between the mobile terminal 100 and external devices.
  • the memory 109 may be used to store software programs as well as various data.
  • the memory 109 may mainly include a storage program area and a storage data area.
  • the storage program area may store an operating system, an application program required for at least one function (such as a sound playback function, an image playback function, etc.), etc.;
  • the storage data area may store data created according to the use of the mobile phone (such as audio data, a phonebook, etc.), and the like.
  • the memory 109 may include high-speed random access memory, and may also include non-volatile memory, such as at least one magnetic disk storage device, a flash memory device, or another non-volatile solid-state storage device.
  • the processor 110 is the control center of the mobile terminal; it uses various interfaces and lines to connect the various parts of the entire mobile terminal, runs or executes the software programs and/or modules stored in the memory 109, and calls the data stored in the memory 109 to perform the various functions of the mobile terminal and process data, so as to monitor the mobile terminal as a whole.
  • the processor 110 may include one or more processing units; preferably, the processor 110 may integrate an application processor and a modem processor, where the application processor mainly handles the operating system, user interface, application programs, and the like, and the modem processor mainly handles wireless communication. It can be understood that the above-mentioned modem processor may not be integrated into the processor 110.
  • the mobile terminal 100 may also include a power supply 111 (such as a battery) for supplying power to various components.
  • preferably, the power supply 111 may be logically connected to the processor 110 through a power management system, so as to manage charging, discharging, power consumption and other functions through the power management system.
  • the mobile terminal 100 may also include a Bluetooth module, etc., which will not be described herein again.
  • FIG. 2 is an architecture diagram of a communication network system provided by an embodiment of the application.
  • the communication network system is an LTE system of the universal mobile telecommunication technology. The LTE system includes UE (User Equipment) 201, E-UTRAN (Evolved UMTS Terrestrial Radio Access Network) 202, EPC (Evolved Packet Core) 203 and the operator's IP service 204, which are connected in sequence.
  • the UE 201 may be the above-mentioned mobile terminal 100, which will not be repeated here.
  • E-UTRAN 202 includes eNodeB 2021 and other eNodeB 2022 and the like.
  • the eNodeB 2021 can be connected with other eNodeB 2022 through a backhaul (eg X2 interface), the eNodeB 2021 is connected to the EPC 203 , and the eNodeB 2021 can provide access from the UE 201 to the EPC 203 .
  • EPC 203 may include MME (Mobility Management Entity) 2031, HSS (Home Subscriber Server) 2032, other MMEs 2033, SGW (Serving Gateway) 2034, PGW (PDN Gateway) 2035, PCRF (Policy and Charging Rules Function) 2036, and so on.
  • the MME 2031 is a control node that handles signaling between the UE 201 and the EPC 203, and provides bearer and connection management.
  • the HSS 2032 is used to provide some registers to manage functions such as the home location register (not shown in the figure), and to store some user-specific information about service characteristics, data rates, etc.
  • the PCRF 2036 is the policy and charging control policy decision point for service data flows and IP bearer resources; it selects and provides available policy and charging control decisions for the policy and charging enforcement function unit (not shown).
  • the IP service 204 may include the Internet, an intranet, an IMS (IP Multimedia Subsystem, IP Multimedia Subsystem) or other IP services.
  • FIG. 3 is a flowchart of a first image processing method provided by an embodiment of the present application.
  • the method in this embodiment of the present application may be performed by a shooting device, where the shooting device includes a first camera and a second camera, and the field of view of the first camera is smaller than the field of view of the second camera; optionally, the method can be performed by a server. The method may specifically include the following steps:
  • the target environment refers to an area to be photographed by the first camera and the second camera.
  • the first camera and the second camera are located on the same side of the photographing device, and the field of view angles of the first camera and the second camera are partially overlapped (as shown in FIG. 4a ).
  • the field of view of the first camera can also be completely within the field of view of the second camera (as shown in Figure 4b). It can be understood that in this application, the field of view is used to reflect the shooting range of the picture. The larger the focal length, the smaller the field of view, and the smaller the range of the picture formed on the photosensitive element; conversely, the smaller the focal length, the larger the field of view, and the larger the range of the picture formed on the photosensitive element.
  • By changing the magnification of the lens, the size of the captured picture can be changed.
  • magnification of the lens ≈ focal length / object distance.
  • Increasing the focal length of the lens increases the magnification: the distant view is zoomed in, the scope of the picture becomes smaller, and the details of the distant view can be seen more clearly. Reducing the focal length of the lens decreases the magnification: the scope of the picture is expanded, and larger scenes can be seen. That is to say, the first camera in this application may be a standard camera, and the second camera may be a wide-angle camera.
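  • As a rough numerical illustration of the relationship above, the sketch below computes magnification and horizontal field of view from the focal length under a simple pinhole model; all numbers are hypothetical example values, not parameters taken from this application.

```python
import math

def magnification(focal_length_mm: float, object_distance_mm: float) -> float:
    """Approximate lens magnification: focal length / object distance."""
    return focal_length_mm / object_distance_mm

def horizontal_fov_deg(focal_length_mm: float, sensor_width_mm: float) -> float:
    """Horizontal field of view under the pinhole model: 2 * atan(w / 2f)."""
    return math.degrees(2 * math.atan(sensor_width_mm / (2 * focal_length_mm)))

# Hypothetical 35mm-equivalent values: a "standard" first camera and a
# wide-angle second camera, same sensor width, same subject distance.
for name, f in [("first (standard) camera", 26.0), ("second (wide-angle) camera", 13.0)]:
    print(name,
          "fov = %.1f deg" % horizontal_fov_deg(f, 36.0),
          "magnification = %.4f" % magnification(f, 3000.0))
# The longer focal length yields the smaller field of view and the larger
# magnification, as described above.
```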
  • When the user turns on the shooting mode of the shooting device, the first camera and the second camera are turned on at the same time; if the shooting device receives the user's shooting instruction, it controls the first camera and the second camera to shoot the target environment at the same time, and then obtains the first image captured by the first camera and the second image captured by the second camera.
  • the shooting view area of the first camera and the shooting view area of the second camera overlap, and the first camera and the second camera are controlled to shoot the target environment at the same time, Acquiring a first image captured by the first camera and a second image captured by the second camera; and performing image fusion processing on the first image and the second image to obtain a target image.
  • the first camera and the second camera are simultaneously turned on, and if the shooting device receives the user's shooting instruction, the first camera and the second camera are controlled to shoot the target environment at the same time, and then a first image captured by the first camera and a second image captured by the second camera are acquired. That is to say, by shooting simultaneously with two cameras whose viewing areas partly overlap, two images with an overlapping area are obtained, so that the two images can be fused; this not only enhances the image display of the overlapping part, but also enlarges the shooting view area beyond that of either single picture, so that the environmental areas in the shooting view areas of the two cameras are merged into one image, which improves the user's perception when browsing the image.
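  • A minimal capture-side sketch of this flow is given below, assuming two OpenCV-accessible cameras and a placeholder fusion step; a real device would expose its two rear cameras through the platform's camera HAL rather than plain device indices.

```python
import cv2

def fuse_images(first_image, second_image):
    """Placeholder for the fusion stage of this application; a real
    implementation would register the two frames and fuse them."""
    return first_image  # stand-in so the sketch runs end to end

def capture_pair(first_cam_id: int = 0, second_cam_id: int = 1):
    """Hedged sketch of the simultaneous-capture step: grab one frame from
    each camera as close together in time as possible, then hand both frames
    to the fusion step. Camera indices are illustrative assumptions."""
    cap1, cap2 = cv2.VideoCapture(first_cam_id), cv2.VideoCapture(second_cam_id)
    try:
        cap1.grab(); cap2.grab()            # trigger both captures back to back
        ok1, first_image = cap1.retrieve()  # narrower field of view
        ok2, second_image = cap2.retrieve() # wider field of view
        if not (ok1 and ok2):
            raise RuntimeError("a camera failed to deliver a frame")
        return fuse_images(first_image, second_image)
    finally:
        cap1.release(); cap2.release()
```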
  • the method may further include: acquiring an image processing rule; and processing the target image according to the image processing rule to obtain a processed target image.
  • the image processing rules in the shooting device are obtained, and the target image is processed accordingly, for example by algorithm overlay, compression, rotation, encoding, etc.
  • the algorithm overlay can consist of algorithms for faces (beautification, makeup, face-slimming, eye enlargement, skin smoothing, etc.) or for scenes (warm winter sun, cool summer, etc.). Finally, after all the processing, the final processed target image is obtained.
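  • The sketch below illustrates how such an ordered rule list might be applied; the rule names and structure are our own illustrative choices, since the application only names the operation types (overlay, compression, rotation, encoding).

```python
import cv2

def apply_rules(target_image, rules):
    """Apply an ordered list of post-processing rules to the fused target
    image. Only rotation and JPEG encoding are shown; the rule schema is a
    hypothetical stand-in for the device's image processing rules."""
    for rule in rules:
        if rule["op"] == "rotate":
            code = {90: cv2.ROTATE_90_CLOCKWISE,
                    180: cv2.ROTATE_180,
                    270: cv2.ROTATE_90_COUNTERCLOCKWISE}[rule["angle"]]
            target_image = cv2.rotate(target_image, code)
        elif rule["op"] == "encode_jpeg":
            ok, buf = cv2.imencode(".jpg", target_image,
                                   [cv2.IMWRITE_JPEG_QUALITY, rule["quality"]])
            return buf  # compressed byte stream, ready for storage
    return target_image

# e.g. apply_rules(img, [{"op": "rotate", "angle": 90},
#                        {"op": "encode_jpeg", "quality": 90}])
```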
  • the image processing method is applied to a photographing device, the photographing device includes a first camera and a second camera, and the field of view of the first camera is smaller than that of the second camera. The first camera and the second camera are controlled to photograph the target environment at the same time, a first image photographed by the first camera and a second image photographed by the second camera are obtained, and image fusion processing is performed on the first image and the second image to obtain a target image.
  • two cameras can be controlled to shoot at the same time, and then a high-quality and high-definition captured image can be obtained through fusion processing, which improves the user's look and feel when browsing the image.
  • FIG. 5 is a flowchart of a second image processing method provided by an embodiment of the present application.
  • the method in this embodiment of the present application may be performed by a shooting device, where the shooting device includes a first camera and a second camera, and the field of view of the first camera is smaller than the field of view of the second camera; optionally, the method can be performed by a server. The method may specifically include the following steps:
  • For step S501 in this embodiment of the present application, reference may be made to step S301 in the foregoing embodiment, which is not repeated here.
  • S502. Perform image registration processing on the first image and the second image, and determine an image area in the second image that matches the first image.
  • the second image is used as the reference image (as shown in FIG. 6b ), and image registration processing is performed to determine the image area in the second image that matches the first image (as shown in FIG. 6a ).
  • image registration is a process of matching and superimposing two or more images acquired at different times, different sensors (imaging devices) or under different conditions (weather, illuminance, camera position and angle, etc.). In this application, the positions of the first camera and the second camera do not overlap, so there is a certain position difference.
  • performing image registration processing on the first image and the second image to determine an image area in the second image that matches the first image may include: extracting feature points from the first image to obtain a first feature point set, and extracting feature points from the second image to obtain a second feature point set; determining, according to a preset algorithm, a target feature point set in the second feature point set that matches the first feature point set; and determining the image area corresponding to the target feature point set in the second image as the image area in the second image that matches the first image.
  • the first image and the second image can be registered by stereo registration or feature point registration.
  • the feature points of a group of images can be identified by using an image recognition algorithm, obtaining feature vector sets in which specific points of the two images correspond to feature vectors. After the feature points in the two feature vector sets are matched, the incorrect matching points are deleted to obtain the matched and corrected result.
  • For example, the SIFT (Scale-Invariant Feature Transform) algorithm can be used to extract and match the feature points, and the RANSAC algorithm is used to eliminate the wrong matching points and perform matching correction. Then, through accurate matching, the image area in the second image that matches the first image is determined (as shown in Figure 6b, the image area in the solid-line box is the image area where the second image matches the first image).
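  • A hedged sketch of this registration step, using OpenCV's SIFT and RANSAC as named above, might look as follows; the ratio-test and reprojection thresholds are common defaults, not values taken from this application.

```python
import cv2
import numpy as np

def matching_region(first_image, second_image):
    """Locate the area of the second image that matches the first image:
    SIFT feature points, ratio-test matching, RANSAC to discard wrong
    matches, then the first image's outline projected into the second image."""
    sift = cv2.SIFT_create()
    kp1, des1 = sift.detectAndCompute(first_image, None)   # first feature point set
    kp2, des2 = sift.detectAndCompute(second_image, None)  # second feature point set

    matches = cv2.BFMatcher().knnMatch(des1, des2, k=2)
    good = [m for m, n in matches if m.distance < 0.75 * n.distance]  # ratio test

    src = np.float32([kp1[m.queryIdx].pt for m in good]).reshape(-1, 1, 2)
    dst = np.float32([kp2[m.trainIdx].pt for m in good]).reshape(-1, 1, 2)
    H, _ = cv2.findHomography(src, dst, cv2.RANSAC, 5.0)  # drops wrong matches

    h, w = first_image.shape[:2]
    corners = np.float32([[0, 0], [w, 0], [w, h], [0, h]]).reshape(-1, 1, 2)
    return cv2.perspectiveTransform(corners, H)  # matching area in the second image
```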
  • S503. Perform image enhancement processing on an image area in the second image that matches the first image according to the first image, to obtain an enhanced second image.
  • image enhancement processing is performed on the image area matching the first image in the second image by using the first image captured by the first camera, so that the display of the image matching the first image in the second image is enhanced; naturally, the image of the edge portion in the first image also becomes clearer.
  • the first image and the image area of the second image that matches the first image may be subjected to block processing, to obtain a plurality of first block images after the block processing of the first image and a plurality of second block images after the block processing of the matching image area, the first block images and the second block images being of the same size.
  • performing image enhancement processing on an image area in the second image that matches the first image according to the first image, to obtain an enhanced second image includes: performing color enhancement processing and/or texture enhancement processing on an image area in the second image that matches the first image according to the first image, to obtain a processed second image.
  • grayscale transformation may be performed on each pixel value of the first image and of the image area in the second image that matches the first image, to obtain each target pixel value of the first image and each target pixel value of the matching image area; according to these target pixel values, color enhancement and/or texture enhancement is performed on the image area in the second image that matches the first image, to obtain an enhanced second image. This enhances the display of part of the image in the second image, further improves the clarity of the second image, and improves the user experience.
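  • One concrete form such a per-pixel grayscale transformation can take is histogram matching, sketched below; histogram matching is our illustrative stand-in, as the application does not fix a specific transform, and the box coordinates are assumed to come from the registration step.

```python
from skimage.exposure import match_histograms

def enhance_matched_region(first_image, second_image, box):
    """box = (x, y, w, h): the area of the second image that matches the
    first image. That region's pixel values are remapped (a grayscale /
    histogram transformation) toward the sharper first image's channel
    distributions, lifting its color and contrast. This is one illustrative
    enhancement, not the only possible one."""
    x, y, w, h = box
    region = second_image[y:y + h, x:x + w]
    enhanced = second_image.copy()
    enhanced[y:y + h, x:x + w] = match_histograms(region, first_image,
                                                  channel_axis=-1)
    return enhanced
```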
  • the target image is generated according to the enhanced second image.
  • generating the target image according to the enhanced second image may include: dividing the enhanced second image into blocks to obtain at least one divided image; and fusing the chrominance values and/or luminance values of each divided image to obtain the target image.
  • after the image enhancement of the image area in the second image that matches the first image, in order to make the whole second image more harmonious, the enhanced image area should join more smoothly with the other image areas in the second image.
  • corresponding adjustments should therefore be made at the edge of the enhanced part, at the adjoining edge of the second image, and in the image areas other than the enhanced part.
  • the enhanced second image can be divided into blocks, and the chrominance value and/or luminance value of each divided image can be fused, so that the whole image is more harmonious and gives the user a better look and feel.
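  • A minimal sketch of this block-wise chrominance/luminance fusion is shown below; the block size, the YCrCb color space, and the feedback strength are illustrative assumptions rather than parameters from this application.

```python
import cv2
import numpy as np

def fuse_blocks(image_bgr, block: int = 64, strength: float = 0.5):
    """Divide the image into blocks and pull each block's mean luminance and
    chrominance (YCrCb) toward the mean of its surrounding neighborhood, so
    adjacent blocks join more harmoniously."""
    ycc = cv2.cvtColor(image_bgr, cv2.COLOR_BGR2YCrCb).astype(np.float32)
    h, w = ycc.shape[:2]
    for y in range(0, h, block):
        for x in range(0, w, block):
            tile = ycc[y:y + block, x:x + block]  # view into ycc
            # Mean over a window covering this block and its neighbors.
            y0, y1 = max(0, y - block), min(h, y + 2 * block)
            x0, x1 = max(0, x - block), min(w, x + 2 * block)
            neighborhood = ycc[y0:y1, x0:x1].mean(axis=(0, 1))
            tile += strength * (neighborhood - tile.mean(axis=(0, 1)))
    return cv2.cvtColor(np.clip(ycc, 0, 255).astype(np.uint8),
                        cv2.COLOR_YCrCb2BGR)
```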
  • the method may further include: acquiring an image processing rule; and processing the target image according to the image processing rule to obtain a processed target image.
  • the image processing rules in the shooting device are obtained, and the target image is processed accordingly, for example by algorithm overlay, compression, rotation, encoding, etc.
  • the algorithm overlay can be an algorithm for human faces, or for scene changes. Finally, after all the processing, the final processed target image is obtained.
  • the first camera and the second camera are controlled to shoot the target environment at the same time, and the first image captured by the first camera and the second image captured by the second camera are acquired; image registration processing is performed on the first image and the second image to determine the image area in the second image that matches the first image; image enhancement processing is performed on that image area according to the first image to obtain an enhanced second image; and a target image is generated according to the enhanced second image.
  • In this way, part of the image in the second image can be easily enhanced, which greatly reduces the difficulty of taking pictures, makes it easier to obtain high-quality and high-definition images, improves the efficiency with which the user takes pictures, and improves the user's look and feel.
  • FIG. 7 is a flowchart of a third image processing method provided by an embodiment of the present application.
  • the method in this embodiment of the present application may be performed by a shooting device, where the shooting device includes a first camera and a second camera, and the field of view of the first camera is smaller than the field of view of the second camera; optionally, the method can be performed by a server. The method may specifically include the following steps:
  • S702. Perform image registration processing on the first image and the second image, and determine an image area in the second image that matches the first image.
  • For steps S701-S702 in this embodiment of the present application, reference may be made to steps S501-S502 in the foregoing embodiment, and details are not repeated here.
  • S703. Perform splicing processing on the first image by using an image area in the second image other than the image area matching the first image, to obtain a first image after splicing processing.
  • an image area in the second image other than the image area matching the first image is used to perform splicing processing on the first image to obtain a spliced first image. Optionally, this may include: performing first preset processing on each image area in the second image except the image area matching the first image, to obtain preset-processed image areas; and using the preset-processed image areas to perform splicing processing on the first image, to obtain a first image after splicing processing.
  • the first preset processing may be enlargement processing and/or reduction processing, and the multiple of enlargement or reduction processing may be determined according to the focal lengths of the first camera and the second camera.
  • Since the magnification of the lens is approximately equal to the focal length divided by the object distance, and the first camera and the second camera are on the same shooting device and shoot at the same time, the object distance is the same, so the ratio between the magnifications of the first image and the second image equals the ratio of the focal lengths. In order to better stitch two images at the same scale, the first preset processing can be performed on each image area in the second image other than the image area matching the first image, so that after enlargement or reduction the scale of those areas is the same as the scale of the first image. In this way the image stitching can be performed better, the display area of the first image is expanded, and the efficiency with which users capture images of larger shooting areas is improved.
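  • In code, that scaling step can be as small as the sketch below; the focal lengths are placeholders for whatever the two cameras report, and cubic interpolation is an illustrative choice.

```python
import cv2

def scale_to_match(second_image_region, f_first_mm: float, f_second_mm: float):
    """Sketch of the 'first preset processing': with the same object distance,
    magnification scales with focal length, so a peripheral region of the
    wide (second) image is resized by f_first / f_second before being
    stitched around the first image."""
    scale = f_first_mm / f_second_mm  # > 1 when the first camera is longer
    return cv2.resize(second_image_region, None, fx=scale, fy=scale,
                      interpolation=cv2.INTER_CUBIC)
```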
  • optionally, performing splicing processing on the first image to obtain a spliced first image may include: performing second preset processing on the first image to obtain a preset-processed first image; and stitching the preset-processed first image by using an image area in the second image other than the image area matched with the first image, to obtain a stitched first image.
  • the second preset processing may be enlargement processing and/or reduction processing.
  • according to the relationship between the focal lengths of the first camera and the second camera, the first image can be enlarged or reduced, so that the scale of the first image is consistent with that of the image area in the second image that matches the first image, thereby facilitating image stitching and improving shooting efficiency.
  • the generating the target image according to the stitched first image includes: dividing the stitched first image into blocks to obtain at least one block image; The chrominance values and/or luminance values of each segmented image are fused to obtain the target image.
  • in order that the whole image looks natural, the first image and the image areas in the second image other than the image matching the first image should be connected more naturally.
  • accordingly, the edge of the first image, the edge of the second image that joins the first image, and each image area in the second image except the image area matching the first image are adjusted.
  • the spliced first image can be divided into blocks, and the chrominance values and/or luminance values of each divided image can be fused, so that the entire image is more harmonious, giving the user a better look and feel.
  • edge similarity measurement should be performed, so that the first image can be better spliced with the image areas in the second image other than the image area matching the first image, as shown in FIG. 8.
  • the color difference measurement and the texture difference measurement can be performed on the edge portion of the first image and on the adjacent edge of the spliced portion in the second image, and the color and texture are then adjusted accordingly. This makes the stitching more natural, with natural transitions of color and texture, and improves the user's look and feel when browsing images.
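  • The measurement half of that adjustment might look like the sketch below; how the strips are extracted and how the returned offset is fed back into the seam are left to the caller, and the weighting is illustrative.

```python
import numpy as np

def seam_color_offset(first_edge_strip, second_edge_strip, strength: float = 1.0):
    """Compare the mean color along the strip at the first image's border
    with the adjoining strip of the second image, and return the per-channel
    offset that would pull the two sides together. Adding a fraction of this
    offset to the first image's strip softens the color step at the seam."""
    diff = (second_edge_strip.astype(np.float32).mean(axis=(0, 1))
            - first_edge_strip.astype(np.float32).mean(axis=(0, 1)))
    return strength * diff
```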
  • optionally, the fields of view of the first camera and the second camera are the same size, and the shooting field of view of the first camera partially overlaps with the shooting field of view of the second camera. The first camera and the second camera are controlled to shoot the target environment at the same time, the first image captured by the first camera and the second image captured by the second camera are obtained, image registration processing is performed on the first image and the second image to determine the image area in the second image that matches the first image, and the image areas in the second image other than the matching image area are used to perform splicing processing on the first image, to obtain a spliced first image.
  • optionally, the first camera may be a standard camera and the second camera may be a black-and-white camera. The color image of the standard camera is fused with the black-and-white camera data, which can improve the color performance of the first image in dim light, improve the quality and clarity of the first image, and improve the user experience.
  • In this embodiment, the first camera and the second camera are controlled to shoot the target environment at the same time, the first image captured by the first camera and the second image captured by the second camera are acquired, and image registration processing is performed on the first image and the second image to determine the image area in the second image that matches the first image; the first image is spliced by using the image areas in the second image other than the matching image area, to obtain a spliced first image; and a target image is generated according to the spliced first image.
  • In this way, the range of the field of view of the first image can be expanded, and the user's perception when browsing the image can be improved.
  • FIG. 9 is a flowchart of a fourth image processing method provided by an embodiment of the present application.
  • the method in this embodiment of the present application may be performed by a photographing device, and the photographing device includes a first camera and a second camera.
  • the method may also be performed by a server, and the method may specifically include the following steps:
  • When the user turns on the shooting mode of the shooting device, the first camera and the second camera are turned on; if the shooting device receives a shooting instruction from the user, it controls the first camera and the second camera to shoot the target environment, and then obtains the first image captured by the first camera and the second image captured by the second camera, wherein the sizes of the fields of view of the first camera and the second camera may be the same or different.
  • the size of the field of view of the first camera is smaller than the size of the field of view of the second camera.
  • the viewing angles of the first camera and the second camera are partially overlapped or one viewing angle is within the range of another viewing angle. As shown in FIG. 4a, the viewing angles of the first camera and the second camera are partially overlapped.
  • the field of view of the first camera may be completely within the range of the field of view of the second camera.
  • optionally, controlling the first camera and the second camera to photograph the target environment includes at least one of the following: controlling the first camera and the second camera to photograph the target environment simultaneously; controlling the first camera to photograph the target environment first, and then controlling the second camera to photograph the target environment; or controlling the second camera to photograph the target environment first, and then controlling the first camera to photograph the target environment.
  • the position of the shooting device does not change and the relative positions of the first camera and the second camera are fixed; when the shooting device receives the shooting instruction, the order in which the first camera and the second camera shoot the target environment is not fixed.
  • the priority of the shooting sequence may be determined according to user-defined settings, or according to the power consumption of the shooting device or the efficiency of image processing. For example, if shooting with the first camera and the second camera at the same time is the most power-saving option, then simultaneous shooting has the highest priority.
  • controlling the first camera and the second camera to shoot the target environment can also be controlled according to system settings, for example: controlling the first camera and the second camera to shoot the target environment at the same time; controlling the first camera to shoot the target environment first and then controlling the second camera to shoot the target environment; or controlling the second camera to shoot the target environment first and then controlling the first camera to shoot the target environment.
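  • A toy version of this order-selection logic is sketched below; the setting names and the power heuristic are hypothetical, chosen only to illustrate the precedence described above (user setting first, then device-level criteria).

```python
def choose_capture_order(user_setting: str | None = None,
                         simultaneous_saves_power: bool = True) -> str:
    """Pick a shooting order for the two cameras. A user-defined setting
    wins; otherwise simultaneous capture gets the highest priority when it
    is the more power-efficient option."""
    allowed = ("simultaneous", "first_then_second", "second_then_first")
    if user_setting in allowed:
        return user_setting
    return "simultaneous" if simultaneous_saves_power else "second_then_first"
```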
  • image registration processing is performed on the first image and the second image to obtain an image area in the second image that matches the first image.
  • the first image is used as the reference image, and image registration processing is performed to determine the image area in the second image that matches the first image.
  • optionally, performing image registration processing on the first image and the second image to obtain an image area in the second image that matches the first image may include: extracting feature points from the first image to obtain a first feature point set, and extracting feature points from the second image to obtain a second feature point set; determining, according to a preset algorithm, a target feature point set in the second feature point set that matches the first feature point set; and determining the image area corresponding to the target feature point set in the second image as the image area in the second image that matches the first image.
  • the first image and the second image can be registered by stereo registration or feature point registration.
  • the feature points of a group of images can be identified by using an image recognition algorithm, obtaining feature vector sets in which specific points of the two images correspond to feature vectors.
  • after the feature points in the two feature vector sets are matched, the incorrect matching points are deleted to obtain the matched and corrected result.
  • For example, the SIFT (Scale-Invariant Feature Transform) algorithm can be used to extract and match the feature points, and the RANSAC algorithm is used to eliminate the wrong matching points and perform matching correction. Then, through accurate matching, the image area in the second image that matches the first image is determined.
  • image enhancement processing is performed on an image area matching the first image in the second image captured by the second camera, so that the image display of the first image and/or the second image is enhanced.
  • the performing image enhancement processing according to the image area in the second image that matches the first image to obtain the target image may include: performing image enhancement processing on the first image according to the image area in the second image matched with the first image, to obtain an enhanced first image, and generating a target image according to the enhanced first image.
  • image registration processing is performed on the second image and the first image to obtain the image area in the second image that matches the first image, and image enhancement processing is performed on the first image according to the matching image area.
  • the image area in the second image that matches the first image, and the first image, may be divided into blocks to obtain a plurality of third block images of the matching image area in the second image and a plurality of fourth block images of the first image, where the size of each third block image is the same as that of the corresponding fourth block image; grayscale transformation is performed on each pixel value of the third block images and the fourth block images to obtain third target pixel values of the third block images and fourth target pixel values of the fourth block images; and the first image is enhanced according to the third and fourth target pixel values, thereby obtaining the enhanced first image, so that the user can obtain a high-definition, high-quality image and the user's perception when browsing the image is improved.
  • the performing image enhancement processing according to the image area in the second image that matches the first image to obtain the target image may include: performing image enhancement processing on the second image according to the image area in the second image matched with the first image, to obtain an enhanced second image, and generating a target image according to the enhanced second image.
  • image enhancement processing is performed on the second image according to the image area.
  • the image area in the second image that matches the first image, and the second image, may be divided into blocks to obtain a plurality of fifth block images of the matching image area in the second image and a plurality of sixth block images of the second image, where the size of each fifth block image is the same as that of the corresponding sixth block image; grayscale transformation is performed on each pixel value of the fifth block images and the sixth block images to obtain fifth target pixel values of the fifth block images and sixth target pixel values of the sixth block images; and the second image is enhanced according to the fifth and sixth target pixel values to obtain an enhanced second image, so that the user can obtain a clearer second image.
  • Performing image enhancement according to the matching image area to obtain the target image may further include: performing image enhancement on both the first image and the second image according to the image area in the second image that matches the first image, to obtain an enhanced first image and an enhanced second image, and generating the target image from them. That is, the first image and the second image are each enhanced according to the matching area, and the enhanced images are fused to generate a target image the same size as the second image, or the same size as the first image.
  • Performing image enhancement according to the image area in the second image that matches the first image may include at least one of the following: taking the first image as a reference image and performing color enhancement and/or texture enhancement on the first image according to the matching area; taking the second image as a reference image and performing color enhancement and/or texture enhancement on the second image according to the matching area; taking the first image as a reference image and performing color enhancement and/or texture enhancement on both the first image and the second image according to the matching area; or taking the second image as a reference image and performing color enhancement and/or texture enhancement on both the first image and the second image according to the matching area.
  • Optionally, when the first image is taken as the reference image, grayscale transformation may be performed on each pixel value of the first image and of the matching image area in the second image, to obtain target pixel values for the first image and for the matching area; color enhancement and/or texture enhancement is then performed on the first image according to these target pixel values, to obtain an enhanced first image (as shown in FIG. 10), so that the display of the first image is enhanced. Optionally, color enhancement and/or texture enhancement may instead be performed on both the first image and the second image according to the target pixel values, yielding an enhanced first image and an enhanced second image. When the second image is taken as the reference image, grayscale transformation is likewise performed on each pixel value of the first image and of the matching area to obtain the corresponding target pixel values, and the second image (or both images) is enhanced according to them, which facilitates generation of the target image and improves processing efficiency (a color-transfer sketch follows).
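  • The text does not name a specific color or texture enhancement method, so the following is a hedged sketch of one common choice, statistics transfer in the LAB color space; the function color_enhance and its details are illustrative assumptions.

```python
# Sketch of reference-based color enhancement via LAB statistics transfer.
import cv2
import numpy as np

def color_enhance(image, reference):
    """Match image's LAB channel statistics to those of the reference."""
    img = cv2.cvtColor(image, cv2.COLOR_BGR2LAB).astype(np.float32)
    ref = cv2.cvtColor(reference, cv2.COLOR_BGR2LAB).astype(np.float32)
    for c in range(3):  # L, A, B channels
        i_mean, i_std = img[..., c].mean(), img[..., c].std() + 1e-6
        r_mean, r_std = ref[..., c].mean(), ref[..., c].std() + 1e-6
        img[..., c] = (img[..., c] - i_mean) * (r_std / i_std) + r_mean
    img = np.clip(img, 0, 255).astype(np.uint8)
    return cv2.cvtColor(img, cv2.COLOR_LAB2BGR)
```

  • For example, color_enhance(first_img, matched_area) would enhance the first image using the matching area's statistics; swapping the arguments corresponds to enhancing the matching area of the second image instead.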
  • In summary, the first camera and the second camera are controlled to photograph the target environment; the first image captured by the first camera and the second image captured by the second camera are acquired; the image area in the second image that matches the first image is determined; and image enhancement is performed according to that area to obtain the target image. In these embodiments, the first image and/or the second image may be enhanced so that the final target image is sharp and vividly colored, improving shooting efficiency and the user's viewing experience when browsing images.
  • FIG. 11 is a schematic structural diagram of an image processing apparatus provided by an embodiment of the present application.
  • The apparatus may be deployed on the photographing device of the above method embodiments, and the device may specifically be a server.
  • The photographing device includes a first camera and a second camera, and the field angle of the first camera is smaller than the field angle of the second camera.
  • the image processing apparatus shown in FIG. 11 may be used to perform some or all of the functions in the method embodiments described in the above-mentioned FIG. 3 , FIG. 5 , FIG. 7 and FIG. 9 .
  • the detailed description of each module is as follows:
  • an acquisition unit 1101 configured to control the first camera and the second camera to simultaneously capture a target environment, and acquire a first image captured by the first camera and a second image captured by the second camera;
  • the processing unit 1102 is configured to perform image fusion processing on the first image and the second image to obtain a target image.
  • Optionally, the processing unit 1102 is specifically configured to perform image enhancement, according to the first image, on the image area in the second image that matches the first image, to obtain an enhanced second image, and to generate the target image according to the enhanced second image.
  • Optionally, the processing unit 1102 is specifically configured to divide the enhanced second image into blocks to obtain at least one block image, and to fuse the chrominance values and/or luminance values of the block images to obtain the target image (a sketch of one possible fusion follows).
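  • How the chrominance and/or luminance values of the block images are fused is not specified in the text; one speculative reading, sketched below under that assumption, softly blends each block's YUV values toward their neighborhood average so that enhanced and untouched regions join smoothly.

```python
# Speculative sketch of per-block chrominance/luminance fusion.
import cv2
import numpy as np

def fuse_blocks(enhanced, block=64, keep=0.8):
    """Blend each block's YUV values toward the local neighborhood."""
    yuv = cv2.cvtColor(enhanced, cv2.COLOR_BGR2YUV).astype(np.float32)
    # A box filter the size of a block approximates averaging over the
    # adjacent block images.
    local = cv2.blur(yuv, (block, block))
    fused = keep * yuv + (1.0 - keep) * local  # keep detail, harmonize blocks
    fused = np.clip(fused, 0, 255).astype(np.uint8)
    return cv2.cvtColor(fused, cv2.COLOR_YUV2BGR)
```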
  • Optionally, the processing unit 1102 is further configured to perform image registration on the first image and the second image and determine the image area in the second image that matches the first image.
  • Optionally, the processing unit 1102 is specifically configured to: extract feature points from the first image to obtain a first feature point set, and from the second image to obtain a second feature point set; determine, according to a preset algorithm, a target feature point set in the second feature point set that matches the first feature point set; and determine the image area corresponding to the target feature point set in the second image as the image area in the second image that matches the first image.
  • Optionally, the processing unit 1102 is specifically configured to perform color enhancement and/or texture enhancement, according to the first image, on the image area in the second image that matches the first image, to obtain a processed second image.
  • Optionally, the acquiring unit 1101 is specifically configured to acquire an image processing rule, and the processing unit 1102 is further configured to process the target image according to the image processing rule to obtain a processed target image.
  • Optionally, the processing unit 1102 is specifically configured to stitch the first image with the image area of the second image other than the area matching the first image, to obtain a stitched first image, and to generate the target image according to the stitched first image.
  • Optionally, the processing unit 1102 is specifically configured to divide the stitched first image into blocks to obtain at least one block image, and to fuse the chrominance values and/or luminance values of the block images to obtain the target image.
  • Optionally, the processing unit 1102 is specifically configured to apply first preset processing to each image area of the second image other than the area matching the first image, to obtain preprocessed image areas, and to stitch the first image with the preprocessed areas to obtain the stitched first image.
  • Optionally, the processing unit 1102 is specifically configured to apply second preset processing to the first image, and to stitch the preprocessed first image with the image area of the second image other than the area matching the first image, to obtain the stitched first image (a sketch of such stitching follows).
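  • A rough sketch of the stitching path under stated assumptions: since the magnification of a lens is roughly its focal length divided by the object distance, and both cameras share the object distance, the scale factor between the two images equals their focal-length ratio. The rectangle format and the example focal lengths below are hypothetical.

```python
# Sketch of stitching: scale the wide (second) image to the tele (first)
# image's magnification, then paste the sharper tele image over the
# matching area. Assumes match_rect lies fully inside the second image.
import cv2

def stitch(first_img, second_img, match_rect, f1=52.0, f2=26.0):
    """match_rect = (x, y, w, h): area of second_img matching first_img."""
    scale = f1 / f2  # "first preset processing": enlarge the wide image
    big = cv2.resize(second_img, None, fx=scale, fy=scale,
                     interpolation=cv2.INTER_CUBIC)
    x, y, w, h = [int(v * scale) for v in match_rect]
    canvas = big.copy()
    # Replace the matching area with the higher-detail first image.
    canvas[y:y+h, x:x+w] = cv2.resize(first_img, (w, h))
    return canvas
```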
  • Optionally, the processing unit 1102 is specifically configured to: control the first camera and the second camera to photograph the target environment, and obtain the first image captured by the first camera and the second image captured by the second camera; determine the image area in the second image that matches the first image; and perform image enhancement according to that area to obtain the target image.
  • Optionally, the processing unit 1102 performs image enhancement according to the image area in the second image that matches the first image to obtain the target image, including at least one of the following: performing image enhancement on the first image according to the matching area to obtain an enhanced first image and generating the target image from it; performing image enhancement on the second image according to the matching area to obtain an enhanced second image and generating the target image from it; or performing image enhancement on both the first image and the second image according to the matching area to obtain an enhanced first image and an enhanced second image and generating the target image from both.
  • Optionally, the processing unit 1102 performs image enhancement according to the image area in the second image that matches the first image in at least one of the following ways: taking the first image as a reference image, performing color enhancement and/or texture enhancement on the first image according to the matching area; taking the second image as a reference image, performing color enhancement and/or texture enhancement on the second image according to the matching area; taking the first image as a reference image, performing color enhancement and/or texture enhancement on both images according to the matching area; or taking the second image as a reference image, performing color enhancement and/or texture enhancement on both images according to the matching area.
  • Optionally, the processing unit 1102 is specifically configured to perform image registration on the first image and the second image to obtain the image area in the second image that matches the first image.
  • Optionally, the processing unit 1102 is specifically configured to: extract feature points from the first image to obtain a first feature point set, and from the second image to obtain a second feature point set; determine, according to a preset algorithm, a target feature point set in the second feature point set that matches the first feature point set; and determine the image area corresponding to the target feature point set in the second image as the image area in the second image that matches the first image.
  • Optionally, the processing unit 1102 is specifically configured to take the first image as a reference image and perform color enhancement and/or texture enhancement on the first image according to the image area in the second image that matches the first image, to obtain a processed first image.
  • some of the steps involved in the image processing methods shown in FIG. 3 , FIG. 5 , FIG. 7 , and FIG. 9 may be performed by various modules in the image processing apparatus shown in FIG. 11 .
  • Each unit in the image processing apparatus shown in FIG. 11 may be combined, individually or together, into one or several other modules, or a module may be further divided into multiple functionally smaller modules. The same operations can be achieved this way without affecting the technical effects of the embodiments of the present application.
  • the above-mentioned units are divided based on logical functions.
  • the function of one module may also be implemented by multiple modules, or the functions of multiple modules may be implemented by one module.
  • the image processing apparatus may also include other modules, and in practical applications, these functions may also be implemented with the assistance of other modules, and may be implemented by cooperation of multiple modules.
  • FIG. 12 is a schematic structural diagram of another photographing device provided by an embodiment of the present application.
  • the present application also provides a mobile terminal.
  • the mobile terminal includes a memory 1201, a processor 1202, and an image processing program stored in the memory 1201 and running on the processor 1202.
  • When the image processing program is executed by the processor, the steps of the image processing method in any of the above embodiments are implemented.
  • the present application further provides a computer-readable storage medium on which an image processing program is stored, and when the image processing program is executed by a processor, implements the steps of the image processing method in any of the foregoing embodiments.
  • The embodiments of the mobile terminal and the computer-readable storage medium provided in this application include all the technical features of the above embodiments of the image processing method; the expanded and explanatory content of the description is basically the same as that of the above method embodiments and is not repeated here.
  • Embodiments of the present application also provide a computer program product. The computer program product includes computer program code; when the computer program code runs on a computer, the computer executes the methods in the various possible implementations above.
  • An embodiment of the present application further provides a chip, including a memory and a processor. The memory is used to store a computer program, and the processor is used to call and run the computer program from the memory, so that a device with the chip installed executes the methods in the various possible implementations described above.
  • FIG. 13 is a schematic diagram of the hardware structure of a controller 140 provided by the present application.
  • The controller 140 includes a memory 1401 and a processor 1402. The memory 1401 stores program instructions, and the processor 1402 calls the program instructions in the memory 1401 to execute the steps performed by the controller in the above method embodiments; the implementation principles and beneficial effects are similar and are not repeated here.
  • the above-mentioned controller further includes a communication interface 1403 , and the communication interface 1403 can be connected to the processor 1402 through a bus 1404 .
  • the processor 1402 may control the communication interface 1403 to implement the receiving and transmitting functions of the controller 140 .
  • FIG. 14 is a schematic diagram of the hardware structure of a network node 150 provided by this application.
  • The network node 150 includes a memory 1501 and a processor 1502. The memory 1501 stores program instructions, and the processor 1502 calls the program instructions in the memory 1501 to execute the steps performed by the first node in the above method embodiments; the implementation principles and beneficial effects are similar and are not repeated here.
  • the above-mentioned network node further includes a communication interface 1503 , and the communication interface 1503 can be connected to the processor 1502 through a bus 1504 .
  • the processor 1502 may control the communication interface 1503 to implement the receiving and transmitting functions of the network node 150 .
  • FIG. 15 is a schematic diagram of the hardware structure of another network node 160 provided by this application.
  • The network node 160 includes a memory 1601 and a processor 1602. The memory 1601 stores program instructions, and the processor 1602 calls the program instructions in the memory 1601 to execute the steps performed by the intermediate node and the tail node in the above method embodiments; the implementation principles and beneficial effects are similar and are not repeated here.
  • the above-mentioned network node further includes a communication interface 1603 , and the communication interface 1603 can be connected to the processor 1602 through a bus 1604 .
  • the processor 1602 may control the communication interface 1603 to implement the receiving and transmitting functions of the network node 160 .
  • FIG. 16 is a schematic diagram of the hardware structure of another controller 170 provided by the present application.
  • The controller 170 includes a memory 1701 and a processor 1702. The memory 1701 stores program instructions, and the processor 1702 calls the program instructions in the memory 1701 to execute the steps performed by the controller in the above method embodiments; the implementation principles and beneficial effects are similar and are not repeated here.
  • the above-mentioned controller further includes a communication interface 1703 , and the communication interface 1703 can be connected to the processor 1702 through a bus 1704 .
  • the processor 1702 may control the communication interface 1703 to implement the receiving and transmitting functions of the controller 170 .
  • FIG. 17 is a schematic diagram of the hardware structure of still another network node 180 provided by this application.
  • The network node 180 includes a memory 1801 and a processor 1802. The memory 1801 stores program instructions, and the processor 1802 calls the program instructions in the memory 1801 to execute the steps performed by the first node in the above method embodiments; the implementation principles and beneficial effects are similar and are not repeated here.
  • the above-mentioned network node further includes a communication interface 1803 , and the communication interface 1803 can be connected to the processor 1802 through a bus 1804 .
  • the processor 1802 may control the communication interface 1803 to implement the receiving and transmitting functions of the network node 180 .
  • the above-mentioned integrated modules implemented in the form of software functional modules can be stored in a computer-readable storage medium.
  • The above software function modules are stored in a storage medium and include several instructions to enable a computer device (which may be a personal computer, a server, or a network device, etc.) or a processor to execute some of the steps of the methods of the various embodiments of the present application.
  • A computer program product includes one or more computer instructions; when the computer instructions are loaded and executed on a computer, the procedures or functions according to the embodiments of the present application are produced in whole or in part.
  • the computer may be a general purpose computer, a special purpose computer, a computer network, or other programmable device.
  • Computer instructions may be stored in a computer-readable storage medium or transmitted from one computer-readable storage medium to another; for example, the computer instructions may be transmitted from a website, computer, server, or data center to another website, computer, server, or data center over a wired connection (e.g., coaxial cable, optical fiber, or digital subscriber line (DSL)) or wirelessly (e.g., infrared, radio, or microwave).
  • A computer-readable storage medium may be any available medium accessible by a computer, or a data storage device, such as a server or a data center, that integrates one or more available media.
  • Available media may be magnetic media (e.g., floppy disks, hard disks, magnetic tapes), optical media (e.g., DVDs), or semiconductor media (e.g., solid-state drives (SSDs)), and the like.

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Human Computer Interaction (AREA)
  • Studio Devices (AREA)

Abstract

Embodiments of the present application provide an image processing method, a mobile terminal, and a storage medium. The method is applied to a photographing device that includes a first camera and a second camera, where the field angle of the first camera is smaller than the field angle of the second camera. The method includes: controlling the first camera and the second camera to photograph a target environment simultaneously, and acquiring a first image captured by the first camera and a second image captured by the second camera; and performing image fusion processing on the first image and the second image to obtain a target image. The embodiments can control two cameras to shoot simultaneously and fuse the results into a high-quality, high-definition captured image, improving the user's viewing experience when browsing images.

Description

图像处理方法、移动终端及存储介质
本申请要求于2021年2月5日提交中国专利局、申请号为202110166465.4、申请名称为“图像处理方法、移动终端及存储介质”的中国专利申请的优先权,其全部内容通过引用结合在本申请中。
技术领域
本申请涉及图像处理技术领域,具体涉及一种图像处理方法、移动终端及存储介质。
背景技术
随着拍摄设备和图像处理技术的发展越来越快速,人们对拍摄设备拍摄的图像的拍摄效果要求也越来越高。目前,拍摄设备一般是通过一个摄像头来进行图像的拍摄,但是拍摄出来的图像往往清晰度不高,出现图像模糊的现象,导致用户使用体验不佳;且由于图像清晰度直接会造成很多信息的缺失,在某些重要场合中则可能造成不必要的损失。因此,如何提高拍摄得到的图像的清晰度是有待解决的问题。
前面的叙述在于提供一般的背景信息,并不一定构成现有技术。
申请内容
针对上述技术问题,本申请提供一种图像处理方法、移动终端及存储介质,可使用户获得高清晰度的拍摄图像,提高用户浏览图像时的观感。
为解决上述技术问题,第一方面,本申请提供了一种图像处理方法,应用于拍摄设备,所述拍摄设备包括第一摄像头和第二摄像头,所述第一摄像头的视场角大小小于所述第二摄像头的视场角大小,所述方法包括:
控制所述第一摄像头和所述第二摄像头同时对目标环境进行拍摄,获取所述第一摄像头拍摄的第一图像和所述第二摄像头拍摄的第二图像;
将所述第一图像和所述第二图像进行图像融合处理,得到目标图像。
可选地,所述将所述第一图像和所述第二图像进行图像融合处理,得到目标图像,包括:
根据所述第一图像对所述第二图像中与所述第一图像相匹配的图像区域进行图像增强处理,得到增强处理后的第二图像;
根据所述增强处理后的第二图像生成目标图像。
可选地,所述根据所述增强处理后的第二图像生成目标图像,包括:
将所述增强处理后的第二图像进行分块,得到至少一个分块图像;
将各个分块图像的色度值和/或亮度值进行融合处理,得到目标图像。
可选地,所述根据所述第一图像对所述第二图像中与所述第一图像相匹配的图像区域进行图像增强处理,得到处理后的第二图像之前,所述方法还包括:
对所述第一图像和所述第二图像进行图像配准处理,确定所述第二图像中与所述第一图像相匹配的图像区域。
可选地,所述对所述第一图像和所述第二图像进行图像配准处理,确定所述第二图像中与所述第一图像相匹配的图像区域,包括:
对所述第一图像进行特征点提取,得到第一特征点集合,并对所述第二图像进行特征点提取,得到第二特征点集合;
根据预设算法从所述第二特征点集合中确定出与所述第一特征点集合相匹配的目标特征点集合;
将所述第二图像中所述目标特征点集合所对应的图像区域确定为所述第二图像中与所述第一图像相匹配的图像区域。
可选地,所述根据所述第一图像对所述第二图像中与所述第一图像相匹配的图像区域进行图像增强处理,得到处理后的第二图像,包括:
根据所述第一图像对所述第二图像中与所述第一图像相匹配的图像区域进行色彩增强处理和/或纹理增强处理,得到处理后的第二图像。
可选地,所述方法还包括:
获取图像处理规则;
根据所述图像处理规则对所述目标图像进行处理,得到处理后的目标图像。
可选地,所述将所述第一图像和所述第二图像进行图像融合处理,得到目标图像,包括:
利用所述第二图像中除与所述第一图像相匹配的图像区域之外的图像区域对所述第一图像进行拼接处理,得到拼接处理后的第一图像;
根据所述拼接处理后的第一图像生成目标图像。
可选地,所述根据所述拼接处理后的第一图像生成目标图像,包括:
将所述拼接处理后的第一图像进行分块,得到至少一个分块图像;
将各个分块图像的色度值和/或亮度值进行融合处理,得到目标图像。
可选地,所述利用所述第二图像中除与所述第一图像相匹配的图像区域之外的图像区域对所述第一图像进行拼接处理,得到拼接处理后的第一图像,包括:
对所述第二图像中除与所述第一图像相匹配的图像区域之外的各个图像区域进行第一预设处理,得到预设处理后的图像区域;
利用所述预设处理后的图像区域对所述第一图像进行拼接处理,得到拼接处理后的第一图像。
可选地,所述利用所述第二图像中除与所述第一图像相匹配的图像区域之外的图像区域对所述第一图像进行拼接处理,得到拼接处理后的第一图像,包括:
对所述第一图像进行第二预设处理;
利用所述第二图像中除与所述第一图像相匹配的图像区域之外的图像区域对预设处理后的第一图像进行拼接处理,得到拼接处理后的第一图像。
第二方面,本申请提供了另一种图像处理方法,应用于拍摄设备,所述拍摄设备包括第一摄像头和第二摄像头,所述方法包括:
控制所述第一摄像头和所述第二摄像头对目标环境进行拍摄,获取所述第一摄像头拍摄的第一图像和所述第二摄像头拍摄的第二图像;
确定所述第二图像中与所述第一图像相匹配的图像区域;
根据所述第二图像中与所述第一图像相匹配的图像区域进行图像增强处理,以得到目标图像。
可选地,所述第一摄像头的视场角大小小于所述第二摄像头的视场角大小。
可选地,所述控制所述第一摄像头和所述第二摄像头对目标环境进行拍摄,包括以下至少一种:
控制所述第一摄像头和所述第二摄像头同时对目标环境进行拍摄;
控制所述第一摄像头先对目标环境进行拍摄,控制所述第二摄像头后对目标环境进行拍摄;
控制所述第二摄像头先对目标环境进行拍摄,控制所述第一摄像头后对目标环境进行拍摄。
可选地,所述根据所述第二图像中与所述第一图像相匹配的图像区域对所述第一图像进行图像增强处理,以得到目标图像,包括:根据所述第二图像中与所述第一图像相匹配的图像区域对所述第一图像进行图像增强处理,得到增强处理后的第一图像,根据所述增强处理后的第一图像生成目标图像。
可选地,所述根据所述第二图像中与所述第一图像相匹配的图像区域对所述第一图像进行图像增强处理,以得到目标图像,包括:根据所述第二图像中与所述第一图像相匹配的图像区域对所述第二图像进行图像增强处理,得到增强处理后的第二图像,根据所述增强处理后的第二图像生成目标图像。
可选地,所述根据所述第二图像中与所述第一图像相匹配的图像区域对所述第一图像进行图像增强处理,以得到目标图像,包括:根据所述第二图像中与所述第一图像相匹配的图像区域对所述第一图像和所述第二图像进行图像增强处理,得到增强处理后的第一图像和增强处理后的第二图像,根据所述增强处理后的第一图像和所述增强处理后的第二图像生成目标图像。
可选地,所述确定所述第二图像中与所述第一图像相匹配的图像区域,包括:
对所述第一图像和所述第二图像进行图像配准处理,得到所述第二图像中与所述第一图像相匹配的图像区域。
可选地,所述对所述第一图像和所述第二图像进行图像配准处理,得到所述第二图像中与所述第一图像相匹配的图像区域,包括:
对所述第一图像进行特征点提取,得到第一特征点集合,并对所述第二图像进行特征点提取,得到第二特征点集合;
根据预设算法从所述第二特征点集合中确定出与所述第一特征点集合相匹配的目标特征点集合;
将所述第二图像中所述目标特征点集合所对应的图像区域确定为所述第二图像中与所述第一图像相匹配的图像区域。
可选地,所述根据所述第二图像中与所述第一图像相匹配的图像区域对所述第一图像进行图像增强处理,包括以下至少一种:
以所述第一图像为基准图像,根据所述第二图像中与所述第一图像相匹配的图像区域对所述第一图像进行色彩增强处理和/或纹理增强处理;
以所述第二图像为基准图像,根据所述第二图像中与所述第一图像相匹配的图像区域对所述第二图像进行色彩增强处理和/或纹理增强处理;
以所述第一图像为基准图像,根据所述第二图像中与所述第一图像相匹配的图像区域对所述第一图像和第二图像进行色彩增强处理和/或纹理增强处理;
以所述第二图像为基准图像,根据所述第二图像中与所述第一图像相匹配的图像区域对所述第一图像和第二图像进行色彩增强处理和/或纹理增强处理。
第三方面,本申请提供了一种图像处理装置,应用于拍摄设备,所述拍摄设备包括第一摄像头和第二摄像头,所述第一摄像头的视场角大小小于所述第二摄像头的视场角大小,所述装置包括;
获取单元,用于控制所述第一摄像头和所述第二摄像头同时对目标环境进行拍摄,获取所述第一摄像头拍摄的第一图像和所述第二摄像头拍摄的第二图像;
处理单元,用于将所述第一图像和所述第二图像进行图像融合处理,得到目标图像。
第四方面,本申请提供了一种移动终端,包括处理器、存储器,其中,所述存储器用于存储计算机程序,所述计算机程序包括程序指令,所述处理器被配置用于调用所述程序指令,执行如第一方面或第二方面所述的方法。
第五方面,本申请提供了一种计算机可读存储介质,其特征在于,包括:所述计算机可读存储介质存储有一条或多条指令,所述一条或多条指令适于由处理器加载并执行如第一方面或第二方面所述的方法。
如上所述,本申请的图像处理方法、装置、移动终端及存储介质,应用于拍摄设备,所述拍摄设备包括第一摄像头和第二摄像头,所述第一摄像头的视场角大小小于所述第二摄像头的视场角大小。控制所述第一摄像头和所述第二摄像头同时对目标环境进行拍摄,获取所述第一摄像头拍摄的第一图像和所述第二摄像头拍摄的第二图像;将所述第一图像和所述第二图像进行图像融合处理,得到目标图像。本申请实施例,能够同时控制两个摄像头进行拍摄,进而融合处理得到高质量、高清晰度的拍摄图像,提高用户浏览图像时的观感。
附图说明
此处的附图被并入说明书中并构成本说明书的一部分,示出了符合本申请的实施例,并与说明书一起用于解释本申请的原理。为了更清楚地说明本申请实施例的技术方案,下面将对实施例描述中所需要使用的附图作简单地介绍,显而易见地,对于本领域普通技术人员而言,在不付出创造性劳动性的前提下,还可以根据这些附图获得其他的附图。
图1是本申请实施例提供的一种实现本申请各个实施例的移动终端的硬件结构示意图;
图2是本申请实施例提供的一种通信网络系统架构图;
图3是本申请实施例提供的第一种图像处理方法的流程示意图;
图4a是本申请实施例提供的一种第一摄像头和第二摄像头的视场角有部分重合的示意图;
图4b是本申请实施例提供的一种第一摄像头的视场角完全处于第二摄像头的视场角范围内的示意图;
图5是本申请实施例提供的第二种图像处理方法的流程示意图;
图6a是本申请实施例提供的一种第一图像的示意图;
图6b是本申请实施例提供的一种第二图像的示意图;
图7是本申请实施例提供的第三种图像处理方法的流程示意图;
图8是本申请实施例提供的一种根据拼接处理后的第一图像生成目标图像的示意图;
图9是本申请实施例提供的第四种图像处理方法的流程示意图;
图10是本申请实施例提供的一种增强处理后的第一图像的示意图;
图11是本申请实施例提供的一种图像处理装置的结构示意图;
图12是本申请实施例提供的另一种移动终端的结构示意图;
图13是本申请实施例提供的一种控制器140的硬件结构示意图;
图14是本申请实施例提供的一种网络节点150的硬件结构示意图;
图15是本申请实施例提供的另一种网络节点160的硬件结构示意图;
图16是本申请实施例提供的另一种控制器170的硬件结构示意图;
图17是本申请实施例提供的又一种网络节点180的硬件结构示意图。
本申请目的的实现、功能特点及优点将结合实施例,参照附图做进一步说明。通过上述附图,已示出本申请明确的实施例,后文中将有更详细的描述。这些附图和文字描述并不是为了通过任何方式限制本申请构思的范围,而是通过参考特定实施例为本领域技术人员说明本申请的概念。
具体实施方式
这里将详细地对示例性实施例进行说明,其示例表示在附图中。下面的描述涉及附图时,除非另有表示,不同附图中的相同数字表示相同或相似的要素。以下示例性实施例中所描述的实施方式并不代表与本申请相一致的所有实施方式。相反,它们仅是与如所附权利要求书中所详述的、本申请的一些方面相一致的装置和方法的例子。
需要说明的是,在本文中,术语“包括”、“包含”或者其任何其他变体意在涵盖非排他性的包含,从而使得包括一系列要素的过程、方法、物品或者装置不仅包括那些要素,而且还包括没有明确列出的其他要素,或者是还包括为这种过程、方法、物品或者装置所固有的要素。在没有更多限制的情况下,由语句“包括一个……”限定的要素,并不排除在包括该要素的过程、方法、物品或者装置中还存在另外的相同要素,此外,本申请不同实施例中具有同样命名的部件、特征、要素可能具有相同含义,也可能具有不同含义,其具体含义需以其在该具体实施例中的解释或者进一步结合该具体实施例中上下文进行确定。
应当理解,尽管在本文可能采用术语第一、第二、第三等来描述各种信息,但这些信息不应限于这些术语。这些术语仅用来将同一类型的信息彼此区分开。例如,在不脱离本文范围的情况下,第一信息也可以被称为第二信息,类似地,第二信息也可以被称为第一信息。取决于语境,如在此所使用的词语"如果"可以被解释成为"在……时"或"当……时"或"响应于确定"。再者,如同在本文中所使用的,单数形式“一”、“一个”和“该”旨在也包括复数形式,除非上下文中有相反的指示。应当进一步理解,术语“包含”、“包括”表明存在所述的特征、步骤、操作、元件、组件、项目、种类、和/或组,但不排除一个或多个其他特征、步骤、操作、元件、组件、项目、种类、和/或组的存在、出现或添加。本申请使用的术语“或”、“和/或”、“包括以下至少一个”等可被解释为包括性的,或意味着任一个或任何组合。例如,“包括以下至少一个:A、B、C”意味着“以下任一个:A;B;C;A和B;A和C;B和C;A和B和C”,再如,“A、B或C”或者“A、B和/或C”意味着“以下任一个:A;B;C;A和B;A和C;B和C;A和B和C”。仅当元件、功能、步骤或操作的组合在某些方式下内在地互相排斥时,才会出现该定义的例外。
应该理解的是,虽然本申请实施例中的流程图中的各个步骤按照箭头的指示依次显示,但是这些步骤并不是必然按照箭头指示的顺序依次执行。除非本文中有明确的说明,这些步骤的执行并没有严格的顺序限制,其可以以其他的顺序执行。而且,图中的至少一部分步骤可以包括多个子步骤或者多个阶段,这些子步骤或者阶段并不必然是在同一时刻执行完成,而是可以在不同的时刻执行,其执行顺序也不必然是依次进行,而是可以与其他步骤或者其他步骤的子步骤或者阶段的至少一部分轮流或者交替地执行。
取决于语境,如在此所使用的词语“如果”、“若”可以被解释成为“在……时”或“当……时”或“响应于确定”或“响应于检测”。类似地,取决于语境,短语“如果确定”或“如果检测(陈述的条件或事件)”可以被解释成为“当确定时”或 “响应于确定”或“当检测(陈述的条件或事件)时”或“响应于检测(陈述的条件或事件)”。
需要说明的是,在本文中,采用了诸如S301、S302等步骤代号,其目的是为了更清楚简要地表述相应内容,不构成顺序上的实质性限制,本领域技术人员在具体实施时,可能会先执行S302后执行S301等,但这些均应在本申请的保护范围之内。
应当理解,此处所描述的具体实施例仅仅用以解释本申请,并不用于限定本申请。
在后续的描述中,使用用于表示元件的诸如“模块”、“部件”或者“单元”的后缀仅为了有利于本申请的说明,其本身没有特定的意义。因此,“模块”、“部件”或者“单元”可以混合地使用。
拍摄设备可以以各种形式来实施。例如,本申请中描述的拍摄设备可以包括诸如手机、平板电脑、笔记本电脑、掌上电脑、个人数字助理(Personal Digital Assistant,PDA)、便捷式媒体播放器(Portable Media Player,PMP)、导航装置、可穿戴设备、智能手环、计步器等具有摄像头的移动终端,以及诸如数字TV、台式计算机等具有摄像头的固定终端。
后续描述中将以移动终端为例进行说明,本领域技术人员将理解的是,除了特别用于移动目的的元件之外,根据本申请的实施方式的构造也能够应用于固定类型的终端。
请参阅图1,其是本申请实施例提供的一种实现本申请各个实施例的移动终端的硬件结构示意图,该移动终端100可以包括:RF(Radio Frequency,射频)单元101、WiFi模块102、音频输出单元103、A/V(音频/视频)输入单元104、传感器105、显示单元106、用户输入单元107、接口单元108、存储器109、处理器110、以及电源111等部件。本领域技术人员可以理解,图1中示出的移动终端结构并不构成对移动终端的限定,移动终端可以包括比图示更多或更少的部件,或者组合某些部件,或者不同的部件布置。
下面结合图1对移动终端的各个部件进行具体的介绍:
射频单元101可用于收发信息或通话过程中,信号的接收和发送,具体的,将基站的下行信息接收后,给处理器110处理;另外,将上行的数据发送给基站。通常,射频单元101包括但不限于天线、至少一个放大器、收发信机、耦合器、低噪声放大器、双工器等。此外,射频单元101还可以通过无线通信与网络和其他设备通信。上述无线通信可以使用任一通信标准或协议,包括但不限于GSM(Global System of Mobile communication,全球移动通讯系统)、GPRS(General Packet Radio Service,通用分组无线服务)、CDMA2000(Code Division Multiple Access 2000,码分多址2000)、WCDMA(Wideband Code Division Multiple Access,宽带码分多址)、TD-SCDMA(Time Division-Synchronous Code Division Multiple Access,时分同步码分多址)、FDD-LTE(Frequency Division Duplexing-Long Term Evolution,频分双工长期演进)和TDD-LTE(Time Division Duplexing-Long Term Evolution,分时双工长期演进)等。
WiFi属于短距离无线传输技术,移动终端通过WiFi模块102可以帮助用户收发电子邮件、浏览网页和访问流式媒体等,它为用户提供了无线的宽带互联网访问。虽然图1示出了WiFi模块102,但是可以理解的是,其并不属于移动终端的必须构成,完全可以根据需要在不改变申请的本质的范围内而省略。
音频输出单元103可以在移动终端100处于呼叫信号接收模式、通话模式、记录模式、语音识别模式、广播接收模式等等模式下时,将射频单元101或WiFi模块102接收的或者在存储器109中存储的音频数据转换成音频信号并且输出为声音。而且,音频输出单元103还可以提供与移动终端100执行的特定功能相关的音频输出(例如,呼叫信号接收声音、消息接收声音等等)。音频输出单元103可以包括扬声器、蜂鸣器等等。
A/V输入单元104用于接收音频或视频信号。A/V输入单元104可以包括图形处理器(Graphics Processing Unit,GPU)1041和麦克风1042,图形处理器1041对在视频捕获模式或图像捕获模式中由图像捕获装置(如摄像头)获得的静态图片或视频的图像数据进行处理。处理后的图像帧可以显示在显示单元106上。经图形处理器1041处理后的图像帧可以存储在存储器109(或其它存储介质)中或者经由射频单元101或WiFi模块102进行发送。麦克风1042可以在电话通话模式、记录模式、语音识别模式等等运行模式中经由麦克风1042接收声音(音频数据),并且能够将这样的声音处理为音频数据。处理后的音频(语音)数据可以在电话通话模式的情况下转换为可经由射频单元101发送到移动通信基站的格式输出。麦克风1042可以实施各种类型的噪声消除(或抑制)算法以消除(或抑制)在接收和发送音频信号的过程中产生的噪声或者干扰。
移动终端100还包括至少一种传感器105,比如光传感器、运动传感器以及其他传感器。可选地,光传感器包括环境光 传感器及接近传感器,可选地,环境光传感器可根据环境光线的明暗来调节显示面板1061的亮度,接近传感器可在移动终端100移动到耳边时,关闭显示面板1061和/或背光。作为运动传感器的一种,加速计传感器可检测各个方向上(一般为三轴)加速度的大小,静止时可检测出重力的大小及方向,可用于识别手机姿态的应用(比如横竖屏切换、相关游戏、磁力计姿态校准)、振动识别相关功能(比如计步器、敲击)等;至于手机还可配置的指纹传感器、压力传感器、虹膜传感器、分子传感器、陀螺仪、气压计、湿度计、温度计、红外线传感器等其他传感器,在此不再赘述。
显示单元106用于显示由用户输入的信息或提供给用户的信息。显示单元106可包括显示面板1061,可以采用液晶显示器(Liquid Crystal Display,LCD)、有机发光二极管(Organic Light-Emitting Diode,OLED)等形式来配置显示面板1061。
用户输入单元107可用于接收输入的数字或字符信息,以及产生与移动终端的用户设置以及功能控制有关的键信号输入。可选地,用户输入单元107可包括触控面板1071以及其他输入设备1072。触控面板1071,也称为触摸屏,可收集用户在其上或附近的触摸操作(比如用户使用手指、触笔等任何适合的物体或附件在触控面板1071上或在触控面板1071附近的操作),并根据预先设定的程式驱动相应的连接装置。触控面板1071可包括触摸检测装置和触摸控制器两个部分。可选地,触摸检测装置检测用户的触摸方位,并检测触摸操作带来的信号,将信号传送给触摸控制器;触摸控制器从触摸检测装置上接收触摸信息,并将它转换成触点坐标,再送给处理器110,并能接收处理器110发来的命令并加以执行。此外,可以采用电阻式、电容式、红外线以及表面声波等多种类型实现触控面板1071。除了触控面板1071,用户输入单元107还可以包括其他输入设备1072。可选地,其他输入设备1072可以包括但不限于物理键盘、功能键(比如音量控制按键、开关按键等)、轨迹球、鼠标、操作杆等中的一种或多种,具体此处不做限定。
可选地,触控面板1071可覆盖显示面板1061,当触控面板1071检测到在其上或附近的触摸操作后,传送给处理器110以确定触摸事件的类型,随后处理器110根据触摸事件的类型在显示面板1061上提供相应的视觉输出。虽然在图1中,触控面板1071与显示面板1061是作为两个独立的部件来实现移动终端的输入和输出功能,但是在某些实施例中,可以将触控面板1071与显示面板1061集成而实现移动终端的输入和输出功能,具体此处不做限定。
接口单元108用作至少一个外部装置与移动终端100连接可以通过的接口。例如,外部装置可以包括有线或无线头戴式耳机端口、外部电源(或电池充电器)端口、有线或无线数据端口、存储卡端口、用于连接具有识别模块的装置的端口、音频输入/输出(I/O)端口、视频I/O端口、耳机端口等等。接口单元108可以用于接收来自外部装置的输入(例如,数据信息、电力等等)并且将接收到的输入传输到移动终端100内的一个或多个元件或者可以用于在移动终端100和外部装置之间传输数据。
存储器109可用于存储软件程序以及各种数据。存储器109可主要包括存储程序区和存储数据区,可选地,存储程序区可存储操作系统、至少一个功能所需的应用程序(比如声音播放功能、图像播放功能等)等;存储数据区可存储根据手机的使用所创建的数据(比如音频数据、电话本等)等。此外,存储器109可以包括高速随机存取存储器,还可以包括非易失性存储器,例如至少一个磁盘存储器件、闪存器件、或其他易失性固态存储器件。
处理器110是移动终端的控制中心,利用各种接口和线路连接整个移动终端的各个部分,通过运行或执行存储在存储器109内的软件程序和/或模块,以及调用存储在存储器109内的数据,执行移动终端的各种功能和处理数据,从而对移动终端进行整体监控。处理器110可包括一个或多个处理单元;优选的,处理器110可集成应用处理器和调制解调处理器,可选地,应用处理器主要处理操作系统、用户界面和应用程序等,调制解调处理器主要处理无线通信。可以理解的是,上述调制解调处理器也可以不集成到处理器110中。
移动终端100还可以包括给各个部件供电的电源111(比如电池),优选的,电源111可以通过电源管理系统与处理器110逻辑相连,从而通过电源管理系统实现管理充电、放电、以及功耗管理等功能。
尽管图1未示出,移动终端100还可以包括蓝牙模块等,在此不再赘述。
为了便于理解本申请实施例,下面对本申请的移动终端所基于的通信网络系统进行描述。
请参阅图2,图2为本申请实施例提供的一种通信网络系统架构图,该通信网络系统为通用移动通信技术的LTE系统,该LTE系统包括依次通讯连接的UE(User Equipment,用户设备)201,E-UTRAN(Evolved UMTS Terrestrial Radio Access  Network,演进式UMTS陆地无线接入网)202,EPC(Evolved Packet Core,演进式分组核心网)203和运营商的IP业务204。
可选地,UE201可以是上述移动终端100,此处不再赘述。
E-UTRAN202包括eNodeB2021和其它eNodeB2022等。可选地,eNodeB2021可以通过回程(backhaul)(例如X2接口)与其它eNodeB2022连接,eNodeB2021连接到EPC203,eNodeB2021可以提供UE201到EPC203的接入。
EPC203可以包括MME(Mobility Management Entity,移动性管理实体)2031,HSS(Home Subscriber Server,归属用户服务器)2032,其它MME2033,SGW(Serving Gate Way,服务网关)2034,PGW(PDN Gate Way,分组数据网络网关)2035和PCRF(Policy and Charging Rules Function,政策和资费功能实体)2036等。可选地,MME2031是处理UE201和EPC203之间信令的控制节点,提供承载和连接管理。HSS2032用于提供一些寄存器来管理诸如归属位置寄存器(图中未示)之类的功能,并且保存有一些有关服务特征、数据速率等用户专用的信息。所有用户数据都可以通过SGW2034进行发送,PGW2035可以提供UE 201的IP地址分配以及其它功能,PCRF2036是业务数据流和IP承载资源的策略与计费控制策略决策点,它为策略与计费执行功能单元(图中未示)选择及提供可用的策略和计费控制决策。
IP业务204可以包括因特网、内联网、IMS(IP Multimedia Subsystem,IP多媒体子系统)或其它IP业务等。
虽然上述以LTE系统为例进行了介绍,但本领域技术人员应当知晓,本申请不仅仅适用于LTE系统,也可以适用于其他无线通信系统,例如GSM、CDMA2000、WCDMA、TD-SCDMA以及未来新的网络系统等,此处不做限定。
基于上述移动终端硬件结构以及通信网络系统,提出本申请各个实施例。
请参见图3,图3为本申请实施例提供的第一种图像处理方法的流程图。本申请实施例的所述方法可以由拍摄设备来执行,该拍摄设备包括第一摄像头和第二摄像头,第一摄像头的视场角大小小于第二摄像头的视场角大小;可选地,所述方法可以由服务器来执行,所述方法具体可包括如下步骤:
S301、控制第一摄像头和第二摄像头同时对目标环境进行拍摄,获取第一摄像头拍摄的第一图像和第二摄像头拍摄的第二图像。
本申请实施例中,目标环境是指第一摄像头和第二摄像头要进行拍摄的区域。本申请中的第一摄像头和第二摄像头位于拍摄设备的同一侧,第一摄像头和第二摄像头的视场角有部分重合(如图4a所示),可选地,第一摄像头的视场角可以完全处于第二摄像头的视场角范围内(如图4b所示)。可以理解的是,本申请中用视场角来反映画面的拍摄范围,焦距越大,视场角越小,在感光元件上形成的画面范围越小;反之,焦距越小,视场角越大,在感光元件上形成的画面范围越大。针对于摄像头来说,通过改变镜头的焦距,可以改变镜头的放大倍数,改变拍摄图像的大小。当物体与镜头的距离很远的时候,镜头的放大倍数≈焦距/物距。增加镜头的焦距,放大倍数增大,可以将远景拉近,画面的范围小,远景的细节看得更清楚;如果减少镜头的焦距,放大倍数减少,画面的范围扩大,能看到更大的场景。也就是说,本申请中的第一摄像头可以是标准摄像头,第二摄像头可以是广角摄像头。本申请中当用户开启拍摄模式时,同时打开第一摄像头和第二摄像头,若拍摄设备接收到用户的拍摄指令,则控制第一摄像头和第二摄像头同时对目标环境进行拍摄,进而获取到第一摄像头拍摄的第一图像和第二摄像头拍摄的第二图像。
在一种可选的实施方式中,所述第一摄像头的拍摄视区与第二摄像头的拍摄视区有重叠部分,控制所述第一摄像头和所述第二摄像头同时对目标环境进行拍摄,获取所述第一摄像头拍摄的第一图像和所述第二摄像头拍摄的第二图像;将所述第一图像和所述第二图像进行图像融合处理,得到目标图像。
本申请实施例中,当用户开启拍摄模式时,同时打开第一摄像头和第二摄像头,若拍摄设备接收到用户的拍摄指令,则控制第一摄像头和第二摄像头同时对目标环境进行拍摄,进而获取到第一摄像头拍摄的第一图像和第二摄像头拍摄的第二图像。也就是说,通过拍摄视区有重叠部分的两个摄像头同时进行拍摄,得到两张有重叠区域的图像,从而可将两张图片进行融合处理,不仅增强了重叠部分的图像显示,还增大了任意一张图片的拍摄视区,从而将两个摄像头拍摄视区中的环境区域都融合到一张图像中,提高了用户浏览图像时的观感。
S302、将第一图像和第二图像进行图像融合处理,得到目标图像。
本申请实施例中,将两个视场角大小不一样的摄像头分别同时获取的第一图像和第二图像,进行融合处理,得到目标图像。在一种可选的实施方式中,所述方法还可以包括:获取图像处理规则;根据所述图像处理规则对所述目标图像进行处理, 得到处理后的目标图像。本申请中在得到目标图像后,获取拍摄设备中的图像处理规则,对目标图像进行处理,例如可以是算法叠加、压缩、旋转、编码等,算法叠加可以是针对人脸的算法(美颜、瘦脸、大眼、磨皮等)、也可以是针对场景(冬日暖阳、清凉夏日等)。最后,经过所有的处理之后,得到最终处理后的目标图像。
在本申请实施例中,图像处理方法应用于拍摄设备,所述拍摄设备包括第一摄像头和第二摄像头,所述第一摄像头的视场角大小小于所述第二摄像头的视场角大小。控制所述第一摄像头和所述第二摄像头同时对目标环境进行拍摄,获取所述第一摄像头拍摄的第一图像和所述第二摄像头拍摄的第二图像;将所述第一图像和所述第二图像进行图像融合处理,得到目标图像。本申请实施例,能够同时控制两个摄像头进行拍摄,进而融合处理得到高质量、高清晰度的拍摄图像,提高用户浏览图像时的观感。
请参见图5,图5是本申请实施例提供的第二种图像处理方法的流程图。本申请实施例的所述方法可以由拍摄设备来执行,该拍摄设备包括第一摄像头和第二摄像头,第一摄像头的视场角大小小于第二摄像头的视场角大小;可选地,所述方法可以由服务器来执行,所述方法具体可包括如下步骤:
S501、控制第一摄像头和第二摄像头同时对目标环境进行拍摄,获取第一摄像头拍摄的第一图像和第二摄像头拍摄的第二图像。
需要说明的是,本申请实施例中的步骤S501具体可参见上述实施例中步骤S301,本申请实施例不再赘述。
S502、对第一图像和第二图像进行图像配准处理,确定第二图像中与第一图像相匹配的图像区域。
本申请实施例中,以第二图像为基准图像(如图6b所示),进行图像配准处理,确定第二图像中与第一图像(如图6a所示)相匹配的图像区域。可以理解的是,图像配准是将不同时间、不同传感器(成像设备)或不同条件下(天候、照度、摄像位置和角度等)获取的两幅或多幅图像进行匹配、叠加的过程。本申请中第一摄像头与第二摄像头的位置并不重叠,则存在一定的位置差异。在一种可选的实施方式中,所述对所述第一图像和所述第二图像进行图像配准处理,确定所述第二图像中与所述第一图像相匹配的图像区域,可以包括:对所述第一图像进行特征点提取,得到第一特征点集合,并对所述第二图像进行特征点提取,得到第二特征点集合;根据预设算法从所述第二特征点集合中确定出与所述第一特征点集合相匹配的目标特征点集合;将所述第二图像中所述目标特征点集合所对应的图像区域确定为所述第二图像中与所述第一图像相匹配的图像区域。本申请中可通过立体配准或者特征点配准对第一图像和第二图像进行配准,当通过特征点配准时,可利用图像识别算法识别出一组图像的特征点,分别得到含有两幅图像特定点对应特征向量的特征向量集,将两个特征向量集中的特征点进行匹配后,删除错误匹配点,得到匹配校正后的结果。具体的,运用SIFT(Scale-invariant feature transform,尺度不变特征转换)算法识别第一图像和第二图像的特征点,分别生成两幅图像的特征向量集,将两特征向量集利用最优节点有限算法(Best-bin-first算法,BBF算法)进行匹配后,采用RANSAC算法剔除错匹配点,进行匹配校正。进而通过精确的匹配,确定出第二图像中与第一图像相匹配的图像区域(如图6b所示,实线框中的图像区域则为第二图像与第一图像相匹配的图像区域)。
S503、根据第一图像对第二图像中与第一图像相匹配的图像区域进行图像增强处理,得到增强处理后的第二图像。
本申请实施例中,通过第一摄像头拍摄的第一图像对第二图像中与第一图像相匹配的图像进行图像增强处理,使得第二图像中与第一图像相匹配的图像显示得到增强,而第一图像中边缘部分的图像当然也变得更加清晰。可选地,可对所述第一图像和所述第二图像中与第一图像相匹配的图像区域进行分块处理,得到第一图像分块处理后的多个第一分块图像和所述第二图像中与第一图像相匹配的图像区域的多个第二分块图像;对所述各个第一分块图像和第二分块图像的各个像素值进行灰度变换,得到所述多个第一分块图像的第一目标像素值和所述多个第二分块图像的第二目标像素值,根据第一目标像素值和第二目标像素值对第二图像中与第一图像相匹配的图像进行图像增强,进而得到增强处理后的第二图像,使得用户获得高清晰、高质量的图像,提高用户浏览图像的观感。
在一种可选的实施方式中,所述根据所述第一图像对所述第二图像中与所述第一图像相匹配的图像区域进行图像增强处理,得到增强处理后的第二图像,包括:根据所述第一图像对所述第二图像中与所述第一图像相匹配的图像区域进行色彩增强处理和/或纹理增强处理,得到处理后的第二图像。本申请实施例中,可对所述第一图像和第二图像中与所述第一图像相匹配的图像区域的各个像素值进行灰度变换,得到所述第一图像的目标像素值和所述第二图像中与所述第一图像相匹配的图像区域的目标像素值;根据所述第一图像的各个目标像素值和第二图像中与所述第一图像相匹配的图像区域的各个目标像素 值对第二图像中与所述第一图像相匹配的图像区域进行色彩增强和/或纹理增强,得到增强处理后的第二图像,使得增强第二图像中部分图像的显示,进一步提高第二图像的清晰度,提高用户的使用体验。
S504、根据增强处理后的第二图像生成目标图像。
本申请实施例中,得到增强处理后的第二图像后,则根据增强处理后的第二图像生成目标图像,在一种可选的实施方式中,所述根据所述增强处理后的第二图像生成目标图像,可以包括:将所述增强处理后的第二图像进行分块,得到至少一个分块图像;将各个分块图像的色度值和/或亮度值进行融合处理,得到目标图像。本申请中第二图像中与第一图像相匹配的图像区域得到的图像增强,为使得整个第二图像更加和谐,要将图像增强部分的图像区域与第二图像中的其他图像区域衔接得更加自然,则相应的要对图像增强部分的边缘和第二图像衔接的边缘以及除图像增强部分的图像区域进行相应调整。具体的,可将增强处理后的第二图像进行分块,将各个分块图像的色度值和/或亮度值进行融合处理,则整张图更加和谐,给用户较好的观感。
在一种可选的实施方式中,所述方法还可以包括:获取图像处理规则;根据所述图像处理规则对所述目标图像进行处理,得到处理后的目标图像。本申请中在得到目标图像后,获取拍摄设备中的图像处理规则,对目标图像进行处理,例如可以是算法叠加、压缩、旋转、编码等,算法叠加可以是针对人脸的算法、也可以是针对场景变换。最后,经过所有的处理之后,得到最终处理后的目标图像。
在本申请实施例中,控制第一摄像头和第二摄像头同时对目标环境进行拍摄,获取第一摄像头拍摄的第一图像和第二摄像头拍摄的第二图像;对第一图像和第二图像进行图像配准处理,确定第二图像中与第一图像相匹配的图像区域;根据第一图像对第二图像中与第一图像相匹配的图像区域进行图像增强处理,得到增强处理后的第二图像;根据增强处理后的第二图像生成目标图像。本申请实施例可便捷地将第二图像中的部分图像进行增强处理,大大地降低了拍照的难度,更便捷地拍到高质量、高清晰的图像,提高用户拍照的效率,提高用户的使用提样。
请参见图7,图7是本申请实施例提供的第三种图像处理方法的流程图。本申请实施例的所述方法可以由拍摄设备来执行,该拍摄设备包括第一摄像头和第二摄像头,第一摄像头的视场角大小小于第二摄像头的视场角大小;可选地,所述方法可以由服务器来执行,所述方法具体可包括如下步骤:
S701、控制第一摄像头和第二摄像头同时对目标环境进行拍摄,获取第一摄像头拍摄的第一图像和第二摄像头拍摄的第二图像。
S702、对第一图像和第二图像进行图像配准处理,确定第二图像中与所述第一图像相匹配的图像区域。
需要说明的是,本申请实施例中的步骤S701-S702具体可参见上述实施例中步骤S501-S502,本申请实施例不再赘述。
S703、利用第二图像中除与第一图像相匹配的图像区域之外的图像区域对所述第一图像进行拼接处理,得到拼接处理后的第一图像。
本申请实施例中,利用第二图像中除与第一图像相匹配的图像区域之外的图像区域对所述第一图像进行拼接处理,得到拼接处理后的第一图像。在一种可选的实施方式中,所述利用所述第二图像中除与所述第一图像相匹配的图像区域之外的图像区域对所述第一图像进行拼接处理,得到拼接处理后的第一图像,可以包括:对所述第二图像中除与所述第一图像相匹配的图像区域之外的各个图像区域进行第一预设处理,得到预设处理后的图像区域;利用所述预设处理后的图像区域对所述第一图像进行拼接处理,得到拼接处理后的第一图像。本申请中第一预设处理可以是放大处理和/或缩小处理,放大或缩小处理的倍数可根据第一摄像头和第二摄像头的焦距进行确定。可以理解的是,由于镜头的放大倍数约等于焦距除以物距,可知的是第一摄像头和第二摄像头在同一个拍摄设备上,且同时进行拍摄,则物距是相同的,那么第一图像和第二图像的放大倍数之间的关系就是焦距的倍数关系,则为了两张相同尺寸大小的图像进行更好的拼接,可将第二图像中除与所述第一图像相匹配的图像区域之外的各个图像区域进行第一预设处理得到图像的放大或缩小倍数与第一图像中图像的放大倍数是相同的,从而可更好地进行图像的拼接,进而扩大第一图像的图像显示区域,提高用户拍摄较大拍摄区域图像的效率。
在一种可选的实施方式中,所述利用所述第二图像中除与所述第一图像相匹配的图像区域之外的图像区域对所述第一图像进行拼接处理,得到拼接处理后的第一图像,可以包括:对所述第一图像进行第二预设处理;利用所述第二图像中除与所述第一图像相匹配的图像区域之外的图像区域对预设处理后的第一图像进行拼接处理,得到拼接处理后的第一图像。本申请实施例中,第二预设处理可以是放大处理和/或缩小处理,为使得两张相同尺寸大小的图像进行更好的拼接,则可根据第一 摄像头和第二摄像头的焦距的关系,对第一图像进行放大或缩小处理,使得第一图像的尺寸大小和第二图像中与所述第一图像相匹配的图像区域的尺寸大小一致,进而方便进行图像的拼接,提高拍摄的效率。
S704、根据拼接处理后的第一图像生成目标图像。
本申请实施例中,利用第二图像中除与第一图像相匹配的图像区域之外的图像区域对所述第一图像进行拼接处理,得到拼接处理后的第一图像。在一种可选的实施方式中,所述根据所述拼接处理后的第一图像生成目标图像,包括:将所述拼接处理后的第一图像进行分块,得到至少一个分块图像;将各个分块图像的色度值和/或亮度值进行融合处理,得到目标图像。本申请中为使得拼接处理后的第一图像更加和谐,要将第一图像与第二图像中除与第一图像相匹配的图像之外的图像区域衔接得更加自然,则相应的要对第一图像的边缘和第二图像中与第一图像衔接的边缘以及所述第二图像中除与所述第一图像相匹配的图像区域之外的各个图像区域进行相应调整。具体的,可将拼接处理后的第一图像进行分块,将各个分块图像的色度值和/或亮度值进行融合处理,则整张图更加和谐,给用户较好的观感。在一种可选的方式中,在进行拼接处理时,要进行边缘相似性度量,使得第一图像能够与第二图像中除与第一图像相匹配的图像区域之外的图像区域能够较好的拼接。如图8所示,可对第一图像的边缘部分、第二图像中的拼接部分与第一图像相邻的边缘部分进行色彩差异度量和纹理差异度量,进相应的对色彩和纹理进行调整,使得拼接得更加自然,色彩、纹理过渡自然,提高用户浏览图像时的观感。
在一种可选的实施例中,第一摄像头和第二摄像头的视场角大小相同,第一摄像头的拍摄视区与第二摄像头的拍摄视区部分重叠,控制第一摄像头和第二摄像头同时对目标环境进行拍摄,获取第一摄像头拍摄的第一图像和第二摄像头拍摄的第二图像,对第一图像和第二图像进行图像配准处理,确定第二图像中与所述第一图像相匹配的图像区域,利用第二图像中除与第一图像相匹配的图像区域之外的图像区域对所述第一图像进行拼接处理,得到拼接处理后的第一图像。本申请实施例中第一摄像头可以是标准摄像头,第二摄像头可以是黑白摄像头,通过本申请中的如此步骤方法,利用黑白摄像头数据对标准摄像头的彩色图像进行融合,可提升第一图像在暗光下的颜色表现,提高第一图像的质量、清晰度,提高用户的使用体验。
在本申请实施例中,控制第一摄像头和第二摄像头同时对目标环境进行拍摄,获取第一摄像头拍摄的第一图像和第二摄像头拍摄的第二图像,对第一图像和第二图像进行图像配准处理,确定第二图像中与所述第一图像相匹配的图像区域;利用第二图像中除与第一图像相匹配的图像区域之外的图像区域对所述第一图像进行拼接处理,得到拼接处理后的第一图像;根据拼接处理后的第一图像生成目标图像。本申请实施例,可扩大第一图像的视场角范围,提高用户浏览图像时的观感。
请参见图9,图9是本申请实施例提供的第四种图像处理方法的流程图。本申请实施例的所述方法可以由拍摄设备来执行,该拍摄设备包括第一摄像头和第二摄像头,可选地,所述方法也可以由服务器来执行,所述方法具体可包括如下步骤:
S901、控制第一摄像头和第二摄像头对目标环境进行拍摄,获取第一摄像头拍摄的第一图像和第二摄像头拍摄的第二图像。
本申请实施例中,当用户开启拍摄模式时,打开第一摄像头和第二摄像头,若拍摄设备接收到用户的拍摄指令,则控制第一摄像头和第二摄像头对目标环境进行拍摄,进而获取到第一摄像头拍摄的第一图像和第二摄像头拍摄的第二图像,其中,第一摄像头和第二摄像头的视场角大小可相同或不同。可选地,第一摄像头的视场角大小小于第二摄像头的视场角大小。可选地,第一摄像头和第二摄像头的视场角部分重合或者一个视场角处于另一个视场角范围内,如图4a所示,第一摄像头和第二摄像头的视场角部分重合,或者如图4b所示,第一摄像头的视场角可完全处于第二摄像头的视场角范围内。
在一种可选的实施方式中,所述控制所述第一摄像头和所述第二摄像头对目标环境进行拍摄,包括以下至少一种:控制所述第一摄像头和所述第二摄像头同时对目标环境进行拍摄;控制所述第一摄像头先对目标环境进行拍摄,控制所述第二摄像头后对目标环境进行拍摄;控制所述第二摄像头先对目标环境进行拍摄,控制所述第一摄像头后对目标环境进行拍摄。本申请实施例中,拍摄设备的位置不发生变化,第一摄像头和第二摄像头的相对位置确定,拍摄设备接收到拍摄指令时,第一摄像头和第二摄像头对目标环境的拍摄顺序不固定,其具体可根据用户自定义设置,或者根据拍摄设备的耗电量/图像处理的效率,进行拍摄顺序的优先级确定。例如第一摄像头和第二摄像头同时进行拍摄较为省电,则第一摄像头和第二摄像头同时进行拍摄优先级最高;例如,第一摄像头和第二摄像头同时进行拍摄最能提高处理的效率,则第一摄像头和第二摄像头同时进行拍摄优先级最高。另外,控制第一摄像头和第二摄像头对目标环境进行拍摄可根据系统设置,如控制所述第一摄像头和所述第二摄像头同时对目标环境进行拍摄,或者控制所述第一摄像头先对目标环境进行拍摄,控制所述第二摄像头后对目 标环境进行拍摄;或者控制所述第二摄像头先对目标环境进行拍摄,控制所述第一摄像头后对目标环境进行拍摄。
S902、确定第二图像中与第一图像相匹配的图像区域。
在一种可选的实施方式中,对第一图像和第二图像进行图像配准处理,得到第二图像中与第一图像相匹配的图像区域。
本申请实施例中,以第一图像为基准图像,进行图像配准处理,确定第二图像中与第一图像相匹配的图像区域。在一种可选的实施方式中,所述对所述第一图像和所述第二图像进行图像配准处理,得到所述第二图像中与所述第一图像相匹配的图像区域,可以包括:对所述第一图像进行特征点提取,得到第一特征点集合,并对所述第二图像进行特征点提取,得到第二特征点集合;根据预设算法从所述第二特征点集合中确定出与所述第一特征点集合相匹配的目标特征点集合;将所述第二图像中所述目标特征点集合所对应的图像区域确定为所述第二图像中与所述第一图像相匹配的图像区域。本申请中可通过立体配准或者特征点配准对第一图像和第二图像进行配准,当通过特征点配准时,可利用图像识别算法识别出一组图像的特征点,分别得到含有两幅图像特定点对应特征向量的特征向量集,将两个特征向量集中的特征点进行匹配后,删除错误匹配点,得到匹配校正后的结果。具体的,运用SIFT(Scale-invariant feature transform,尺度不变特征转换)算法识别第一图像和第二图像的特征点,分别生成两幅图像的特征向量集,将两特征向量集利用最优节点有限算法(Best-bin-first算法,BBF算法)进行匹配后,采用RANSAC算法剔除错匹配点,进行匹配校正。进而通过精确的匹配,确定出第二图像中与第一图像相匹配的图像区域。
S903、根据第二图像中与第一图像相匹配的图像区域进行图像增强处理,以得到目标图像。
本申请实施例中,通过第二摄像头拍摄的第二图像中与第一图像相匹配的图像区域进行图像增强处理,使得第一图像和/或第二图像的图像显示得到增强。在一种可选的实施方式中,所述根据所述第二图像中与所述第一图像相匹配的图像区域进行图像增强处理,以得到目标图像,可以包括:根据第二图像中与第一图像相匹配的图像区域对第一图像进行图像增强处理,得到增强处理后的第一图像,根据增强处理后的第一图像生成目标图像。本申请实施例中,将第二图像和第一图像作图像配准处理,得到第二图像中与第一图像相匹配的图像区域,根据该相匹配的图像区域对第一图像进行图像增强处理。可选的,可对第二图像中与第一图像相匹配的图像区域和第一图像进行分块处理,得到第二图像中与第一图像相匹配的图像区域的多个第三分块图像和第一图像分块处理后的多个第四分块图像,第三分块图像与第四分块图像的大小相同;对所述各个第三分块图像和第四分块图像的各个像素值进行灰度变换,得到所述多个第三分块图像的第三目标像素值和所述多个第四分块图像的第四目标像素值,根据第三目标像素值和第四目标像素值对第一图像进行图像增强,进而得到增强处理后的第一图像,使得用户获得高清晰、高质量的图像,提高用户浏览图像的观感。
在一种可选的实施方式中,所述根据所述第二图像中与所述第一图像相匹配的图像区域进行图像增强处理,以得到目标图像,可以包括:根据第二图像中与第一图像相匹配的图像区域对第二图像进行图像增强处理,得到增强处理后的第二图像,根据增强处理后的第二图像生成目标图像。本申请实施例中,确定出第二图像中与第一图像相匹配的图像区域后,根据该图像区域对第二图像进行图像增强处理。可选的,可对第二图像中与第一图像相匹配的图像区域和第二图像进行分块处理,得到第二图像中与第一图像相匹配的图像区域的多个第五分块图像和第二图像分块处理后的多个第六分块图像,第五分块图像与第六分块图像的大小相同;对所述各个第五分块图像和第六分块图像的各个像素值进行灰度变换,得到所述多个第五分块图像的第五目标像素值和所述多个第六分块图像的第六目标像素值,根据第五目标像素值和第六目标像素值对第二图像进行图像增强,得到增强处理后的第二图像,从而使得用户能够获得更加清晰的第二图像。
在一种可选的实施方式中,所述根据所述第二图像中与所述第一图像相匹配的图像区域进行图像增强处理,以得到目标图像,还可以包括:根据所述第二图像中与所述第一图像相匹配的图像区域对所述第一图像和第二图像进行图像增强处理,得到增强处理后的第一图像和增强处理后的第二图像,根据增强处理后的第一图像和增强处理后的第二图像生成目标图像。本申请实施例中,根据第二图像中与第一图像相匹配的图像区域对第一图像和第二图像分别进行图像增强处理,将增强处理后的第一图像和第二图像进行融合处理,生成与第二图像大小相同的目标图像,或者生成与第一图像大小相同的目标图像。
在一种可选的实施方式中,所述根据所述第二图像中与所述第一图像相匹配的图像区域进行图像增强处理,可包括以下至少一种:以所述第一图像为基准图像,根据所述第二图像中与所述第一图像相匹配的图像区域对所述第一图像进行色彩增强处理和/或纹理增强处理;以所述第二图像为基准图像,根据所述第二图像中与所述第一图像相匹配的图像区域对所述第二图像进行色彩增强处理和/或纹理增强处理;以所述第一图像为基准图像,根据所述第二图像中与所述第一图像相匹配的 图像区域对所述第一图像和第二图像进行色彩增强处理和/或纹理增强处理;以所述第二图像为基准图像,根据所述第二图像中与所述第一图像相匹配的图像区域对所述第一图像和第二图像进行色彩增强处理和/或纹理增强处理。
本申请实施例中,以第一图像为基准图像时,可对所述第一图像和第二图像中与所述第一图像相匹配的图像区域的各个像素值进行灰度变换,得到所述第一图像的目标像素值和所述第二图像中与所述第一图像相匹配的图像区域的目标像素值;根据所述第一图像的各个目标像素值和第二图像中与所述第一图像相匹配的图像区域的各个目标像素值对第一图像进行色彩增强和/或纹理增强,得到增强处理后的第一图像(如图10所示),使得增强第一图像的显示;可选的,可根据所述第一图像的各个目标像素值和第二图像中与所述第一图像相匹配的图像区域的各个目标像素值,对第一图像和第二图像进行色彩增强和/或纹理增强,得到增强处理后的第一图像和第二图像。以第二图像为基准图像时,可对所述第一图像和第二图像中与所述第一图像相匹配的图像区域的各个像素值进行灰度变换,得到所述第一图像的目标像素值和所述第二图像中与所述第一图像相匹配的图像区域的目标像素值,根据所述第一图像的各个目标像素值和第二图像中与所述第一图像相匹配的图像区域的各个目标像素值对第二图像进行色彩增强和/或纹理增强,得到增强处理后的第二图像,使得第二图像的显示得到增强;可选的,可根据所述第一图像的各个目标像素值和第二图像中与所述第一图像相匹配的图像区域的各个目标像素值,对第一图像和第二图像进行色彩增强和/或纹理增强,得到增强处理后的第一图像和第二图像,有利于目标图像的生成,提高图像处理的效率。
本申请实施例中,控制第一摄像头和第二摄像头对目标环境进行拍摄,获取第一摄像头拍摄的第一图像和第二摄像头拍摄的第二图像;确定第二图像中与第一图像相匹配的图像区域;根据第二图像中与第一图像相匹配的图像区域进行图像增强处理,以得到目标图像。本申请实施例中,可对第一图像和/或第二图像进行增强处理,使得最终得到的目标图像清晰度高、色彩更加鲜明,进而提高用户拍摄的效率,提高用户浏览图像时的观感。
请参见图11,图11是本申请实施例提供的一种图像处理装置的结构示意图,该装置可搭载在上述方法实施例中的拍摄设备上,该设备具体可以是服务器。可选地,在一些实施例中,也可搭载在拍摄设备上,所述拍摄设备包括第一摄像头和第二摄像头,所述第一摄像头的视场角大小小于所述第二摄像头的视场角大小。图11所示的图像处理装置可以用于执行上述图3、图5、图7和图9所描述的方法实施例中的部分或全部功能。其中,各个模块的详细描述如下:
获取单元1101,用于控制所述第一摄像头和所述第二摄像头同时对目标环境进行拍摄,获取所述第一摄像头拍摄的第一图像和所述第二摄像头拍摄的第二图像;
处理单元1102,用于将所述第一图像和所述第二图像进行图像融合处理,得到目标图像。
在一种可选的实施方式中,处理单元1102,具体用于根据所述第一图像对所述第二图像中与所述第一图像相匹配的图像区域进行图像增强处理,得到增强处理后的第二图像;用于根据所述增强处理后的第二图像生成目标图像。
在一种可选的实施方式中,处理单元1102,具体用于将所述增强处理后的第二图像进行分块,得到至少一个分块图像;用于将各个分块图像的色度值和/或亮度值进行融合处理,得到目标图像。
在一种可选的实施方式中,处理单元1102,还用于对所述第一图像和所述第二图像进行图像配准处理,确定所述第二图像中与所述第一图像相匹配的图像区域。
在一种可选的实施方式中,处理单元1102,具体用于对所述第一图像进行特征点提取,得到第一特征点集合,并对所述第二图像进行特征点提取,得到第二特征点集合;用于根据预设算法从所述第二特征点集合中确定出与所述第一特征点集合相匹配的目标特征点集合;用于将所述第二图像中所述目标特征点集合所对应的图像区域确定为所述第二图像中与所述第一图像相匹配的图像区域。
在一种可选的实施方式中,处理单元1102,具体用于根据所述第一图像对所述第二图像中与所述第一图像相匹配的图像区域进行色彩增强处理和/或纹理增强处理,得到处理后的第二图像。
在一种可选的实施方式中,获取单元1101,具体用于获取图像处理规则;处理单元1102,具体还用于根据所述图像处理规则对所述目标图像进行处理,得到处理后的目标图像。
在一种可选的实施方式中,处理单元1102,具体用于利用所述第二图像中除与所述第一图像相匹配的图像区域之外的图像区域对所述第一图像进行拼接处理,得到拼接处理后的第一图像;用于根据所述拼接处理后的第一图像生成目标图像。
在一种可选的实施方式中,处理单元1102,具体用于将所述拼接处理后的第一图像进行分块,得到至少一个分块图像;将各个分块图像的色度值和/或亮度值进行融合处理,得到目标图像。
在一种可选的实施方式中,处理单元1102,具体用于对所述第二图像中除与所述第一图像相匹配的图像区域之外的各个图像区域进行第一预设处理,得到预设处理后的图像区域;用于利用所述预设处理后的图像区域对所述第一图像进行拼接处理,得到拼接处理后的第一图像。
在一种可选的实施方式中,处理单元1102,具体用于对所述第一图像进行第二预设处理;用于利用所述第二图像中除与所述第一图像相匹配的图像区域之外的图像区域对预设处理后的第一图像进行拼接处理,得到拼接处理后的第一图像。
在一种可选的实施方式中,处理单元1102,具体用于控制所述第一摄像头和所述第二摄像头对目标环境进行拍摄,获取所述第一摄像头拍摄的第一图像和所述第二摄像头拍摄的第二图像;用于确定所述第二图像中与所述第一图像相匹配的图像区域;用于根据所述第二图像中与所述第一图像相匹配的图像区域进行图像增强处理,以得到目标图像。
在一种可选的实施方式中,处理单元1102,具体用于控制所述第一摄像头和所述第二摄像头对目标环境进行拍摄,获取所述第一摄像头拍摄的第一图像和所述第二摄像头拍摄的第二图像;用于确定所述第二图像中与所述第一图像相匹配的图像区域;用于根据所述第二图像中与所述第一图像相匹配的图像区域进行图像增强处理,以得到目标图像。
在一种可选的实施方式中,处理单元1102根据所述第二图像中与所述第一图像相匹配的图像区域进行图像增强处理,以得到目标图像,包括以下至少一种:
根据所述第二图像中与所述第一图像相匹配的图像区域对所述第一图像进行图像增强处理,得到增强处理后的第一图像,根据所述增强处理后的第一图像生成目标图像;
根据第二图像中与第一图像相匹配的图像区域对第二图像进行图像增强处理,得到增强处理后的第二图像,根据增强处理后的第二图像生成目标图像;
根据所述第二图像中与所述第一图像相匹配的图像区域对所述第一图像和第二图像进行图像增强处理,得到增强处理后的第一图像和增强处理后的第二图像,根据增强处理后的第一图像和增强处理后的第二图像生成目标图像。
在一种可选的实施方式中,处理单元1102根据所述第二图像中与所述第一图像相匹配的图像区域进行图像增强处理,包括以下至少一种:
以所述第一图像为基准图像,根据所述第二图像中与所述第一图像相匹配的图像区域对所述第一图像进行色彩增强处理和/或纹理增强处理;
以所述第二图像为基准图像,根据所述第二图像中与所述第一图像相匹配的图像区域对所述第二图像进行色彩增强处理和/或纹理增强处理;
以所述第一图像为基准图像,根据所述第二图像中与所述第一图像相匹配的图像区域对所述第一图像和第二图像进行色彩增强处理和/或纹理增强处理;
以所述第二图像为基准图像,根据所述第二图像中与所述第一图像相匹配的图像区域对所述第一图像和第二图像进行色彩增强处理和/或纹理增强处理。
在一种可选的实施方式中,处理单元1102,具体用于对所述第一图像和所述第二图像进行图像配准处理,得到所述第二图像中与所述第一图像相匹配的图像区域。
在一种可选的实施方式中,处理单元1102,具体用于对所述第一图像进行特征点提取,得到第一特征点集合,并对所述第二图像进行特征点提取,得到第二特征点集合;用于根据预设算法从所述第二特征点集合中确定出与所述第一特征点集合相匹配的目标特征点集合;用于将所述第二图像中所述目标特征点集合所对应的图像区域确定为所述第二图像中与所述第一图像相匹配的图像区域。
在一种可选的实施方式中,处理单元1102,具体用于以所述第一图像为基准图像,根据所述第二图像中与所述第一图像相匹配的图像区域对所述第一图像进行色彩增强处理和/或纹理增强处理,得到处理后的第一图像。
根据本申请的一个实施例,图3、图5、图7和图9所示的图像处理方法所涉及的部分步骤可由图11所示的图像处理装置中的各个模块来执行。图11所示的图像处理装置中的各个单元可以分别或全部合并为一个或若干个另外的模块来构成,或者其中的某个(些)模块还可以再拆分为功能上更小的多个单元来构成,这可以实现同样的操作,而不影响本申请的实施 例的技术效果的实现。上述单元是基于逻辑功能划分的,在实际应用中,一个模块的功能也可以由多个模块来实现,或者多个模块的功能由一个模块实现。在本申请的其它实施例中,图像处理装置也可以包括其它模块,在实际应用中,这些功能也可以由其它模块协助实现,并且可以由多个模块协作实现。
请参见图12,图12是本申请实施例提供的另一种拍摄设备的结构示意图。本申请还提供一种移动终端,移动终端包括存储器1201、处理器1202以及存储在存储器1201里并可在处理器1202上运行的图像处理程序,图像处理程序被处理器执行时实现上述任一实施例中的图像处理方法的步骤。
本申请还提供一种计算机可读存储介质,计算机可读存储介质上存储有图像处理程序,图像处理程序被处理器执行时实现上述任一实施例中的图像处理方法的步骤。
在本申请提供的移动终端和计算机可读存储介质的实施例中,包含了上述图像处理方法各实施例的全部技术特征,说明书拓展和解释内容与上述来电备注方法的各实施例基本相同,在此不做再赘述。
本申请实施例还提供一种计算机程序产品,计算机程序产品包括计算机程序代码,当计算机程序代码在计算机上运行时,使得计算机执行如上各种可能的实施方式中的方法。
本申请实施例还提供一种芯片,包括存储器和处理器,存储器用于存储计算机程序,处理器用于从存储器中调用并运行计算机程序,使得安装有芯片的设备执行如上各种可能的实施方式中的方法。
图13为本申请提供的一种控制器140的硬件结构示意图。该控制器140包括:存储器1401和处理器1402,存储器1401用于存储程序指令,处理器1402用于调用存储器1401中的程序指令执行上述方法实施例中控制器所执行的步骤,其实现原理以及有益效果类似,此处不再进行赘述。
可选地,上述控制器还包括通信接口1403,该通信接口1403可以通过总线1404与处理器1402连接。处理器1402可以控制通信接口1403来实现控制器140的接收和发送的功能。
图14为本申请提供的一种网络节点150的硬件结构示意图。该网络节点150包括:存储器1501和处理器1502,存储器1501用于存储程序指令,处理器1502用于调用存储器1501中的程序指令执行上述方法实施例中首节点所执行的步骤,其实现原理以及有益效果类似,此处不再进行赘述。
可选地,上述网络节点还包括通信接口1503,该通信接口1503可以通过总线1504与处理器1502连接。处理器1502可以控制通信接口1503来实现网络节点150的接收和发送的功能。
图15为本申请提供的另一种网络节点160的硬件结构示意图。该网络节点160包括:存储器1601和处理器1602,存储器1601用于存储程序指令,处理器1602用于调用存储器1601中的程序指令执行上述方法实施例中中间节点和尾节点所执行的步骤,其实现原理以及有益效果类似,此处不再进行赘述。
可选地,上述网络节点还包括通信接口1603,该通信接口1603可以通过总线1604与处理器1602连接。处理器1602可以控制通信接口1603来实现网络节点160的接收和发送的功能。
图16为本申请提供的另一种控制器170的硬件结构示意图。该控制器170包括:存储器1701和处理器1702,存储器1701用于存储程序指令,处理器1702用于调用存储器1701中的程序指令执行上述方法实施例中控制器所执行的步骤,其实现原理以及有益效果类似,此处不再进行赘述。
可选地,上述控制器还包括通信接口1703,该通信接口1703可以通过总线1704与处理器1702连接。处理器1702可以控制通信接口1703来实现控制器170的接收和发送的功能。
图17为本申请提供的又一种网络节点180的硬件结构示意图。该网络节点180包括:存储器1801和处理器1802,存储器1801用于存储程序指令,处理器1802用于调用存储器1801中的程序指令执行上述方法实施例中首节点所执行的步骤,其实现原理以及有益效果类似,此处不再进行赘述。
可选地,上述网络节点还包括通信接口1803,该通信接口1803可以通过总线1804与处理器1802连接。处理器1802可以控制通信接口1803来实现网络节点180的接收和发送的功能。
上述以软件功能模块的形式实现的集成的模块,可以存储在一个计算机可读取存储介质中。上述软件功能模块存储在一个存储介质中,包括若干指令用以使得一台计算机设备(可以是个人计算机,服务器,或者网络设备等)或处理器(英文:processor)执行本申请各个实施例方法的部分步骤。
在上述实施例中,可以全部或部分地通过软件、硬件、固件或者其任意组合来实现。当使用软件实现时,可以全部或部分地以计算机程序产品的形式实现。计算机程序产品包括一个或多个计算机指令。在计算机上加载和执行计算机程序指令时,全部或部分地产生按照本申请实施例的流程或功能。计算机可以是通用计算机、专用计算机、计算机网络、或者其他可编程装置。计算机指令可以存储在计算机可读存储介质中,或者从一个计算机可读存储介质向另一个计算机可读存储介质传输,例如,计算机指令可以从一个网站站点、计算机、服务器或数据中心通过有线(例如同轴电缆、光纤、数字用户线(DSL))或无线(例如红外、无线、微波等)方式向另一个网站站点、计算机、服务器或数据中心进行传输。计算机可读存储介质可以是计算机能够存取的任何可用介质或者是包含一个或多个可用介质集成的服务器、数据中心等数据存储设备。可用介质可以是磁性介质,(例如,软盘、硬盘、磁带)、光介质(例如,DVD)、或者半导体介质(例如固态硬盘solid state disk,SSD)等。
上述本申请实施例序号仅仅为了描述,不代表实施例的优劣。
在本申请中,对于相同或相似的术语概念、技术方案和/或应用场景描述,一般只在第一次出现时进行详细描述,后面再重复出现时,为了简洁,一般未再重复阐述,在理解本申请技术方案等内容时,对于在后未详细描述的相同或相似的术语概念、技术方案和/或应用场景描述等,可以参考其之前的相关详细描述。
在本申请中,对各个实施例的描述都各有侧重,某个实施例中没有详述或记载的部分,可以参见其它实施例的相关描述。
本申请技术方案的各技术特征可以进行任意的组合,为使描述简洁,未对上述实施例中的各个技术特征所有可能的组合都进行描述,然而,只要这些技术特征的组合不存在矛盾,都应当认为是本申请记载的范围。
通过以上的实施方式的描述,本领域的技术人员可以清楚地了解到上述实施例方法可借助软件加必需的通用硬件平台的方式来实现,当然也可以通过硬件,但很多情况下前者是更佳的实施方式。基于这样的理解,本申请的技术方案本质上或者说对现有技术做出贡献的部分可以以软件产品的形式体现出来,该计算机软件产品存储在如上的一个存储介质(如ROM/RAM、磁碟、光盘)中,包括若干指令用以使得一台终端设备(可以是手机,计算机,服务器,被控终端,或者网络设备等)执行本申请每个实施例的方法。
以上仅为本申请的优选实施例,并非因此限制本申请的专利范围,凡是利用本申请说明书及附图内容所作的等效结构或等效流程变换,或直接或间接运用在其他相关的技术领域,均同理包括在本申请的专利保护范围内。

Claims (17)

  1. An image processing method, applied to a photographing device, wherein the photographing device includes a first camera and a second camera, and a field angle of the first camera is smaller than a field angle of the second camera, the method comprising:
    controlling the first camera and the second camera to photograph a target environment simultaneously, and acquiring a first image captured by the first camera and a second image captured by the second camera;
    performing image fusion processing on the first image and the second image to obtain a target image.
  2. The method according to claim 1, wherein performing image fusion processing on the first image and the second image to obtain a target image comprises:
    performing image enhancement processing, according to the first image, on an image area in the second image that matches the first image, to obtain an enhanced second image;
    generating the target image according to the enhanced second image.
  3. The method according to claim 2, wherein generating the target image according to the enhanced second image comprises:
    dividing the enhanced second image into blocks to obtain at least one block image;
    fusing chrominance values and/or luminance values of the block images to obtain the target image.
  4. The method according to claim 2, wherein before performing image enhancement processing, according to the first image, on the image area in the second image that matches the first image to obtain the enhanced second image, the method further comprises:
    performing image registration processing on the first image and the second image, and determining the image area in the second image that matches the first image.
  5. The method according to claim 4, wherein performing image registration processing on the first image and the second image and determining the image area in the second image that matches the first image comprises:
    extracting feature points from the first image to obtain a first feature point set, and extracting feature points from the second image to obtain a second feature point set;
    determining, according to a preset algorithm, a target feature point set in the second feature point set that matches the first feature point set;
    determining the image area corresponding to the target feature point set in the second image as the image area in the second image that matches the first image.
  6. The method according to claim 2, wherein performing image enhancement processing, according to the first image, on the image area in the second image that matches the first image to obtain the processed second image comprises:
    performing color enhancement processing and/or texture enhancement processing, according to the first image, on the image area in the second image that matches the first image, to obtain the processed second image.
  7. The method according to any one of claims 1 to 6, wherein performing image fusion processing on the first image and the second image to obtain a target image comprises:
    stitching the first image with the image area of the second image other than the image area matching the first image, to obtain a stitched first image;
    generating the target image according to the stitched first image.
  8. The method according to claim 7, wherein generating the target image according to the stitched first image comprises:
    dividing the stitched first image into blocks to obtain at least one block image;
    fusing chrominance values and/or luminance values of the block images to obtain the target image.
  9. The method according to claim 7, wherein stitching the first image with the image area of the second image other than the image area matching the first image to obtain the stitched first image comprises:
    performing first preset processing on each image area of the second image other than the image area matching the first image, to obtain preprocessed image areas;
    stitching the first image with the preprocessed image areas to obtain the stitched first image.
  10. The method according to claim 7, wherein stitching the first image with the image area of the second image other than the image area matching the first image to obtain the stitched first image comprises:
    performing second preset processing on the first image;
    stitching the preprocessed first image with the image area of the second image other than the image area matching the first image, to obtain the stitched first image.
  11. An image processing method, applied to a photographing device, wherein the photographing device includes a first camera and a second camera, the method comprising:
    controlling the first camera and the second camera to photograph a target environment, and acquiring a first image captured by the first camera and a second image captured by the second camera;
    determining an image area in the second image that matches the first image;
    performing image enhancement processing according to the image area in the second image that matches the first image, to obtain a target image.
  12. The method according to claim 11, wherein determining the image area in the second image that matches the first image comprises:
    performing image registration processing on the first image and the second image to obtain the image area in the second image that matches the first image.
  13. The method according to claim 12, wherein performing image registration processing on the first image and the second image to obtain the image area in the second image that matches the first image comprises:
    extracting feature points from the first image to obtain a first feature point set, and extracting feature points from the second image to obtain a second feature point set;
    determining, according to a preset algorithm, a target feature point set in the second feature point set that matches the first feature point set;
    determining the image area corresponding to the target feature point set in the second image as the image area in the second image that matches the first image.
  14. The method according to any one of claims 11 to 13, wherein performing image enhancement processing according to the image area in the second image that matches the first image includes at least one of the following:
    taking the first image as a reference image, performing color enhancement processing and/or texture enhancement processing on the first image according to the image area in the second image that matches the first image;
    taking the second image as a reference image, performing color enhancement processing and/or texture enhancement processing on the second image according to the image area in the second image that matches the first image;
    taking the first image as a reference image, performing color enhancement processing and/or texture enhancement processing on the first image and the second image according to the image area in the second image that matches the first image;
    taking the second image as a reference image, performing color enhancement processing and/or texture enhancement processing on the first image and the second image according to the image area in the second image that matches the first image.
  15. The method according to any one of claims 11 to 13, wherein performing image enhancement processing according to the image area in the second image that matches the first image to obtain the target image includes at least one of the following:
    performing image enhancement processing on the first image according to the image area in the second image that matches the first image, to obtain an enhanced first image, and generating the target image according to the enhanced first image;
    performing image enhancement processing on the second image according to the image area in the second image that matches the first image, to obtain an enhanced second image, and generating the target image according to the enhanced second image;
    performing image enhancement processing on the first image and the second image according to the image area in the second image that matches the first image, to obtain an enhanced first image and an enhanced second image, and generating the target image according to the enhanced first image and the enhanced second image.
  16. A mobile terminal, comprising a processor and a memory, wherein the memory is configured to store a computer program, the computer program includes program instructions, and the processor is configured to invoke the program instructions to execute the method according to claim 1.
  17. A computer-readable storage medium, wherein the computer-readable storage medium stores one or more instructions, and the one or more instructions are adapted to be loaded by a processor to execute the method according to claim 1.
PCT/CN2022/074357 2021-02-05 2022-01-27 图像处理方法、移动终端及存储介质 WO2022166765A1 (zh)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN2021101664654 2021-02-05
CN202110166465.4A CN112995467A (zh) 2021-02-05 2021-02-05 图像处理方法、移动终端及存储介质

Publications (1)

Publication Number Publication Date
WO2022166765A1 true WO2022166765A1 (zh) 2022-08-11

Family

ID=76348649

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2022/074357 WO2022166765A1 (zh) 2021-02-05 2022-01-27 图像处理方法、移动终端及存储介质

Country Status (2)

Country Link
CN (1) CN112995467A (zh)
WO (1) WO2022166765A1 (zh)

Families Citing this family (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112995467A (zh) * 2021-02-05 2021-06-18 深圳传音控股股份有限公司 图像处理方法、移动终端及存储介质
CN113810598B (zh) * 2021-08-11 2022-11-22 荣耀终端有限公司 一种拍照方法、电子设备及存储介质
CN115209191A (zh) * 2022-06-14 2022-10-18 海信视像科技股份有限公司 显示设备、终端设备和设备间共享摄像头的方法
CN115314636B (zh) * 2022-08-03 2024-06-07 天津华来科技股份有限公司 基于摄像头的多路视频流处理方法和系统
TWI819752B (zh) * 2022-08-18 2023-10-21 緯創資通股份有限公司 拍攝系統及影像融合的方法

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2010141671A (ja) * 2008-12-12 2010-06-24 Canon Inc 撮像装置
CN106303231A (zh) * 2016-08-05 2017-01-04 深圳市金立通信设备有限公司 一种图像处理方法及终端
CN108650442A (zh) * 2018-05-16 2018-10-12 Oppo广东移动通信有限公司 图像处理方法和装置、存储介质、电子设备
CN111062881A (zh) * 2019-11-20 2020-04-24 RealMe重庆移动通信有限公司 图像处理方法及装置、存储介质、电子设备
CN112995467A (zh) * 2021-02-05 2021-06-18 深圳传音控股股份有限公司 图像处理方法、移动终端及存储介质

Family Cites Families (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103390267A (zh) * 2013-07-11 2013-11-13 华为技术有限公司 图像处理方法及装置
CN109960452B (zh) * 2017-12-26 2022-11-04 腾讯科技(深圳)有限公司 图像处理方法及其装置、存储介质
CN108449546B (zh) * 2018-04-04 2020-03-31 维沃移动通信有限公司 一种拍照方法及移动终端
CN109194881A (zh) * 2018-11-29 2019-01-11 珠海格力电器股份有限公司 图像处理方法、系统及终端
CN110493526B (zh) * 2019-09-10 2020-11-20 北京小米移动软件有限公司 基于多摄像模块的图像处理方法、装置、设备及介质
CN111028190A (zh) * 2019-12-09 2020-04-17 Oppo广东移动通信有限公司 图像处理方法、装置、存储介质及电子设备
CN111291768B (zh) * 2020-02-17 2023-05-30 Oppo广东移动通信有限公司 图像特征匹配方法及装置、设备、存储介质

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2010141671A (ja) * 2008-12-12 2010-06-24 Canon Inc 撮像装置
CN106303231A (zh) * 2016-08-05 2017-01-04 深圳市金立通信设备有限公司 一种图像处理方法及终端
CN108650442A (zh) * 2018-05-16 2018-10-12 Oppo广东移动通信有限公司 图像处理方法和装置、存储介质、电子设备
CN111062881A (zh) * 2019-11-20 2020-04-24 RealMe重庆移动通信有限公司 图像处理方法及装置、存储介质、电子设备
CN112995467A (zh) * 2021-02-05 2021-06-18 深圳传音控股股份有限公司 图像处理方法、移动终端及存储介质

Also Published As

Publication number Publication date
CN112995467A (zh) 2021-06-18

Similar Documents

Publication Publication Date Title
WO2022166765A1 (zh) 图像处理方法、移动终端及存储介质
WO2021036542A1 (zh) 录屏方法及移动终端
WO2020220991A1 (zh) 截屏方法、终端设备及计算机可读存储介质
WO2020192470A1 (zh) 拍摄处理方法及移动终端
WO2023005060A1 (zh) 拍摄方法、移动终端及存储介质
WO2022266907A1 (zh) 处理方法、终端设备及存储介质
US11863901B2 (en) Photographing method and terminal
US12022190B2 (en) Photographing method and electronic device
CN107743198B (zh) 一种拍照方法、终端及存储介质
WO2021218551A1 (zh) 拍照方法、装置、终端设备及存储介质
WO2024001853A1 (zh) 处理方法、智能终端及存储介质
WO2022252158A1 (zh) 拍照方法、移动终端及可读存储介质
WO2023108444A1 (zh) 图像处理方法、智能终端及存储介质
WO2022262259A1 (zh) 一种图像处理方法、装置、设备、介质和芯片
CN113194227A (zh) 处理方法、移动终端和存储介质
CN114092366A (zh) 图像处理方法、移动终端及存储介质
WO2022133967A1 (zh) 拍摄的方法、终端及计算机存储介质
CN107959793B (zh) 一种图像处理方法及终端、存储介质
CN107566745B (zh) 一种拍摄方法、终端和计算机可读存储介质
WO2023108443A1 (zh) 图像处理方法、智能终端及存储介质
WO2023050413A1 (zh) 图像处理方法、智能终端及存储介质
WO2023102934A1 (zh) 数据处理方法、智能终端及存储介质
WO2024055333A1 (zh) 图像处理方法、智能设备及存储介质
WO2024027374A1 (zh) 隐藏信息显示方法、设备、芯片系统、介质及程序产品
WO2023122906A1 (zh) 图像处理方法、智能终端及存储介质

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 22749044

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

32PN Ep: public notification in the ep bulletin as address of the adressee cannot be established

Free format text: NOTING OF LOSS OF RIGHTS PURSUANT TO RULE 112(1) EPC (EPO FORM 1205A DATED 11-12-2023)