WO2024174711A1 - Image processing method and terminal device - Google Patents

Image processing method and terminal device

Info

Publication number
WO2024174711A1
Authority
WO
WIPO (PCT)
Prior art keywords
camera module
effect parameters
image processing
terminal device
image
Prior art date
Application number
PCT/CN2023/140999
Other languages
French (fr)
Chinese (zh)
Inventor
李先明
林志杰
张文浩
Original Assignee
Huawei Technologies Co., Ltd.
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Huawei Technologies Co., Ltd.
Publication of WO2024174711A1

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/04 Architecture, e.g. interconnection topology
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/08 Learning methods
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/08 Learning methods
    • G06N3/084 Backpropagation, e.g. using gradient descent
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/08 Learning methods
    • G06N3/09 Supervised learning
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N13/00 Stereoscopic video systems; Multi-view video systems; Details thereof
    • H04N13/10 Processing, recording or transmission of stereoscopic or multi-view image signals
    • H04N13/106 Processing image signals
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N13/00 Stereoscopic video systems; Multi-view video systems; Details thereof
    • H04N13/10 Processing, recording or transmission of stereoscopic or multi-view image signals
    • H04N13/106 Processing image signals
    • H04N13/15 Processing image signals for colour aspects of image signals
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N13/00 Stereoscopic video systems; Multi-view video systems; Details thereof
    • H04N13/20 Image signal generators
    • H04N13/204 Image signal generators using stereoscopic image cameras
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N13/00 Stereoscopic video systems; Multi-view video systems; Details thereof
    • H04N13/20 Image signal generators
    • H04N13/257 Colour aspects
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N13/00 Stereoscopic video systems; Multi-view video systems; Details thereof
    • H04N13/20 Image signal generators
    • H04N13/296 Synchronisation thereof; Control thereof
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00 Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/50 Constructional details
    • H04N23/53 Constructional details of electronic viewfinders, e.g. rotatable or detachable
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N5/00 Details of television systems

Definitions

  • the present application relates to the field of image processing, and in particular to an image processing method and a terminal device.
  • Extended reality is a general term for multiple technologies such as augmented reality (AR), virtual reality (VR), and mixed reality (MR).
  • users can use multiple cameras to perceive, locate, or interact with the real world.
  • different cameras need to be able to stably capture high-quality images in complex user environments.
  • the image effect tuning used by existing XR headsets falls into two categories: hardware tuning and software image-algorithm tuning. If all of the cameras use hardware tuning, power consumption and cost are high; if all of the cameras use software tuning, the system load is high.
  • the present application provides an image processing method and a terminal device, which can improve the image effect quality of multiple cameras under the constraints of power consumption, load and hardware cost.
  • an image processing method is provided, which is applied to a terminal device.
  • the terminal device includes at least one first camera module and a second camera module, and there is a common viewing area between the at least one first camera module and the second camera module.
  • the method includes: obtaining effect parameters of the at least one first camera module according to the real-time image captured by the at least one first camera module; inputting the effect parameters of the at least one first camera module into a trained mathematical model to obtain the effect parameters of the second camera module, where the trained mathematical model is obtained by training the mathematical model, the input of the mathematical model is a sample image captured by the at least one first camera module, and the supervision data of the mathematical model is the effect parameters of the second camera module obtained according to the sample image captured by the second camera module; and controlling the second camera module to capture images using the effect parameters of the second camera module, and/or performing image processing on the real-time image captured by the second camera module using the effect parameters of the second camera module.
  • In this image processing method, for multiple camera modules with a common viewing area, only the effect parameters of some (i.e., one or more) camera modules are calculated; the effect parameters of the other camera modules are derived using a pre-trained mathematical model, and the other camera modules then use the derived effect parameters to capture images, or the derived effect parameters are used to perform the corresponding image processing on the images captured by those modules.
  • This method does not require the configuration of corresponding hardware resources or software resources for other camera modules to calculate effect parameters based on the captured images, and can improve the image effect quality of multiple cameras under the constraints of power consumption, load and hardware cost.
  • the mathematical model may be a deep learning model, such as a neural network model.
  • the effect parameter includes one or more of the following: exposure time, exposure gain, white balance coefficient, color correction matrix, or sharpening coefficient.
  • the terminal device is a mixed reality MR device.
  • the at least one first camera module is connected to at least one image processing unit, and the at least one image processing unit is a hardware resource.
  • the method of obtaining the effect parameters of the at least one first camera module according to the real-time image captured by the at least one first camera module includes: controlling the at least one image processing unit to perform image processing on the real-time image captured by the at least one first camera module to obtain the effect parameters of the at least one first camera module.
  • the solution can calculate the effect parameters of the at least one first camera module through the image processing unit.
  • the image processing unit may be an image signal processor (ISP), a graphics processing unit (GPU), a digital signal processor (DSP), or another hardware resource capable of performing image processing.
  • the real-time image captured by the second camera module is processed using the effect parameters of the second camera module, including: controlling the image algorithm module to use part or all of the effect parameters to perform image processing on the real-time image captured by the second camera module, and the image algorithm module is a software resource.
  • a terminal device comprising a functional module/unit for executing the method of the first aspect and any possible implementation manner of the first aspect.
  • a computer-readable medium stores program code for execution by a device, wherein the program code includes instructions for executing the method of the first aspect or any one of the implementations of the first aspect.
  • a computer program product comprising: a computer program code, which, when executed on a computer, enables the computer to execute the method according to the first aspect or any one of the implementations of the first aspect.
  • the above-mentioned computer program code can be stored in whole or in part on a first storage medium, where the first storage medium can be packaged together with the processor or packaged separately from the processor; the embodiments of the present application do not specifically limit this.
  • a chip system is provided, which is applied to a terminal device; the chip system includes one or more processors, and the processor is used to call computer instructions so that the terminal device executes the method in the first aspect or any one of the implementations of the first aspect.
  • the chip may also include a memory, in which instructions are stored, and the processor is used to execute the instructions stored in the memory.
  • the processor is used to execute the method in the first aspect or any one of the implementation methods of the first aspect.
  • a terminal device comprising one or more processors, a memory, and multiple camera modules; the memory is coupled to the one or more processors and is used to store computer program code, the computer program code comprises computer instructions, and the one or more processors call the computer instructions to enable the terminal device to execute the method in the first aspect or any one of the implementations of the first aspect.
  • FIG. 1 is a schematic diagram of an XR headset provided in an embodiment of the present application.
  • FIG. 2 is a schematic diagram of a process of performing image processing by hardware provided in an embodiment of the present application.
  • FIG. 3 is a schematic flowchart of image processing by software provided in an embodiment of the present application.
  • FIG. 4 is a schematic diagram of a camera system of a terminal device provided in an embodiment of the present application.
  • FIG. 5 is a schematic diagram of calculating effect parameters of a camera module provided in an embodiment of the present application.
  • FIG. 6 is a schematic diagram of machine learning model training provided in an embodiment of the present application.
  • FIG. 7 is a schematic flowchart of an image processing method provided in an embodiment of the present application.
  • FIG. 8 is another schematic flowchart of an image processing method provided in an embodiment of the present application.
  • FIG. 9 is a schematic block diagram of a terminal device provided in an embodiment of the present application.
  • FIG. 10 is a schematic diagram of the structure of a terminal device provided in an embodiment of the present application.
  • "A and/or B" can represent: A exists alone, A and B exist at the same time, or B exists alone, where A and B can be singular or plural.
  • the character "/" generally indicates that the associated objects before and after it are in an "or" relationship.
  • references to "one embodiment" or "some embodiments" etc. in this specification mean that a particular feature, structure, or characteristic described in conjunction with the embodiment is included in one or more embodiments of the present application.
  • the phrases "in one embodiment", "in some other embodiments", "in other embodiments", etc. appearing in different places in this specification do not necessarily refer to the same embodiment, but mean "one or more but not all embodiments", unless otherwise specifically emphasized.
  • the terms "include", "comprising", "having" and their variations all mean "including but not limited to", unless otherwise specifically emphasized.
  • "connection" includes direct connection and indirect connection, unless otherwise specified.
  • "First" and "second" are used for descriptive purposes only and should not be understood as indicating or implying relative importance or implicitly indicating the number of the indicated technical features.
  • the words "exemplarily" or "for example" are used to indicate examples, illustrations, or explanations. Any embodiment or design described as "exemplarily" or "for example" in the embodiments of the present application should not be interpreted as more preferred or more advantageous than other embodiments or designs. Rather, the use of words such as "exemplarily" or "for example" is intended to present related concepts in a concrete way.
  • Extended Reality refers to the combination of reality and virtuality through computers to create a virtual environment that allows human-computer interaction.
  • XR is also a general term for multiple technologies such as AR, VR, and MR.
  • users can use multiple camera modules in XR terminal devices (such as XR headsets) to perceive, locate, or interact with the real world.
  • FIG. 1 shows a schematic diagram of an XR headset.
  • a plurality of camera modules are provided on the XR headset 100, such as the camera modules 101 to 104 shown in the figure.
  • At least two of the camera modules 101 to 104 have a common viewing area, that is, the viewing angles of at least two of the camera modules 101 to 104 partially overlap.
  • For example, the viewing angles of camera module 101 and camera module 102 partially overlap,
  • and the viewing angles of camera module 103 and camera module 104 partially overlap.
  • the images captured by camera module 101 and camera module 102 partially show the same scene (i.e., the person lifting a bicycle).
  • camera module 102 and camera module 103 may also have partially overlapping viewing angles.
  • the functions or types of two camera modules with partially overlapping viewing angles may be different.
  • For example, the viewing angles of camera module 101 and camera module 102 partially overlap; camera module 101 can realize a video see-through function, and camera module 102 can realize a spatial positioning function.
  • the original image output by the camera module is generally of poor quality.
  • the original image can be processed by auto exposure (AE), auto white balance (AWB), color correction, sharpening, etc. The following is a brief description of these processing methods.
  • Automatic exposure: automatically adjusts the exposure time and exposure gain according to the light intensity, so as to adjust the exposure amount, ensure that photos taken under different lighting conditions and scenes are accurately exposed with appropriate brightness, and prevent overexposure or underexposure.
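  • As an illustration of the auto-exposure adjustment just described, the following is a minimal sketch, assuming an 8-bit luminance frame, a hypothetical mid-gray target, and a simple proportional update rule; real AE algorithms are considerably more elaborate:

```python
import numpy as np

TARGET_MEAN = 118.0  # assumed mid-gray target for an 8-bit sensor

def auto_exposure_step(frame: np.ndarray, exposure_time: float, gain: float,
                       max_exposure_time: float = 33.0):
    """Nudge exposure time and gain so mean brightness approaches the target.

    frame: HxW uint8 luminance image captured with the current settings.
    exposure_time: current exposure time in ms; gain: current gain multiplier.
    """
    mean = frame.astype(np.float64).mean()
    ratio = TARGET_MEAN / max(mean, 1.0)   # >1 means the frame is too dark
    # Prefer extending exposure time; fall back to gain once time is capped,
    # since higher gain amplifies noise.
    new_time = min(exposure_time * ratio, max_exposure_time)
    new_gain = gain * (exposure_time * ratio) / new_time
    return new_time, new_gain
```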
  • Automatic white balance: related to color temperature, and used to measure the color authenticity and accuracy of an image. Specifically, the color temperature of the light source can be determined based on the input image, the R/G/B coefficients (or gains) can then be calculated based on the color temperature, and finally the R/G/B coefficients can be applied to correct the input image, thereby accurately restoring the original color of the object in various complex scenes.
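  • A minimal white-balance sketch is shown below, assuming the gray-world heuristic (the scene averages to neutral gray) to estimate the R/G/B gains; production AWB additionally models the color temperature of the light source explicitly:

```python
import numpy as np

def gray_world_gains(img: np.ndarray) -> np.ndarray:
    """img: HxWx3 float RGB image in [0, 1]. Returns per-channel gains."""
    means = img.reshape(-1, 3).mean(axis=0)  # average R, G, B of the frame
    return means.mean() / means              # gains that neutralize the cast

def apply_white_balance(img: np.ndarray, gains: np.ndarray) -> np.ndarray:
    return np.clip(img * gains, 0.0, 1.0)    # broadcast gains over channels

frame = np.random.rand(4, 4, 3)              # stand-in for a captured image
balanced = apply_white_balance(frame, gray_world_gains(frame))
```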
  • Color correction: mainly corrects the color errors caused by color crosstalk between color blocks of the filter array.
  • the general color correction process is to first compare the image captured by the image sensor with the standard image to calculate a correction matrix.
  • This matrix is the color correction matrix of the image sensor.
  • the matrix can be used to correct all images captured by the image sensor to obtain an image that is closest to the true color of the object.
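  • The color-correction workflow described above can be sketched as follows, assuming Nx3 patch measurements (e.g., from a color checker) for fitting; the least-squares fit and per-pixel multiply are illustrative, not the patent's exact procedure:

```python
import numpy as np

def fit_ccm(sensor_colors: np.ndarray, reference_colors: np.ndarray) -> np.ndarray:
    """Fit a 3x3 matrix X minimizing ||sensor_colors @ X - reference_colors||.
    Both inputs are Nx3 RGB patch measurements."""
    X, *_ = np.linalg.lstsq(sensor_colors, reference_colors, rcond=None)
    return X

def apply_ccm(img: np.ndarray, X: np.ndarray) -> np.ndarray:
    """Correct an HxWx3 image with the fitted color correction matrix."""
    h, w, _ = img.shape
    corrected = img.reshape(-1, 3) @ X       # per-pixel matrix multiply
    return np.clip(corrected.reshape(h, w, 3), 0.0, 1.0)
```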
  • Sharpening: an image processing method that makes image edges clearer. Its main principle is to extract the high-frequency components of the original image and superimpose them on the original image according to certain rules (involving the sharpening coefficient); the result is the sharpened image.
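  • The sharpening rule described above corresponds to classic unsharp masking; a minimal sketch, assuming a Gaussian blur supplies the low-frequency component and k is the sharpening coefficient:

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def sharpen(img: np.ndarray, k: float = 0.8) -> np.ndarray:
    """img: HxW float grayscale image in [0, 1]; k: sharpening coefficient."""
    low = gaussian_filter(img, sigma=1.5)     # low-frequency component
    high = img - low                          # extracted high-frequency component
    return np.clip(img + k * high, 0.0, 1.0)  # superimpose per the rule above
```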
  • Each of the above processing methods corresponds to one or more parameters, which can be called effect parameters of the camera module.
  • the camera module or the corresponding processing unit can use the effect parameters to capture images or perform image processing on the captured images.
  • Through automatic exposure, a suitable exposure time and exposure gain can be obtained; the exposure time and exposure gain can be considered two effect parameters of the camera module, and the camera module can use them to capture images.
  • Through automatic white balance, a suitable white balance coefficient can be obtained; the white balance coefficient can be considered an effect parameter of the camera module, and the camera module can use it to capture images.
  • Through color correction and sharpening, the corresponding color correction matrix and sharpening coefficient can be obtained; both are effect parameters of the camera module and can be used to process the image captured by the camera module.
  • the original image can also be processed by noise removal, bad pixel removal, interpolation, etc.; accordingly, the effect parameters can include the parameters used for the corresponding processing.
  • FIG2 is a flow chart of image processing by hardware.
  • the camera module outputs an image signal
  • the image processing unit can perform post-processing on the image signal, such as automatic exposure, automatic white balance, color correction, sharpening, etc., so as to better restore the details of the scene under different optical conditions.
  • After processing the image signal, the image processing unit outputs the processed image signal to the processor, which processes it further, for example by presenting an image corresponding to the processed image signal on a display screen.
  • the image processing unit in the present application is a hardware resource capable of performing image processing.
  • the image processing unit in the present application may be an image signal processor (ISP), a graphics processing unit (GPU), a digital signal processor (DSP), or another hardware resource capable of performing image processing.
  • the processor in the present application can run a variety of image processing algorithms and control peripheral devices.
  • the processor can be a central processing unit (CPU), GPU or other types of processors.
  • an image processing unit may only be able to process the image signal output by one camera module at a time, or it may be able to process the image signals input by multiple (usually 2) camera modules at the same time. If multiple camera modules of the XR headset use hardware for image processing, then multiple image processing units are required, which will result in higher costs and power consumption.
  • FIG3 is a flow chart of image processing by software.
  • the camera module outputs an image signal to the processor, and the image algorithm module in the processor performs post-processing on the image signal, such as automatic exposure, automatic white balance, color correction, sharpening, etc., so as to better restore the details of the scene under different optical conditions.
  • the image signal processed by the processor can be further processed by other modules in the processor, such as presenting an image corresponding to the processed image signal on a display screen.
  • the image algorithm module in the present application refers to a software algorithm for image signal processing.
  • the image algorithm module occupies a certain load while processing image signals. If multiple camera modules of the XR headset adopt this software tuning method, the load will be high.
  • the present application provides an image processing method, which can calculate the effect parameters of only some camera modules in a scenario where multiple camera modules with a common viewing area are used together, and use a pre-trained mathematical model to derive the effect parameters of other camera modules, and then other camera modules use the derived effect parameters to shoot images or use the derived effect parameters to perform corresponding image processing on images shot by other camera modules.
  • This method does not need to configure corresponding hardware resources or software resources for other camera modules to calculate effect parameters based on the captured images, and can improve the image effect quality of multiple cameras under the constraints of power consumption, load and hardware cost.
  • the method provided in the present application can be applied to a terminal device including multiple camera modules having a common viewing area.
  • the terminal device can be a VR device or an MR device.
  • the present application does not limit the type of the terminal device.
  • a camera module is sometimes also referred to simply as a camera.
  • FIG4 shows a schematic diagram of a camera system of a terminal device provided by the present application.
  • the camera system may include multiple camera modules, for example, camera module 1 to camera module N, where N ≥ 2.
  • the image processing units corresponding to camera module 1 to camera module M can respectively perform image processing on the image signals output by camera module 1 to camera module M.
  • the image processing unit can perform automatic exposure, automatic white balance, color correction, sharpening and other processing on the image.
  • the processor can calculate the effect parameters of the remaining part or all of the camera modules, such as camera module (M+1) to camera module N, based on the effect parameters of the image processing units corresponding to camera module 1 to camera module M, using a pre-trained mathematical model.
  • For example, if camera module 1 has a common viewing area with camera module 3 and camera module 4, the effect parameters of camera module 3 and camera module 4 can be obtained according to the effect parameters of camera module 1; and if camera module 2 and camera module 5 have a common viewing area, the effect parameters of camera module 5 can be obtained according to the effect parameters of camera module 2.
  • the effect parameters of the corresponding camera module may be used to capture images and/or perform image processing on the image signals of the camera modules.
  • the processor may send the calculated effect parameters of camera module (M+1) to camera module N to camera module (M+1) to camera module N respectively, and camera module (M+1) to camera module N may use their corresponding effect parameters to perform image processing on the image signal, and then may output the processed image signal to the processor.
  • the processor can send the calculated effect parameters of camera module (M+1) to camera module N to the image algorithm modules corresponding to camera module (M+1) to camera module N respectively, and the image algorithm module can use the effect parameters of the camera module to perform image processing on the image signal output by the camera module, and output the processed image signal.
  • the camera module can use part of the effect parameters to capture the image, and the image algorithm module can use the remaining part of the effect parameters to perform image processing on the image signal output by the camera module, and output the processed image signal.
  • the image algorithm module can be a software algorithm module, which can be executed by other hardware resources in the camera system, such as a processor, or other hardware resources with processing functions.
  • one image processing unit processes the image signal of one camera module.
  • one image processing unit can process the image signals of multiple camera modules, then multiple camera modules only need to be equipped with one image processing unit.
  • the image processing unit is disposed outside the processor, but it should be understood that, in practice, the image processing unit may be disposed inside the processor, that is, the processor may include one or more image processing units.
  • the terminal shown in FIG. 4 may also include other camera modules in addition to camera module 1 to camera module N.
  • the camera system shown in FIG. 4 can be configured in the XR head display shown in FIG. 1, that is, the camera modules in FIG. 4 are part or all of the camera modules in FIG. 1.
  • For example, camera module 1 to camera module M in FIG. 4 are camera modules 101 and 103 in FIG. 1, and camera module (M+1) to camera module N in FIG. 4 are camera modules 102 and 104 in FIG. 1.
  • In this case, the effect parameters of camera module 3 (equivalent to camera module 102 in FIG. 1) can be obtained according to the effect parameters of camera module 1 (equivalent to camera module 101 in FIG. 1), and the effect parameters of camera module 4 (equivalent to camera module 104 in FIG. 1) can be obtained according to the effect parameters of camera module 2 (equivalent to camera module 103 in FIG. 1).
  • the method provided in the present application is implemented on the basis of a trained mathematical model (referred to in the present application as the trained mathematical model). Based on the trained mathematical model, the effect parameters of one or more camera modules can be predicted from the effect parameters of one or more other camera modules.
  • the mathematical model may be a machine learning model.
  • the following is a brief description of how to perform machine learning model training.
  • the multiple camera modules with a common viewing area in the terminal device work in the same environment, and the color temperature, overall brightness, and environmental content (bedroom scene, office scene, etc.) of the environment are the same or have a small difference, so there is a certain mapping relationship between the effect parameters of the multiple cameras.
  • multiple camera modules can first be used to collect images in the same environment, and then the effect parameters of the multiple camera modules can be determined based on the images collected in that environment. Finally, the effect parameters of the multiple camera modules are used for machine learning model training to obtain a trained machine learning model.
  • images (i.e., sample images) captured by the multiple camera modules under the same environment may be input into an image processing unit, and the image processing unit may obtain the effect parameters of the multiple camera modules from the sample images.
  • FIG. 5 shows a schematic diagram of calculating the effect parameters of a camera module.
  • both camera module 1 and camera module 2 have a common viewing area with camera module 3.
  • the images collected by camera module 1 to camera module 3 in the same environment are input into the ISPs to which they are connected.
  • After the ISP processes the input images, the effect parameters of the corresponding camera modules in that environment can be obtained.
  • the ISP performs automatic exposure, automatic white balance, color correction, sharpening and other processing on the images input by each camera module, and then the exposure time, exposure gain, white balance coefficient, color correction matrix, sharpening coefficient and other effect parameters corresponding to each camera module can be obtained.
  • the sample images may also be processed using software algorithms to obtain the effect parameters of the multiple camera modules.
  • FIG. 6 shows a schematic diagram of machine learning model training.
  • the input parameter vector of the machine learning model is the effect parameters of camera module 1 and camera module 2 calculated in FIG. 5.
  • [t_i, g_i, r_wi, g_wi, b_wi, X_i, k_i] is the effect parameter vector of camera module i, including the exposure time t_i, exposure gain g_i, white balance coefficients (r_wi, g_wi, b_wi), color correction matrix X_i, and sharpening coefficient k_i.
  • the output parameter vector of the machine learning model is the predicted effect parameter of the camera module 3.
  • the output effect parameters of camera module 3 are compared with the effect parameters of camera module 3 calculated in FIG. 5 (i.e., the supervision data, or true value), and the difference is calculated; the difference is used to update the relevant parameters of the machine learning model by backpropagation, and the above steps are repeated until the difference between the effect parameters of camera module 3 output by the machine learning model and the effect parameters in the supervision data is less than a certain threshold, thereby completing the training of the machine learning model.
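  • The following is a minimal sketch of this training procedure, assuming PyTorch, a small MLP as the mathematical model, 15 effect parameters per module (exposure time, exposure gain, three white balance coefficients, a flattened 3x3 color correction matrix, and a sharpening coefficient), and random stand-in data in place of the ISP-computed parameters:

```python
import torch
import torch.nn as nn

PARAMS_PER_MODULE = 15  # t, g, (rw, gw, bw), flattened 3x3 CCM, k

# A small MLP standing in for the "mathematical model" of FIG. 6.
model = nn.Sequential(
    nn.Linear(2 * PARAMS_PER_MODULE, 64),  # inputs: params of modules 1 and 2
    nn.ReLU(),
    nn.Linear(64, PARAMS_PER_MODULE),      # output: predicted params of module 3
)
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()
THRESHOLD = 1e-4

# Stand-ins for the ISP-computed effect parameters (inputs) and the
# supervision data, i.e. module 3's parameters computed as in FIG. 5.
inputs = torch.randn(256, 2 * PARAMS_PER_MODULE)
targets = torch.randn(256, PARAMS_PER_MODULE)

for step in range(10_000):
    loss = loss_fn(model(inputs), targets)  # difference vs. the true value
    optimizer.zero_grad()
    loss.backward()                         # backpropagate the difference
    optimizer.step()                        # update the model parameters
    if loss.item() < THRESHOLD:             # stop once the difference is small
        break
```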
  • the machine learning model in the present application can be any machine learning model; for example, it can be the deep learning model shown in the figure, such as a neural network model, or it can be another machine learning model.
  • the mathematical model may be a linear model.
  • After the trained mathematical model is obtained, the image processing method provided by the present application can be executed.
  • the image processing method provided by the present application is described in detail below in conjunction with the flowchart shown in FIG. 7.
  • FIG. 7 is a schematic flowchart of an image processing method provided by the present application.
  • the method can be executed by a terminal device, which includes at least one first camera module and a second camera module, and the at least one first camera module and the second camera module have a common viewing area.
  • the terminal device may be the XR headset shown in FIG. 1
  • the at least one first camera module may be camera module 101
  • the second camera module may be camera module 102
  • camera module 101 and camera module 102 have a common viewing area.
  • the method may include S710 to S730, and each step is described below.
  • S710: Acquire effect parameters of the at least one first camera module according to the real-time image captured by the at least one first camera module.
  • the effect parameters of any first camera module may be obtained by performing image processing on a real-time image captured by any first camera module.
  • the real-time image refers to the image currently being captured.
  • the effect parameters of the at least one first camera module can be obtained through an image processing unit connected to the at least one first camera module.
  • the image taken by any first camera module can be input to the image processing unit connected thereto, and the image processing unit can obtain the effect parameters of the first camera module by processing the input image.
  • the image processing unit can obtain the effect parameters such as exposure time, exposure gain, white balance coefficient, color correction matrix, sharpening coefficient, etc. by performing automatic exposure, automatic white balance, color correction, sharpening and other processing on the input image.
  • S720: Input the effect parameters of the at least one first camera module into the trained mathematical model to obtain the effect parameters of the second camera module output by the trained mathematical model.
  • the trained mathematical model is obtained by training the mathematical model.
  • the input of the mathematical model is the sample image taken by the at least one first camera module
  • the supervision data of the mathematical model is the effect parameters of the second camera module obtained according to the sample image taken by the second camera module. It can be understood that the trained mathematical model can characterize the relationship between the effect parameters of the at least one first camera module and the effect parameters of the second camera module.
  • the at least one first camera module here can be the camera module 1 and the camera module 2 described in Figures 5 and 6, and the second camera module can be the camera module 3.
  • the effect parameters of the second camera module may include only parameters for image capture, may include only parameters for image processing, or may include both parameters for image capture and parameters for image processing. If the effect parameters of the second camera module include parameters for image capture, the second camera module may use the parameters for image capture to capture images. If the effect parameters of the second camera module include parameters for image processing, the parameters for image processing may be used to process the real-time image captured by the second camera module.
  • For example, if the effect parameters of the second camera module include one or more of the exposure time, the exposure gain, and the white balance coefficient, the second camera module can use them to capture the image.
  • If the effect parameters of the second camera module include a color correction matrix and/or a sharpening coefficient, the color correction matrix and/or the sharpening coefficient can be used to color-correct and/or sharpen the real-time image captured by the second camera module, as shown in the sketch below.
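  • A sketch of this split between capture-time parameters and processing-time parameters follows; the camera driver methods and parameter names are hypothetical placeholders, and apply_ccm/sharpen refer to the earlier sketches:

```python
import numpy as np

def apply_effect_params(camera, frame: np.ndarray, params: dict) -> np.ndarray:
    # Capture-side parameters: pushed to the sensor for subsequent exposures
    # (hypothetical driver API).
    camera.set_exposure(params["exposure_time"], params["exposure_gain"])
    camera.set_white_balance(params["wb_gains"])

    # Processing-side parameters: applied to the frame already captured.
    out = apply_ccm(frame, params["ccm"])    # color correction matrix
    out = sharpen(out, params["sharpen_k"])  # sharpening coefficient
                                             # (per channel for a color frame)
    return out
```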
  • In the image processing method for multiple camera modules having a common viewing area, only the effect parameters of some (i.e., one or more) camera modules are calculated, and the effect parameters of the other camera modules are derived using a pre-trained mathematical model. The other camera modules then use the derived effect parameters to capture images, or the derived effect parameters are used to process the images captured by those modules.
  • the method does not need to configure corresponding hardware resources or software resources for other camera modules to calculate effect parameters according to the captured images, and can improve the image effect quality of multiple cameras under the constraints of power consumption, load and hardware cost.
  • FIG. 8 is another schematic flowchart of the image processing method provided by the present application.
  • camera modules 801 to 804 are camera modules in the same terminal device; camera module 801 has a common viewing area with each of camera module 802, camera module 803, and camera module 804, and the viewing angles of camera modules 802, 803, and 804 are not completely the same.
  • the image signal output by camera module 801 is input into ISP 811; ISP 811 can obtain the effect parameters of camera module 801 by processing the input image signal.
  • the effect parameters of camera module 801 are respectively input into the trained mathematical model A, the trained mathematical model B and the trained mathematical model C, and the effect parameters of camera module 802, camera module 803 and camera module 804 can be obtained.
  • camera module 802, camera module 803 and camera module 804 can respectively use their respective effect parameters to perform image processing on the images they have captured, so as to obtain usable images.
  • the image algorithm module may respectively use the effect parameters of the camera module 802 , the camera module 803 , and the camera module 804 to process the image signals output by the corresponding camera modules to obtain a usable image.
  • In this way, the effect parameters of camera module 801 can be calculated through the ISP, and the effect parameters of camera module 802, camera module 803, and camera module 804 can be derived using the trained mathematical models and sent to camera module 802, camera module 803, and camera module 804, or transmitted to the image algorithm module 821 for use, thereby achieving migration and reuse of effect parameters, saving ISP resources while ensuring the effects of the multiple camera modules. A runtime sketch follows below.
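  • The runtime flow just described can be sketched as follows; isp_compute_params and the camera/model objects are assumed interfaces (e.g., the trained MLPs from the earlier training sketch standing in for models A, B, and C):

```python
def derive_and_distribute(frame_801, trained_models, target_cameras):
    """One ISP computes module 801's effect parameters; trained models A/B/C
    derive the parameters for modules 802-804 and hand them out."""
    params_801 = isp_compute_params(frame_801)       # hardware ISP output
    for model, camera in zip(trained_models, target_cameras):
        predicted = model(params_801)                # derived effect parameters
        camera.apply(predicted)                      # or pass to the image
                                                     # algorithm module instead
```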
  • the trained mathematical model A can be obtained by training the mathematical model according to the effect parameters of the camera module 801 and the camera module 802 in the same environment;
  • the trained mathematical model B can be obtained by training the mathematical model according to the effect parameters of the camera module 801 and the camera module 803 in the same environment;
  • the trained mathematical model C can be obtained by training the mathematical model according to the effect parameters of the camera module 801 and the camera module 804 in the same environment.
  • the trained mathematical model A, the trained mathematical model B, and the trained mathematical model C can be of the same or different types, and this application does not limit this.
  • the camera module 801 shown in FIG. 8 may be at least one first camera module in the method 700
  • the camera module 802 , the camera module 803 or the camera module 804 may be a second camera module in the method 700 .
  • the terminal device in the embodiment of the present application can execute the image processing method in the aforementioned embodiment of the present application.
  • the specific working process of the terminal device can refer to the corresponding process in the aforementioned method embodiment.
  • FIG. 9 is a schematic block diagram of a terminal device provided in an embodiment of the present application. It should be understood that the terminal device 900 can execute the image processing method shown in FIG. 7.
  • the terminal device 900 includes: a processing unit 910.
  • the terminal device 900 also includes at least one first camera module and a second camera module, and the at least one first camera module and the second camera module have a common viewing area.
  • the processing unit 910 is used to: obtain effect parameters of the at least one first camera module based on the real-time image captured by the at least one first camera module; input the effect parameters of the at least one first camera module into a trained mathematical model to obtain effect parameters of the second camera module, wherein the trained mathematical model is obtained by training the mathematical model, the input of the mathematical model is the sample image captured by the at least one first camera module, and the supervision data of the mathematical model is the effect parameters of the second camera module obtained based on the sample image captured by the second camera module; and control the second camera module to capture images using the effect parameters of the second camera module, and/or perform image processing on the real-time image captured by the second camera module using the effect parameters of the second camera module.
  • the effect parameter includes one or more of the following: exposure time, exposure gain, white balance coefficient, color correction matrix, or sharpening coefficient.
  • the terminal device is a mixed reality MR device.
  • the processing unit includes at least one image processing unit, and the at least one first camera module is connected to the at least one image processing unit, and the at least one image processing unit is used to: perform image processing on the real-time image taken by the at least one first camera module to obtain effect parameters of the at least one first camera module.
  • the processing unit further includes an image algorithm module, configured to perform image processing on the real-time image captured by the second camera module using part or all of the effect parameters of the second camera module.
  • terminal device 900 is implemented in the form of functional units.
  • unit here can be implemented in the form of software and/or hardware, and is not specifically limited to this.
  • a "unit” may be a software program, a hardware circuit, or a combination of the two to implement the above functions.
  • the hardware circuit may include an application-specific integrated circuit (ASIC), an electronic circuit, a processor (e.g., a shared processor, a dedicated processor, or a group processor) and memory for executing one or more software or firmware programs, a combinational logic circuit, and/or other suitable components supporting the described functions.
  • the units of each example described in the embodiments of the present application can be implemented by electronic hardware, or a combination of computer software and electronic hardware. Whether these functions are performed in hardware or software depends on the specific application and design constraints of the technical solution. Professional and technical personnel can use different methods to implement the described functions for each specific application, but such implementation should not be considered to be beyond the scope of the present application.
  • Fig. 10 shows a schematic diagram of the structure of a terminal device provided by the present application.
  • the terminal device 1000 can be used to implement the method described in the above method embodiment.
  • the terminal device 1000 includes a plurality of camera modules 1006 and one or more processors 1001.
  • the plurality of camera modules may include at least one first camera module and a second camera module as described above.
  • the one or more processors 1001 may support the image processing method in the method embodiment implemented by the terminal device 1000.
  • the processor 1001 may include a general-purpose processor and/or a dedicated processor.
  • the processor 1001 may include one or more of the following: a central processing unit (CPU), a digital signal processor (DSP), an application-specific integrated circuit (ASIC), a field programmable gate array (FPGA) or other programmable logic device, a discrete gate or transistor logic device, or a discrete hardware component.
  • the processor 1001 can be used to control the terminal device 1000, execute software programs, and process data of the software programs.
  • the terminal device 1000 may further include a communication unit 1005 for implementing input (reception) and output (transmission) of signals.
  • the terminal device 1000 may be a chip
  • the communication unit 1005 may be an input and/or output circuit of the chip
  • the communication unit 1005 may be a communication interface of the chip
  • the chip may be a component of a terminal device or other terminal devices.
  • the terminal device 1000 may be a terminal device
  • the communication unit 1005 may be a transceiver of the terminal device
  • the communication unit 1005 may be a transceiver circuit of the terminal device.
  • the terminal device 1000 may include one or more memories 1002 on which a program 1004 is stored.
  • the program 1004 can be executed by the processor 1001 to generate instructions 1003, so that the processor 1001 executes the image processing method described in the above method embodiment according to the instructions 1003.
  • data may also be stored in the memory 1002 .
  • the processor 1001 may also read data stored in the memory 1002 .
  • the data may be stored at the same storage address as the program 1004 , or may be stored at a different storage address from the program 1004 .
  • the processor 1001 and the memory 1002 may be provided separately or integrated together, for example, integrated on a system on chip (SOC) of a terminal device.
  • SOC system on chip
  • the memory 1002 may be used to store a program 1004 related to the image processing method provided in an embodiment of the present application, and the processor 1001 may be used to execute the program 1004 related to the image processing method stored in the memory 1002 .
  • the processor 1001 may be configured to execute various steps/functions of the embodiment shown in FIG. 7 .
  • the present application also provides a computer program product, which, when executed by the processor 1001, implements the image processing method of any method embodiment in the present application.
  • the computer program product may be stored in the memory 1002 , for example, a program 1004 , which is converted into an executable target file that can be executed by the processor 1001 after preprocessing, compiling, assembling, and linking.
  • the present application also provides a computer-readable storage medium on which a computer program is stored, and when the computer program is executed by a computer, the image processing method described in any method embodiment of the present application is implemented.
  • the computer program can be a high-level language program or an executable target program.
  • the computer-readable storage medium is, for example, memory 1002.
  • Memory 1002 may be a volatile memory or a non-volatile memory, or memory 1002 may include both volatile memory and non-volatile memory.
  • the non-volatile memory may be a read-only memory (ROM), a programmable read-only memory (PROM), an erasable programmable read-only memory (EPROM), an electrically erasable programmable read-only memory (EEPROM), or a flash memory.
  • the volatile memory may be a random access memory (RAM), which is used as an external cache. By way of example but not limitation, many forms of RAM are available, such as static random access memory (SRAM), dynamic random access memory (DRAM), synchronous dynamic random access memory (SDRAM), double data rate synchronous dynamic random access memory (DDR SDRAM), enhanced synchronous dynamic random access memory (ESDRAM), synchlink dynamic random access memory (SLDRAM), and direct rambus random access memory (direct rambus RAM).
  • terminal device 1000 shown in FIG. 10 only shows a memory, a processor, and a communication interface
  • the terminal device 1000 may also include other devices necessary for normal operation.
  • the terminal device 1000 may also include hardware devices for implementing other additional functions.
  • the terminal device 1000 may also include only the devices necessary for implementing the embodiments of the present application, and does not necessarily include all the devices shown in FIG. 10.
  • the above embodiments can be implemented in whole or in part by software, hardware, firmware, or any combination thereof.
  • the above embodiments can be implemented in whole or in part in the form of a computer program product.
  • the computer program product includes one or more computer instructions or computer programs. When the computer instructions or computer programs are loaded or executed on a computer, the process or function described in the embodiment of the present application is generated in whole or in part.
  • the computer can be a general-purpose computer, a special-purpose computer, a computer network, or other programmable device.
  • the computer instructions can be stored in a computer-readable storage medium, or transmitted from one computer-readable storage medium to another computer-readable storage medium.
  • the computer instructions can be transmitted from one website, computer, server, or data center to another website, computer, server, or data center by wired (e.g., coaxial cable, optical fiber, digital subscriber line) or wireless (e.g., infrared, radio, microwave) means.
  • the computer-readable storage medium can be any available medium that a computer can access, or a data storage device such as a server or data center that contains one or more available media.
  • the available medium can be a magnetic medium (e.g., a floppy disk, a hard disk, a tape), an optical medium (e.g., a DVD), or a semiconductor medium.
  • the semiconductor medium can be a solid-state drive.
  • "At least one" means one or more, and "more than one" means two or more.
  • "At least one of the following" or similar expressions refers to any combination of these items, including any combination of single items or plural items.
  • at least one of a, b, or c can mean: a, b, c, a-b, a-c, b-c, or a-b-c, where a, b, c can be single or multiple.
  • the sequence numbers of the above-mentioned processes do not imply an order of execution.
  • the execution order of each process should be determined by its function and internal logic, and should not constitute any limitation on the implementation process of the embodiments of the present application.
  • the disclosed systems, devices and methods can be implemented in other ways.
  • the device embodiments described above are only schematic.
  • the division of the units is only a logical function division. There may be other division methods in actual implementation, such as multiple units or components can be combined or integrated into another system, or some features can be ignored or not executed.
  • In addition, the mutual coupling or direct coupling or communication connection shown or discussed may be indirect coupling or communication connection through some interfaces, devices, or units, and may be in electrical, mechanical, or other forms.
  • the units described as separate components may or may not be physically separated, and the components shown as units may or may not be physical units, that is, they may be located in one place or distributed on multiple network units. Some or all of the units may be selected according to actual needs to achieve the purpose of the solution of this embodiment.
  • each functional unit in each embodiment of the present application may be integrated into one processing unit, or each unit may exist physically separately, or two or more units may be integrated into one unit.
  • If the functions are implemented in the form of software functional units and sold or used as independent products, they can be stored in a computer-readable storage medium.
  • Based on this understanding, the technical solution of the present application, or the part that contributes to the prior art, or a part of the technical solution, can be embodied in the form of a software product; the software product is stored in a storage medium and includes several instructions for enabling a computer device (which can be a personal computer, a server, or a network device, etc.) to perform all or part of the steps of the methods described in the embodiments of the present application.
  • the aforementioned storage medium includes: a USB flash drive, a mobile hard disk, a read-only memory (ROM), a random access memory (RAM), a magnetic disk or an optical disk, and other media that can store program codes.

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Data Mining & Analysis (AREA)
  • General Health & Medical Sciences (AREA)
  • Biomedical Technology (AREA)
  • Biophysics (AREA)
  • Computational Linguistics (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Evolutionary Computation (AREA)
  • Artificial Intelligence (AREA)
  • Molecular Biology (AREA)
  • Computing Systems (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Mathematical Physics (AREA)
  • Software Systems (AREA)
  • Health & Medical Sciences (AREA)
  • Studio Devices (AREA)

Abstract

The present application relates to the field of terminals. Provided are an image processing method and a terminal device. The method comprises: acquiring effect parameters of at least one first camera module according to a real-time image captured by the at least one first camera module; inputting the effect parameters of the at least one first camera module into a trained mathematical model, so as to obtain effect parameters of a second camera module, wherein the at least one first camera module and the second camera module have a co-visibility region; and controlling the second camera module to capture an image by using the effect parameters of the second camera module, and/or performing image processing on a real-time image captured by the second camera module by using the effect parameters of the second camera module. In the method, there is no need to configure corresponding hardware resources or software resources for the second camera module to calculate effect parameters according to captured images, such that the image effect and quality of a plurality of cameras can be improved under the constraints of power consumption, loads, and hardware costs.

Description

图像处理方法和终端设备Image processing method and terminal device
本申请要求于2023年02月23日提交国家知识产权局、申请号为202310203636.5、申请名称为“图像处理方法和终端设备”的中国专利申请的优先权,其全部内容通过引用结合在本申请中。This application claims priority to the Chinese patent application filed with the State Intellectual Property Office on February 23, 2023, with application number 202310203636.5 and application name “Image Processing Method and Terminal Device”, all contents of which are incorporated by reference in this application.
技术领域Technical Field
本申请涉及图像处理领域,具体地涉及一种图像处理方法和终端设备。The present application relates to the field of image processing, and in particular to an image processing method and a terminal device.
背景技术Background Art
扩展现实(extended reality,XR)是增强现实(augmented reality,AR)、虚拟现实(virtual reality,VR)、混合现实(mixed reality,MR)等多种技术的统称。XR场景下,用户可以通过多种相机完成对真实世界的感知、定位或交互等功能。为了更好的XR体验,需要不同相机在复杂的用户使用环境下能够稳定地拍摄高质量的图像。现有的XR头显采用的图像效果调试分两类,一是硬件调试,二是软件图像算法调试。如果多种相机均采用硬件调试,则功耗及成本较高;如果多种相机均采用软件调试,则占用的系统负载较高。Extended reality (XR) is a general term for multiple technologies such as augmented reality (AR), virtual reality (VR), and mixed reality (MR). In XR scenarios, users can use multiple cameras to perceive, locate, or interact with the real world. In order to have a better XR experience, different cameras need to be able to stably capture high-quality images in complex user environments. The image effect debugging used by existing XR headsets is divided into two categories: hardware debugging and software image algorithm debugging. If multiple cameras use hardware debugging, the power consumption and cost will be high; if multiple cameras use software debugging, the system load will be high.
因此,如何在功耗、负载及硬件成本约束下提高多相机的图像效果质量,是一个亟待解决的问题。Therefore, how to improve the image quality of multiple cameras under the constraints of power consumption, load and hardware cost is an urgent problem to be solved.
发明内容Summary of the invention
本申请提供了一种图像处理方法和终端设备,能够在功耗、负载及硬件成本约束下提高多相机的图像效果质量。The present application provides an image processing method and a terminal device, which can improve the image effect quality of multiple cameras under the constraints of power consumption, load and hardware cost.
第一方面,提供了一种图像处理方法,应用于终端设备。该终端设备包括至少一个第一相机模组和第二相机模组,该至少一个第一相机模组和该第二相机模组存在共视区。该方法包括:根据该至少一个第一相机模组拍摄的实时图像,获取该至少一个第一相机模组的效果参数;将该至少一个第一相机模组的效果参数输入至训练后的数学模型,得到该第二相机模组的效果参数,该训练后的数学模型是通过对该数学模型进行模型训练得到的,该数学模型的输入为该至少一个第一相机模组拍摄的样本图像,该数学模型的监督数据为根据该第二相机模组拍摄的样本图像得到的该第二相机模型的效果参数;控制该第二相机模组采用该第二相机模组的效果参数进行图像拍摄,和/或,采用该第二相机模组的效果参数对该第二相机模组所拍摄的实时图像进行图像处理。In a first aspect, an image processing method is provided, which is applied to a terminal device. The terminal device includes at least one first camera module and a second camera module, and there is a common viewing area between the at least one first camera module and the second camera module. The method includes: obtaining effect parameters of the at least one first camera module according to the real-time image captured by the at least one first camera module; inputting the effect parameters of the at least one first camera module into a trained mathematical model to obtain the effect parameters of the second camera module, the trained mathematical model is obtained by training the mathematical model, the input of the mathematical model is a sample image captured by the at least one first camera module, and the supervision data of the mathematical model is the effect parameters of the second camera model obtained according to the sample image captured by the second camera module; controlling the second camera module to use the effect parameters of the second camera module to capture the image, and/or using the effect parameters of the second camera module to perform image processing on the real-time image captured by the second camera module.
根据本申请提供的图像处理方法，对于具有共视区的多个相机模组，仅计算部分(即，一个或多个)相机模组的效果参数，利用预先训练好的数学模型推导出其他相机模组的效果参数，然后其他相机模组利用推导出的效果参数进行图像拍摄或者利用推导出的效果参数对其他相机模组拍摄的图像进行相应的图像处理。该方法不需要为其他相机模组配置相应的硬件资源或者软件资源来根据所拍摄的图像计算效果参数，能够在功耗、负载及硬件成本约束下提高多相机的图像效果质量。According to the image processing method provided by the present application, for multiple camera modules with a common viewing area, only the effect parameters of some (i.e., one or more) camera modules are calculated, the effect parameters of other camera modules are derived using a pre-trained mathematical model, and the other camera modules then use the derived effect parameters to capture images or use the derived effect parameters to perform corresponding image processing on images captured by those camera modules. This method does not require configuring corresponding hardware or software resources for the other camera modules to calculate effect parameters from the captured images, and can improve the image effect quality of multiple cameras under the constraints of power consumption, load and hardware cost.
在一种可能的实现方式中,该数学模型可以是深度学习模型,如神经网络模型。In one possible implementation, the mathematical model may be a deep learning model, such as a neural network model.
在一种可能的实现方式中,该效果参数包括下述中的一项或多项:曝光时间、曝光增益、白平衡系数、色彩校正矩阵、或者锐化系数。In a possible implementation, the effect parameter includes one or more of the following: exposure time, exposure gain, white balance coefficient, color correction matrix, or sharpening coefficient.
在一种可能的实现方式中,该终端设备为混合现实MR设备。In a possible implementation, the terminal device is a mixed reality MR device.
在一种可能的实现方式中,该至少一个第一相机模组与至少一个图像处理单元连接,该至少一个图像处理单元为硬件资源,该根据该至少一个第一相机模组拍摄的实时图像,获取该至少一个第一相机模组的效果参数,包括:控制该至少一个图像处理单元对该至少一个第一相机模组拍摄的实时图像进行图像处理,得到该至少一个第一相机模组的效果参数。该方案可以通过图像处理单元计算该至少一个第一相机模组的效果参数。In a possible implementation, the at least one first camera module is connected to at least one image processing unit, and the at least one image processing unit is a hardware resource. The method of obtaining the effect parameters of the at least one first camera module according to the real-time image captured by the at least one first camera module includes: controlling the at least one image processing unit to perform image processing on the real-time image captured by the at least one first camera module to obtain the effect parameters of the at least one first camera module. The solution can calculate the effect parameters of the at least one first camera module through the image processing unit.
示例性的，该图像处理单元可以是图像信号处理器(image signal processor,ISP)、图形处理器(graphics processing unit,GPU)、数字信号处理器(digital signal processor,DSP)或者其他可以进行图像处理的硬件资源。Exemplarily, the image processing unit may be an image signal processor (ISP), a graphics processing unit (GPU), a digital signal processor (DSP), or other hardware resources capable of performing image processing.
在一种可能的实现方式中，该采用该第二相机模组的效果参数对该第二相机模组所拍摄的实时图像进行图像处理，包括：控制图像算法模块采用该第二相机模组的效果参数中的部分或全部效果参数对该第二相机模组所拍摄的实时图像进行图像处理，该图像算法模块为软件资源。In a possible implementation, the performing image processing on the real-time image captured by the second camera module using the effect parameters of the second camera module includes: controlling an image algorithm module to perform image processing on the real-time image captured by the second camera module using part or all of the effect parameters, where the image algorithm module is a software resource.
第二方面,提供了一种终端设备,包括用于执行第一方面以及第一方面任意一种可能实现方式的方法的功能模块/单元。In a second aspect, a terminal device is provided, comprising a functional module/unit for executing the method of the first aspect and any possible implementation manner of the first aspect.
第三方面,提供一种计算机可读介质,该计算机可读介质存储用于设备执行的程序代码,该程序代码包括用于执行第一方面或者第一方面的任意一种实现方式中的方法。According to a third aspect, a computer-readable medium is provided, wherein the computer-readable medium stores a program code for execution by a device, wherein the program code includes a method for executing the first aspect or any one of the implementations of the first aspect.
第四方面,提供了一种计算机程序产品,所述计算机程序产品包括:计算机程序代码,当所述计算机程序代码在计算机上运行时,使得计算机执行第一方面或者第一方面的任意一种实现方式中的方法。According to a fourth aspect, a computer program product is provided, comprising: a computer program code, which, when executed on a computer, enables the computer to execute the method according to the first aspect or any one of the implementations of the first aspect.
需要说明的是,上述计算机程序代码可以全部或者部分存储在第一存储介质上,其中第一存储介质可以与处理器封装在一起的,也可以与处理器单独封装,本申请实施例对此不作具体限定。It should be noted that the above-mentioned computer program code can be stored in whole or in part on the first storage medium, wherein the first storage medium can be packaged together with the processor or separately packaged with the processor, and the embodiments of the present application do not specifically limit this.
第五方面，提供了一种芯片系统，所述芯片系统应用于终端设备，所述芯片系统包括一个或多个处理器，所述处理器用于调用计算机指令以使得所述终端设备执行第一方面或者第一方面中的任意一种实现方式中的方法。In a fifth aspect, a chip system is provided, which is applied to a terminal device, and the chip system includes one or more processors, where the processor is used to call computer instructions so that the terminal device executes the method in the first aspect or any one of the implementations of the first aspect.
可选地,作为一种实现方式,所述芯片还可以包括存储器,所述存储器中存储有指令,所述处理器用于执行所述存储器上存储的指令,当所述指令被执行时,所述处理器用于执行第一方面或者第一方面中的任意一种实现方式中的方法。Optionally, as an implementation method, the chip may also include a memory, in which instructions are stored, and the processor is used to execute the instructions stored in the memory. When the instructions are executed, the processor is used to execute the method in the first aspect or any one of the implementation methods of the first aspect.
第六方面，提供一种终端设备，所述终端设备包括一个或多个处理器、存储器、多个相机模组；所述存储器与所述一个或多个处理器耦合，所述存储器用于存储计算机程序代码，所述计算机程序代码包括计算机指令，所述一个或多个处理器调用所述计算机指令以使得所述终端设备执行第一方面或者第一方面中的任意一种实现方式中的方法。In a sixth aspect, a terminal device is provided, comprising one or more processors, a memory, and multiple camera modules; the memory is coupled to the one or more processors, the memory is used to store computer program code, the computer program code comprises computer instructions, and the one or more processors call the computer instructions to enable the terminal device to execute the method in the first aspect or any one of the implementations of the first aspect.
附图说明BRIEF DESCRIPTION OF THE DRAWINGS
图1是本申请实施例提供的一种XR头显的示意图;FIG1 is a schematic diagram of an XR head display provided in an embodiment of the present application;
图2是本申请实施例提供的一种通过硬件进行图像处理的流程示意图;FIG2 is a schematic diagram of a process of performing image processing by hardware provided in an embodiment of the present application;
图3是本申请实施例提供的一种通过软件进行图像处理的流程示意图;FIG3 is a schematic diagram of a flow chart of image processing by software provided in an embodiment of the present application;
图4是本申请实施例提供的一种终端设备的相机系统的示意图;FIG4 is a schematic diagram of a camera system of a terminal device provided in an embodiment of the present application;
图5是本申请实施例提供的一种计算相机模组的效果参数的示意图;FIG5 is a schematic diagram of calculating effect parameters of a camera module provided by an embodiment of the present application;
图6是本申请实施例提供的一种机器学习模型训练示意图;FIG6 is a schematic diagram of a machine learning model training provided in an embodiment of the present application;
图7是本申请实施例提供的一种图像处理方法的示意性流程图;FIG7 is a schematic flow chart of an image processing method provided in an embodiment of the present application;
图8是本申请实施例提供的图像处理方法的另一示意性流程图；FIG. 8 is another schematic flowchart of an image processing method provided in an embodiment of the present application;
图9是本申请实施例提供的一种终端设备的示意性框图;FIG9 is a schematic block diagram of a terminal device provided in an embodiment of the present application;
图10是本申请实施例提供的一种终端设备的结构示意图。FIG. 10 is a schematic diagram of the structure of a terminal device provided in an embodiment of the present application.
具体实施方式DETAILED DESCRIPTION
下面结合本申请实施例中的附图,对本申请实施例中的技术方案进行描述。其中,在本申请实施例的描述中,以下实施例中所使用的术语只是为了描述特定实施例的目的,而并非旨在作为对本申请的限制。如在本申请的说明书和所附权利要求书中所使用的那样,单数表达形式“一种”、“所述”、“上述”、“该”和“这一”旨在也包括例如“一个或多个”这种表达形式,除非其上下文中明确地有相反指示。还应当理解,在本申请以下各实施例中,“至少一个”、“一个或多个”是指一个或两个以上(包含两个)。术语“和/或”,用于描述关联对象的关联关系,表示可以存在三种关系;例如,A和/或B,可以表示:单独存在A,同时存在A和B,单独存在B的情况,其中A、B可以是单数或者复数。字符“/”一般表示前后关联对象是一种“或”的关系。The technical solutions in the embodiments of the present application are described below in conjunction with the drawings in the embodiments of the present application. Among them, in the description of the embodiments of the present application, the terms used in the following embodiments are only for the purpose of describing specific embodiments, and are not intended to be used as limitations on the present application. As used in the specification and the appended claims of the present application, the singular expressions "a kind", "said", "above", "the" and "this" are intended to also include expressions such as "one or more", unless there is a clear contrary indication in the context. It should also be understood that in the following embodiments of the present application, "at least one", "one or more" refer to one or more (including two). The term "and/or" is used to describe the association relationship of associated objects, indicating that three relationships can exist; for example, A and/or B can represent: A exists alone, A and B exist at the same time, and B exists alone, where A and B can be singular or plural. The character "/" generally indicates that the associated objects before and after are in a kind of "or" relationship.
在本说明书中描述的参考“一个实施例”或“一些实施例”等意味着在本申请的一个或多个实施例中包括结合该实施例描述的特定特征、结构或特点。由此，在本说明书中的不同之处出现的语句“在一个实施例中”、“在一些实施例中”、“在其他一些实施例中”、“在另外一些实施例中”等不是必然都参考相同的实施例，而是意味着“一个或多个但不是所有的实施例”，除非是以其他方式另外特别强调。术语“包括”、“包含”、“具有”及它们的变形都意味着“包括但不限于”，除非是以其他方式另外特别强调。术语“连接”包括直接连接和间接连接，除非另外说明。“第一”、“第二”仅用于描述目的，而不能理解为指示或暗示相对重要性或者隐含指明所指示的技术特征的数量。References to "one embodiment" or "some embodiments" etc. described in this specification mean that a particular feature, structure or characteristic described in conjunction with the embodiment is included in one or more embodiments of the present application. Thus, the phrases "in one embodiment", "in some embodiments", "in some other embodiments", "in other embodiments", etc. appearing in different places in this specification do not necessarily refer to the same embodiment, but mean "one or more but not all embodiments", unless otherwise specifically emphasized. The terms "include", "comprising", "having" and their variations all mean "including but not limited to", unless otherwise specifically emphasized. The term "connection" includes direct connection and indirect connection, unless otherwise specified. "First" and "second" are used for descriptive purposes only and should not be understood as indicating or implying relative importance or implicitly indicating the number of the indicated technical features.
在本申请实施例中,“示例性地”或者“例如”等词用于表示作例子、例证或说明。本申请实施例中被描述为“示例性地”或者“例如”的任何实施例或设计方案不应被解释为比其它实施例或设计方案更优选或更具优势。确切而言,使用“示例性地”或者“例如”等词旨在以具体方式呈现相关概念。In the embodiments of the present application, the words "exemplarily" or "for example" are used to indicate examples, illustrations or explanations. Any embodiment or design described as "exemplarily" or "for example" in the embodiments of the present application should not be interpreted as being more preferred or more advantageous than other embodiments or designs. Specifically, the use of words such as "exemplarily" or "for example" is intended to present related concepts in a specific way.
扩展现实XR是指通过计算机将真实与虚拟相结合,打造一个可人机交互的虚拟环境。XR也是AR、VR、以及MR等多种技术的统称。XR场景下,用户可以通过XR终端设备(比如,XR头显)中的多个相机模组完成对真实世界的感知、定位或交互等功能。Extended Reality (XR) refers to the combination of reality and virtuality through computers to create a virtual environment that allows human-computer interaction. XR is also a general term for multiple technologies such as AR, VR, and MR. In XR scenarios, users can use multiple camera modules in XR terminal devices (such as XR headsets) to perceive, locate, or interact with the real world.
示例性的，图1示出了一种XR头显的示意图。参见图1，XR头显100上设置有多个相机模组，例如图中所示的相机模组101至相机模组104。相机模组101至相机模组104中的至少两个相机模组存在共视区，即相机模组101至相机模组104中的至少两个相机模组的部分视角重叠。比如，相机模组101和相机模组102的部分视角重叠，相机模组103和相机模组104的部分视角重叠。比如，参见图1，相机模组101和相机模组102拍摄的图像中具有部分相同的画面(即，人举起自行车的画面)。另外，相机模组102和相机模组103也可以部分视角重叠。在一些实施例中，部分视角重叠的两个相机模组的功能或者类型不同。比如，相机模组101和相机模组102的部分视角重叠，相机模组101能够实现视频透视功能，相机模组102能够实现矢量空间定位功能。Exemplarily, FIG. 1 shows a schematic diagram of an XR headset. Referring to FIG. 1, a plurality of camera modules are provided on the XR headset 100, such as camera modules 101 to 104 shown in the figure. At least two of camera modules 101 to 104 have a common viewing area, that is, the viewing angles of at least two of camera modules 101 to 104 partially overlap. For example, the viewing angles of camera module 101 and camera module 102 partially overlap, and the viewing angles of camera module 103 and camera module 104 partially overlap. For example, referring to FIG. 1, the images captured by camera module 101 and camera module 102 contain partly the same picture (i.e., the picture of a person lifting a bicycle). In addition, the viewing angles of camera module 102 and camera module 103 may also partially overlap. In some embodiments, the two camera modules whose viewing angles partially overlap differ in function or type. For example, the viewing angles of camera module 101 and camera module 102 partially overlap; camera module 101 can realize a video see-through function, and camera module 102 can realize a vector space positioning function.
为了更好的XR体验,需要不同相机模组在复杂的用户使用环境下能够稳定地拍摄高质量的图像。然而,由于相机模组中的镜头和图像传感器的物理缺陷,相机模组输出的原始图像一般质量较差。为了从相机模组输出的原始图像中获得可用的图像,通常需要对原始图像进行进一步处理,比如,可以对原始图像进行自动曝光(auto exposure,AE)、自动白平衡(auto white balance,AWB)、颜色校正、锐化(sharpening)等处理。下面对这几种处理方式进行简要说明。In order to provide a better XR experience, different camera modules need to be able to stably capture high-quality images in complex user environments. However, due to the physical defects of the lens and image sensor in the camera module, the original image output by the camera module is generally of poor quality. In order to obtain a usable image from the original image output by the camera module, it is usually necessary to further process the original image. For example, the original image can be processed by auto exposure (AE), auto white balance (AWB), color correction, sharpening, etc. The following is a brief description of these processing methods.
自动曝光:根据光线的强弱自动调整曝光时间和曝光增益,从而实现曝光量的调整,确保在不同的照明条件和场景中拍摄的照片获得准确的曝光从而具有合适的亮度,防止曝光过度或者不足。Automatic exposure: Automatically adjust the exposure time and exposure gain according to the intensity of light, so as to adjust the exposure amount, ensure that the photos taken in different lighting conditions and scenes are accurately exposed and have appropriate brightness, and prevent overexposure or underexposure.
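To make this concrete, the following is a minimal sketch of a mean-luminance auto exposure step (all code in this document is illustrative Python; the target value, the limits and the function name are assumptions for illustration, not the specific AE algorithm of this application):

```python
import numpy as np

def auto_exposure(image, exposure_time, gain,
                  target_luma=0.45, max_time=0.02, max_gain=8.0):
    """Adjust exposure time and gain so the mean luminance approaches target_luma.

    image: float array in [0, 1]; exposure_time in seconds; gain: current total gain.
    """
    mean_luma = max(float(np.mean(image)), 1e-6)   # guard against a fully black frame
    ratio = target_luma / mean_luma                # desired change in total exposure
    # Prefer lengthening the exposure time (less noise); once the time limit is
    # reached, make up the remainder with gain.
    new_time = min(exposure_time * ratio, max_time)
    new_gain = float(np.clip(gain * ratio * exposure_time / new_time, 1.0, max_gain))
    return new_time, new_gain
```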
自动白平衡:与色温相关,用于衡量图像的色彩真实性和准确性。具体地,可以根据输入图像确定光源的色温,然后根据色温计算R/G/B的系数(或增益),最后调整R/G/B的系数实现对输入图像的校正,从而在各种复杂场景下精确的还原物体本来的颜色。Automatic white balance: related to color temperature, used to measure the color authenticity and accuracy of an image. Specifically, the color temperature of the light source can be determined based on the input image, and then the R/G/B coefficients (or gains) can be calculated based on the color temperature. Finally, the R/G/B coefficients can be adjusted to correct the input image, thereby accurately restoring the original color of the object in various complex scenes.
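As one common way to realize the white balance estimation described above, a gray-world sketch is given below; the gray-world assumption and the helper name are illustrative choices, not a method mandated by this application:

```python
import numpy as np

def gray_world_awb(image):
    """Estimate white balance gains under the gray-world assumption.

    image: H x W x 3 float RGB array in [0, 1]. Returns the per-channel gains
    (rw, gw, bw), normalized so the green gain is 1, plus the corrected image.
    """
    means = image.reshape(-1, 3).mean(axis=0)        # average R, G, B of the scene
    gains = means[1] / np.maximum(means, 1e-6)       # scale R and B toward G
    corrected = np.clip(image * gains, 0.0, 1.0)     # apply gains per channel
    return tuple(float(g) for g in gains), corrected
```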
颜色校正：主要为了校正在滤光板处各颜色块之间的颜色渗透带来的颜色误差。一般颜色校正的过程是首先利用图像传感器拍摄到的图像与标准图像相比较，以此来计算得到一个校正矩阵。该矩阵就是该图像传感器的颜色校正矩阵。在该图像传感器应用的过程中，即可以利用该矩阵对该图像传感器所拍摄的所有图像来进行校正，以获得最接近于物体真实颜色的图像。Color correction: mainly to correct the color error caused by color penetration between color blocks at the filter plate. The general color correction process is to first compare the image captured by the image sensor with a standard image to calculate a correction matrix. This matrix is the color correction matrix of the image sensor. When the image sensor is in use, the matrix can then be used to correct all images captured by the image sensor to obtain an image that is closest to the true color of the object.
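The following sketch shows how a 3x3 color correction matrix, once calibrated, can be applied to every pixel of a captured image; the matrix values below are hypothetical, chosen only so that each row sums to 1 and gray tones are preserved:

```python
import numpy as np

def apply_ccm(image, ccm):
    """Multiply every RGB pixel of an H x W x 3 image by a 3x3 color correction matrix."""
    h, w, _ = image.shape
    out = image.reshape(-1, 3) @ np.asarray(ccm).T   # out_pixel = ccm @ in_pixel
    return np.clip(out, 0.0, 1.0).reshape(h, w, 3)

# Illustrative matrix only; a real CCM is calibrated against a standard image.
example_ccm = [[ 1.50, -0.30, -0.20],
               [-0.25,  1.45, -0.20],
               [-0.10, -0.40,  1.50]]
```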
锐化:是使图像边缘更加清晰的一种图像处理方法。其原理主要就是将原图像的高频分量提取出来,再和原图像按一定规则(涉及到锐化系数)叠加起来,最终得到的图像就是锐化后的图像。Sharpening: It is an image processing method that makes the image edge clearer. Its main principle is to extract the high-frequency components of the original image, and then superimpose them with the original image according to certain rules (involving the sharpening coefficient). The final image is the sharpened image.
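A minimal unsharp-masking sketch matching the principle above (low-pass the image, take the difference as the high-frequency component, and superimpose it weighted by the sharpening coefficient k); the Gaussian blur and the default coefficients are illustrative choices:

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def sharpen(image, k=0.8, sigma=1.5):
    """Unsharp masking: add the high-frequency component back, weighted by k.

    image: H x W x 3 float array in [0, 1]; k is the sharpening coefficient.
    """
    low = gaussian_filter(image, sigma=(sigma, sigma, 0))  # blur spatial axes only
    high = image - low                                     # extracted high frequencies
    return np.clip(image + k * high, 0.0, 1.0)
```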
上述的每一种处理方式都对应一个或多个参数,这些参数可以称为相机模组的效果参数。在相同的环境或场景下,相机模组或者相应的处理单元可以采用该效果参数进行图像拍摄或者对所拍摄的图像进行图像处理。比如,经过自动曝光后,可以得到合适的曝光时间和曝光增益,该曝光时间和该曝光增益可以认为是相机模组的两个效果参数,相机模组可以采用该曝光时间和该曝光增益进行图像拍摄。又如,经过自动白平衡后,可以得到合适白平衡系数,该白平衡系数可以认为是相机模组的一个效果参数,相机模组可以采用该白平衡系数进行图像拍摄。类似地,经过颜色校正和锐化处理,可以得到对应的颜色校正矩阵和锐化系数,该颜色校正矩阵和该锐化系数均为相机模组的效果参数,可以采用该颜色校正矩阵和该锐化系数对相机模组所拍摄的图像进行处理。Each of the above processing methods corresponds to one or more parameters, which can be called effect parameters of the camera module. In the same environment or scene, the camera module or the corresponding processing unit can use the effect parameters to capture images or perform image processing on the captured images. For example, after automatic exposure, a suitable exposure time and exposure gain can be obtained, and the exposure time and exposure gain can be considered as two effect parameters of the camera module, and the camera module can use the exposure time and exposure gain to capture images. For another example, after automatic white balance, a suitable white balance coefficient can be obtained, and the white balance coefficient can be considered as an effect parameter of the camera module, and the camera module can use the white balance coefficient to capture images. Similarly, after color correction and sharpening processing, the corresponding color correction matrix and sharpening coefficient can be obtained, and the color correction matrix and the sharpening coefficient are both effect parameters of the camera module, and the color correction matrix and the sharpening coefficient can be used to process the image captured by the camera module.
应理解,上文中仅介绍了部分图像处理方式,实际上也可以采用更多或更少的处理方式对原始图像进行处理,比如还可以对原始图像进行噪声去除、坏点去除、内插等处理;相应地,效果参数可以包括相应处理所采用的参数。It should be understood that only some image processing methods are introduced above. In fact, more or fewer processing methods can be used to process the original image. For example, the original image can also be processed by noise removal, bad pixel removal, interpolation, etc.; accordingly, the effect parameters can include the parameters used for the corresponding processing.
上文所描述的图像处理可以在硬件中实现，也可以通过软件算法实现。下面结合图2和图3，对这两种实现方式进行简要说明。The image processing described above can be implemented in hardware or through software algorithms. These two implementations are briefly described below with reference to FIG. 2 and FIG. 3.
图2是一种通过硬件进行图像处理的流程示意图。参见图2,相机模组输出图像信号,图像处理单元可以对该图像信号进行后期处理,比如,自动曝光、自动白平衡、颜色校正、锐化等,从而在不同的光学条件下较好的还原现场细节。图像处理单元对图像信号进行处理后,输出处理后的图像信号至处理器,由处理器再进行进一步处理,比如在显示屏上呈现处理后的图像信号对应的图像。FIG2 is a flow chart of image processing by hardware. Referring to FIG2 , the camera module outputs an image signal, and the image processing unit can perform post-processing on the image signal, such as automatic exposure, automatic white balance, color correction, sharpening, etc., so as to better restore the details of the scene under different optical conditions. After processing the image signal, the image processing unit outputs the processed image signal to the processor, which further processes it, such as presenting an image corresponding to the processed image signal on a display screen.
示例性的，本申请中的图像处理单元为能够进行图像处理的硬件资源。比如，本申请中的图像处理单元可以是图像信号处理器(image signal processor,ISP)、图形处理器(graphics processing unit,GPU)、数字信号处理器(digital signal processor,DSP)或者其他可以进行图像处理的硬件资源。Exemplarily, the image processing unit in the present application is a hardware resource capable of performing image processing. For example, the image processing unit in the present application may be an image signal processor (ISP), a graphics processing unit (GPU), a digital signal processor (DSP), or other hardware resources capable of performing image processing.
示例性的，本申请中的处理器可以运行多种图像处理算法以及控制外围设备。比如，处理器可以是中央处理单元(central processing unit,CPU)、GPU或者其他类型的处理器。Exemplarily, the processor in the present application can run a variety of image processing algorithms and control peripheral devices. For example, the processor can be a central processing unit (CPU), a GPU, or other types of processors.
根据图像处理单元的处理能力不同,一个图像处理单元可能只能同时处理一个相机模组输出的图像信号,也可能能够同时处理多个(一般为2个)相机模组输入的图像信号。如果XR头显的多个相机模组均采用硬件进行图像处理,那么就需要多个图像处理单元,这将带来较高的成本和功耗。Depending on the processing capabilities of the image processing unit, an image processing unit may only be able to process the image signal output by one camera module at a time, or it may be able to process the image signals input by multiple (usually 2) camera modules at the same time. If multiple camera modules of the XR headset use hardware for image processing, then multiple image processing units are required, which will result in higher costs and power consumption.
图3是一种通过软件进行图像处理的流程示意图。参见图3,相机模组输出图像信号至处理器,该处理器中的图像算法模块对该图像信号进行后期处理,比如,自动曝光、自动白平衡、颜色校正、锐化等,从而在不同的光学条件下较好的还原现场细节。经过处理器处理后的图像信号可以由处理器中的其他模块再进行进一步处理,比如在显示屏上呈现处理后的图像信号对应的图像。FIG3 is a flow chart of image processing by software. Referring to FIG3 , the camera module outputs an image signal to the processor, and the image algorithm module in the processor performs post-processing on the image signal, such as automatic exposure, automatic white balance, color correction, sharpening, etc., so as to better restore the details of the scene under different optical conditions. The image signal processed by the processor can be further processed by other modules in the processor, such as presenting an image corresponding to the processed image signal on a display screen.
示例性的,本申请中的图像算法模块是指用于图像信号处理的软件算法。Exemplarily, the image algorithm module in the present application refers to a software algorithm for image signal processing.
图像算法模块对图像信号进行处理的过程中会占用一定的负载,如果XR头显的多个相机模组均采用这种软件调试方法,那么将带来较高的负载。The image algorithm module will occupy a certain load in the process of processing image signals. If multiple camera modules of the XR headset adopt this software debugging method, it will bring a higher load.
然而,为保证XR头显的正常工作,通常对功耗、负载及硬件成本都有一定的约束。如何在功耗、负载及硬件成本约束下提高多相机模组的图像效果质量,是一个需要考虑的问题。However, to ensure the normal operation of XR headsets, there are usually certain constraints on power consumption, load, and hardware cost. How to improve the image quality of multi-camera modules under the constraints of power consumption, load, and hardware cost is an issue that needs to be considered.
有鉴于此，本申请提供了一种图像处理方法，该方法可以在具有共视区的多个相机模组共同使用场景下，仅计算部分相机模组的效果参数，利用预先训练好的数学模型推导出其他相机模组的效果参数，然后其他相机模组利用推导出的效果参数进行图像拍摄或者利用推导出的效果参数对其他相机模组拍摄的图像进行相应的图像处理。该方法不需要为其他相机模组配置相应的硬件资源或者软件资源来根据所拍摄的图像计算效果参数，能够在功耗、负载及硬件成本约束下提高多相机的图像效果质量。In view of this, the present application provides an image processing method, which, in a scenario where multiple camera modules with a common viewing area are used together, calculates the effect parameters of only some camera modules, uses a pre-trained mathematical model to derive the effect parameters of the other camera modules, and then the other camera modules use the derived effect parameters to capture images or use the derived effect parameters to perform corresponding image processing on images captured by those camera modules. This method does not require configuring corresponding hardware or software resources for the other camera modules to calculate effect parameters from the captured images, and can improve the image effect quality of multiple cameras under the constraints of power consumption, load and hardware cost.
本申请提供的方法可以应用于包括具有共视区的多个相机模组的终端设备中。例如，该终端设备可以是VR设备或者MR设备。本申请对该终端设备的类型不作限定。The method provided in the present application can be applied to a terminal device including multiple camera modules having a common viewing area. For example, the terminal device can be a VR device or an MR device. The present application does not limit the type of the terminal device.
应理解,相机模组有时也被称为相机或者摄像头。It should be understood that a camera module is sometimes also referred to as a camera or a camera.
示例性的,图4示出了本申请提供的一种终端设备的相机系统的示意图。参见图4,该相机系统可以包括多个相机模组,比如,包括相机模组1至相机模组N,N≥2。其中,与相机模组1至相机模组M分别对应的图像处理单元可以分别对相机模组1至相机模组M输出的图像信号进行图像处理。例如,图像处理单元可以对图像进行自动曝光、自动白平衡、颜色校正、锐化等处理。处理器可以基于相机模组1至相机模组M对应的图像处理单元的效果参数,采用预先训练好的数学模型计算剩余部分或全部相机模组,比如相机模组(M+1)至相机模组N的效果参数。比如,相机模组1和相机模组3以及相机模组4都存在共视区,则可以根据相机模组1的效果参数得到相机模组3和相机模组4的效果参数,相机模组2和相机模组5存在共视区,则可以根据相机模组2的效果参数得到相机模组5的效果参数。在得到剩余部分或全部相机模组的效果参数后,可以采用相应相机模组的效果参数,进行图像拍摄和/或对该相机模组的图像信号进行图像处理。Exemplarily, FIG4 shows a schematic diagram of a camera system of a terminal device provided by the present application. Referring to FIG4, the camera system may include multiple camera modules, for example, including camera module 1 to camera module N, N ≥ 2. Among them, the image processing units corresponding to camera module 1 to camera module M can respectively perform image processing on the image signals output by camera module 1 to camera module M. For example, the image processing unit can perform automatic exposure, automatic white balance, color correction, sharpening and other processing on the image. The processor can calculate the effect parameters of the remaining part or all of the camera modules, such as camera module (M+1) to camera module N, based on the effect parameters of the image processing units corresponding to camera module 1 to camera module M, using a pre-trained mathematical model. For example, if camera module 1, camera module 3 and camera module 4 all have a common viewing area, the effect parameters of camera module 3 and camera module 4 can be obtained according to the effect parameters of camera module 1, and if camera module 2 and camera module 5 have a common viewing area, the effect parameters of camera module 5 can be obtained according to the effect parameters of camera module 2. After obtaining the effect parameters of the remaining part or all of the camera modules, the effect parameters of the corresponding camera module may be used to capture images and/or perform image processing on the image signals of the camera modules.
在一些实施例中，参见图4中的(a)，处理器可以将计算得到的相机模组(M+1)至相机模组N的效果参数分别发送给相机模组(M+1)至相机模组N，相机模组(M+1)至相机模组N可以采用其对应的效果参数对图像信号进行图像处理，然后可以将处理后的图像信号输出至处理器。在一些实施例中，参见图4中的(b)，处理器可以将计算得到的相机模组(M+1)至相机模组N的效果参数分别发送给相机模组(M+1)至相机模组N对应的图像算法模块，可以由图像算法模块采用相机模组的效果参数对该相机模组输出的图像信号进行图像处理，并输出处理后的图像信号。在一些实施例中，相机模组可以采用部分效果参数进行图像拍摄，图像算法模块可以采用剩余部分效果参数对该相机模组输出的图像信号进行图像处理，并输出处理后的图像信号。应理解，图像算法模块可以为软件算法模块，其可以由相机系统中其他硬件资源执行，比如可以由处理器执行，也可以由其他具有处理功能的硬件资源执行。In some embodiments, referring to (a) in FIG. 4, the processor may send the calculated effect parameters of camera module (M+1) to camera module N to camera module (M+1) to camera module N respectively; camera module (M+1) to camera module N may use their corresponding effect parameters to perform image processing on the image signals, and then output the processed image signals to the processor. In some embodiments, referring to (b) in FIG. 4, the processor may send the calculated effect parameters of camera module (M+1) to camera module N to the image algorithm modules corresponding to camera module (M+1) to camera module N respectively, and the image algorithm module may use the effect parameters of a camera module to perform image processing on the image signal output by that camera module and output the processed image signal. In some embodiments, a camera module may use some of its effect parameters to capture images, and the image algorithm module may use the remaining effect parameters to perform image processing on the image signal output by that camera module and output the processed image signal. It should be understood that the image algorithm module may be a software algorithm module, which may be executed by other hardware resources in the camera system, for example, by the processor or by other hardware resources with processing functions.
应理解,图4所示的相机系统中,一个图像处理单元处理一个相机模组图像信号。然而,在实践中,如果一个图像处理单元能够处理多个相机模组的图像信号,那么多个相机模组仅配置一个图像处理单元即可。It should be understood that in the camera system shown in Figure 4, one image processing unit processes the image signal of one camera module. However, in practice, if one image processing unit can process the image signals of multiple camera modules, then multiple camera modules only need to be equipped with one image processing unit.
另外,图4所示的相机系统中,图像处理单元设置在处理器外部,但应理解,在实践中,图像处理单元可以设置在处理器内部,即处理器可以包括一个或多个图像处理单元。In addition, in the camera system shown in FIG. 4 , the image processing unit is disposed outside the processor, but it should be understood that, in practice, the image processing unit may be disposed inside the processor, that is, the processor may include one or more image processing units.
应理解,图4中仅示出了相机系统中与本申请发明点相关的若干个单元。在实际的相机系统中,可能还包括其他参与图像信号处理的单元或模块,本申请中虽然未示出,但并不代表本申请的保护范围将其排除在外。It should be understood that only several units in the camera system related to the invention of the present application are shown in Fig. 4. In an actual camera system, other units or modules involved in image signal processing may also be included, which, although not shown in the present application, do not mean that they are excluded from the protection scope of the present application.
应理解，图4所示的终端设备还可以包括除相机模组1至相机模组N以外的其他相机模组。It should be understood that the terminal device shown in FIG. 4 may also include other camera modules in addition to camera module 1 to camera module N.
应理解，图4中所示的相机系统可以配置在图1所示的XR头显中，即，图4中的相机模组为图1中的部分或全部相机模组。比如，图4中的相机模组1至相机模组M为图1中的相机模组101和103，图4中的相机模组(M+1)至相机模组N为图1中的相机模组102和104。在一个示例中，可以根据相机模组1(相当于图1中的相机模组101)的效果参数得到相机模组3(相当于图1中的相机模组102)的效果参数，可以根据相机模组2(相当于图1中的相机模组103)的效果参数得到相机模组4(相当于图1中的相机模组104)的效果参数。在另一个示例中，可以根据相机模组3(相当于图1中的相机模组102)的效果参数得到相机模组1(相当于图1中的相机模组101)的效果参数，可以根据相机模组4(相当于图1中的相机模组104)的效果参数得到相机模组2(相当于图1中的相机模组103)的效果参数。It should be understood that the camera system shown in FIG. 4 can be configured in the XR headset shown in FIG. 1, that is, the camera modules in FIG. 4 are part or all of the camera modules in FIG. 1. For example, camera module 1 to camera module M in FIG. 4 are camera modules 101 and 103 in FIG. 1, and camera module (M+1) to camera module N in FIG. 4 are camera modules 102 and 104 in FIG. 1. In one example, the effect parameters of camera module 3 (equivalent to camera module 102 in FIG. 1) can be obtained according to the effect parameters of camera module 1 (equivalent to camera module 101 in FIG. 1), and the effect parameters of camera module 4 (equivalent to camera module 104 in FIG. 1) can be obtained according to the effect parameters of camera module 2 (equivalent to camera module 103 in FIG. 1). In another example, the effect parameters of camera module 1 (equivalent to camera module 101 in FIG. 1) can be obtained according to the effect parameters of camera module 3 (equivalent to camera module 102 in FIG. 1), and the effect parameters of camera module 2 (equivalent to camera module 103 in FIG. 1) can be obtained according to the effect parameters of camera module 4 (equivalent to camera module 104 in FIG. 1).
下面对本申请提供的方案进行详细说明。The solution provided in this application is described in detail below.
在一些实施例中,本申请提供的方法是在已经训练好数学模型(本申请中将此模型称为训练后的数学模型)的基础上实现的。基于训练后的数学模型,可以根据一个或多个相机模组的效果参数,预测其他一个或多个相机模组的效果参数。In some embodiments, the method provided in the present application is implemented on the basis of a trained mathematical model (the model is referred to as a trained mathematical model in the present application). Based on the trained mathematical model, the effect parameters of one or more camera modules can be predicted according to the effect parameters of one or more camera modules.
在一个示例中,该数学模型可以是机器学习模型,下面对如何进行机器学习模型训练进行简要说明。In one example, the mathematical model may be a machine learning model. The following is a brief description of how to perform machine learning model training.
终端设备中的具有共视区的多个相机模组工作在同一个环境中，环境的色温、整体亮度、环境内容(卧室场景，办公室场景等)是相同或相差较小的，因此该多个相机的效果参数之间存在一定的映射关系。在本申请中，可以首先使用多个相机模组在相同环境下进行图像采集，然后根据相同环境下多个相机模组所采集的图像，确定该多个相机模组的效果参数。最后，利用该多个相机模组的效果参数进行机器学习模型训练，得到训练后的机器学习模型。The multiple camera modules with a common viewing area in the terminal device work in the same environment, and the color temperature, overall brightness, and environmental content (bedroom scene, office scene, etc.) of the environment are the same or differ only slightly, so there is a certain mapping relationship between the effect parameters of the multiple cameras. In this application, multiple camera modules can first be used to collect images in the same environment, and then the effect parameters of the multiple camera modules can be determined based on the images collected by the multiple camera modules in the same environment. Finally, the effect parameters of the multiple camera modules are used for machine learning model training to obtain a trained machine learning model.
可选地,可以将多个相机模组在相同环境下采集的图像(即,样本图像)输入至图像处理单元,图像处理单元根据样本图像可以获得该多个相机模组的效果参数。Optionally, images (ie, sample images) captured by multiple camera modules under the same environment may be input into an image processing unit, and the image processing unit may obtain effect parameters of the multiple camera modules according to the sample images.
例如,图5示出了一种计算相机模组的效果参数的示意图。参见图5,相机模组1和相机模组2均与相机模组3存在共视区。将相机模组1至相机模组3在同一环境(比如,办公场景或者室外场景等)下采集的图像输入至各自连接的ISP中,ISP对输入的图像进行处理后,可以得到在该环境下对应的相机模组的效果参数。比如,ISP对各相机模组输入的图像进行自动曝光、自动白平衡、颜色校正、锐化等处理,则可以得到各相机模组对应的曝光时间、曝光增益、白平衡系数、颜色校正矩阵、锐化系数等效果参数。For example, FIG5 shows a schematic diagram of calculating the effect parameters of a camera module. Referring to FIG5 , both camera module 1 and camera module 2 have a common viewing area with camera module 3. The images collected by camera module 1 to camera module 3 in the same environment (for example, an office scene or an outdoor scene, etc.) are input into the ISPs to which they are connected. After the ISP processes the input images, the effect parameters of the corresponding camera modules in the environment can be obtained. For example, the ISP performs automatic exposure, automatic white balance, color correction, sharpening and other processing on the images input by each camera module, and then the exposure time, exposure gain, white balance coefficient, color correction matrix, sharpening coefficient and other effect parameters corresponding to each camera module can be obtained.
应理解,也可以对样本图像采用软件算法进行计算,获得该多个相机模组的效果参数。It should be understood that the sample images may also be calculated using software algorithms to obtain the effect parameters of the multiple camera modules.
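A hypothetical sketch of this data collection step is given below; the isp and cam interfaces are placeholders invented for illustration and do not correspond to a specific API of this application:

```python
def build_training_set(environments, cams, isp):
    """Collect (input, supervision) pairs for model training.

    environments: iterable of scenes (e.g. bedroom, office); cams: camera modules
    1..3 sharing a common viewing area; isp: turns a sample image into an
    effect-parameter vector. All three interfaces are hypothetical placeholders.
    """
    inputs, targets = [], []
    for env in environments:
        # All modules shoot the same scene, so their effect parameters are related.
        params = [isp.compute_effect_params(cam.capture(env)) for cam in cams]
        inputs.append(params[0] + params[1])   # modules 1 and 2 -> model input
        targets.append(params[2])              # module 3 -> supervision data
    return inputs, targets
```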
示例性的，图6示出了一种机器学习模型训练示意图。参见图6，机器学习模型的输入参数向量为在图5计算得到的相机模组1和相机模组2的效果参数。其中，[ti, gi, rwi, gwi, bwi, Xi, ki]为相机模组i的效果参数，其中包括曝光时间ti、曝光增益gi、白平衡系数(rwi, gwi, bwi)、颜色校正矩阵Xi、锐化系数ki。机器学习模型的输出参数向量为预测的相机模组3的效果参数。将输出的相机模组3的效果参数与在图5计算得到的相机模组3的效果参数(即，监督数据或者真值)进行比较，计算差值；利用得到的差值反向更新机器学习模型中的相关参数，重复执行上述步骤，直到机器学习模型输出的相机模组3的效果参数与监督数据的效果参数差值小于一定的阈值，从而完成机器学习模型的训练。Exemplarily, FIG. 6 shows a schematic diagram of machine learning model training. Referring to FIG. 6, the input parameter vector of the machine learning model is the effect parameters of camera module 1 and camera module 2 calculated in FIG. 5, where [ti, gi, rwi, gwi, bwi, Xi, ki] is the effect parameter vector of camera module i, including the exposure time ti, the exposure gain gi, the white balance coefficients (rwi, gwi, bwi), the color correction matrix Xi, and the sharpening coefficient ki. The output parameter vector of the machine learning model is the predicted effect parameters of camera module 3. The output effect parameters of camera module 3 are compared with the effect parameters of camera module 3 calculated in FIG. 5 (i.e., the supervision data or ground truth) to calculate a difference; the difference is used to update the relevant parameters of the machine learning model through back-propagation, and the above steps are repeated until the difference between the effect parameters of camera module 3 output by the machine learning model and those of the supervision data is less than a certain threshold, thereby completing the training of the machine learning model.
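A possible realization of the training loop in FIG. 6, sketched here with PyTorch under the assumption that a parameter vector is flattened to 15 values per module (1 exposure time + 1 exposure gain + 3 white balance coefficients + 9 entries of a 3x3 CCM + 1 sharpening coefficient); the network size and hyperparameters are illustrative, not values specified by this application:

```python
import torch
from torch import nn

# Modules 1 and 2 together give 30 input features; module 3 gives 15 targets.
IN_DIM, OUT_DIM = 30, 15

model = nn.Sequential(
    nn.Linear(IN_DIM, 64), nn.ReLU(),
    nn.Linear(64, 64), nn.ReLU(),
    nn.Linear(64, OUT_DIM),
)
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()

def train(inputs, targets, epochs=500, tol=1e-4):
    """inputs: (N, IN_DIM) tensor; targets: (N, OUT_DIM) supervised parameters."""
    for _ in range(epochs):
        loss = loss_fn(model(inputs), targets)  # difference to the supervision data
        optimizer.zero_grad()
        loss.backward()                         # propagate the difference backward
        optimizer.step()                        # update the model parameters
        if loss.item() < tol:                   # stop once below the threshold
            break
    return loss.item()
```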
应理解，本申请中的机器学习模型可以是任意一种机器学习模型，比如可以是图中所示的深度学习模型，如神经网络模型，也可以是其他的机器学习模型。It should be understood that the machine learning model in the present application can be any machine learning model, for example, the deep learning model shown in the figure, such as a neural network model, or another machine learning model.
在另一示例中,该数学模型可以是线性模型。In another example, the mathematical model may be a linear model.
在获得训练后的数学模型后,可以执行本申请提供的图像处理方法。下面结合图7所示的流程图,对本申请提供的图像处理方法进行详细说明。After obtaining the trained mathematical model, the image processing method provided by the present application can be executed. The image processing method provided by the present application is described in detail below in conjunction with the flowchart shown in FIG7 .
图7是本申请提供的一种图像处理方法的示意性流程图。该方法可以由终端设备执行，该终端设备包括至少一个第一相机模组和第二相机模组，该至少一个第一相机模组和第二相机模组存在共视区。比如，该终端设备可以为图1所示的XR头显，该至少一个第一相机模组可以是相机模组101，该第二相机模组可以是相机模组102，相机模组101和相机模组102存在共视区。该方法可以包括S710至S730，下面对各步骤进行说明。FIG. 7 is a schematic flowchart of an image processing method provided by the present application. The method can be executed by a terminal device, which includes at least one first camera module and a second camera module, and the at least one first camera module and the second camera module have a common viewing area. For example, the terminal device may be the XR headset shown in FIG. 1, the at least one first camera module may be camera module 101, the second camera module may be camera module 102, and camera module 101 and camera module 102 have a common viewing area. The method may include S710 to S730, and each step is described below.
S710,根据该至少一个第一相机模组拍摄的实时图像,获取该至少一个第一相机模组的效果参数。S710: Acquire effect parameters of the at least one first camera module according to the real-time image captured by the at least one first camera module.
具体地,可以通过对任意一个第一相机模组拍摄的实时图像进行图像处理,得到该第一相机模组的效果参数。Specifically, the effect parameters of any first camera module may be obtained by performing image processing on a real-time image captured by any first camera module.
应理解,实时图像是指当前拍摄的图像。It should be understood that the real-time image refers to the image currently being captured.
在一个示例中,可以通过与该至少一个第一相机模组连接的图像处理单元获取该至少一个第一相机模组的效果参数。具体地,任意一个第一相机模组拍摄的图像可以输入至与其连接的图像处理单元,该图像处理单元通过对输入的图像进行处理,可以获得该第一相机模组的效果参数。比如,该图像处理单元通过对输入的图像进行自动曝光、自动白平衡、颜色校正、锐化等处理,则可以得到曝光时间、曝光增益、白平衡系数、颜色校正矩阵、锐化系数等效果参数。In one example, the effect parameters of the at least one first camera module can be obtained through an image processing unit connected to the at least one first camera module. Specifically, the image taken by any first camera module can be input to the image processing unit connected thereto, and the image processing unit can obtain the effect parameters of the first camera module by processing the input image. For example, the image processing unit can obtain the effect parameters such as exposure time, exposure gain, white balance coefficient, color correction matrix, sharpening coefficient, etc. by performing automatic exposure, automatic white balance, color correction, sharpening and other processing on the input image.
S720,将该至少一个第一相机模组的效果参数输入至训练后的数学模型中,得到训练后的数学模型输出的第二相机模组的效果参数。S720: Input the effect parameters of the at least one first camera module into the trained mathematical model to obtain the effect parameters of the second camera module output by the trained mathematical model.
其中，该训练后的数学模型是通过对数学模型进行模型训练得到的。在对该数学模型进行训练时，该数学模型的输入为该至少一个第一相机模组拍摄的样本图像，该数学模型的监督数据为根据第二相机模组拍摄的样本图像得到的第二相机模组的效果参数。可以理解，该训练后的数学模型可以表征该至少一个第一相机模组的效果参数和该第二相机模组的效果参数之间的关系。The trained mathematical model is obtained by performing model training on the mathematical model. When the mathematical model is trained, the input of the mathematical model is the sample image captured by the at least one first camera module, and the supervision data of the mathematical model is the effect parameters of the second camera module obtained according to the sample image captured by the second camera module. It can be understood that the trained mathematical model can characterize the relationship between the effect parameters of the at least one first camera module and the effect parameters of the second camera module.
关于如何进行数学模型训练,得到训练后的数学模型,具体可以参考前文的描述,这里不再赘述。可以理解,这里的至少一个第一相机模组可以是图5和图6所描述的相机模组1和相机模组2,第二相机模组可以是相机模组3。Regarding how to train the mathematical model and obtain the trained mathematical model, please refer to the previous description, which will not be repeated here. It can be understood that the at least one first camera module here can be the camera module 1 and the camera module 2 described in Figures 5 and 6, and the second camera module can be the camera module 3.
S730,控制第二相机模组采用第二相机模组的效果参数进行图像拍摄,和/或,采用第二相机模组的效果参数对第二相机模组所拍摄的实时图像进行图像处理。S730, controlling the second camera module to use the effect parameters of the second camera module to capture an image, and/or using the effect parameters of the second camera module to perform image processing on the real-time image captured by the second camera module.
第二相机模组的效果参数可能仅包括用于图像拍摄的参数,也可能仅包括用于图像处理的参数,还可能同时包括用于图像拍摄的参数和用于图像处理的参数。如果第二相机模组的效果参数包括用于图像拍摄的参数,则第二相机模组可以采用用于图像拍摄的参数进行图像拍摄。如果第二相机模组的效果参数包括用于图像处理的参数,则可以采用用于图像处理的参数对第二相机模组拍摄的实时图像进行图像处理。The effect parameters of the second camera module may include only parameters for image capture, may include only parameters for image processing, or may include both parameters for image capture and parameters for image processing. If the effect parameters of the second camera module include parameters for image capture, the second camera module may use the parameters for image capture to capture images. If the effect parameters of the second camera module include parameters for image processing, the parameters for image processing may be used to process the real-time image captured by the second camera module.
比如,在第二相机模组的效果参数包括曝光时间、曝光增益、或者白平衡系数中的一项或多项的情况下,则第二相机模组可以采用该曝光时间、该曝光增益、或者该白平衡系数中的一项或多项,进行图像拍摄。在第二相机模组的效果参数包括颜色校正矩阵和/或锐化系数的情况下,则可以采用该颜色校正矩阵和/或该锐化系数,对第二相机模组拍摄的实时图像颜色校正和/或锐化。For example, when the effect parameters of the second camera module include one or more of exposure time, exposure gain, or white balance coefficient, the second camera module can use the exposure time, the exposure gain, or one or more of the white balance coefficient to capture the image. When the effect parameters of the second camera module include a color correction matrix and/or a sharpening coefficient, the color correction matrix and/or the sharpening coefficient can be used to color correct and/or sharpen the real-time image captured by the second camera module.
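Putting S710 to S730 together, a hedged end-to-end sketch might look as follows; the isp and cam interfaces and the parameter keys are hypothetical placeholders, and apply_ccm and sharpen refer to the helper sketches given earlier:

```python
def process_second_module_frame(isp, model, cam1, cam2):
    """One pass of S710 to S730 (isp/cam interfaces are hypothetical placeholders)."""
    params1 = isp.compute_effect_params(cam1.capture())   # S710: first module's params
    p2 = model.predict(params1)                           # S720: trained model inference
    # S730 (capture side): exposure time, gain and white balance steer the sensor.
    cam2.set_capture_params(exposure_time=p2["t"], gain=p2["g"], wb=p2["wb"])
    raw = cam2.capture()
    # S730 (processing side): CCM and sharpening are applied to the captured image.
    return sharpen(apply_ccm(raw, p2["ccm"]), k=p2["k"])
```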
综上，根据本申请提供的图像处理方法，对于具有共视区的多个相机模组，仅计算部分(即，一个或多个)相机模组的效果参数，利用预先训练好的数学模型推导出其他相机模组的效果参数，然后其他相机模组利用推导出的效果参数进行图像拍摄或者利用推导出的效果参数对其他相机模组拍摄的图像进行相应的图像处理。该方法不需要为其他相机模组配置相应的硬件资源或者软件资源来根据所拍摄的图像计算效果参数，能够在功耗、负载及硬件成本约束下提高多相机的图像效果质量。In summary, according to the image processing method provided in the present application, for multiple camera modules having a common viewing area, only the effect parameters of some (i.e., one or more) camera modules are calculated, the effect parameters of the other camera modules are derived using a pre-trained mathematical model, and the other camera modules then use the derived effect parameters to capture images, or the derived effect parameters are used to perform corresponding image processing on the images captured by those camera modules. This method does not require configuring corresponding hardware or software resources for the other camera modules to calculate effect parameters from the captured images, and can improve the image effect quality of multiple cameras under the constraints of power consumption, load and hardware cost.
图8是本申请提供的图像处理方法的另一示意性流程图。参见图8,相机模组801至相机模组804为同一终端设备中的相机模组,相机模组801与相机模组802、相机模组803以及相机模组804都存在共视区,并且相机模组802、相机模组803以及相机模组804的视角不完全相同。相机模组801输出的图像信号输入至ISP 811中;ISP 811通过对输入的图像信号进行处理,可以得到相机模组801的效果参数。然后,将相机模组801的效果参数分别输入至训练后的数学模型A、训练后的数学模型B和训练后的数学模型C中,可以得到相机模组802、相机模组803以及相机模组804的效果参数。然后,相机模组802、相机模组803以及相机模组804可以分别采用各自的效果参数对其拍摄的图像进行图像处理,从而可以得到可用的图像。或者,图像算法模块可以分别采用相机模组802、相机模组803以及相机模组804的效果参数对对应相机模组输出的图像信号进行处理,得到可用的图像。FIG8 is another schematic flow chart of the image processing method provided by the present application. Referring to FIG8 , camera modules 801 to 804 are camera modules in the same terminal device, and camera module 801 and camera module 802, camera module 803 and camera module 804 all have a common viewing area, and the viewing angles of camera module 802, camera module 803 and camera module 804 are not completely the same. The image signal output by camera module 801 is input into ISP 811; ISP 811 can obtain the effect parameters of camera module 801 by processing the input image signal. Then, the effect parameters of camera module 801 are respectively input into the trained mathematical model A, the trained mathematical model B and the trained mathematical model C, and the effect parameters of camera module 802, camera module 803 and camera module 804 can be obtained. Then, camera module 802, camera module 803 and camera module 804 can respectively use their respective effect parameters to perform image processing on the images they have captured, so as to obtain usable images. Alternatively, the image algorithm module may respectively use the effect parameters of the camera module 802 , the camera module 803 , and the camera module 804 to process the image signals output by the corresponding camera modules to obtain a usable image.
该方法中，可以通过ISP仅计算相机模组801的效果参数，利用训练后的数学模型推导出相机模组802、相机模组803以及相机模组804的效果参数并下发到相机模组802、相机模组803以及相机模组804或输送给图像算法模块821以供使用，达到效果参数的迁移复用，从而节省ISP资源并保证了多相机模组的效果。In this method, the ISP calculates only the effect parameters of camera module 801; the effect parameters of camera module 802, camera module 803 and camera module 804 are derived using the trained mathematical models and sent to camera module 802, camera module 803 and camera module 804, or delivered to the image algorithm module 821 for use, thereby achieving migration and reuse of the effect parameters, saving ISP resources and ensuring the effect of the multiple camera modules.
应理解,训练后的数学模型A可以根据同一环境下相机模组801和相机模组802的效果参数对数学模型训练得到;训练后的数学模型B可以根据同一环境下相机模组801和相机模组803的效果参数对数学模型训练得到;训练后的数学模型C可以根据同一环境下相机模组801和相机模组804的效果参数对数学模型训练得到。训练后的数学模型A、训练后的数学模型B和训练后的数学模型C类型可以相同,也可以不同,本申请对此不作限定。It should be understood that the trained mathematical model A can be obtained by training the mathematical model according to the effect parameters of the camera module 801 and the camera module 802 in the same environment; the trained mathematical model B can be obtained by training the mathematical model according to the effect parameters of the camera module 801 and the camera module 803 in the same environment; the trained mathematical model C can be obtained by training the mathematical model according to the effect parameters of the camera module 801 and the camera module 804 in the same environment. The trained mathematical model A, the trained mathematical model B, and the trained mathematical model C can be of the same or different types, and this application does not limit this.
应理解,图8所示的相机模组801可以是方法700中的至少一个第一相机模组,相机模组802、相机模组803或者相机模组804可以是方法700中的第二相机模组。It should be understood that the camera module 801 shown in FIG. 8 may be at least one first camera module in the method 700 , and the camera module 802 , the camera module 803 or the camera module 804 may be a second camera module in the method 700 .
上文详细描述了本申请实施例提供的图像处理方法,下面将结合图9和图10,详细描述本申请的装置实施例。应理解,本申请实施例中的终端设备可以执行前述本申请实施例中的图像处理方法。该终端设备的具体工作过程,可以参考前述方法实施例中的对应过程。The above describes in detail the image processing method provided by the embodiment of the present application. The following will describe in detail the device embodiment of the present application in conjunction with Figures 9 and 10. It should be understood that the terminal device in the embodiment of the present application can execute the image processing method in the aforementioned embodiment of the present application. The specific working process of the terminal device can refer to the corresponding process in the aforementioned method embodiment.
图9是本申请实施例提供的一种终端设备的示意性框图。应理解,终端设备900可以执行图7所示的图像处理方法。该终端设备900包括:处理单元910。该终端设备900还包括至少一个第一相机模组和第二相机模组,该至少一个第一相机模组和该第二相机模组存在共视区。FIG9 is a schematic block diagram of a terminal device provided in an embodiment of the present application. It should be understood that the terminal device 900 can execute the image processing method shown in FIG7. The terminal device 900 includes: a processing unit 910. The terminal device 900 also includes at least one first camera module and a second camera module, and the at least one first camera module and the second camera module have a common viewing area.
该处理单元910用于：根据该至少一个第一相机模组拍摄的实时图像，获取该至少一个第一相机模组的效果参数；将该至少一个第一相机模组的效果参数输入至训练后的数学模型，得到该第二相机模组的效果参数，该训练后的数学模型是通过对该数学模型进行模型训练得到的，该数学模型的输入为该至少一个第一相机模组拍摄的样本图像，该数学模型的监督数据为根据该第二相机模组拍摄的样本图像得到的该第二相机模组的效果参数；控制该第二相机模组采用该第二相机模组的效果参数进行图像拍摄，和/或，采用该第二相机模组的效果参数对该第二相机模组所拍摄的实时图像进行图像处理。The processing unit 910 is configured to: obtain effect parameters of the at least one first camera module according to a real-time image captured by the at least one first camera module; input the effect parameters of the at least one first camera module into a trained mathematical model to obtain effect parameters of the second camera module, where the trained mathematical model is obtained by performing model training on the mathematical model, the input of the mathematical model is a sample image captured by the at least one first camera module, and the supervision data of the mathematical model is the effect parameters of the second camera module obtained according to a sample image captured by the second camera module; and control the second camera module to capture images using the effect parameters of the second camera module, and/or perform image processing on the real-time image captured by the second camera module using the effect parameters of the second camera module.
可选地,该效果参数包括下述中的一项或多项:曝光时间、曝光增益、白平衡系数、色彩校正矩阵、或者锐化系数。Optionally, the effect parameter includes one or more of the following: exposure time, exposure gain, white balance coefficient, color correction matrix, or sharpening coefficient.
可选地,该终端设备为混合现实MR设备。Optionally, the terminal device is a mixed reality MR device.
可选地,该处理单元包括至少一个图像处理单元,该至少一个第一相机模组与至少一个图像处理单元连接,该至少一个图像处理单元用于:对该至少一个第一相机模组拍摄的实时图像进行图像处理,得到该至少一个第一相机模组的效果参数。Optionally, the processing unit includes at least one image processing unit, and the at least one first camera module is connected to the at least one image processing unit, and the at least one image processing unit is used to: perform image processing on the real-time image taken by the at least one first camera module to obtain effect parameters of the at least one first camera module.
可选地,该处理单元还包括图像算法模块,用于:采用该第二相机模组的效果参数中的部分或全部效果参数对该第二相机模组所拍摄的实时图像进行图像处理。Optionally, the processing unit further includes an image algorithm module, configured to perform image processing on the real-time image captured by the second camera module using part or all of the effect parameters of the second camera module.
需要说明的是，上述终端设备900以功能单元的形式体现。这里的术语“单元”可以通过软件和/或硬件形式实现，对此不作具体限定。It should be noted that the terminal device 900 described above is embodied in the form of functional units. The term "unit" here can be implemented in the form of software and/or hardware, which is not specifically limited.
例如，“单元”可以是实现上述功能的软件程序、硬件电路或二者结合。所述硬件电路可能包括应用特有集成电路(application specific integrated circuit,ASIC)、电子电路、用于执行一个或多个软件或固件程序的处理器(例如共享处理器、专有处理器或组处理器等)和存储器、合并逻辑电路和/或其它支持所描述的功能的合适组件。For example, a "unit" may be a software program, a hardware circuit, or a combination of the two that implements the above functions. The hardware circuit may include an application specific integrated circuit (ASIC), electronic circuits, a processor (e.g., a shared processor, a dedicated processor, or a group processor) and memory for executing one or more software or firmware programs, combined logic circuits, and/or other suitable components supporting the described functions.
因此,在本申请的实施例中描述的各示例的单元,能够以电子硬件、或者计算机软件和电子硬件的结合来实现。这些功能究竟以硬件还是软件方式来执行,取决于技术方案的特定应用和设计约束条件。专业技术人员可以对每个特定的应用来使用不同方法来实现所描述的功能,但是这种实现不应认为超出本申请的范围。Therefore, the units of each example described in the embodiments of the present application can be implemented by electronic hardware, or a combination of computer software and electronic hardware. Whether these functions are performed in hardware or software depends on the specific application and design constraints of the technical solution. Professional and technical personnel can use different methods to implement the described functions for each specific application, but such implementation should not be considered to be beyond the scope of the present application.
图10示出了本申请提供的一种终端设备的结构示意图。该终端设备1000可以用于实现上述方法实施例中描述的方法。Fig. 10 shows a schematic diagram of the structure of a terminal device provided by the present application. The terminal device 1000 can be used to implement the method described in the above method embodiment.
终端设备1000包括多个相机模组1006和一个或多个处理器1001。该多个相机模组可以包括前述的至少一个第一相机模组和第二相机模组。该一个或多个处理器1001可支持终端设备1000实现方法实施例中的图像处理方法。处理器1001可以包括通用处理器和/或专用处理器。例如,处理器1001可以包括下述中的一项或多项:中央处理器(central processing unit,CPU)、数字信号处理器(digital signal processor,DSP)、专用集成电路(application specific integrated circuit,ASIC)、现场可编程门阵列(field programmable gate array,FPGA)或者其它可编程逻辑器件,如分立门、晶体管逻辑器件或分立硬件组件。The terminal device 1000 includes a plurality of camera modules 1006 and one or more processors 1001. The plurality of camera modules may include at least one first camera module and a second camera module as described above. The one or more processors 1001 may support the image processing method in the method embodiment implemented by the terminal device 1000. The processor 1001 may include a general-purpose processor and/or a dedicated processor. For example, the processor 1001 may include one or more of the following: a central processing unit (CPU), a digital signal processor (DSP), an application specific integrated circuit (ASIC), a field programmable gate array (FPGA) or other programmable logic devices, such as discrete gates, transistor logic devices or discrete hardware components.
处理器1001可以用于对终端设备1000进行控制,执行软件程序,处理软件程序的数据。The processor 1001 can be used to control the terminal device 1000, execute software programs, and process data of the software programs.
终端设备1000还可以包括通信单元1005,用以实现信号的输入(接收)和输出(发送)。The terminal device 1000 may further include a communication unit 1005 for implementing input (reception) and output (transmission) of signals.
例如,终端设备1000可以是芯片,通信单元1005可以是该芯片的输入和/或输出电路,或者,通信单元1005可以是该芯片的通信接口,该芯片可以作为终端设备或其它终端设备的组成部分。For example, the terminal device 1000 may be a chip, the communication unit 1005 may be an input and/or output circuit of the chip, or the communication unit 1005 may be a communication interface of the chip, and the chip may be a component of a terminal device or other terminal devices.
又例如,终端设备1000可以是终端设备,通信单元1005可以是该终端设备的收发器,或者,通信单元1005可以是该终端设备的收发电路。For another example, the terminal device 1000 may be a terminal device, the communication unit 1005 may be a transceiver of the terminal device, or the communication unit 1005 may be a transceiver circuit of the terminal device.
终端设备1000中可以包括一个或多个存储器1002,其上存有程序1004,程序1004可被处理器1001运行,生成指令1003,使得处理器1001根据指令1003执行上述方法实施例中描述的图像处理方法。The terminal device 1000 may include one or more memories 1002 on which a program 1004 is stored. The program 1004 can be executed by the processor 1001 to generate instructions 1003, so that the processor 1001 executes the image processing method described in the above method embodiment according to the instructions 1003.
可选地,存储器1002中还可以存储有数据。Optionally, data may also be stored in the memory 1002 .
可选地,处理器1001还可以读取存储器1002中存储的数据,该数据可以与程序1004存储在相同的存储地址,该数据也可以与程序1004存储在不同的存储地址。Optionally, the processor 1001 may also read data stored in the memory 1002 . The data may be stored at the same storage address as the program 1004 , or may be stored at a different storage address from the program 1004 .
处理器1001和存储器1002可以单独设置,也可以集成在一起,例如,集成在终端设备的系统级芯片(system on chip,SOC)上。The processor 1001 and the memory 1002 may be provided separately or integrated together, for example, integrated on a system on chip (SOC) of a terminal device.
示例性地,存储器1002可以用于存储本申请实施例中提供的图像处理方法的相关程序1004,处理器1001可以用于执行存储器1002存储的图像处理方法的相关程序1004。Exemplarily, the memory 1002 may be used to store a program 1004 related to the image processing method provided in an embodiment of the present application, and the processor 1001 may be used to execute the program 1004 related to the image processing method stored in the memory 1002 .
可选地,处理器1001可以用于执行图7中所示实施例的各个步骤/功能。Optionally, the processor 1001 may be configured to execute various steps/functions of the embodiment shown in FIG. 7 .
The present application further provides a computer program product which, when executed by the processor 1001, implements the image processing method of any method embodiment of the present application.
The computer program product may be stored in the memory 1002, for example, as the program 1004, which is converted through preprocessing, compiling, assembling, and linking into an executable object file that can be executed by the processor 1001.
The present application further provides a computer-readable storage medium storing a computer program which, when executed by a computer, implements the image processing method described in any method embodiment of the present application. The computer program may be a high-level language program or an executable object program.
Optionally, the computer-readable storage medium is, for example, the memory 1002. The memory 1002 may be a volatile memory or a non-volatile memory, or may include both volatile and non-volatile memory. The non-volatile memory may be a read-only memory (ROM), a programmable ROM (PROM), an erasable PROM (EPROM), an electrically erasable PROM (EEPROM), or a flash memory. The volatile memory may be a random access memory (RAM), which is used as an external cache. By way of example but not limitation, many forms of RAM are available, such as static RAM (SRAM), dynamic RAM (DRAM), synchronous DRAM (SDRAM), double data rate SDRAM (DDR SDRAM), enhanced SDRAM (ESDRAM), synchlink DRAM (SLDRAM), and direct rambus RAM (DR RAM).
It should be noted that although the terminal device 1000 shown in FIG. 10 includes only a memory, a processor, and a communication interface, in a specific implementation, those skilled in the art should understand that the terminal device 1000 may further include other components necessary for normal operation. In addition, depending on specific needs, the terminal device 1000 may include hardware components implementing other additional functions. Furthermore, the terminal device 1000 may include only the components necessary for implementing the embodiments of the present application, without necessarily including all the components shown in FIG. 10.
The foregoing embodiments may be implemented in whole or in part by software, hardware, firmware, or any combination thereof. When software is used, the foregoing embodiments may be implemented in whole or in part in the form of a computer program product. The computer program product includes one or more computer instructions or computer programs. When the computer instructions or computer programs are loaded or executed on a computer, the processes or functions described in the embodiments of the present application are produced in whole or in part. The computer may be a general-purpose computer, a special-purpose computer, a computer network, or another programmable apparatus. The computer instructions may be stored in a computer-readable storage medium, or transmitted from one computer-readable storage medium to another; for example, the computer instructions may be transmitted from one website, computer, server, or data center to another website, computer, server, or data center in a wired or wireless (for example, infrared, radio, or microwave) manner. The computer-readable storage medium may be any available medium accessible to a computer, or a data storage device such as a server or data center that integrates one or more available media. The available medium may be a magnetic medium (for example, a floppy disk, a hard disk, or a magnetic tape), an optical medium (for example, a DVD), or a semiconductor medium. The semiconductor medium may be a solid-state drive.
It should be understood that the term "and/or" herein merely describes an association relationship between associated objects, indicating that three relationships may exist. For example, A and/or B may represent three cases: A alone, both A and B, and B alone, where A and B may be singular or plural. In addition, the character "/" herein generally indicates an "or" relationship between the associated objects, but may also indicate an "and/or" relationship; the specific meaning can be understood from the context.
In the present application, "at least one" means one or more, and "a plurality of" means two or more. "At least one of the following" or a similar expression refers to any combination of the listed items, including any combination of single or plural items. For example, at least one of a, b, or c may represent: a, b, c, a-b, a-c, b-c, or a-b-c, where each of a, b, and c may be singular or plural.
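As a quick check of the enumeration above (purely illustrative, not part of the embodiments), the seven cases for three items can be generated programmatically:

```python
from itertools import combinations

items = ["a", "b", "c"]
# Every non-empty combination: a, b, c, a-b, a-c, b-c, a-b-c
subsets = [combo for r in range(1, len(items) + 1)
           for combo in combinations(items, r)]
print(len(subsets))  # 7
```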
It should be understood that, in the various embodiments of the present application, the sequence numbers of the foregoing processes do not imply an execution order; the execution order of the processes should be determined by their functions and internal logic, and should not constitute any limitation on the implementation of the embodiments of the present application.
Those of ordinary skill in the art may be aware that the units and algorithm steps of the examples described with reference to the embodiments disclosed herein can be implemented by electronic hardware or by a combination of computer software and electronic hardware. Whether these functions are performed by hardware or software depends on the particular application and the design constraints of the technical solution. A person skilled in the art may use different methods to implement the described functions for each particular application, but such implementation should not be considered beyond the scope of the present application. It may be clearly understood by a person skilled in the art that, for convenience and brevity of description, for the specific working processes of the systems, apparatuses, and units described above, reference may be made to the corresponding processes in the foregoing method embodiments; details are not repeated here.
In the several embodiments provided in the present application, it should be understood that the disclosed systems, apparatuses, and methods may be implemented in other manners. For example, the apparatus embodiments described above are merely illustrative: the division of the units is merely a logical function division, and there may be other division manners in actual implementation; for example, a plurality of units or components may be combined or integrated into another system, or some features may be ignored or not performed. In addition, the mutual couplings, direct couplings, or communication connections shown or discussed may be implemented through some interfaces, and the indirect couplings or communication connections between apparatuses or units may be electrical, mechanical, or in other forms.
The units described as separate components may or may not be physically separate, and the components shown as units may or may not be physical units; that is, they may be located in one place or distributed over a plurality of network units. Some or all of the units may be selected according to actual needs to achieve the objectives of the solutions of the embodiments.
In addition, the functional units in the embodiments of the present application may be integrated into one processing unit, or each of the units may exist alone physically, or two or more units may be integrated into one unit.
If the functions are implemented in the form of software functional units and sold or used as independent products, they may be stored in a computer-readable storage medium. Based on such an understanding, the technical solutions of the present application essentially, or the part contributing to the prior art, or a part of the technical solutions, may be embodied in the form of a software product. The computer software product is stored in a storage medium and includes several instructions for enabling a computer device (which may be a personal computer, a server, a network device, or the like) to perform all or some of the steps of the methods described in the embodiments of the present application. The foregoing storage medium includes any medium that can store program code, such as a USB flash drive, a removable hard disk, a read-only memory (ROM), a random access memory (RAM), a magnetic disk, or an optical disc.
The foregoing descriptions are merely specific implementations of the present application, but the protection scope of the present application is not limited thereto. Any variation or replacement readily figured out by a person skilled in the art within the technical scope disclosed in the present application shall fall within the protection scope of the present application. Therefore, the protection scope of the present application shall be subject to the protection scope of the claims.

Claims (14)

  1. An image processing method, characterized in that the method is applied to a terminal device, the terminal device comprises at least one first camera module and a second camera module, the at least one first camera module and the second camera module have a common viewing area, and the method comprises:
    acquiring effect parameters of the at least one first camera module according to a real-time image captured by the at least one first camera module;
    inputting the effect parameters of the at least one first camera module into a trained mathematical model to obtain effect parameters of the second camera module, wherein the trained mathematical model is obtained by performing model training on the mathematical model, an input of the mathematical model is a sample image captured by the at least one first camera module, and supervision data of the mathematical model is effect parameters of the second camera module obtained according to a sample image captured by the second camera module; and
    controlling the second camera module to capture an image using the effect parameters of the second camera module, and/or performing image processing on a real-time image captured by the second camera module using the effect parameters of the second camera module.
  2. The method according to claim 1, characterized in that the effect parameters comprise one or more of the following: an exposure time, an exposure gain, a white balance coefficient, a color correction matrix, or a sharpening coefficient.
  3. The method according to claim 1 or 2, characterized in that the terminal device is a mixed reality (MR) device.
  4. The method according to any one of claims 1 to 3, characterized in that the at least one first camera module is connected to at least one image processing unit, the at least one image processing unit is a hardware resource, and the acquiring effect parameters of the at least one first camera module according to the real-time image captured by the at least one first camera module comprises:
    controlling the at least one image processing unit to perform image processing on the real-time image captured by the at least one first camera module to obtain the effect parameters of the at least one first camera module.
  5. The method according to any one of claims 1 to 4, characterized in that the performing image processing on the real-time image captured by the second camera module using the effect parameters of the second camera module comprises:
    controlling an image algorithm module to perform image processing on the real-time image captured by the second camera module using some or all of the effect parameters, wherein the image algorithm module is a software resource.
  6. A terminal device, characterized in that the terminal device comprises: at least one first camera module, a second camera module, and a processing unit, wherein the at least one first camera module and the second camera module have a common viewing area;
    the processing unit is configured to:
    acquire effect parameters of the at least one first camera module according to a real-time image captured by the at least one first camera module;
    input the effect parameters of the at least one first camera module into a trained mathematical model to obtain effect parameters of the second camera module, wherein the trained mathematical model is obtained by performing model training on the mathematical model, an input of the mathematical model is a sample image captured by the at least one first camera module, and supervision data of the mathematical model is effect parameters of the second camera module obtained according to a sample image captured by the second camera module; and
    control the second camera module to capture an image using the effect parameters of the second camera module, and/or perform image processing on a real-time image captured by the second camera module using the effect parameters of the second camera module.
  7. The terminal device according to claim 6, characterized in that the effect parameters comprise one or more of the following: an exposure time, an exposure gain, a white balance coefficient, a color correction matrix, or a sharpening coefficient.
  8. The terminal device according to claim 6 or 7, characterized in that the terminal device is a mixed reality (MR) device.
  9. The terminal device according to any one of claims 6 to 8, characterized in that the processing unit comprises at least one image processing unit, the at least one first camera module is connected to the at least one image processing unit, and the at least one image processing unit is configured to:
    perform image processing on the real-time image captured by the at least one first camera module to obtain the effect parameters of the at least one first camera module.
  10. The terminal device according to any one of claims 6 to 9, characterized in that the processing unit further comprises an image algorithm module configured to:
    perform image processing on the real-time image captured by the second camera module using some or all of the effect parameters of the second camera module.
  11. A terminal device, characterized in that the terminal device comprises:
    one or more processors, a memory, at least one first camera module, and a second camera module, wherein the at least one first camera module and the second camera module have a common viewing area;
    the memory is coupled to the one or more processors and is configured to store computer program code, the computer program code comprising computer instructions, and the one or more processors invoke the computer instructions to cause the terminal device to perform the method according to any one of claims 1 to 5.
  12. A chip system, characterized in that the chip system is applied to a terminal device, the chip system comprises one or more processors, and the processors are configured to invoke computer instructions to cause the terminal device to perform the method according to any one of claims 1 to 5.
  13. A computer-readable storage medium, characterized in that the computer-readable storage medium stores a computer program which, when executed by a processor, causes the processor to perform the method according to any one of claims 1 to 5.
  14. A computer program product, characterized in that the computer program product comprises computer program code which, when executed by a processor, causes the processor to perform the method according to any one of claims 1 to 5.