WO2022226732A1 - Electronic device and image processing method for electronic device - Google Patents

Electronic device and image processing method for electronic device

Info

Publication number
WO2022226732A1
Authority
WO
WIPO (PCT)
Prior art keywords
image
image processing
image signal
processor
electronic device
Prior art date
Application number
PCT/CN2021/089980
Other languages
English (en)
French (fr)
Inventor
张伟成 (Zhang Weicheng)
陈少杰 (Chen Shaojie)
Original Assignee
Huawei Technologies Co., Ltd. (华为技术有限公司)
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Huawei Technologies Co., Ltd.
Priority to PCT/CN2021/089980 (WO2022226732A1)
Priority to EP21938223.1A (EP4297397A4)
Priority to CN202180006443.XA (CN115529850A)
Publication of WO2022226732A1
Priority to US18/493,917 (US20240054751A1)

Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06V - IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 - Arrangements for image or video recognition or understanding
    • G06V10/40 - Extraction of image or video features
    • G06V10/44 - Local feature extraction by analysis of parts of the pattern, e.g. by detecting edges, contours, loops, corners, strokes or intersections; Connectivity analysis, e.g. of connected components
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T5/00 - Image enhancement or restoration
    • G06T5/60 - Image enhancement or restoration using machine learning, e.g. neural networks
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T3/00 - Geometric image transformations in the plane of the image
    • G06T3/40 - Scaling of whole images or parts thereof, e.g. expanding or contracting
    • G06T3/4015 - Image demosaicing, e.g. colour filter arrays [CFA] or Bayer patterns
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 - Image analysis
    • G06T7/20 - Analysis of motion
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06V - IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 - Arrangements for image or video recognition or understanding
    • G06V10/40 - Extraction of image or video features
    • G06V10/56 - Extraction of image or video features relating to colour
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06V - IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 - Arrangements for image or video recognition or understanding
    • G06V10/40 - Extraction of image or video features
    • G06V10/60 - Extraction of image or video features relating to illumination properties, e.g. using a reflectance or lighting model
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06V - IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 - Arrangements for image or video recognition or understanding
    • G06V10/70 - Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/764 - Arrangements for image or video recognition or understanding using pattern recognition or machine learning using classification, e.g. of video objects
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 - Indexing scheme for image analysis or image enhancement
    • G06T2207/20 - Special algorithmic details
    • G06T2207/20004 - Adaptive image processing
    • G06T2207/20008 - Globally adaptive
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 - Indexing scheme for image analysis or image enhancement
    • G06T2207/20 - Special algorithmic details
    • G06T2207/20084 - Artificial neural networks [ANN]

Definitions

  • the embodiments of the present application relate to the field of electronic technologies, and in particular, to an electronic device and an image processing method for the electronic device.
  • smart terminals have integrated more and more functions. Thanks to the development of image processing technology, more and more users like to use smart terminal devices to take photos, record videos, and make video calls.
  • the industry has proposed techniques that combine traditional image processing algorithms with artificial intelligence (AI) algorithms to perform image processing.
  • the same network model is usually used to process image signals collected in various scenarios, which increases the complexity of the model structure and the complexity of the model training process.
  • this network model is difficult to deploy and run in a terminal device. Therefore, the conventional technology has not solved the problem of the poor image processing effect of a conventional ISP in a terminal device.
  • the electronic device and the image processing method for the electronic device provided by the present application can improve the image processing effect.
  • the present application adopts the following technical solutions.
  • an embodiment of the present application provides an electronic device, the electronic device including: an artificial intelligence (AI) processor, configured to select a first image processing model from a plurality of image processing models based on scene information, and to use the first image processing model to perform first image signal processing on a first image signal to obtain a second image signal, where the first image signal is obtained based on first image data output by an image sensor, and the scene information reflects a feature classification of the first image signal; and an image signal processor (ISP), configured to perform second image signal processing on the second image signal to obtain a first image processing result.
  • the AI processor can reduce the structural complexity of each image processing model by running multiple image processing models to process image data collected in multiple scenarios. For example, each image processing model can be implemented with fewer convolutional layers and fewer nodes, making the image processing model easier to deploy and run in terminal devices. Because the complexity of the image processing model structure is reduced, the running speed of the AI processor, that is, the image processing speed, can be improved. In addition, since each image processing model is dedicated to processing image data in one scene, the image processing effect can also be improved compared with using the same image processing model to process image data collected from multiple scenes.
  • the first image signal processing includes at least one of the following processing procedures: noise removal, black level correction, shadow correction, white balance correction, demosaicing, chromatic aberration correction or gamma correction.
  • the scene information includes at least one item of first ambient light brightness information and first motion state information of the electronic device.
  • the AI processor is further configured to: when the first motion state information indicates that the electronic device is moving at a speed lower than a preset threshold, process the first image signal based on the image signal of the previous frame and the image processing result of the image signal of the previous frame.
  • the ISP is configured to: select a first parameter from multiple sets of parameters of an image processing algorithm based on the scene information; obtain an updated image processing algorithm based on the first parameter; and perform the second image signal processing on the second image signal using the updated image processing algorithm.
  • the second image signal processing includes at least one of the following processing procedures: noise removal, black level correction, shadow correction, white balance correction, demosaicing, chromatic aberration correction, gamma correction, or RGB-to-YUV domain conversion.
  • the ISP is further configured to: receive the first image data from the image sensor, and perform third image signal processing on the first image data to obtain the first image Signal.
  • the ISP is further configured to: perform the third image signal processing on the first image data by using the updated image processing algorithm.
  • the electronic device further includes: a controller, configured to generate the scene information based on data collected by at least one sensor, where the at least one sensor includes at least one of the following: an acceleration sensor, a gravity sensor, and the image sensor.
  • the third image signal processing includes at least one of the following processing procedures: noise removal, black level correction, shadow correction, white balance correction, demosaicing, chromatic aberration correction or gamma correction.
  • the multiple image processing models are obtained by training based on multiple training sample sets corresponding to multiple scenarios, where each training sample set in the multiple training sample sets includes a preprocessed image signal generated by preprocessing sample image data collected in the corresponding scene, and a reference image signal generated by processing the sample image data.
  • an embodiment of the present application provides an image processing method for an electronic device, the image processing method comprising: controlling an artificial intelligence (AI) processor to select a first image processing model from multiple image processing models based on scene information, and to use the first image processing model to perform first image signal processing on a first image signal to obtain a second image signal, where the first image signal is obtained based on first image data output by an image sensor, and the scene information reflects a feature classification of the first image signal; and controlling an image signal processor (ISP) to perform second image signal processing on the second image signal to obtain a first image processing result.
  • controlling the image signal processor ISP to perform second image signal processing on the second image signal to obtain an image processing result includes: based on the scene information, controlling the ISP to select a first parameter from a plurality of sets of parameters for running the image processing algorithm; controlling the ISP to obtain an updated image processing algorithm based on the first parameter; and controlling the ISP to use the updated image processing algorithm to perform the second image signal processing on the second image signal.
  • an embodiment of the present application provides an image processing apparatus, the image processing apparatus including: an AI processing module, configured to select a first image processing model from a plurality of image processing models based on scene information, and to use the first image processing model to perform first image signal processing on a first image signal to obtain a second image signal, where the first image signal is obtained based on first image data output by an image sensor, and the scene information reflects a feature classification of the first image signal; and an image signal processing module, configured to perform second image signal processing on the second image signal to obtain a first image processing result.
  • the scene information includes at least one item of first ambient light brightness information and first motion state information of the electronic device.
  • the image signal processing module is configured to: select a first parameter from a plurality of sets of parameters for running an image processing algorithm based on the scene information; obtain an updated image processing algorithm based on the first parameter; and perform the second image signal processing on the second image signal using the updated image processing algorithm.
  • the AI processing module is further configured to: when the first motion state information indicates that the electronic device is moving at a speed lower than a preset threshold, process the first image signal based on the image signal of the previous frame and the image processing result of the image signal of the previous frame.
  • the first image signal processing includes at least one of the following processing procedures: noise removal, black level correction, shadow correction, white balance correction, demosaicing, chromatic aberration correction or gamma correction.
  • the second image signal processing includes at least one of the following processing procedures: noise removal, black level correction, shadow correction, white balance correction, demosaicing, chromatic aberration correction, gamma correction, or RGB-to-YUV domain conversion.
  • the multiple image processing models are obtained by training based on multiple training sample sets corresponding to multiple scenarios, where each training sample set in the multiple training sample sets includes a preprocessed image signal generated by preprocessing sample image data collected in the corresponding scene, and a reference image signal generated by processing the sample image data.
  • an embodiment of the present application provides an electronic device, the electronic device including a memory and at least one processor, where the memory is used to store a computer program, and the at least one processor is configured to call all or part of the computer program stored in the memory to execute the method of the above-mentioned second aspect.
  • the at least one processor includes the AI processor and an ISP.
  • the electronic device further includes the image sensor.
  • an embodiment of the present application provides a system-on-chip, the system-on-chip including at least one processor and an interface circuit, where the interface circuit is used to obtain a computer program from outside the system-on-chip; when the computer program is executed by the at least one processor, it is used to implement the method described in the second aspect.
  • the at least one processor includes the AI processor and an ISP.
  • an embodiment of the present application provides a computer-readable storage medium, where a computer program is stored in the computer-readable storage medium, and when the computer program is executed by at least one processor, it is used to implement the method described in the second aspect.
  • the at least one processor includes the AI processor and an ISP.
  • an embodiment of the present application provides a computer program product, which is used to implement the method described in the second aspect above when the computer program product is executed by at least one processor.
  • the at least one processor includes the AI processor and an ISP.
  • FIG. 1 is a schematic diagram of a hardware structure of an electronic device provided by an embodiment of the present application.
  • FIG. 2 is a schematic diagram of an application scenario provided by an embodiment of the present application.
  • FIG. 3 is another schematic diagram of a hardware structure of an electronic device provided by an embodiment of the present application.
  • FIG. 4 is another schematic diagram of a hardware structure of an electronic device provided by an embodiment of the present application.
  • FIG. 5 is another schematic diagram of a hardware structure of an electronic device provided by an embodiment of the present application.
  • FIG. 6 is another schematic diagram of a hardware structure of an electronic device provided by an embodiment of the present application.
  • FIG. 7 is a schematic flowchart of a training method for an image processing model run in an AI processor provided by an embodiment of the present application.
  • FIG. 8 is a schematic flowchart of an image processing method provided by an embodiment of the present application.
  • FIG. 9 is a schematic diagram of a software structure of an electronic device provided by an embodiment of the present application.
  • references herein to "first" or "second" and similar terms do not denote any order, quantity, or importance, but are merely used to distinguish different parts. Likewise, words such as "a" or "an" do not denote a quantitative limitation, but rather denote the presence of at least one. Words such as "coupled" are not limited to direct physical or mechanical connections, but may include electrical connections, whether direct or indirect, equivalent to communication in a broad sense.
  • words such as “exemplary” or “for example” are used to represent examples, illustrations or illustrations. Any embodiments or designs described in the embodiments of the present application as “exemplary” or “such as” should not be construed as preferred or advantageous over other embodiments or designs. Rather, the use of words such as “exemplary” or “such as” is intended to present the related concepts in a specific manner.
  • the meaning of "plurality" is two or more. For example, a plurality of processors refers to two or more processors.
  • the electronic device provided by the embodiments of the present application may be an electronic device or a module, chip, chip set, circuit board or component integrated in the electronic device.
  • the electronic device may be a user equipment (User Equipment, UE), such as various types of devices such as a mobile phone, a tablet computer, a smart screen, or an image capturing device.
  • the electronic device may be provided with an image sensor for collecting image data.
  • the electronic device can also be installed with various software applications, such as camera applications, video calling applications, or online video shooting applications, which are used to drive the image sensor to capture images. The user can take photos or videos with the image sensor by starting the above applications, and can also personalize various image beautification settings through these applications.
  • in video calling applications, users can select the picture presented on the screen during a video call (such as the presented facial image or the presented background) for automatic adjustment (such as "one-click beautification").
  • the image processing service supported by the above-mentioned various applications in the electronic device can trigger the electronic device to process the image data collected by the image sensor, thereby presenting the processed image on the screen of the electronic device to achieve the effect of image beautification.
  • the image beautification may include, but is not limited to: increasing the brightness of part of the image or of the entire frame, changing the display color of the image, smoothing the skin of faces presented in the image, adjusting the saturation of the image, adjusting the exposure of the image, adjusting the sharpness of the image, adjusting the highlights of the picture, adjusting the contrast of the picture, or adjusting the clarity of the picture, etc.
  • the image processing described in this embodiment of the present application may include, but is not limited to, noise removal, black level correction, shadow correction, white balance correction, demosaicing, chromatic aberration correction, gamma correction, or conversion from the red-green-blue (RGB) domain to the YUV (YCbCr) domain, so as to achieve the above image beautification effects.
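  • As an illustration of the last of these processes, the following Python sketch converts an RGB image to the YUV (YCbCr) domain using the BT.601 full-range coefficients. The coefficient choice is an assumption for illustration only; the embodiments do not specify which conversion matrix the ISP uses.

      import numpy as np

      # BT.601 full-range RGB -> YCbCr matrix (an assumed, common choice).
      RGB2YUV = np.array([
          [ 0.299,     0.587,     0.114   ],   # Y
          [-0.168736, -0.331264,  0.5     ],   # Cb (U)
          [ 0.5,      -0.418688, -0.081312],   # Cr (V)
      ])

      def rgb_to_yuv(rgb: np.ndarray) -> np.ndarray:
          """Convert an HxWx3 float RGB image in [0, 1] to the YUV domain."""
          yuv = rgb @ RGB2YUV.T          # per-pixel matrix multiply
          yuv[..., 1:] += 0.5            # offset chroma so it stays in [0, 1]
          return yuv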
  • for example, during a video call between user A and user B, the image displayed on the screen of the electronic device used by user A and the image displayed on the screen of the electronic device used by user B can be images processed by the electronic device described in the embodiments of the present application, and the processed image will be presented until user A terminates the video call with user B or user A closes the image processing service.
  • FIG. 1 shows a schematic diagram of a hardware structure of an electronic device provided by an embodiment of the present application.
  • the electronic device 100 may specifically be a chip or a chip set or a circuit board equipped with a chip or a chip set, or an electronic device including the circuit board, but it is not used to limit the embodiment.
  • specific examples of the electronic device are as described above and are not repeated here.
  • the chip or chip set or the circuit board equipped with the chip or chip set can be driven by software.
  • the electronic device 100 includes one or more processors, such as an AI processor 101 and an ISP 102 .
  • the one or more processors can be integrated in one or more chips, and the one or more chips can be regarded as a chipset; when one or more processors are integrated in the same chip, the chip is also called a system-on-chip (SOC).
  • the electronic device 100 also includes one or more other components, such as a memory 104 and an image sensor 105 .
  • the memory 104 may be located in the same system-on-chip as the AI processor 101 and the ISP 102 in the electronic device 100 , that is, the memory 104 is integrated in the SOC as shown in FIG. 1 above.
  • the AI processor 101 shown in FIG. 1 may include a dedicated neural processor such as a neural-network processing unit (NPU), including but not limited to a convolutional neural network processor, a tensor processor, or a neural processing engine.
  • the AI processor can be used alone as a component or integrated in other digital logic devices, including but not limited to: a CPU (central processing unit), a GPU (graphics processing unit), or a DSP (digital signal processor).
  • the CPU, GPU and DSP are all processors within a system-on-chip.
  • the AI processor 101 can run multiple image processing models, where the multiple image processing models are used to perform image processing operations in various scenarios.
  • One of the image processing models is used to perform image processing operations in one of the scenarios.
  • for example, if the scenarios include a scene with high external ambient light brightness and a scene with low external ambient light brightness, the AI processor 101 can run two image processing models: one is used to perform image processing operations in the high ambient light brightness scene, and the other is used to perform image processing operations in the low ambient light brightness scene.
  • the foregoing multiple scenarios may be divided based on preset scenario information.
  • the scene information may include, but is not limited to, at least one of the following: ambient light brightness information and motion state information of the electronic device.
  • the scene information reflects the feature classification of the image signal to be processed by the AI processor.
  • the scene information may include ambient light brightness information and motion state information of the electronic device.
  • the features of the image signal may include, but are not limited to, noise features, shadow features, white balance features, and the like.
  • the feature classification includes motion features and ambient light brightness features, such as low ambient light brightness and high ambient light brightness, or a high-speed motion state and a low-speed motion state. This feature classification can be used to indicate the noise level in the image signal, the extent of shadows in the image signal, and the like.
  • in different scenes, the feature categories of the image signals corresponding to the collected image data are different.
  • the AI processor can know the characteristics of the image signal corresponding to the image data through the scene information. Thus, the AI processor 101 can run one of the image processing models to perform image processing based on the scene information.
  • Each of the above-mentioned multiple image processing models may perform one or more image processing processes.
  • the one or more image processing procedures may include, but are not limited to, noise reduction, black level correction, shadow correction, white balance correction, demosaicing, chromatic aberration correction, gamma correction, or RGB to YUV domain conversion.
  • Each image processing model is trained using machine learning methods based on sample image data collected in the corresponding scene. For the training method of the image processing model, refer specifically to the embodiment shown in FIG. 7 . It should be noted that the multiple image processing models run in the AI processor 101 are all used to perform the same image processing operation. For example, a plurality of image processing models executed in the AI processor 101 are used to perform denoising image processing operations.
  • however, the level of noise reduction performed by each image processing model is different. For example, in the high ambient light brightness scene, the noise of the image signal is low, and the noise reduction performed by the image processing model corresponding to the high ambient light brightness scene is weaker; in the low ambient light brightness scene, the noise of the image signal is high, and the noise reduction performed by the image processing model corresponding to the low ambient light brightness scene is stronger.
  • the AI processor 101 can reduce the complexity of each image processing model by running multiple image processing models to process image data collected in multiple scenarios. For example, each image processing model can be implemented with fewer convolutional layers and fewer nodes, thereby improving the running speed of the AI processor 101, that is, the image processing speed.
  • since each image processing model is dedicated to processing image data in one scene, the image processing effect can be improved compared with using the same image processing model to process image data collected from multiple scenes.
  • the motion state of the electronic device may be divided into multiple motion state intervals in descending order of the motion speed of the electronic device; for example, the motion state of the electronic device is divided into five motion state intervals, from a first motion state to a fifth motion state.
  • the ambient light brightness can be divided into multiple brightness intervals in the order of ambient light brightness from low brightness to high brightness, for example, the ambient light brightness is divided into five brightness intervals from the first brightness to the fifth brightness. Then, any combination of the motion state interval and the brightness interval is performed to obtain various combinations of the motion state and the brightness.
  • the motion state of the electronic device is divided into two types: a high-speed motion state and a low-speed motion state (the stationary state can be classified as a low-speed motion state), and the ambient light brightness is divided into two types: low ambient light brightness and high ambient light brightness.
  • when the motion state of the electronic device is divided into a low-speed motion state and a high-speed motion state, and the ambient light brightness is divided into low ambient light brightness and high ambient light brightness, the AI processor 101 can run four image processing models.
  • the image processing model 01 is used to perform image processing operations on image data collected in the low-speed motion state and low ambient light brightness scene; the image processing model 02 is used to perform image processing operations on image data collected in the low-speed motion state and high ambient light brightness scene; the image processing model 03 is used to perform image processing operations on image data collected in the high-speed motion state and low ambient light brightness scene; and the image processing model 04 is used to perform image processing operations on image data collected in the high-speed motion state and high ambient light brightness scene.
  • the AI processor 101 runs one image processing model among the four image processing models to perform image processing based on the scene information. As an example, assuming that the scene information is used to indicate a high-speed motion state and high ambient light brightness, the AI processor 101 runs the image processing model 04 to process the image data.
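  • A minimal sketch of this scene-to-model selection follows; the scene labels, model names, and `load` function are hypothetical stand-ins, since the embodiments only specify that one model is selected from the plurality based on the scene information.

      from typing import Callable, Dict, Tuple

      # Hypothetical registry mirroring the four-model example above.
      MODEL_REGISTRY: Dict[Tuple[str, str], str] = {
          ("low_speed",  "low_brightness"):  "model_01",
          ("low_speed",  "high_brightness"): "model_02",
          ("high_speed", "low_brightness"):  "model_03",
          ("high_speed", "high_brightness"): "model_04",
      }

      def select_model(motion: str, brightness: str,
                       load: Callable[[str], object]) -> object:
          """Pick the per-scene model, e.g. high speed + high brightness -> model_04."""
          return load(MODEL_REGISTRY[(motion, brightness)])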
  • the image processing model corresponding to the low-speed motion state may be obtained by training a recurrent neural network based on the training samples.
  • when the AI processor runs the image processing model to process the image signal of the current frame, the AI processor can also input at least one of the image signal of the previous frame and the image processing result of the image signal of the previous frame, together with the image signal of the current frame, into the image processing model, so that the image processing model can process the image signal of the current frame with reference to the previous frame and its image processing result.
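  • The sketch below illustrates, under assumptions, how a low-speed-scene model might consume the previous frame and its processing result alongside the current frame; `core_model` and the channel-stacking input format are hypothetical, as the embodiments do not fix the model interface.

      import numpy as np

      class RecurrentSceneModel:
          """Wraps a hypothetical per-scene model that reuses the previous frame."""

          def __init__(self, core_model):
              self.core_model = core_model   # assumed callable on a stacked array
              self.prev_frame = None
              self.prev_result = None

          def process(self, frame: np.ndarray) -> np.ndarray:
              if self.prev_frame is None:    # first frame: no temporal reference yet
                  self.prev_frame = frame
                  self.prev_result = frame
              stacked = np.concatenate(
                  [frame, self.prev_frame, self.prev_result], axis=-1)
              result = self.core_model(stacked)
              self.prev_frame, self.prev_result = frame, result
              return result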
  • the scene information described in the embodiments of the present application may be delivered to the AI processor 101 by the controller running in the electronic device 100 .
  • the AI processor 101 may pre-store a first mapping relationship table between the scene information and the storage address information of the image processing model. After obtaining the scene information, the AI processor 101 may query the first mapping relationship table to obtain the address information of the corresponding image processing model. Finally, the AI processor 101 can load the image processing model from the address indicated by the obtained address information.
  • the above-mentioned first mapping relationship table may also be pre-stored in the controller, and after obtaining the scene information, the controller may directly issue the storage address information of the image processing model based on the first mapping relationship table to the AI processor 101.
  • the ISP 102 as shown in FIG. 1 can set up multiple hardware modules or run software programs to process images.
  • the ISP 102 executes multiple image processing processes by running image processing algorithms, and the multiple image processing processes may include, but are not limited to: tone mapping, contrast enhancement, edge enhancement, noise reduction, color correction, and the like.
  • in these image processing algorithms, the values of some parameters are adjustable, for example, the spatial-domain Gaussian kernel parameters and the pixel-value-domain Gaussian kernel parameters in the image processing algorithm used to perform the noise reduction process.
  • the ISP 102 may preset multiple sets of adjustable parameter values, and the multiple sets of adjustable parameter values correspond to image processing in multiple scenarios. The value of one set of adjustable parameters corresponds to the image processing in one of the scenarios.
  • the multiple scenes may also be divided based on scene information.
  • the scene information here is the same as the scene information used for setting the image processing model.
  • the ISP 102 may select a set of adjustable parameter values based on the scene information, and update the corresponding part of the image processing algorithm based on the selected values. For example, if, among the multiple image processing processes performed by the ISP 102, only the parameters of the image processing algorithm for noise reduction are adjustable and the parameters of the other image processing processes do not need to be adjusted, the ISP 102 can update only the noise reduction image processing algorithm based on the selected parameter values, and then use the updated image processing algorithm to process the image signal.
  • still taking as an example the scene information including ambient light brightness information and motion state information of the electronic device, with the motion state including a high-speed motion state and a low-speed motion state and the ambient light brightness including low brightness and high brightness, the correspondence between the adjustable parameter values in the ISP 102 and the scenes is described as follows. Four sets of adjustable parameter values can be preset in the ISP 102.
  • the first set of adjustable parameter values corresponds to the low-speed motion state and low-brightness scene; the second set corresponds to the low-speed motion state and high-brightness scene; the third set corresponds to the high-speed motion state and low-brightness scene; and the fourth set corresponds to the high-speed motion state and high-brightness scene.
  • for example, when the scene information indicates the high-speed motion state and high brightness, the ISP 102 adopts the fourth set of adjustable parameter values to update the relevant image processing algorithms, and then processes the image signal using the updated image processing algorithm.
  • the ISP 102 may pre-store a second mapping relationship table between the scene information and the value of the adjustable parameter. Based on the scene information, the ISP 102 may query the second mapping relationship table to obtain corresponding adjustable parameter values.
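  • The second mapping relationship table can be pictured as in the sketch below, where OpenCV's bilateral filter stands in for the ISP's noise reduction algorithm; the per-scene sigma values are invented for illustration, with stronger smoothing assumed for the noisier low-brightness scenes.

      import cv2
      import numpy as np

      # Hypothetical table: scene -> (spatial-domain sigma, pixel-value-domain sigma).
      PARAM_TABLE = {
          ("low_speed",  "low_brightness"):  (5.0, 50.0),
          ("low_speed",  "high_brightness"): (2.0, 15.0),
          ("high_speed", "low_brightness"):  (4.0, 40.0),
          ("high_speed", "high_brightness"): (1.5, 10.0),
      }

      def denoise_for_scene(image: np.ndarray, scene: tuple) -> np.ndarray:
          """Look up the per-scene Gaussian kernel parameters and run the filter."""
          sigma_space, sigma_color = PARAM_TABLE[scene]
          # d=0 lets OpenCV derive the neighborhood diameter from sigma_space.
          return cv2.bilateralFilter(image, 0, sigma_color, sigma_space)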
  • the AI processor 101 and the ISP 102 may cooperate with each other to process image data collected in the same scene.
  • the image data obtained from the image sensor 105 may undergo multiple image processing processes to generate a final image processing result, and the multiple image processing processes may include, but are not limited to: noise reduction, black level correction, shadow correction, White balance correction, demosaicing, chromatic aberration correction, Gamma correction or RGB to YUV conversion.
  • the AI processor 101 can execute one or more of the above image processing processes by running the image processing model (corresponding to the above one or more image processing operations), and the ISP 102 can also execute one or more of the above image processing processes by running image processing algorithms.
  • the entire image processing flow includes a plurality of processing procedures and is assigned to the AI processor 101 and the ISP 102 as tasks.
  • the AI processor 101 and the ISP 102 can perform different image processing processes, and the AI processor 101 and the ISP 102 can also perform the same image processing process.
  • the image processing performed by the AI processor 101 can be used as an enhancement of or supplement to the image processing performed by the ISP 102.
  • for example, when the AI processor 101 and the ISP 102 both perform noise removal, the ISP 102 is used to perform primary noise removal, and the AI processor 101 is used to perform secondary noise removal based on the primary noise removal by the ISP 102.
  • the ISP 102 and the AI processor 101 may communicate via an electronic circuit connection.
  • the electronic circuit connection between the AI processor 101 and the ISP 102 is also called a physical connection or an interrupt connection. The interrupt connection includes an interrupt signal processing hardware circuit for implementing the functions of transmitting and receiving interrupt signals, and a connection line for transmitting the signals, so as to realize the sending and receiving of interrupt signals.
  • Interrupt signal processing hardware circuits include, but are not limited to, conventional interrupt controller circuits. For the specific implementation scheme of the interrupt signal processing hardware circuit, reference may be made to the relevant description of the interrupt controller in the prior art, which will not be repeated here.
  • the specific connection between the AI processor 101 and the ISP 102 and the specific implementation of the cooperation between the AI processor 101 and the ISP 102 to process the image refer to the relevant descriptions of the embodiments shown in FIG. 4 to FIG. 6 .
  • the electronic device 100 further includes a controller 103, as shown in FIG. 3 .
  • Controller 103 may be an integrated controller.
  • the controller 103 may be various digital logic devices or circuits, including but not limited to: CPU, GPU, microcontroller, microprocessor or DSP, and so on.
  • the controller 103 may be located in the same system-on-chip as the AI processor 101 and the ISP 102 in the electronic device 100 , that is, the controller 103 is integrated in the SOC as shown in FIG. 1 .
  • the controller 103 may also be provided separately from the AI processor 101, the ISP 102, and the memory 104, which is not limited in this embodiment.
  • the controller 103 and the AI processor 101 may also be integrated into the same logic operation device (eg, CPU), and the same logic operation device can implement the controller 103 and the AI processor 101 described in the embodiments of the present application. function performed.
  • the controller 103 runs a software program or software plug-in to drive the controller 103 to obtain the above scene information, and then sends the obtained scene information to the AI processor 101 and the ISP 102 respectively.
  • when the scene information includes ambient light brightness information, the ambient light brightness information may be generated by the controller 103 based on the sensitivity information of the image data, where the sensitivity information of the image data may be calculated by an exposure compensation module in the ISP 102 by running a corresponding algorithm.
  • the ambient light brightness information can be a bit signal.
  • the controller 103 may be preset with multiple sensitivity intervals (e.g., a low sensitivity interval and a high sensitivity interval), and the controller 103 may compare the obtained sensitivity information with the threshold values of the multiple sensitivity intervals and generate a bit signal based on the comparison result.
  • the motion state information of the electronic device may be generated by the controller 103 based on the acceleration data of the electronic device and the three-axis component (X-axis, Y-axis, and Z-axis) data of the electronic device.
  • the motion state information of the electronic device can be a bit signal.
  • the controller 103 can also be preset with a plurality of motion speed intervals (for example, a low motion speed interval and a high motion speed interval).
  • the controller 103 can generate motion state data based on the acceleration data and the three-axis component data, then compare the motion state data with the threshold values of the multiple motion speed intervals, and generate a bit signal based on the comparison result.
  • the above acceleration data may be collected by an acceleration sensor, and the three-axis component data of the electronic device may be collected by a gravity sensor.
  • the electronic device 100 may further include an acceleration sensor 106 and a gravity sensor 107, as shown in FIG. 3 .
  • the scene information can use a two-bit signal, where the first bit indicates the ambient light brightness and the second bit indicates the motion state of the electronic device. For example, "00" indicates low ambient light brightness and a low-speed motion state; "01" indicates low ambient light brightness and a high-speed motion state; "10" indicates high ambient light brightness and a low-speed motion state; and "11" indicates high ambient light brightness and a high-speed motion state. It should be noted that the number of bits used to indicate scene information shown in the embodiments of the present application is only illustrative, and more or fewer bits may be used. For example, when the brightness interval includes low brightness, medium brightness, and high brightness, the bits used to indicate the brightness may include three bits.
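  • A minimal sketch of the two-bit encoding is given below; the ISO and speed thresholds are illustrative placeholders, since the embodiments only state that the controller compares the sensitivity and motion data against preset intervals. Note that high sensitivity (ISO) implies low ambient light, so the brightness bit is set when the sensitivity is below the threshold.

      def encode_scene_info(iso: float, speed: float,
                            iso_threshold: float = 400.0,
                            speed_threshold: float = 0.5) -> int:
          """Pack scene info into two bits: high bit = brightness, low bit = motion."""
          brightness_bit = 1 if iso < iso_threshold else 0   # 1 = high ambient light
          motion_bit = 1 if speed >= speed_threshold else 0  # 1 = high-speed motion
          return (brightness_bit << 1) | motion_bit          # 0b00 .. 0b11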
  • an overlapping interval is set between every two adjacent numerical interval segments. Assuming that the scene information currently generated by the controller 103 falls within the overlapping interval, the controller 103 may refer to the scene information generated last time.
  • if the difference between the currently generated scene information and the last generated scene information is not greater than a preset threshold, the AI processor 101 can keep the currently running image processing model unchanged, and the ISP 102 can keep the currently running image processing algorithm unchanged; if the difference between the currently generated scene information and the last generated scene information is greater than the preset threshold, the currently generated scene information can be resent to the AI processor 101 and the ISP 102, so that the AI processor 101 changes the image processing model and the ISP 102 changes the parameters of the image processing algorithm.
  • by setting an overlapping interval between every two adjacent numerical interval segments, frequent switching of the image processing model running in the AI processor 101 can be prevented, improving the stability of the image processing model running in the AI processor 101.
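  • The overlapping interval behaves like the hysteresis sketch below; the bounds are hypothetical, and a real controller would apply the same idea to its sensitivity or motion-speed segments.

      class SceneSwitcher:
          """Keeps the previous segment while the value sits inside the overlap."""

          def __init__(self, low: float, high: float):
              self.low, self.high = low, high   # bounds of the overlapping interval
              self.segment = 0                  # 0 = lower segment, 1 = upper segment

          def update(self, value: float) -> int:
              # Only leave the current segment once the value clears the overlap,
              # so the AI processor does not swap models on small fluctuations.
              if value < self.low:
                  self.segment = 0
              elif value > self.high:
                  self.segment = 1
              return self.segment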
  • the controller 103 may obtain scene information in real time or periodically.
  • the scene information indicating the current scene is sent to the ISP 102 and the AI processor 101 respectively in time.
  • the AI processor 101 replaces the running image processing model in time based on the currently received scene information, so as to run the replaced image processing model when performing image processing in the next image processing cycle.
  • the ISP 102 can also change the parameters of the running image processing algorithm in time based on the currently received scene information, so as to run the image processing algorithm with the updated parameters when performing image processing in the next image processing cycle.
  • the electronic device described in the embodiments of the present application can dynamically adjust the adopted image processing model and the parameters of the image processing algorithm run by the ISP 102 based on the scene information, so that when the scene in which the user uses the electronic device changes (for example, moving from a strong-light area to a weak-light area, or the electronic device changing from a static state to a moving state), the collected images are processed in a targeted manner to improve the image processing effect, which is conducive to improving the user experience.
  • FIG. 4 shows a schematic structural diagram of the connection between the ISP 102 and the AI processor 101 provided by an embodiment of the present application through an electronic circuit.
  • the ISP 102 may include multiple cascaded image processing modules, the multiple cascaded image processing modules including image processing module 01, image processing module 02, image processing module 03, ..., image processing module N, and image processing module N+1.
  • image processing module 01 is used to perform black level correction, image processing module 02 is used to perform shading correction, image processing module 03 is used to perform chromatic aberration correction, and image processing module N+1 is used to perform RGB-to-YUV conversion.
  • any one of the above-mentioned multiple cascaded image processing modules may be provided with an output port and an input port, where the output port is used to provide the image signal A to the AI processor 101 and the input port is used to obtain the image signal B from the AI processor 101. FIG. 4 schematically shows that the image processing module 02 is provided with an output port Vpo1 and the image processing module 03 is provided with an input port Vpi1.
  • based on the structure shown in FIG. 4, the electronic device 100 may also be provided with an on-chip RAM, and the on-chip RAM, the ISP 102, and the AI processor 101 are integrated into one chip in the electronic device 100.
  • both the image signal provided by the ISP 102 to the AI processor 101 and the image signal provided by the AI processor 101 to the ISP 102 can be stored in the on-chip RAM.
  • the on-chip RAM is also used to store intermediate data generated during the running of the AI processor 101 and weight data of each network node in the neural network run by the AI processor 101 .
  • the on-chip RAM may be provided in the memory 104 as shown in FIG. 1 or FIG. 3 .
  • the ISP 102 obtains image data from the image sensor 105, and the image data is processed by the image processing module 01 and the image processing module 02 in turn to perform shadow correction and white balance correction processing to generate an image signal A and store it in the on-chip RAM.
  • the image processing module 02 stores the image signal A in the on-chip RAM and sends an interrupt signal Z1 to the AI processor 101 .
  • the AI processor 101 acquires the image signal A from the on-chip RAM in response to the interrupt signal Z1.
  • the AI processor 101 performs demosaic processing on the image signal A to generate the image signal B, and stores the image signal B in the on-chip RAM.
  • after the AI processor 101 stores the image signal B in the on-chip RAM, it sends an interrupt signal Z2 to the image processing module 03.
  • the image processing module 03 reads the image signal B from the on-chip RAM in response to the interrupt signal Z2, and the image signal B passes in turn through image processing module 03, ..., image processing module N, and image processing module N+1 in the ISP 102 to perform chromatic aberration correction, ..., gamma correction, and RGB-to-YUV domain conversion, generating the final image processing result.
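  • The Z1/Z2 handshake through the on-chip RAM can be simulated as below; this is a toy software model only (threading events standing in for hardware interrupt lines), with all function names invented for illustration.

      import threading

      shared_ram = {}                        # stands in for the on-chip RAM
      z1, z2 = threading.Event(), threading.Event()

      def isp_front_end(image_data, front_process):
          shared_ram["A"] = front_process(image_data)  # modules 01-02 write signal A
          z1.set()                                     # raise interrupt Z1

      def ai_processor(model):
          z1.wait()                                    # woken by Z1, fetch signal A
          shared_ram["B"] = model(shared_ram["A"])     # e.g. demosaic processing
          z2.set()                                     # raise interrupt Z2

      def isp_back_end(back_process):
          z2.wait()                                    # woken by Z2, fetch signal B
          return back_process(shared_ram["B"])         # modules 03..N+1 finish

      # Usage sketch: run the three stages on separate threads, e.g.
      # threading.Thread(target=ai_processor, args=(model,)).start()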
  • more image processing modules may be included before the image processing module 01, so that the ISP 102 performs more image processing processes on the image data.
  • the image processing process performed by the AI processor 101 is arranged between multiple image processing processes performed by the ISP 102 to replace or supplement some intermediate image processing processes performed by the ISP 102 .
  • the AI processor 101 may directly acquire image data from the image sensor 105, and execute the front-end image processing process.
  • the AI processor 101 can replace and supplement some image processing modules in the front end of the ISP 102 to perform corresponding image processing processes.
  • correspondingly, the AI processor 101 can directly communicate with the subsequent image processing modules in the ISP 102.
  • see FIG. 5 for the hardware structure of this implementation.
  • the connection and interaction between the AI processor 101 and the ISP 102 shown in FIG. 5 are similar to the connection and interaction between the AI processor 101 and the ISP 102 shown in FIG. 4 .
  • the relevant descriptions in the embodiment shown in FIG. 4 will not be repeated here.
  • in the above examples, one interaction is performed between the AI processor 101 and the ISP 102, and the AI processor 101 performs one image processing process, or multiple consecutive image processing processes, on the image data or image signal to be processed.
  • the AI processor 101 may also perform multiple non-consecutive image processing processes; that is to say, the AI processor 101 and the ISP 102 may perform image processing alternately, so that the two jointly complete the image processing flow to obtain the processing result, replacing the image processing flow of a traditional ISP.
  • the ISP 102 may also include more output ports and input ports. The following description takes the structure of the electronic device shown in FIG. 6 as an example. In FIG. 6, the image processing module 02 and the image processing module 03 in the ISP 102 are provided with an output port Vpo1 and an output port Vpo2, respectively, and the image processing module 03 and the image processing module N are provided with an input port Vpi1 and an input port Vpi2, respectively.
  • the output ports of each module are used to provide image signals to the AI processor, and the input ports of each module are used to obtain image signals from the AI processor.
  • the image data collected by the image sensor 105 is processed by the image processing module 01 and the image processing module 02 to generate an image signal A, which is provided to the AI processor 101; the AI processor 101 processes the image signal A to generate an image signal B, which is provided to the image processing module 03; the image signal B is processed by the image processing module 03 to generate an image signal C, which is provided to the AI processor 101; the AI processor processes the image signal C to generate an image signal D, which is provided to the image processing module N; and the image signal D is processed by the image processing module N and the image processing module N+1 to generate the final image processing result.
  • in this implementation, the AI processor 101 can run at least one first image processing model to execute a first image processing operation, and at least one second image processing model to execute a second image processing operation. When there are multiple first image processing models, they are used to process image data collected in different scenes, and the first image processing operations performed by the multiple first image processing models are the same image processing operation. Likewise, when there are multiple second image processing models, they are used to process image data collected in different scenes, and the second image processing operations performed by the multiple second image processing models are the same image processing operation.
  • the AI processor 101 can run two first image processing models and two second image processing models.
  • one of the first image processing models is used to perform noise reduction on image data collected in the high ambient light brightness scene, and the other first image processing model is used to perform noise reduction on image data collected in the low ambient light brightness scene. One of the second image processing models is used to perform demosaic processing on image data collected in the high ambient light brightness scene, and the other second image processing model is used to perform demosaic processing on image data collected in the low ambient light brightness scene.
  • the electronic device further includes an off-chip memory 108, as shown in FIG. 3 .
  • because the off-chip memory 108 has a larger storage space, it can replace the on-chip RAM to store larger units of image data.
  • the off-chip memory 108 may be used to store multiple frames of images, and the multiple frames of images may be the previous frame image, the previous two frames of images or the previous multi-frame images before the current image.
  • the off-chip memory 108 may also be used to store the feature map of each frame of the above-mentioned multiple frames of images.
  • the feature map is generated after the image processing model running in the AI processor 101 performs operations such as convolution and pooling on the image signal.
  • the AI processor 101 can also obtain, from the off-chip memory 108, the image signal of the frame preceding the current image signal or the feature map of that previous-frame image signal, and then use the previous-frame image signal or its feature map as reference data to process the current image signal.
  • the AI processor 101 can also store the processed image signal in the off-chip memory 108 .
  • the off-chip memory 108 may include random access memory (RAM), which may include volatile memory (eg, SRAM, DRAM, DDR (Double Data Rate SDRAM), or SDRAM, etc.) and non-volatile memory.
  • the electronic device 100 may further include a communication unit (not shown in the figure), where the communication unit includes but is not limited to a short-range communication unit or a cellular communication unit.
  • the short-range communication unit performs information exchange with a terminal located outside the mobile terminal for accessing the Internet by running a short-range wireless communication protocol.
  • the short-range wireless communication protocol may include, but is not limited to, various protocols supported by radio frequency identification technology, Bluetooth communication technology protocols, or infrared communication protocols.
  • the cellular communication unit is connected to the Internet by running the cellular wireless communication protocol and the wireless access network, so as to realize the information exchange between the mobile communication unit and the server supporting various applications in the Internet.
  • the communication unit may be integrated in the same SOC with the AI processor 101 and the ISP 102 described in the above embodiments, or may be provided separately.
  • the electronic device 100 may optionally include a bus, an input/output port I/O, a memory controller, and the like.
  • the memory controller is used to control the memory 104 and the off-chip memory 108.
  • the bus, the input/output port I/O, and the storage controller, etc. can be integrated into the same SOC with the above-mentioned ISP 102 and AI processor 101 and the like. It should be understood that, in practical applications, the electronic device 100 may include more or less components than those shown in FIG. 1 or FIG. 3 , which is not limited in this embodiment of the present application.
  • each of the multiple image processing models run in the AI processor is trained using a machine learning network based on sample image data collected in the corresponding scene, and is then deployed in the electronic device.
  • FIG. 7 shows a schematic flow 700 of the training method of the image processing model running in the AI processor. In conjunction with FIG. 7 , the training of the image processing model is described.
  • Step 701: generate multiple training sample sets.
  • the step of generating multiple training sample sets may include the following sub-steps. Step 7011: generate a first model.
  • the first model is an end-to-end model, which is generated at the offline end, and the first model can process image data collected from any scene.
  • the first model can be obtained by training using traditional model training methods based on training samples.
  • Step 7012: based on the divided scenes, collect sample image data in the different scenes respectively.
  • Step 7013: input the collected sample image data into the first model to generate reference image signals for the different scenes.
  • Step 7014: based on the image processing flow executed by the AI processor, preprocess the sample image data to generate the preprocessed image signals to be input to the image processing model.
  • the training sample sets correspond to the scenes one by one, and each training sample set includes the preprocessed image signals generated by preprocessing the sample image data collected in the corresponding scene, and the reference image signals generated by processing that sample image data using the first model.
  • the neural network may include, but is not limited to, a recurrent neural network, a convolutional neural network, or a deep neural network.
  • for the scene where the electronic device is stationary or moving at a low speed, any one of a recurrent neural network, a convolutional neural network, and a deep neural network can be trained to obtain the image processing model; for the scene where the electronic device is moving at a high speed, any one of a convolutional neural network and a deep neural network can be trained to obtain the image processing model. For example, for the stationary or low-speed scene, a recurrent neural network can be trained to obtain the image processing model.
  • the neural network is a convolutional neural network as an example.
  • the preprocessed image signal is input into the neural network to obtain an output image signal; the output image signal is compared with the reference image signal, and a loss function is constructed based on the difference between the output image signal and the reference image signal, the loss function containing the weight parameters of the neural network; the weight parameters of the neural network are iteratively adjusted by the back-propagation algorithm and the gradient descent algorithm; when a preset condition is met, the parameters of the neural network are saved, and the neural network that meets the preset condition is the image processing model.
  • the above preset condition may include at least one of the following: the loss value of the preset loss function is less than or equal to a preset threshold, or the number of iterations of adjusting the neural network is greater than or equal to a preset threshold.
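A minimal PyTorch sketch of this training loop, added here for illustration only: the tiny network, the loss threshold, the iteration cap, and the tensor shapes are assumptions rather than the concrete choices of the disclosure.

```python
# Per-scene training loop: forward pass, loss from the output-vs-reference
# difference, back-propagation, gradient descent, and the two stop conditions.
import torch
import torch.nn as nn

model = nn.Sequential(                      # deliberately small CNN
    nn.Conv2d(4, 16, 3, padding=1), nn.ReLU(),
    nn.Conv2d(16, 4, 3, padding=1),
)
optimizer = torch.optim.SGD(model.parameters(), lr=1e-3)  # gradient descent
loss_fn = nn.MSELoss()

LOSS_THRESHOLD, MAX_ITERS = 1e-4, 10_000    # the two preset conditions

pre = torch.rand(8, 4, 64, 64)              # preprocessed image signals
ref = torch.rand(8, 4, 64, 64)              # reference image signals

for it in range(MAX_ITERS):                 # stop: iteration count reached
    out = model(pre)                        # forward pass
    loss = loss_fn(out, ref)
    optimizer.zero_grad()
    loss.backward()                         # back-propagation
    optimizer.step()                        # weight update
    if loss.item() <= LOSS_THRESHOLD:       # stop: loss below threshold
        break

torch.save(model.state_dict(), "model_scene.pt")  # save the trained parameters
```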
  • the embodiments of the present application further provide an image processing method.
  • the image processing method can be applied to the electronic device 100 shown in any one of FIG. 1 and FIG. 3 to FIG. 6.
  • taking scene information that includes ambient light brightness information and motion state information of the electronic device as an example, the image processing method provided by the embodiment of the present application is described below in conjunction with the electronic device 100 shown in FIG. 3 and FIG. 4. Please continue to refer to FIG. 8.
  • FIG. 8 shows a flow 800 of the image processing method provided by the embodiment of the present application.
  • the image processing method includes: Step 801: the image sensor 105 collects image data and provides the collected image data to the ISP 102.
  • Step 802: the controller 103 obtains the sensitivity information of the image data from the ISP 102, obtains the acceleration data of the electronic device from the acceleration sensor, and obtains the three-axis component data of the electronic device from the gravity sensor.
  • Step 803: the controller 103 generates motion state data of the electronic device based on the acceleration data and the three-axis component data.
  • Step 804: the controller 103 compares the sensitivity information with multiple preset sensitivity intervals, compares the motion state data with multiple preset motion speed intervals, and, based on the comparison results, generates scene information including ambient light brightness information and motion state information and provides it to the AI processor 101 and the ISP 102, respectively.
  • in this example, the ambient light brightness information indicates low ambient light brightness, and the motion state information indicates low-speed motion of the electronic device.
  • Step 805: the ISP 102 updates the parameters of its image processing algorithm based on the scene information.
  • Step 806: the updated image processing algorithm is used to process the image data to generate an image signal A.
  • Step 807: based on the scene information, the AI processor 101 selects one image processing model from the multiple image processing models to process the image signal A and generate an image signal B.
  • Step 808: the ISP 102 processes the image signal B to generate the final image processing result (a schematic sketch of this flow follows the variant notes below).
  • it should be understood that the steps of FIG. 8 are only examples; for instance, when the ISP 102 is not provided with a parameter adjustment unit and uses the same parameters to process images collected in different scenes, step 804 does not need to provide the scene information to the ISP 102, and step 805 may also be omitted.
  • as another example, when the image processing method described in the embodiments of the present application is applied to the electronic device 100 shown in FIG. 6, step 808 is replaced by the ISP 102 processing the image signal B to generate an image signal C; after step 808, the method further includes the steps of the AI processor 101 processing the image signal C to generate an image signal D, and the ISP 102 processing the image signal D to generate the final image processing result.
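For illustration only, steps 804 to 808 can be sketched as follows; every threshold, the two-bit scene code, and the string stubs that stand in for real processing stages are assumptions, not the patented implementation:

```python
# Schematic walk-through of the FIG. 8 flow.
def classify_scene(iso, motion_speed, iso_low_light=800, speed_high=0.5):
    light = "1" if iso < iso_low_light else "0"          # "1" = high brightness
    motion = "1" if motion_speed >= speed_high else "0"  # "1" = high speed
    return light + motion                                # e.g. "10"

ISP_PARAM_SETS = {"00": {"nr_sigma": 3.0}, "01": {"nr_sigma": 3.5},
                  "10": {"nr_sigma": 1.0}, "11": {"nr_sigma": 1.5}}
AI_MODELS = {code: (lambda x, c=code: f"ai[{c}]({x})") for code in ISP_PARAM_SETS}

def process_frame(raw, iso, motion_speed):
    scene = classify_scene(iso, motion_speed)    # step 804: scene information
    params = ISP_PARAM_SETS[scene]               # step 805: update ISP params
    signal_a = f"isp{params}({raw})"             # step 806: image signal A
    signal_b = AI_MODELS[scene](signal_a)        # step 807: image signal B
    return f"isp_final({signal_b})"              # step 808: final result

print(process_frame("raw_frame", iso=1600, motion_speed=0.1))
```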
  • the electronic device includes corresponding hardware and/or software modules for executing each function.
  • the steps of each example described in conjunction with the embodiments disclosed herein can be implemented in hardware or in a combination of hardware and computer software. Whether a function is performed by hardware or computer software driving hardware depends on the specific application and design constraints of the technical solution. Those skilled in the art may use different methods to implement the described functionality for each particular application in conjunction with the embodiments, but such implementations should not be considered beyond the scope of this application.
  • in this embodiment, the above one or more processors may be divided into functional modules according to the foregoing method examples; for example, a separate processor may be assigned to each function, or the processors for two or more functions may be integrated into one processor module.
  • the above integrated modules can be implemented in the form of hardware. It should be noted that the division of modules in this embodiment is schematic and is only a logical function division; there may be other division manners in actual implementation.
  • when each functional module is divided corresponding to each function, FIG. 9 shows a possible schematic diagram of the apparatus 900 involved in the above embodiments, and the above apparatus can be further expanded.
  • the apparatus 900 may include: an AI processing module 901 and an image signal processing module 902.
  • the AI processing module 901 is configured to select a first image processing model from multiple image processing models and perform first image signal processing on a first image signal by using the first image processing model to obtain a second image signal, where the first image signal is obtained based on first image data output by the image sensor, and the scene information reflects the feature classification of the first image signal;
  • the image signal processing module 902 is configured to perform second image signal processing on the second image signal to obtain a first image processing result.
  • the scene information includes at least one of first ambient light brightness information and first motion state information of the electronic device.
  • the image signal processing module 902 is configured to: select a first parameter from multiple sets of parameters for running the image processing algorithm based on the scene information; obtain an updated image processing algorithm based on the first parameter; and perform the second image signal processing on the second image signal by using the updated image processing algorithm.
  • the AI processing module 901 is further configured to: when the first motion state information indicates that the electronic device is moving at a speed lower than a preset threshold, process the first image signal based on the previous frame of image signal and the image processing result of the previous frame of image signal.
  • the first image signal processing includes at least one of the following processing procedures: noise removal, black level correction, shading correction, white balance correction, demosaicing, chromatic aberration correction, or gamma correction.
  • the second image signal processing includes at least one of the following processing procedures: noise removal, black level correction, shading correction, white balance correction, demosaicing, chromatic aberration correction, gamma correction, or RGB-to-YUV domain conversion.
  • the multiple image processing models are obtained through training based on multiple training sample sets corresponding to multiple scenes, where each training sample set of the multiple training sample sets includes a preprocessed image signal generated by processing sample image data collected in the corresponding scene, and a reference image signal generated by processing the sample image data.
  • the image processing apparatus 900 provided in this embodiment is configured to execute the image processing method executed by the electronic apparatus 100, and can achieve the same effect as the above-mentioned implementation method or apparatus.
  • each module corresponding to the above FIG. 9 may be implemented by software, hardware or a combination of the two.
  • for example, each module may be implemented in the form of software corresponding to the processor associated with that module in FIG. 1, and used to drive that processor to work.
  • alternatively, each module may include the corresponding processor and the corresponding driver software, that is, be implemented by a combination of software and hardware. Therefore, the image processing apparatus 900 can be considered to logically include the apparatuses shown in FIG. 1 and FIG. 3 to FIG. 6, and each module includes at least a driver software program for the corresponding function, which is not expanded on in this embodiment.
  • the image processing apparatus 900 may include at least one processor and a memory, with specific reference to FIG. 1 .
  • the at least one processor can invoke all or part of the computer program stored in the memory to control and manage the actions of the electronic device 100; for example, it can be used to support the electronic device 100 in performing the steps performed by the above modules.
  • the memory may be used to support the operation of the electronic device 100 by storing program code and data and the like.
  • the at least one processor can implement or execute the various exemplary logic modules described in connection with the present disclosure, and may be a combination of one or more microprocessors that implement computing functions, including, but not limited to, the AI processor 101 and the image signal processor 102 shown in FIG. 1.
  • the at least one processor may also include other programmable logic devices, transistor logic devices, or discrete hardware components, or the like.
  • the memory in this embodiment may include, but is not limited to, the off-chip memory 108 or the memory 104 shown in FIG. 3 .
  • This embodiment also provides a computer-readable storage medium storing computer instructions; when the computer instructions are run on a computer, the computer is caused to execute the above related method steps to implement the image processing method in the above embodiments.
  • This embodiment also provides a computer program product; when the computer program product is run on a computer, the computer is caused to execute the above related steps to implement the image processing method in the above embodiments.
  • the computer-readable storage medium or the computer program product provided in this embodiment is used to execute the corresponding method provided above; therefore, for the beneficial effects that can be achieved, reference may be made to the beneficial effects of the corresponding method provided above, and details are not repeated here.
  • each functional unit in each embodiment of the present application may be integrated into one product, or each unit may physically exist alone, or two or more units may be integrated into one product.
  • corresponding to FIG. 9, if the above modules are implemented in the form of software functional units and sold or used as independent products, they may be stored in a readable storage medium.
  • based on this understanding, the technical solutions of the embodiments of the present application, in essence, or the part contributing to the prior art, or all or part of the technical solutions, may be embodied in the form of a software product; the software product is stored in a storage medium and includes several instructions for enabling a device (which may be a single-chip microcomputer, a chip, or the like) or a processor to execute all or part of the steps of the methods in the embodiments of the present application.
  • the aforementioned readable storage medium includes any medium that can store program code, such as a USB flash drive, a removable hard disk, a read-only memory (ROM), a random access memory (RAM), a magnetic disk, or an optical disc.

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Physics & Mathematics (AREA)
  • Multimedia (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Software Systems (AREA)
  • Health & Medical Sciences (AREA)
  • Computing Systems (AREA)
  • Databases & Information Systems (AREA)
  • Evolutionary Computation (AREA)
  • General Health & Medical Sciences (AREA)
  • Medical Informatics (AREA)
  • Artificial Intelligence (AREA)
  • Image Processing (AREA)

Abstract

Embodiments of the present application provide an electronic apparatus and an image processing method of the electronic apparatus. The electronic apparatus includes: an artificial intelligence AI processor, configured to select a first image processing model from multiple image processing models based on scene information, and to perform first image signal processing on a first image signal by using the first image processing model to obtain a second image signal, where the first image signal is obtained based on first image data output by an image sensor, and the scene information reflects the feature classification of the first image signal; and an image signal processor ISP, configured to perform second image signal processing on the second image signal to obtain a first image processing result. The electronic apparatus provided by the embodiments of the present application can improve the image processing effect.

Description

Electronic apparatus and image processing method of electronic apparatus
Technical Field
Embodiments of the present application relate to the field of electronic technologies, and in particular, to an electronic apparatus and an image processing method of the electronic apparatus.
Background
With the progress of electronic science and technology, intelligent terminals integrate more and more functions. Thanks to the development of image processing technology, more and more users like to use intelligent terminal devices to take photos, record videos, make video calls, and the like.
Limited by the computing capability of the algorithms of the image signal processor (ISP) in an intelligent terminal, the industry has proposed, in order to improve the image processing effect, a technology that combines conventional image processing algorithms with artificial intelligence (AI) algorithms for image processing. In such a solution, a single network model is usually used to process image signals collected in various scenes, which increases the complexity of the model structure and of the model training process. Limited by the memory capacity and running speed of a terminal device, such a network model is difficult to deploy and implement in the terminal device. Therefore, the conventional technology still has not solved the problem that image processing with a conventional ISP in a terminal device yields a poor result.
Summary
The electronic apparatus and the image processing method of the electronic apparatus provided by the present application can improve the image processing effect. To achieve the above objective, the present application adopts the following technical solutions.
According to a first aspect, an embodiment of the present application provides an electronic apparatus, including: an artificial intelligence AI processor, configured to select a first image processing model from multiple image processing models based on scene information, and to perform first image signal processing on a first image signal by using the first image processing model to obtain a second image signal, where the first image signal is obtained based on first image data output by an image sensor, and the scene information reflects the feature classification of the first image signal; and an image signal processor ISP, configured to perform second image signal processing on the second image signal to obtain a first image processing result.
By running multiple image processing models to process image data collected in multiple scenes, the AI processor can reduce the structural complexity of each image processing model; for example, each image processing model can be implemented with fewer convolutional layers and fewer nodes, so that the image processing models are easier to deploy and run in a terminal device. Because the structural complexity of the image processing models is reduced, the running speed of the AI processor, that is, the image processing speed, can be increased. In addition, because each image processing model is dedicated to processing the image data of one scene, the image processing effect can also be improved compared with using a single image processing model to process image data collected in multiple scenes.
In a possible implementation, the first image signal processing includes at least one of the following processing procedures: noise removal, black level correction, shading correction, white balance correction, demosaicing, chromatic aberration correction, or gamma correction.
In a possible implementation, the scene information includes at least one of first ambient light brightness information and first motion state information of the electronic device.
In a possible implementation, the AI processor is further configured to: when the first motion state information indicates that the electronic device is moving at a speed lower than a preset threshold, process the first image signal based on the previous frame of image signal and the image processing result of the previous frame of image signal.
By processing the current frame of image signal with reference to the previous frame of image and the image processing result of the previous frame of image, the processing effect on the image signal can be further improved.
In a possible implementation, the ISP is configured to: select a first parameter from multiple sets of parameters of an image processing algorithm based on the scene information; obtain an updated image processing algorithm based on the first parameter; and perform the second image signal processing on the second image signal by using the updated image processing algorithm.
In a possible implementation, the second image signal processing includes at least one of the following processing procedures: noise removal, black level correction, shading correction, white balance correction, demosaicing, chromatic aberration correction, gamma correction, or RGB-to-YUV domain conversion.
In a possible implementation, the ISP is further configured to: receive the first image data from the image sensor, and perform third image signal processing on the first image data to obtain the first image signal.
In a possible implementation, the ISP is further configured to perform the third image signal processing on the first image data by using the updated image processing algorithm.
In a possible implementation, the electronic apparatus further includes: a controller, configured to generate the scene information based on data collected by at least one sensor, where the at least one sensor includes at least one of the following: an acceleration sensor, a gravity sensor, and the image sensor.
In a possible implementation, the third image signal processing includes at least one of the following processing procedures: noise removal, black level correction, shading correction, white balance correction, demosaicing, chromatic aberration correction, or gamma correction.
In a possible implementation, the multiple image processing models are obtained through training based on multiple training sample sets corresponding to multiple scenes, where each training sample set of the multiple training sample sets includes a preprocessed image signal generated by processing sample image data collected in the corresponding scene, and a reference image signal generated by processing the sample image data.
According to a second aspect, an embodiment of the present application provides an image processing method of an electronic apparatus, including: based on scene information, controlling an artificial intelligence AI processor to select a first image processing model from multiple image processing models, and to perform first image signal processing on a first image signal by using the first image processing model to obtain a second image signal, where the first image signal is obtained based on first image data output by an image sensor, and the scene information reflects the feature classification of the first image signal; and controlling an image signal processor ISP to perform second image signal processing on the second image signal to obtain a first image processing result.
Based on the second aspect, in a possible implementation, the controlling the image signal processor ISP to perform the second image signal processing on the second image signal to obtain an image processing result includes: based on the scene information, controlling the ISP to select a first parameter from multiple sets of parameters for running an image processing algorithm; controlling the ISP to obtain an updated image processing algorithm based on the first parameter; and controlling the ISP to perform the second image signal processing on the second image signal by using the updated image processing algorithm.
According to a third aspect, an embodiment of the present application provides an image processing apparatus, including: an AI processing module, configured to select a first image processing model from multiple image processing models and perform first image signal processing on a first image signal by using the first image processing model to obtain a second image signal, where the first image signal is obtained based on first image data output by an image sensor, and the scene information reflects the feature classification of the first image signal; and an image signal processing module, configured to perform second image signal processing on the second image signal to obtain a first image processing result.
In a possible implementation, the scene information includes at least one of first ambient light brightness information and first motion state information of the electronic apparatus.
In a possible implementation, the image signal processing module is configured to: select a first parameter from multiple sets of parameters for running an image processing algorithm based on the scene information; obtain an updated image processing algorithm based on the first parameter; and perform the second image signal processing on the second image signal by using the updated image processing algorithm.
In a possible implementation, the AI processing module is further configured to: when the first motion state information indicates that the electronic device is moving at a speed lower than a preset threshold, process the first image signal based on the previous frame of image signal and the image processing result of the previous frame of image signal.
In a possible implementation, the first image signal processing includes at least one of the following processing procedures: noise removal, black level correction, shading correction, white balance correction, demosaicing, chromatic aberration correction, or gamma correction.
In a possible implementation, the second image signal processing includes at least one of the following processing procedures: noise removal, black level correction, shading correction, white balance correction, demosaicing, chromatic aberration correction, gamma correction, or RGB-to-YUV domain conversion.
In a possible implementation, the multiple image processing models are obtained through training based on multiple training sample sets corresponding to multiple scenes, where each training sample set of the multiple training sample sets includes a preprocessed image signal generated by processing sample image data collected in the corresponding scene, and a reference image signal generated by processing the sample image data.
According to a fourth aspect, an embodiment of the present application provides an electronic apparatus, including a memory and at least one processor, where the memory is configured to store a computer program, and the at least one processor is configured to invoke all or part of the computer program stored in the memory to execute the method according to the second aspect. The at least one processor includes the AI processor and the ISP. Optionally, the electronic apparatus further includes the image sensor.
According to a fifth aspect, an embodiment of the present application provides a system on a chip, including at least one processor and an interface circuit, where the interface circuit is configured to obtain a computer program from outside the chip system; when executed by the at least one processor, the computer program is used to implement the method according to the second aspect. The at least one processor includes the AI processor and the ISP.
According to a sixth aspect, an embodiment of the present application provides a computer-readable storage medium storing a computer program; when executed by at least one processor, the computer program is used to implement the method according to the second aspect. The at least one processor includes the AI processor and the ISP.
According to a seventh aspect, an embodiment of the present application provides a computer program product; when executed by at least one processor, the computer program product is used to implement the method according to the second aspect. The at least one processor includes the AI processor and the ISP.
It should be understood that the technical solutions of the second to seventh aspects of the present application are consistent with that of the first aspect of the present application; the beneficial effects achieved by the aspects and the corresponding feasible implementations are similar and are not repeated here.
Brief Description of the Drawings
To describe the technical solutions of the embodiments of the present application more clearly, the following briefly introduces the accompanying drawings used in describing the embodiments of the present application. Apparently, the accompanying drawings in the following description show merely some embodiments of the present application, and a person of ordinary skill in the art may still derive other drawings from these accompanying drawings without creative efforts.
FIG. 1 is a schematic diagram of a hardware structure of an electronic apparatus according to an embodiment of the present application;
FIG. 2 is a schematic diagram of an application scenario according to an embodiment of the present application;
FIG. 3 is another schematic diagram of a hardware structure of an electronic apparatus according to an embodiment of the present application;
FIG. 4 is another schematic diagram of a hardware structure of an electronic apparatus according to an embodiment of the present application;
FIG. 5 is another schematic diagram of a hardware structure of an electronic apparatus according to an embodiment of the present application;
FIG. 6 is another schematic diagram of a hardware structure of an electronic apparatus according to an embodiment of the present application;
FIG. 7 is a schematic flowchart of a method for training an image processing model run in an AI processor according to an embodiment of the present application;
FIG. 8 is a schematic flowchart of an image processing method according to an embodiment of the present application;
FIG. 9 is a schematic diagram of a software structure of an electronic apparatus according to an embodiment of the present application.
Detailed Description
The following clearly and completely describes the technical solutions in the embodiments of the present application with reference to the accompanying drawings in the embodiments of the present application. Apparently, the described embodiments are merely some rather than all of the embodiments of the present application. All other embodiments obtained by a person of ordinary skill in the art based on the embodiments of the present application without creative efforts shall fall within the protection scope of the present application.
The words "first", "second", and similar terms mentioned herein do not denote any order, quantity, or importance, but are merely used to distinguish different parts. Likewise, words such as "a", "an", or "one" do not denote a quantity limitation, but denote the existence of at least one. Words such as "coupled" are not limited to a direct physical or mechanical connection, but may include an electrical connection, whether direct or indirect, equivalent to connection in a broad sense.
In the embodiments of the present application, words such as "exemplary" or "for example" are used to indicate an example, an illustration, or a description. Any embodiment or design solution described as "exemplary" or "for example" in the embodiments of the present application should not be construed as more preferred or advantageous than other embodiments or design solutions. Rather, the use of the words "exemplary" or "for example" is intended to present a related concept in a specific manner. In the descriptions of the embodiments of the present application, unless otherwise stated, "multiple" means two or more; for example, multiple processors means two or more processors.
本申请实施例提供的电子装置,可以是个电子设备或集成于电子设备内的模组、芯片、芯片组、电路板或部件。该电子设备可以是一个用户设备(User Equipment,UE),如手机、平板电脑、智能屏幕或者图像拍摄设备等各种类型的设备。该电子设备可以设置有图像传感器,以用于采集图像数据。该电子设备还可以安装有诸如摄像类应用、视频通话类应用或者在线视频拍摄类应用等各种用于驱动图像传感器采集图像的软件应用,用户可以通过启动上述各类应用以利用图像传感器拍摄照片或视频。用户还可以通过该类应用进行各种图像美化的个性化设置,以视频通话类应用为例,用户可以在视频通话时选择对屏幕呈现的画面(例如所呈现的面部头像、或所呈现的背景画面)进行自动调节(例如“一键美化”)。当用户启动上述各类应用后或者启动上述各类应用且选择图像美化后,电子设备中对上述各类应用所支持的图像处理服务可以触发电子设备对图像传感器所采集的图像数据进行处理,从而在电子设备的屏幕中呈现处理后的图像,以达到图像美化的效果。该图像美化例如可以包括但不限于:提高图像局部或者整个画幅的亮度、更改图像的显示颜色、对图像中呈现的面部对象磨皮、调节画面饱和度、调节画面曝光度、调节画面鲜明度、调节画 面高光、调节画面对比度、调节画面锐度或者调节画面清晰度等。本申请实施例所述的图像处理可以包括但不限于:噪声去除、黑电平矫正、阴影矫正、白平衡校正、去马赛克、色差矫正、伽马Gamma矫正或者红绿蓝(RGB)转YUV(YCrCb)域,从而达到上述图像美化的效果。基于本申请实施例所述的电子装置,在一个具体的应用场景中,用户A与用户B之间进行视频通话时,呈现在用户A使用的电子设备屏幕中的图像,以及呈现在用户B使用的电子设备屏幕中的用户A的图像,可以是经过本申请实施例所述的电子装置处理后的图像,并且会一直呈现处理后的图像直到用户A与用于B终止视频通话或者用户A关闭图像处理服务。
Based on the application scenario described above, please refer to FIG. 1, which shows a schematic diagram of a hardware structure of an electronic apparatus according to an embodiment of the present application. The electronic apparatus 100 may specifically be, for example, a chip or chipset, a circuit board carrying a chip or chipset, or an electronic device including the circuit board, without limiting the embodiments; the specific electronic device is as introduced above and is omitted here. The chip or chipset, or the circuit board carrying the chip or chipset, can work under software driving. The electronic apparatus 100 includes one or more processors, for example, an AI processor 101 and an ISP 102. Optionally, the one or more processors may be integrated into one or more chips, and the one or more chips may be regarded as a chipset; when the one or more processors are integrated into the same chip, the chip is also called a system on a chip (SOC). In addition to the one or more processors, the electronic apparatus 100 further includes one or more other components, such as a memory 104 and an image sensor 105. In a possible implementation, the memory 104 may be located in the same system on a chip as the AI processor 101 and the ISP 102 in the electronic apparatus 100, that is, the memory 104 is integrated into the SOC shown in FIG. 1.
The AI processor 101 shown in FIG. 1 may include a dedicated neural processor such as a neural-network processing unit (NPU), including but not limited to a convolutional neural network processor, a tensor processor, or a neural processing engine. The AI processor may be a separate component or may be integrated into another digital logic device, including but not limited to a CPU (central processing unit), a GPU (graphics processing unit), or a DSP (digital signal processor). Exemplarily, the CPU, GPU, and DSP are all processors within the system on a chip. The AI processor 101 can run multiple image processing models, and the multiple image processing models are used to perform image processing operations in multiple scenes, where one image processing model is used to perform the image processing operations of one of the scenes. For example, if the multiple scenes include a scene with high external ambient light brightness and a scene with low external ambient light brightness, the AI processor 101 can run two image processing models: one image processing model performs the image processing operations of the high-ambient-light-brightness scene, and the other performs the image processing operations of the low-ambient-light-brightness scene. In a possible implementation, the multiple scenes may be divided based on preset scene information. The scene information may include, but is not limited to, at least one of the following: ambient light brightness information and motion state information of the electronic device. In the embodiments of the present application, the scene information reflects the feature classification of the image signal to be processed by the AI processor. The scene information may include ambient light brightness information and motion state information of the electronic device. Specifically, the features of the image signal may include, but are not limited to, noise features, shading features, white balance features, and the like. The feature classification includes motion features and ambient light brightness features; for example, it can be divided into low ambient light brightness and high ambient light brightness, or into a high-speed motion state and a low-speed motion state. The feature classification can be used to indicate the magnitude of the noise of the image signal, the magnitude of the shading in the image signal, and the like. When image data is collected in different scenes, the feature classes of the image signals corresponding to the collected image data differ. Through the scene information, the AI processor can learn the features of the image signal corresponding to the image data. Thus, the AI processor 101 can, based on the scene information, run one of the image processing models to perform image processing. Each of the multiple image processing models can perform one or more image processing procedures, which may include, but are not limited to: noise reduction, black level correction, shading correction, white balance correction, demosaicing, chromatic aberration correction, gamma correction, or RGB-to-YUV domain conversion. Each image processing model is trained by a machine learning method based on sample image data collected in the corresponding scene; for the method of training the image processing models, refer to the embodiment shown in FIG. 7. It should be noted that the multiple image processing models run in the AI processor 101 are all used to perform the same image processing operation. For example, the multiple image processing models run in the AI processor 101 are all used to perform the image processing operation of noise reduction; depending on the scene information, the noise reduction levels performed by the respective image processing models differ. For example, in a high-ambient-light-brightness scene, the noise of the image signal is low, and the noise reduction performed by the image processing model corresponding to the high-ambient-light scene is weak; in a low-ambient-light-brightness scene, the noise of the image signal is high, and the noise reduction performed by the image processing model corresponding to the low-ambient-light scene is strong. In the embodiments of the present application, the AI processor 101 runs multiple image processing models to process image data collected in multiple scenes, which can reduce the complexity of each image processing model; for example, each image processing model can be implemented with fewer convolutional layers and fewer nodes, which can increase the running speed of the AI processor 101, that is, the image processing speed. In addition, because each image processing model is dedicated to processing the image data of one scene, the image processing effect can also be improved compared with using a single image processing model to process image data collected in multiple scenes.
The following describes the multiple image processing models run by the AI processor 101 in more detail, taking scene information that includes ambient light brightness information and motion state information of the electronic device as an example. In the embodiments of the present application, the motion state of the electronic device can be divided into multiple motion state intervals in order from high motion speed to low motion speed, for example, five motion state intervals from a first motion state to a fifth motion state. Similarly, the ambient light brightness can be divided into multiple brightness intervals in order from low brightness to high brightness, for example, five brightness intervals from a first brightness to a fifth brightness. The motion state intervals and the brightness intervals are then combined arbitrarily to obtain multiple combinations of motion state and brightness. Each of the multiple combinations corresponds to one image processing model; that is, the multiple combinations correspond to multiple image processing models. The embodiments of the present application are described with reference to the application scenario shown in FIG. 2, taking as an example that the motion state of the electronic device is divided into two classes, a high-speed motion state and a low-speed motion state (where a stationary state can be classified into the low-speed motion state), and that the ambient light brightness is divided into two classes, low ambient light brightness and high ambient light brightness. As shown in FIG. 2, with the motion state of the electronic device divided into a low-speed motion state and a high-speed motion state and the ambient light brightness divided into low ambient light brightness and high ambient light brightness, the AI processor 101 can run four image processing models. Image processing model 01 performs image processing operations on image data collected in a low-speed-motion, low-ambient-light-brightness scene; image processing model 02 performs image processing operations on image data collected in a low-speed-motion, high-ambient-brightness scene; image processing model 03 performs image processing operations on image data collected in a high-speed-motion, low-ambient-light-brightness scene; and image processing model 04 performs image processing operations on image data collected in a high-speed-motion, high-ambient-light-brightness scene. Based on the scene information, the AI processor 101 runs one of the four image processing models to perform image processing. As an example, assuming the scene information indicates a high-speed motion state and high ambient light brightness, the AI processor 101 runs image processing model 04 to process the image data.
In a possible implementation of the embodiments of the present application, when the scene information includes the motion state information of the electronic device, the image processing model corresponding to the low-speed motion state (that is, the electronic device moving at a speed lower than a preset threshold) may be obtained by training a recurrent neural network on training samples. In the low-speed motion state scene, when the AI processor runs this image processing model to process the current frame of image signal, the AI processor may also input at least one of the previous frame of image signal and the image processing result of the previous frame of image signal, together with the current frame of image signal, into the image processing model, and the image processing model may process the current frame of image signal with reference to the previous frame of image signal and the image processing result of the previous frame of image signal.
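As an illustrative sketch only (the channel layout and the tiny network are assumptions, not the disclosed model), a recurrent-style model of this kind might look as follows in PyTorch:

```python
# The model receives the current frame together with the previous frame
# and the previous frame's processing result, concatenated on channels.
import torch
import torch.nn as nn

class RecurrentDenoiser(nn.Module):
    def __init__(self, ch=4):
        super().__init__()
        # 3 * ch input channels: current frame, previous frame, previous result
        self.net = nn.Sequential(
            nn.Conv2d(3 * ch, 16, 3, padding=1), nn.ReLU(),
            nn.Conv2d(16, ch, 3, padding=1),
        )

    def forward(self, cur, prev, prev_result):
        x = torch.cat([cur, prev, prev_result], dim=1)
        return self.net(x)

model = RecurrentDenoiser()
prev = prev_result = torch.zeros(1, 4, 64, 64)    # no history for frame 0
for _ in range(3):                                # a short stream of frames
    cur = torch.rand(1, 4, 64, 64)
    # Detach so the recurrent state does not keep the old autograd graph.
    prev_result = model(cur, prev, prev_result).detach()
    prev = cur
```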
The scene information described in the embodiments of the present application may be delivered to the AI processor 101 by a controller running in the electronic device 100. In a possible implementation, the AI processor 101 may pre-store a first mapping table between scene information and the storage address information of the image processing models. After obtaining the scene information, the AI processor 101 can query the first mapping table to obtain the address information of the corresponding image processing model. Finally, the AI processor 101 can load the image processing model from the address indicated by the obtained address information. In other possible implementations, the first mapping table may also be pre-stored in the controller; after obtaining the scene information, the controller can, based on the first mapping table, directly deliver the storage address information of the image processing model to the AI processor 101. For the specific manner of determining the scene information, refer to the related description of the embodiment shown in FIG. 3 below; details are not repeated here.
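A minimal sketch of such a first mapping table, added for illustration; the address strings and the stand-in loader are assumptions:

```python
# Scene information (two-bit code) -> storage address of the model.
FIRST_MAPPING_TABLE = {
    "00": "/models/low_light_low_motion.bin",
    "01": "/models/low_light_high_motion.bin",
    "10": "/models/high_light_low_motion.bin",
    "11": "/models/high_light_high_motion.bin",
}

_loaded = {}

def model_for_scene(scene_bits):
    addr = FIRST_MAPPING_TABLE[scene_bits]            # query the mapping table
    if addr not in _loaded:                           # load from that address once
        _loaded[addr] = f"model_loaded_from({addr})"  # stand-in for a real loader
    return _loaded[addr]
```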
Multiple hardware modules may be provided, or software programs may be run, in the ISP 102 shown in FIG. 1 to process images. The ISP 102 runs image processing algorithms to perform multiple image processing procedures, which may include, but are not limited to: tone mapping, contrast enhancement, edge enhancement, noise reduction, color correction, and the like. In the image processing algorithms run by the ISP 102, the values of certain parameters are adjustable, for example, the spatial-domain Gaussian kernel parameter and the pixel-value-domain Gaussian kernel parameter in the image processing algorithm used for the noise reduction procedure. In a possible implementation, multiple sets of adjustable parameter values may be preset in the ISP 102, and the multiple sets of adjustable parameter values correspond to image processing in multiple scenes, where one set of adjustable parameter values corresponds to image processing in one of the scenes. The multiple scenes may likewise be divided based on scene information; the scene information here is the same as the scene information used to set the image processing models, so refer to the related description, which is not repeated here. The ISP 102 can select one set of adjustable parameter values based on the scene information and update the corresponding part of the image processing algorithms based on the selected values. For example, if, among the multiple image processing procedures performed by the ISP 102, only the parameters of the noise reduction algorithm are adjustable and the parameters of the other procedures do not need adjustment, the ISP 102 can update only the noise reduction algorithm based on the selected parameter values, and then process the image signal with the updated algorithm. Still taking as an example that the scene information includes ambient light brightness information and motion state information of the electronic device, that the motion state includes a high-speed motion state and a low-speed motion state, and that the ambient light brightness includes low brightness and high brightness, the correspondence between the adjustable parameter values in the ISP 102 and the scenes is as follows. Four sets of adjustable parameter values may be preset in the ISP 102: the first set corresponds to the low-speed-motion, low-brightness scene; the second set corresponds to the low-speed-motion, high-brightness scene; the third set corresponds to the high-speed-motion, low-brightness scene; and the fourth set corresponds to the high-speed-motion, high-brightness scene. As an example, assuming the scene information indicates a high-speed motion state and high brightness, the ISP 102 uses the fourth set of adjustable parameter values to update the relevant image processing algorithms it runs, and then processes the image signal with the updated algorithms. In a possible implementation, a second mapping table between scene information and adjustable parameter values may be pre-stored in the ISP 102; based on the scene information, the ISP 102 can query the second mapping table to obtain the corresponding adjustable parameter values.
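For illustration, such a second mapping table for the adjustable denoising parameters mentioned above (the spatial-domain and pixel-value-domain Gaussian kernel parameters) might be sketched as follows; all concrete values are assumptions:

```python
# Scene information (two-bit code) -> (sigma_spatial, sigma_range) for denoise.
SECOND_MAPPING_TABLE = {
    "00": (3.0, 0.10),   # low light, low speed: strongest smoothing
    "01": (2.5, 0.08),
    "10": (1.5, 0.05),
    "11": (1.0, 0.03),   # high light, high speed: weakest smoothing
}

def update_denoise_algorithm(isp_state, scene_bits):
    # Only the denoising parameters are adjustable in this sketch;
    # all other stages keep their existing parameters unchanged.
    sigma_s, sigma_r = SECOND_MAPPING_TABLE[scene_bits]
    isp_state["denoise"] = {"sigma_spatial": sigma_s, "sigma_range": sigma_r}
    return isp_state
```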
In the embodiments of the present application, the AI processor 101 and the ISP 102 can cooperate with each other to process image data collected in the same scene. Specifically, the image data obtained from the image sensor 105 can go through multiple image processing procedures to generate the final image processing result; these procedures may include, but are not limited to: noise reduction, black level correction, shading correction, white balance correction, demosaicing, chromatic aberration correction, gamma correction, or RGB-to-YUV domain conversion. The AI processor 101 can perform one or more of the above procedures by running an image processing model, that is, corresponding to one or more of the above image processing operations, and the ISP 102 can also perform one or more of the above procedures by running image processing algorithms. Therefore, the whole image processing flow includes multiple processing procedures and is distributed as tasks to the AI processor 101 and the ISP 102. The AI processor 101 may perform image processing procedures different from those of the ISP 102, or the AI processor 101 and the ISP 102 may perform the same image processing procedure. When they perform the same procedure, the image processing performed by the AI processor 101 can serve as an enhancement or supplement to that procedure. For example, when the AI processor 101 and the ISP 102 both perform the noise removal procedure, the ISP 102 performs initial denoising, and the AI processor 101 performs secondary denoising on the basis of the initial denoising by the ISP 102. In a possible implementation, the ISP 102 and the AI processor 101 can communicate through an electrical circuit connection. The electrical circuit connection between the AI processor 101 and the ISP 102 is also called a physical connection or an interrupt connection. The interrupt connection includes an interrupt signal processing hardware circuit for implementing the functions of sending and receiving interrupt signals, and connection lines for transmitting the signals, so as to implement the sending and receiving of interrupt signals. The interrupt signal processing hardware circuit includes, but is not limited to, a conventional interrupt controller circuit; for its specific implementation, refer to the related description of interrupt controllers in the prior art, which is not repeated here. For the specific connection between the AI processor 101 and the ISP 102 and the specific implementation of their cooperation in processing images, refer to the related descriptions of the embodiments shown in FIG. 4 to FIG. 6.
In the embodiments of the present application, the electronic device 100 further includes a controller 103, as shown in FIG. 3. The controller 103 may be an integrated controller. In a specific implementation, the controller 103 may be any of various digital logic devices or circuits, including but not limited to: a CPU, a GPU, a microcontroller, a microprocessor, or a DSP. The controller 103 may be located in the same system on a chip as the AI processor 101 and the ISP 102 in the electronic apparatus 100, that is, the controller 103 is integrated into the SOC shown in FIG. 1. Alternatively, the controller 103 may be provided separately from the AI processor 101, the ISP 102, and the memory 104, which is not limited in this embodiment. Further, the controller 103 and the AI processor 101 may both be integrated into the same logic operation device (for example, a CPU), with the same logic operation device implementing the functions performed by the controller 103 and the AI processor 101 described in the embodiments of the present application. The controller 103 runs a software program or software plug-in that drives the controller 103 to obtain the scene information and then send the obtained scene information to the AI processor 101 and the ISP 102 respectively. When the scene information includes ambient light brightness information, in a possible implementation, the ambient light brightness information may be generated by the controller 103 based on the sensitivity information of the image data, where the sensitivity information of the image data may be calculated by an exposure compensation module in the ISP 102 by running a corresponding algorithm. The ambient light brightness information may be a bit signal. Multiple sensitivity intervals (for example, a low sensitivity interval and a high sensitivity interval) may be preset in the controller 103, and the controller 103 can compare the obtained sensitivity information with the thresholds of the multiple sensitivity intervals and generate the bit signal based on the comparison result. When the scene information includes the motion state information of the electronic device, in a possible implementation, the motion state information may be generated by the controller 103 based on the acceleration data of the electronic device and the three-axis component (X-axis, Y-axis, and Z-axis) data of the electronic device. The motion state information of the electronic device may be a bit signal. Multiple motion speed intervals (for example, a low motion speed interval and a high motion speed interval) may likewise be preset in the controller 103; the controller 103 can generate motion state data based on the acceleration data and the three-axis component data, compare the generated motion state data with the thresholds of the multiple motion speed intervals, and generate the bit signal based on the comparison result. The acceleration data may be collected by an acceleration sensor, and the three-axis component data of the electronic device may be collected by a gravity sensor. In this case, the electronic device 100 may further include an acceleration sensor 106 and a gravity sensor 107, as shown in FIG. 3. The scene information may use a two-bit signal, where the first bit indicates the ambient light brightness and the second bit indicates the motion state of the electronic device. For example, "00" indicates low ambient light brightness and a low motion state; "01" indicates low ambient light brightness and a high motion state; "10" indicates high ambient light brightness and a low motion state; and "11" indicates high ambient light brightness and a high motion state. It should be noted that the number of bits used to indicate the scene information shown in the embodiments of the present application is only illustrative; depending on the specific scene information included and the intervals into which each scene is divided, more or fewer bits may be used. For example, when the brightness intervals include low brightness, medium brightness, and high brightness, the bits used to indicate the brightness may include three bits. It should also be noted that, among the multiple value intervals preset in the controller 103 for dividing the different scenes, an overlapping interval is set between every two adjacent value intervals. If the scene information currently generated by the controller 103 falls into an overlapping interval, the controller 103 can refer to the previously generated scene information. If the difference between the currently generated scene information and the previously generated scene information is less than or equal to a preset threshold, the AI processor 101 can keep the currently running image processing model unchanged and the ISP 102 can keep the currently running image processing algorithm unchanged; if the difference is greater than the preset threshold, the currently generated scene information can be re-sent to the AI processor 101 and the ISP 102, so that the AI processor 101 replaces the image processing model and the ISP 102 replaces the parameters of the image processing algorithm. By setting an overlapping interval between every two adjacent value intervals, the embodiments of the present application can prevent the image processing model run in the AI processor 101 from being switched frequently, improving the stability of the image processing model run in the AI processor 101.
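For illustration, the two-bit scene signal with overlapping (hysteresis) intervals might be generated as sketched below; all interval boundaries are assumed values, not the disclosed thresholds:

```python
# Overlapping intervals keep the bit stable when the reading sits between
# the two thresholds, preventing frequent model switching.
ISO_HIGH, ISO_LOW = 900, 700      # overlap: 700-900 belongs to both intervals
SPEED_HIGH, SPEED_LOW = 0.6, 0.4  # overlap: 0.4-0.6 belongs to both intervals

def light_bit(iso, prev_bit):
    if iso >= ISO_HIGH:
        return "0"                # clearly low ambient light (high ISO)
    if iso <= ISO_LOW:
        return "1"                # clearly high ambient light (low ISO)
    return prev_bit               # inside the overlap: keep the previous class

def motion_bit(speed, prev_bit):
    if speed >= SPEED_HIGH:
        return "1"                # clearly high-speed motion
    if speed <= SPEED_LOW:
        return "0"                # clearly low-speed motion
    return prev_bit

def scene_bits(iso, speed, prev_bits="10"):
    return light_bit(iso, prev_bits[0]) + motion_bit(speed, prev_bits[1])
```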
In the embodiments of the present application, the controller 103 can obtain the scene information in real time or periodically; when detecting that the current scene information differs from the previously obtained scene information (for example, a high-brightness, low-motion-state scene changes into a high-brightness, high-motion-state scene), the controller 103 promptly sends the scene information indicating the current scene to the ISP 102 and the AI processor 101 respectively. Based on the currently received scene information, the AI processor 101 promptly replaces the running image processing model, so that the replaced image processing model is run when image processing is performed in the next image processing cycle. The ISP 102 can also, based on the currently received scene information, promptly replace the parameters of the running image processing algorithm, so that the algorithm with updated parameters is run in the next image processing cycle. Therefore, the electronic apparatus described in the embodiments of the present application can dynamically adjust the image processing model used and the parameters of the image processing algorithm run by the ISP 102 based on the scene information, so that when the user changes scenes while using the electronic apparatus (for example, moving from an area with strong light to an area with weak light, or the electronic device changing from a stationary state to a moving state), the collected images are processed in a targeted manner, which improves the image processing effect and helps improve user experience.
Please continue to refer to FIG. 4, which shows a schematic structural diagram of the ISP 102 and the AI processor 101 connected through an electrical circuit according to an embodiment of the present application. In the electronic apparatus 100 shown in FIG. 4, the ISP 102 may include multiple cascaded image processing modules, including image processing module 01, image processing module 02, image processing module 03, ..., image processing module N, and image processing module N+1; each image processing module may include multiple logic devices or circuits to perform a specific image processing function. For example, image processing module 01 performs black level correction, image processing module 02 performs shading correction, image processing module 03 performs shading correction, ..., and image processing module N+1 performs RGB-to-YUV conversion. Based on the image processing requirements, any one of the cascaded image processing modules may be provided with an output port and an input port, where the output port is used to provide an image signal A to the AI processor 101 and the input port is used to obtain an image signal B from the AI processor 101. FIG. 4 schematically shows that image processing module 02 is provided with an output port Vpo1 and image processing module 03 is provided with an input port Vpi1. Based on the structure shown in FIG. 4, in a possible implementation, the electronic apparatus 100 may also be provided with an on-chip RAM; the on-chip RAM is integrated with the ISP 102 and the AI processor 101 in one chip of the electronic apparatus 100, and the image signals provided by the ISP 102 to the AI processor 101 and the image signals provided by the AI processor 101 to the ISP 102 can both be stored in the on-chip RAM. In addition, the on-chip RAM is also used to store intermediate data generated while the AI processor 101 runs, the weight data of each network node in the neural network run by the AI processor 101, and the like. In a specific implementation, the on-chip RAM may be provided in the memory 104 shown in FIG. 1 or FIG. 3.
In a specific scenario, the ISP 102 obtains image data from the image sensor 105; the image data sequentially undergoes shading correction and white balance correction through image processing module 01 and image processing module 02 to generate an image signal A, which is stored in the on-chip RAM. After storing the image signal A in the on-chip RAM, image processing module 02 sends an interrupt signal Z1 to the AI processor 101. In response to the interrupt signal Z1, the AI processor 101 obtains the image signal A from the on-chip RAM. The AI processor 101 demosaics the image signal A to generate an image signal B and stores the image signal B in the on-chip RAM. After storing the image signal B in the on-chip RAM, the AI processor 101 sends an interrupt signal Z2 to image processing module 03. In response to the interrupt signal Z2, image processing module 03 reads the image signal B from the on-chip RAM; the image signal B then sequentially undergoes chromatic aberration correction, ..., gamma correction, and RGB-to-YUV domain conversion through image processing module 03, ..., image processing module N, and image processing module N+1 of the ISP 102 to generate the final image processing result. It should be noted that more image processing modules may be included before image processing module 01, so that the ISP 102 performs more image processing procedures on the image data.
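The interrupt-driven handoff above can be mimicked conceptually as follows; this is an editorial simulation in which threading events merely stand in for the hardware interrupt lines Z1 and Z2, and string transforms stand in for the actual processing stages:

```python
# ISP front end writes image signal A to shared RAM and raises Z1; the AI
# processor writes image signal B back and raises Z2 for the back end.
import threading

shared_ram = {}
z1, z2 = threading.Event(), threading.Event()

def isp_front_end(raw):
    shared_ram["A"] = f"shading+wb({raw})"            # modules 01-02
    z1.set()                                          # interrupt signal Z1

def ai_processor():
    z1.wait()                                         # woken by Z1
    shared_ram["B"] = f"demosaic({shared_ram['A']})"  # AI processing step
    z2.set()                                          # interrupt signal Z2

def isp_back_end():
    z2.wait()                                         # woken by Z2
    return f"rgb2yuv(gamma(cac({shared_ram['B']})))"  # modules 03..N+1

t = threading.Thread(target=ai_processor)
t.start()
isp_front_end("raw_frame")
t.join()
print(isp_back_end())
```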
In the embodiment shown in FIG. 4, the image processing procedure performed by the AI processor 101 is arranged between the multiple image processing procedures performed by the ISP 102, to replace or supplement some intermediate image processing procedures performed by the ISP 102. In some other possible implementations, the AI processor 101 can obtain the image data directly from the image sensor 105 and perform the front-end image processing procedures. In this implementation, the AI processor 101 can replace and supplement some front-end image processing modules in the ISP 102 and perform the corresponding image processing procedures; in this case, the AI processor 101 can communicate directly with the subsequent image processing modules of the ISP 102. For the hardware structure of this implementation, refer to FIG. 5; the connection and interaction between the AI processor 101 and the ISP 102 shown in FIG. 5 are similar to those shown in FIG. 4, so refer to the related description of the embodiment shown in FIG. 4, which is not repeated here.
In the embodiments shown in FIG. 4 and FIG. 5, the AI processor 101 and the ISP 102 interact once, and the AI processor 101 performs one image processing procedure or multiple consecutive image processing procedures to process the image data or image signal. In some other possible implementations, the AI processor 101 can perform multiple non-consecutive image processing procedures; that is, the AI processor 101 and the ISP 102 can perform image processing alternately, so that the two jointly complete the image processing flow to obtain the processing result, replacing the image processing flow of a conventional ISP. In this case, the ISP 102 may include more output ports and input ports. The following takes the structure of the electronic apparatus shown in FIG. 6 as an example. In FIG. 6, image processing module 02 and image processing module 03 of the ISP 102 are provided with an output port Vpo1 and an output port Vpo2 respectively, and image processing module 03 and image processing module N are provided with an input port Vpi1 and an input port Vpi2 respectively. The output port of each module is used to provide an image signal to the AI processor, and the input port of each module is used to obtain an image signal from the AI processor. The image data collected by the image sensor 105 is processed by image processing module 01 and image processing module 02 to generate an image signal A, which is provided to the AI processor 101; the AI processor 101 processes the image signal A to generate an image signal B, which is provided to image processing module 03; the image signal B is processed by image processing module 03 to generate an image signal C, which is provided to the AI processor 101; the AI processor processes the image signal C to generate an image signal D, which is provided to image processing module N; and the image signal D is processed by image processing module N and image processing module N+1 to generate the final image processing result.
Based on the schematic structural diagram shown in FIG. 6, among the multiple image processing models run in the AI processor 101, at least one first image processing model performs a first image processing operation, and at least one second image processing model performs a second image processing operation. Here, when there are multiple first image processing models, the multiple first image processing models are used to process image data collected in different scenes, and the first image processing operations performed by the multiple first image processing models are the same image processing operation. Likewise, when there are multiple second image processing models, the multiple second image processing models are used to process image data collected in different scenes, and the second image processing operations performed by the multiple second image processing models are the same image processing operation. For example, assume the AI processor 101 can run two first image processing models and two second image processing models, where one first image processing model performs noise reduction on image data collected in a high-ambient-light-brightness scene and the other first image processing model performs noise reduction on image data collected in a low-ambient-light-brightness scene; one second image processing model performs demosaicing on image data collected in a high-ambient-light-brightness scene, and the other second image processing model performs demosaicing on image data collected in a low-ambient-light-brightness scene.
In a possible implementation of the embodiments of the present application, the electronic apparatus further includes an off-chip memory 108, as shown in FIG. 3. Because it has a larger storage space, the off-chip memory 108 can be used in place of the on-chip RAM to store larger units of image data. The off-chip memory 108 can be used to store multiple frames of images, which may be the previous frame, the previous two frames, or multiple earlier frames relative to the current image. In addition, the off-chip memory 108 can also be used to store the feature map of each of the above frames; the feature map is generated after the image processing model running in the AI processor 101 performs operations such as convolution and pooling on the image signal. When the AI processor 101 runs an image processing model generated from a recurrent neural network to process the current image signal, it can also obtain from the off-chip memory 108 the previous frame of image signal of the current image signal, or the feature map of the previous frame of image signal, and then process the current image signal using the previous frame of image signal or its feature map as reference data. In addition, the AI processor 101 can also store the processed image signal in the off-chip memory 108. The off-chip memory 108 may include a random access memory (RAM), which may include a volatile memory (such as SRAM, DRAM, DDR (double data rate SDRAM), or SDRAM) and a non-volatile memory.
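A small illustrative sketch of using the off-chip memory as a cache for previous frames and their feature maps; the fixed depth and the data layout are assumptions:

```python
# Ring buffer over the last few frames and their feature maps, returned
# to the recurrent image processing model as reference data.
from collections import deque

class OffChipFrameCache:
    def __init__(self, depth=2):
        self.frames = deque(maxlen=depth)        # previous one or two frames
        self.feature_maps = deque(maxlen=depth)  # their feature maps

    def push(self, frame, feature_map):
        self.frames.append(frame)
        self.feature_maps.append(feature_map)

    def reference_data(self):
        if not self.frames:                      # nothing cached yet (frame 0)
            return None, None
        return self.frames[-1], self.feature_maps[-1]
```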
In this embodiment, the electronic apparatus 100 may further include a communication unit (not shown in the figure), including but not limited to a short-range communication unit or a cellular communication unit. The short-range communication unit runs a short-range wireless communication protocol to exchange information with a terminal located outside the mobile terminal for accessing the Internet. The short-range wireless communication protocol may include, but is not limited to: various protocols supported by radio frequency identification technology, the Bluetooth communication technology protocol, or the infrared communication protocol. The cellular communication unit accesses the Internet by running a cellular wireless communication protocol over a radio access network, so as to exchange information with the servers on the Internet that support various applications. The communication unit may be integrated into the same SOC as the AI processor 101 and the ISP 102 described in the above embodiments, or may be provided separately. In addition, the electronic apparatus 100 may optionally include a bus, an input/output (I/O) port, a memory controller, and the like. The memory controller is used to control the memory 104 and the off-chip memory 108. The bus, the I/O port, the memory controller, and the like can all be integrated into the same SOC as the above ISP 102 and AI processor 101. It should be understood that, in practical applications, the electronic apparatus 100 may include more or fewer components than those shown in FIG. 1 or FIG. 3, which is not limited in the embodiments of the present application.
In the embodiments of the present application, each of the multiple image processing models run in the AI processor is deployed in the electronic device after multiple neural networks are trained at the offline end, using a machine learning method, based on sample image data collected in the corresponding scenes. Please refer to FIG. 7, which shows a schematic flow 700 of the method for training the image processing models run in the AI processor; the training of the image processing models is described with reference to FIG. 7.
Step 701: generate multiple training sample sets. The step of generating the multiple training sample sets may include the following sub-steps. Step 7011: generate a first model. The first model is an end-to-end model generated at the offline end, and the first model can process image data collected in any scene. The first model can be obtained by training with a conventional model training method based on training samples. Step 7012: based on the divided scenes, collect sample image data in the different scenes respectively. Step 7013: input the collected sample image data into the first model respectively, to generate reference image signals for the different scenes. Step 7014: based on the image processing flow executed by the AI processor, preprocess the sample image data to generate preprocessed image signals to be input into the image processing model. Through steps 7011 to 7014, multiple training sample sets can be obtained. The training sample sets correspond one-to-one with the scenes, and each training sample set includes the preprocessed image signals generated by processing the sample image data collected in that scene, and the reference image signals generated by processing the same sample image data with the first model.
Step 702: train multiple neural networks with the multiple training sample sets respectively, and generate multiple image processing models based on the training results. The neural network may include, but is not limited to: a recurrent neural network, a convolutional neural network, or a deep neural network. In a specific implementation, for a scene in which the electronic device is stationary or moving at a low speed, any one of the recurrent neural network, the convolutional neural network, and the deep neural network can be trained to obtain the image processing model; for a scene in which the electronic device is moving at a high speed, either the convolutional neural network or the deep neural network can be trained to obtain the image processing model. Preferably, for the scene in which the electronic device is stationary or moving at a low speed, a recurrent neural network can be trained to obtain the image processing model, to further improve the processing effect on the image signal. The following describes in detail the training of one neural network on one of the training sample sets, taking a convolutional neural network as an example. The preprocessed image signal is input into the neural network to obtain an output image signal; the output image signal is compared with the reference image signal, and a loss function is constructed based on the difference between the output image signal and the reference image signal, the loss function containing the weight parameters of the neural network; the weight parameters of the neural network are iteratively adjusted by the back-propagation algorithm and the gradient descent algorithm; when a preset condition is met, the parameters of the neural network are saved, and the neural network that meets the preset condition is the image processing model. The preset condition may include at least one of the following: the loss value of the preset loss function is less than or equal to a preset threshold, or the number of iterations of adjusting the neural network is greater than or equal to a preset threshold.
Based on the embodiments described above, an embodiment of the present application further provides an image processing method. The image processing method can be applied to the electronic apparatus 100 shown in any one of FIG. 1 and FIG. 3 to FIG. 6. Taking scene information that includes ambient light brightness information and motion state information of the electronic device as an example, the image processing method provided by this embodiment of the present application is described below in conjunction with the electronic apparatus 100 shown in FIG. 3 and FIG. 4. Please continue to refer to FIG. 8; FIG. 8 shows a flow 800 of the image processing method provided by this embodiment of the present application. The image processing method includes: Step 801: the image sensor 105 collects image data and provides the collected image data to the ISP 102.
Step 802: the controller 103 obtains the sensitivity information of the image data from the ISP 102, obtains the acceleration data of the electronic device from the acceleration sensor, and obtains the three-axis component data of the electronic device from the gravity sensor. Step 803: the controller 103 generates motion state data of the electronic device based on the acceleration data and the three-axis component data. Step 804: the controller 103 compares the sensitivity information with multiple preset sensitivity intervals, compares the motion state data with multiple preset motion speed intervals, and, based on the comparison results, generates scene information including ambient light brightness information and motion state information and provides it to the AI processor 101 and the ISP 102 respectively. In this example, the ambient light brightness information indicates low ambient light brightness, and the motion state information indicates low-speed motion of the electronic device.
Step 805: the ISP 102 updates the parameters of its image processing algorithm based on the scene information. Step 806: the updated image processing algorithm is used to process the image data to generate an image signal A. Step 807: based on the scene information, the AI processor 101 selects one image processing model from the multiple image processing models to process the image signal A and generate an image signal B. Step 808: the ISP 102 processes the image signal B to generate the final image processing result.
It should be understood that the steps or operations of the image processing method shown in FIG. 8 are only examples; the embodiments of the present application may also perform other operations or variations of the operations in FIG. 8, and may include more or fewer steps than those shown in FIG. 8. For example, when the ISP 102 is not provided with a parameter adjustment unit and uses the same parameters to process images collected in different scenes, the controller 103 in step 804 does not need to provide the scene information to the ISP 102, and step 805 may also be omitted. For another example, when the image processing method described in the embodiments of the present application is applied to the electronic apparatus 100 shown in FIG. 6, step 808 is replaced by the ISP 102 processing the image signal B to generate an image signal C, and after step 808 the method further includes the steps of the AI processor 101 processing the image signal C to generate an image signal D, and the ISP 102 processing the image signal D to generate the final image processing result.
It can be understood that, to implement the above functions, the electronic apparatus includes corresponding hardware and/or software modules for executing each function. With reference to the steps of the examples described in the embodiments disclosed herein, the present application can be implemented in the form of hardware or a combination of hardware and computer software. Whether a function is performed by hardware or by computer software driving hardware depends on the particular application and design constraints of the technical solution. A person skilled in the art may use different methods for each particular application, in combination with the embodiments, to implement the described functions, but such implementations should not be considered beyond the scope of the present application.
In this embodiment, the above one or more processors may be divided into functional modules according to the foregoing method examples; for example, a separate processor may be assigned to each function, or the processors for two or more functions may be integrated into one processor module. The integrated module may be implemented in the form of hardware. It should be noted that the division of modules in this embodiment is schematic and is only a logical function division; there may be other division manners in actual implementation.
When each functional module is divided corresponding to each function, FIG. 9 shows a possible schematic diagram of the apparatus 900 involved in the foregoing embodiments; the aforementioned apparatus can be further expanded. As shown in FIG. 9, the apparatus 900 may include an AI processing module 901 and an image signal processing module 902. The AI processing module 901 is configured to select a first image processing model from multiple image processing models and perform first image signal processing on a first image signal by using the first image processing model to obtain a second image signal, where the first image signal is obtained based on first image data output by the image sensor, and the scene information reflects the feature classification of the first image signal. The image signal processing module 902 is configured to perform second image signal processing on the second image signal to obtain a first image processing result.
In a possible implementation, the scene information includes at least one of first ambient light brightness information and first motion state information of the electronic apparatus.
In a possible implementation, the image signal processing module 902 is configured to: select a first parameter from multiple sets of parameters for running an image processing algorithm based on the scene information; obtain an updated image processing algorithm based on the first parameter; and perform the second image signal processing on the second image signal by using the updated image processing algorithm.
In a possible implementation, the AI processing module 901 is further configured to: when the first motion state information indicates that the electronic device is moving at a speed lower than a preset threshold, process the first image signal based on the previous frame of image signal and the image processing result of the previous frame of image signal.
In a possible implementation, the first image signal processing includes at least one of the following processing procedures: noise removal, black level correction, shading correction, white balance correction, demosaicing, chromatic aberration correction, or gamma correction.
In a possible implementation, the second image signal processing includes at least one of the following processing procedures: noise removal, black level correction, shading correction, white balance correction, demosaicing, chromatic aberration correction, gamma correction, or RGB-to-YUV domain conversion.
In a possible implementation, the multiple image processing models are obtained through training based on multiple training sample sets corresponding to multiple scenes, where each training sample set of the multiple training sample sets includes a preprocessed image signal generated by processing sample image data collected in the corresponding scene, and a reference image signal generated by processing the sample image data.
The image processing apparatus 900 provided in this embodiment is configured to execute the image processing method executed by the electronic apparatus 100 and can achieve the same effects as the foregoing implementation methods or apparatuses. Specifically, each module corresponding to FIG. 9 above may be implemented by software, hardware, or a combination of the two. For example, each module may be implemented in the form of software corresponding to the processor associated with that module in FIG. 1, and used to drive that processor to work. Alternatively, each module may include the corresponding processor and the corresponding driver software, that is, be implemented by a combination of software and hardware. Therefore, the image processing apparatus 900 can be considered to logically include the apparatuses shown in FIG. 1 and FIG. 3 to FIG. 6, and each module includes at least the driver software program for the corresponding function, which is not expanded on in this embodiment.
Exemplarily, the image processing apparatus 900 may include at least one processor and a memory; for details, refer to FIG. 1. The at least one processor can invoke all or part of the computer program stored in the memory to control and manage the actions of the electronic apparatus 100; for example, it can be used to support the electronic apparatus 100 in performing the steps performed by the above modules. The memory may be used to support the operation of the electronic apparatus 100 by storing program code and data and the like. The at least one processor can implement or execute the various exemplary logic modules described in connection with the disclosure of the present application, and may be a combination of one or more microprocessors that implement computing functions, including, but not limited to, the AI processor 101 and the image signal processor 102 shown in FIG. 1. In addition, the at least one processor may also include other programmable logic devices, transistor logic devices, or discrete hardware components. The memory in this embodiment may include, but is not limited to, the off-chip memory 108 or the memory 104 shown in FIG. 3.
This embodiment also provides a computer-readable storage medium storing computer instructions; when the computer instructions are run on a computer, the computer is caused to execute the above related method steps to implement the image processing method in the above embodiments.
This embodiment also provides a computer program product; when the computer program product is run on a computer, the computer is caused to execute the above related steps to implement the image processing method in the above embodiments.
The computer-readable storage medium or the computer program product provided in this embodiment is used to execute the corresponding method provided above; therefore, for the beneficial effects that can be achieved, refer to the beneficial effects of the corresponding method provided above, which are not repeated here.
Through the description of the above implementations, a person skilled in the art can understand that, for convenience and brevity of description, only the division of the above functional modules is used as an example for illustration. In practical applications, the above functions can be allocated to different functional modules as needed; that is, the internal structure of the apparatus is divided into different functional modules to complete all or part of the functions described above.
In addition, the functional units in the embodiments of the present application may be integrated into one product, or each unit may exist alone physically, or two or more units may be integrated into one product. Corresponding to FIG. 9, if the above modules are implemented in the form of software functional units and sold or used as independent products, they may be stored in a readable storage medium. Based on this understanding, the technical solutions of the embodiments of the present application, in essence, or the part contributing to the prior art, or all or part of the technical solutions, may be embodied in the form of a software product; the software product is stored in a storage medium and includes several instructions for enabling a device (which may be a single-chip microcomputer, a chip, or the like) or a processor to execute all or part of the steps of the methods in the embodiments of the present application. The aforementioned readable storage medium includes any medium that can store program code, such as a USB flash drive, a removable hard disk, a read-only memory (ROM), a random access memory (RAM), a magnetic disk, or an optical disc.
The above descriptions are merely specific implementations of the present application, but the protection scope of the present application is not limited thereto. Any variation or replacement readily figured out by a person skilled in the art within the technical scope disclosed in the present application shall fall within the protection scope of the present application. Therefore, the protection scope of the present application shall be subject to the protection scope of the claims.

Claims (12)

  1. An electronic apparatus, characterized in that it comprises:
    an artificial intelligence AI processor, configured to select a first image processing model from multiple image processing models based on scene information, and to perform first image signal processing on a first image signal by using the first image processing model to obtain a second image signal, wherein the first image signal is obtained based on first image data output by an image sensor, and the scene information reflects the feature classification of the first image signal;
    an image signal processor ISP, configured to perform second image signal processing on the second image signal to obtain a first image processing result.
  2. The electronic apparatus according to claim 1, wherein the scene information comprises at least one of first ambient light brightness information and first motion state information of the electronic apparatus.
  3. The electronic apparatus according to claim 1 or 2, wherein the ISP is configured to:
    select a first parameter from multiple sets of parameters of an image processing algorithm based on the scene information;
    obtain an updated image processing algorithm based on the first parameter; and
    perform the second image signal processing on the second image signal by using the updated image processing algorithm.
  4. The electronic apparatus according to any one of claims 1 to 3, characterized in that the electronic apparatus further comprises:
    a controller, configured to generate the scene information based on data collected by at least one sensor, wherein the at least one sensor comprises at least one of the following: an acceleration sensor, a gravity sensor, and the image sensor.
  5. The electronic apparatus according to claim 2, wherein the AI processor is further configured to:
    when the first motion state information indicates that the electronic device is moving at a speed lower than a preset threshold, process the first image signal by using the first image processing model based on the previous frame of image signal and the image processing result of the previous frame of image signal.
  6. The electronic apparatus according to any one of claims 1 to 5, wherein the first image signal processing comprises at least one of the following processing procedures: noise removal, black level correction, shading correction, white balance correction, demosaicing, chromatic aberration correction, or gamma correction.
  7. The electronic apparatus according to any one of claims 1 to 6, wherein the second image signal processing comprises at least one of the following processing procedures: noise removal, black level correction, shading correction, white balance correction, demosaicing, chromatic aberration correction, gamma correction, or RGB-to-YUV domain conversion.
  8. The electronic apparatus according to any one of claims 1 to 7, wherein the multiple image processing models are obtained through training based on multiple training sample sets corresponding to multiple scenes, and each training sample set of the multiple training sample sets comprises a preprocessed image signal generated by processing sample image data collected in the corresponding scene, and a reference image signal generated by processing the sample image data.
  9. An image processing method, characterized in that the method comprises:
    based on scene information, controlling an artificial intelligence AI processor to select a first image processing model from multiple image processing models, and to perform first image signal processing on a first image signal by using the first image processing model to obtain a second image signal, wherein the first image signal is obtained based on first image data output by an image sensor, and the scene information reflects the feature classification of the first image signal;
    controlling an image signal processor ISP to perform second image signal processing on the second image signal to obtain a first image processing result.
  10. The image processing method according to claim 9, wherein the controlling the image signal processor ISP to perform the second image signal processing on the second image signal to obtain an image processing result comprises:
    based on the scene information, controlling the ISP to select a first parameter from multiple sets of parameters of an image processing algorithm;
    controlling the ISP to obtain an updated image processing algorithm based on the first parameter; and
    controlling the ISP to perform the second image signal processing on the second image signal by using the updated image processing algorithm.
  11. A computer-readable storage medium, wherein the computer-readable storage medium stores a computer program, and when executed by at least one processor, the computer program is used to implement the method according to claim 9 or 10.
  12. A computer program product, wherein when the computer program product is executed by at least one processor, it is used to implement the method according to claim 9 or 10.
PCT/CN2021/089980 2021-04-26 2021-04-26 Electronic apparatus and image processing method of electronic apparatus WO2022226732A1 (zh)

Priority Applications (4)

Application Number Priority Date Filing Date Title
PCT/CN2021/089980 WO2022226732A1 (zh) 2021-04-26 2021-04-26 Electronic apparatus and image processing method of electronic apparatus
EP21938223.1A EP4297397A4 (en) 2021-04-26 2021-04-26 ELECTRONIC DEVICE AND IMAGE PROCESSING METHOD OF AN ELECTRONIC DEVICE
CN202180006443.XA CN115529850A (zh) 2021-04-26 2021-04-26 Electronic apparatus and image processing method of electronic apparatus
US18/493,917 US20240054751A1 (en) 2021-04-26 2023-10-25 Electronic apparatus and image processing method of electronic apparatus

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
PCT/CN2021/089980 WO2022226732A1 (zh) 2021-04-26 2021-04-26 Electronic apparatus and image processing method of electronic apparatus

Related Child Applications (1)

Application Number Title Priority Date Filing Date
US18/493,917 Continuation US20240054751A1 (en) 2021-04-26 2023-10-25 Electronic apparatus and image processing method of electronic apparatus

Publications (1)

Publication Number Publication Date
WO2022226732A1 true WO2022226732A1 (zh) 2022-11-03

Family

ID=83847613

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2021/089980 WO2022226732A1 (zh) 2021-04-26 2021-04-26 Electronic apparatus and image processing method of electronic apparatus

Country Status (4)

Country Link
US (1) US20240054751A1 (zh)
EP (1) EP4297397A4 (zh)
CN (1) CN115529850A (zh)
WO (1) WO2022226732A1 (zh)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN117453265A (zh) * 2023-06-25 2024-01-26 快电动力(北京)新能源科技有限公司 Resource invocation method, apparatus, and device based on an algorithm platform

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103295238A (zh) * 2013-06-03 2013-09-11 南京信息工程大学 Real-time video positioning method based on ROI motion detection on the Android platform
CN109688351A (zh) * 2017-10-13 2019-04-26 华为技术有限公司 Image signal processing method, apparatus, and device
CN110266946A (zh) * 2019-06-25 2019-09-20 普联技术有限公司 Method and apparatus for automatic optimization of photographing effect, storage medium, and terminal device

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112529775A (zh) * 2019-09-18 2021-03-19 华为技术有限公司 Image processing method and apparatus


Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
See also references of EP4297397A4 *

Also Published As

Publication number Publication date
EP4297397A1 (en) 2023-12-27
US20240054751A1 (en) 2024-02-15
EP4297397A4 (en) 2024-04-03
CN115529850A (zh) 2022-12-27

Similar Documents

Publication Publication Date Title
US20220207680A1 (en) Image Processing Method and Apparatus
  • JP7266672B2 (ja) Image processing method, image processing apparatus, and device
US11430209B2 (en) Image signal processing method, apparatus, and device
  • KR102149187B1 (ko) Electronic device and control method thereof
  • CN104883504B (zh) Method and apparatus for enabling the high dynamic range (HDR) function on an intelligent terminal
  • CN108924420B (zh) Image shooting method and apparatus, medium, electronic device, and model training method
  • CN108933897A (zh) Motion detection method and apparatus based on an image sequence
US20220245765A1 (en) Image processing method and apparatus, and electronic device
  • KR20230098575A (ko) Frame processing and/or capture instruction systems and techniques
US20240054751A1 (en) Electronic apparatus and image processing method of electronic apparatus
  • CN110445986A (zh) Image processing method and apparatus, storage medium, and electronic device
  • WO2022151852A1 (zh) Image processing method, apparatus and system, electronic device, and storage medium
US20230214955A1 (en) Electronic apparatus and image processing method of electronic apparatus
US10769416B2 (en) Image processing method, electronic device and storage medium
US20220301278A1 (en) Image processing method and apparatus, storage medium, and electronic device
  • CN113744139A (zh) Image processing method and apparatus, electronic device, and storage medium
  • CN107968937B (zh) System for relieving eye fatigue
  • CN117014720A (zh) Image capturing method and apparatus, terminal, storage medium, and product
  • CN116048323A (zh) Image processing method and electronic device
  • RU2794062C2 (zh) Image processing apparatus and method, and equipment
EP4343693A1 (en) Image processing method and related device
  • WO2023036313A1 (zh) Image photographing method and apparatus, computer device, and storage medium
  • CN116389885B (zh) Photographing method, electronic device, and storage medium
  • WO2023160221A1 (zh) Image processing method and electronic device
  • RU2791810C2 (ru) Method, apparatus, and device for image processing

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 21938223

Country of ref document: EP

Kind code of ref document: A1

WWE Wipo information: entry into national phase

Ref document number: 2021938223

Country of ref document: EP

ENP Entry into the national phase

Ref document number: 2021938223

Country of ref document: EP

Effective date: 20230921

NENP Non-entry into the national phase

Ref country code: DE