WO2019072057A1 - Image signal processing method, apparatus, and device - Google Patents

Image signal processing method, apparatus, and device

Info

Publication number
WO2019072057A1
WO2019072057A1 (PCT application PCT/CN2018/104678)
Authority
WO
WIPO (PCT)
Prior art keywords
image signal
scene
processor
attribute information
accurate
Prior art date
Application number
PCT/CN2018/104678
Other languages
English (en)
French (fr)
Inventor
Cai Jin (蔡金)
Liu Guoxiang (刘国祥)
Chen Hui (陈辉)
Original Assignee
Huawei Technologies Co., Ltd. (华为技术有限公司)
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Huawei Technologies Co., Ltd.
Priority to EP18867060.8A priority Critical patent/EP3674967B1/en
Publication of WO2019072057A1 publication Critical patent/WO2019072057A1/zh
Priority to US16/844,115 priority patent/US11430209B2/en

Classifications

    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 25/00 Circuitry of solid-state image sensors [SSIS]; Control thereof
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 18/00 Pattern recognition
    • G06F 18/20 Analysing
    • G06F 18/21 Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
    • G06F 18/217 Validation; Performance evaluation; Active pattern learning techniques
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00 Computing arrangements based on biological models
    • G06N 3/02 Neural networks
    • G06N 3/04 Architecture, e.g. interconnection topology
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00 Computing arrangements based on biological models
    • G06N 3/02 Neural networks
    • G06N 3/04 Architecture, e.g. interconnection topology
    • G06N 3/045 Combinations of networks
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00 Computing arrangements based on biological models
    • G06N 3/02 Neural networks
    • G06N 3/08 Learning methods
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 5/00 Image enhancement or restoration
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00 Image analysis
    • G06T 7/70 Determining position or orientation of objects or cameras
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 10/00 Arrangements for image or video recognition or understanding
    • G06V 10/20 Image preprocessing
    • G06V 10/24 Aligning, centring, orientation detection or correction of the image
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 10/00 Arrangements for image or video recognition or understanding
    • G06V 10/70 Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V 10/764 Arrangements for image or video recognition or understanding using pattern recognition or machine learning using classification, e.g. of video objects
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 10/00 Arrangements for image or video recognition or understanding
    • G06V 10/70 Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V 10/77 Processing image or video features in feature spaces; using data integration or data reduction, e.g. principal component analysis [PCA] or independent component analysis [ICA] or self-organising maps [SOM]; Blind source separation
    • G06V 10/776 Validation; Performance evaluation
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 10/00 Arrangements for image or video recognition or understanding
    • G06V 10/70 Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V 10/82 Arrangements for image or video recognition or understanding using pattern recognition or machine learning using neural networks
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 20/00 Scenes; Scene-specific elements
    • G06V 20/10 Terrestrial scenes
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 20/00 Scenes; Scene-specific elements
    • G06V 20/35 Categorising the entire scene, e.g. birthday party or wedding scene
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 23/00 Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N 23/60 Control of cameras or camera modules
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 23/00 Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N 23/70 Circuitry for compensating brightness variation in the scene
    • H04N 23/71 Circuitry for evaluating the brightness variation
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 23/00 Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N 23/80 Camera processing pipelines; Components thereof
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00 Computing arrangements based on biological models
    • G06N 3/02 Neural networks
    • G06N 3/04 Architecture, e.g. interconnection topology
    • G06N 3/044 Recurrent networks, e.g. Hopfield networks
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00 Computing arrangements based on biological models
    • G06N 3/02 Neural networks
    • G06N 3/04 Architecture, e.g. interconnection topology
    • G06N 3/048 Activation functions
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 2207/00 Indexing scheme for image analysis or image enhancement
    • G06T 2207/20 Special algorithmic details
    • G06T 2207/20084 Artificial neural networks [ANN]

Definitions

  • the present application relates to the field of image processing technologies, and in particular, to an image signal processing method, apparatus, and device.
  • In order for a user to take a high-quality photo or record a high-quality video, the mobile terminal usually has a scene recognition function. After the mobile terminal acquires a sensor signal through the image sensor, the sensor signal is processed into an image signal, the scene recognition function is used to identify the scene to which the image signal belongs, and the image signal processor (ISP) is used to process the image signal into an image signal that conforms to the recognized scene.
  • Conventionally, mobile terminals rely on scene recognition based on color channels or on template matching, or use an additional photometric device to assist scene recognition.
  • However, these traditional scene recognition methods have a high false recognition rate. For example, when a color channel is used to identify a green-plant scene, other green objects (which are not green plants) are easily misidentified as green plants; when an additional photometric device assists night-scene recognition, occlusion of the photometric device or other dark conditions are easily misidentified as a night scene.
  • In other words, the scene recognition methods currently used have relatively low accuracy, which may degrade the quality of image signal processing and thereby the quality of the photos or videos the user captures.
  • the embodiment of the present application provides an image signal processing method, device and device, which use a neural network to initially identify a scene, and then use the attribute information of the image signal to further judge the accuracy of the initially identified scene to improve the scene recognition accuracy. If the determined scene is accurate, the image signal is enhanced according to the identified scene to generate an enhanced image signal, thereby improving the quality of the image signal processing.
  • According to a first aspect, an image signal processing method is provided, in which the scene to which an image signal belongs is identified by using a neural network and, when the scene recognized by the neural network is determined to be accurate, the image signal is enhanced according to the recognized scene to generate an enhanced image signal.
  • the image signal processing method provided by the embodiment of the present application can improve the accuracy of the scene recognition by using the neural network to identify the scene to which the image signal belongs and further determining the accuracy of the scene recognized by the neural network. And the image signal is enhanced according to the identified accurate scene to generate an enhanced image signal, which can improve the quality of the image signal processing to a certain extent.
  • The image signal recognized by the neural network is derived from the sensor signal collected by the image sensor. The attribute information of the image signal can be used to determine whether the scene recognized by the neural network is accurate, and, if the identified scene is determined to be accurate, the image signal is enhanced according to the identified scene to generate an enhanced image signal.
  • the attribute information of the image signal involved in the embodiment of the present application may be at least one of light intensity information and foreground position information included in the image signal.
  • the attribute information of the image signal includes light intensity information.
  • When the attribute information of the image signal is used to determine whether the scene recognized by the neural network is accurate, it may be determined, according to the light intensity information, whether the light intensity of the image signal falls within a preset light intensity threshold range, so as to judge whether the scene recognized by the neural network is accurate.
  • the attribute information of the image signal includes foreground position information.
  • When the attribute information of the image signal is used to determine whether the scene recognized by the neural network is accurate, it may be determined, according to the foreground position information, whether the foreground position of the image signal falls within a preset distance threshold range, so as to judge whether the scene recognized by the neural network is accurate.
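The attribute check described above can be sketched as a simple range test. The scene names, threshold values, and attribute fields below are illustrative assumptions, not values taken from the patent:

```python
# Hypothetical per-scene plausibility ranges: light intensity in lux-like
# units, foreground distance in metres. All values are assumed for illustration.
SCENE_CHECKS = {
    "night": {"light_range": (0.0, 50.0)},
    "green_plant": {"light_range": (100.0, 100000.0), "distance_range": (0.1, 5.0)},
    "stage": {"light_range": (50.0, 2000.0)},
}

def scene_is_accurate(scene, light_intensity=None, foreground_distance=None):
    """Return True if the attribute information is consistent with `scene`."""
    checks = SCENE_CHECKS.get(scene)
    if checks is None:
        return False  # unknown scene: treat as inaccurate
    lo, hi = checks.get("light_range", (float("-inf"), float("inf")))
    if light_intensity is not None and not (lo <= light_intensity <= hi):
        return False
    lo, hi = checks.get("distance_range", (float("-inf"), float("inf")))
    if foreground_distance is not None and not (lo <= foreground_distance <= hi):
        return False
    return True
```

For example, a "night" scene proposed by the neural network would be rejected if the measured light intensity is far above the preset night-scene threshold.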
  • the image signal recognized by the neural network may be an image signal obtained by processing the sensor signal by the image signal processor.
  • the attribute information of the image signal is attribute information of an image signal obtained by processing the sensor signal by the image signal processor.
  • According to a possible design, an enhancement algorithm may be preset for each scene, and when the image signal is enhanced according to the identified scene, the enhancement algorithm corresponding to the identified scene is applied to the image signal.
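The preset per-scene enhancement described above amounts to a lookup from scene to algorithm. The scene names and the toy "enhancements" below are illustrative stand-ins; the patent does not specify concrete algorithms:

```python
# Toy per-pixel enhancers on a flat list of 0-255 intensities (assumed model).
def enhance_night(img):
    # e.g. brightness boost for a night scene
    return [min(255, int(p * 1.5)) for p in img]

def enhance_green_plant(img):
    # e.g. mild lift to emphasise foliage
    return [min(255, p + 10) for p in img]

# Preset mapping from recognized scene to its enhancement algorithm.
ENHANCERS = {"night": enhance_night, "green_plant": enhance_green_plant}

def enhance(image_signal, scene):
    """Apply the enhancement algorithm preset for `scene`; pass through otherwise."""
    algorithm = ENHANCERS.get(scene)
    return algorithm(image_signal) if algorithm else image_signal
```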
  • the scene to which the image signal belongs may be identified by the neural network operation processor.
  • The enhancement processing may be performed by the image signal processor alone, by the arithmetic processor alone, or jointly by the image signal processor and the arithmetic processor.
  • Likewise, the process of determining whether the scene is accurate by using the attribute information of the image signal may be performed by the image signal processor alone, by the arithmetic processor alone, or jointly by the image signal processor and the arithmetic processor.
  • In this way, the scene to which the image signal belongs is recognized by the neural network operation processor, and the arithmetic processor and the image signal processor assist in verifying the accuracy of the recognized scene, thereby improving the accuracy of scene recognition.
  • an image signal processing apparatus having a function of performing image signal processing in the design of the above method. These functions can be implemented in hardware or in software by executing the corresponding software.
  • the hardware or software includes one or more units corresponding to the functions described above.
  • The image signal processing device can be applied to an electronic device having an image processing function.
  • The image signal processing apparatus includes an acquisition unit, a neural network identification unit, and an image signal processing unit, whose functions correspond to the respective method steps; the details are not repeated here.
  • an image signal processing apparatus comprising an image signal processor, an arithmetic processor, and a neural network operation processor.
  • the image signal processing device may further include an image sensor for collecting an external signal, and converting the external signal into a sensor signal.
  • the image signal processing apparatus may further include a memory for storing an image signal processor, an arithmetic processor, and program code executed by the neural network operation processor.
  • the image signal processing device may further include a photographing or recording function control module for implementing a photographing or recording function, and performing post processing on the image signal.
  • The image signal processor, the arithmetic processor, and the neural network operation processor may perform the respective functions in the image signal processing method provided by the first aspect or any possible design of the first aspect.
  • the neural network operation processor is configured to acquire an image signal, which is derived from a sensor signal collected by an image sensor, and uses a neural network to identify a scene to which the image signal belongs.
  • At least one of the image signal processor and the operation processor is configured to determine, by using attribute information of the image signal, whether the scene recognized by the neural network operation processor is accurate. If it is determined that the scene is accurate, the image signal is enhanced according to the scene to generate an enhanced image signal.
  • According to a fourth aspect, a computer-readable storage medium is provided, having instructions stored thereon that, when executed on a computer, cause the computer to perform the image signal processing method of the first aspect or any possible design of the first aspect.
  • According to a fifth aspect, a computer program product comprising instructions is provided; when the computer program product runs on a computer, it causes the computer to perform the image signal processing method of the first aspect or any possible design of the first aspect.
  • In summary, the image signal processing method, apparatus, and device provided by the embodiments of the present application use a neural network to initially identify a scene and then use the attribute information of the image signal to assist in judging the accuracy of the initially identified scene, thereby improving the scene recognition accuracy and, further, the quality of image signal processing.
  • FIG. 1 is a schematic structural diagram of hardware of a mobile terminal according to an embodiment of the present application.
  • FIG. 2 is a schematic diagram of a principle of a neural network according to an embodiment of the present application.
  • FIG. 3 is a schematic structural diagram of an image signal processing device according to an embodiment of the present disclosure.
  • FIG. 4 is a flowchart of an image signal processing method according to an embodiment of the present application.
  • FIG. 5 is a schematic structural diagram of an image signal processing apparatus according to an embodiment of the present application.
  • the image signal processing method and device provided by the embodiments of the present application can be applied to an electronic device, which can be a mobile terminal, a mobile station (MS), a user equipment (UE), etc.
  • The electronic device may also be a fixed device, such as a fixed-line telephone or a desktop computer, or a video monitoring device.
  • The electronic device has an image acquisition and processing apparatus providing image signal acquisition and processing functions, and may optionally have a wireless connection function to provide voice and/or data connectivity to the user or to connect to other processing devices via a wireless modem. For example, the mobile device may be a mobile phone (or "cellular" phone) or a computer with a mobile terminal, and may be portable, pocket-sized, handheld, built into a computer, or in-vehicle.
  • The mobile device may also be a wearable device (such as a smart watch or a smart bracelet), a tablet, a personal computer (PC), a personal digital assistant (PDA), a point-of-sale (POS) terminal, or the like.
  • FIG. 1 is a schematic diagram of an optional hardware structure of a mobile terminal 100 according to an embodiment of the present application.
  • As shown in FIG. 1, the mobile terminal 100 mainly includes a chipset and peripheral devices. The components inside the solid-line frame — a power management unit (PMU), a voice codec, a short-range module, a radio frequency (RF) module, an arithmetic processor, random-access memory (RAM), input/output (I/O), a display interface, an image signal processor (ISP), a sensor interface (sensor hub), and a baseband communication module — constitute a chip or chipset.
  • Components such as the USB interface, memory, display, battery/mains power, earphone/speaker, antenna, and sensors can be understood as peripheral devices.
  • the computing processor, RAM, I/O, display interface, ISP, Sensor hub, baseband and other components in the chipset can form a system-on-a-chip (SOC), which is the main part of the chipset.
  • the components in the SOC can all be integrated into one complete chip, or some components can be integrated in the SOC, and the other components are not integrated.
  • Alternatively, the baseband communication module may not be integrated with the other parts of the SOC and may instead be implemented as a separate part.
  • the components in the SOC can be connected to one another via a bus or other connection.
  • The PMU, voice codec, RF module, and the like usually contain analog circuitry and are therefore often left outside the SOC rather than integrated with it.
  • The PMU connects to an external power supply or the battery and supplies power to the SOC; it can also use mains power to charge the battery.
  • the voice codec is used as a sound codec unit to connect headphones or speakers to realize the conversion between the natural analog voice signal and the SOC-processable digital voice signal.
  • Short-range modules can include wireless fidelity (WiFi) and Bluetooth, and may also include infrared, near field communication (NFC), FM radio, or global positioning system (GPS) modules, etc.
  • the RF is connected to the baseband communication module in the SOC to implement conversion of the air interface RF signal and the baseband signal, that is, mixing. For mobile phones, reception is downconversion and transmission is upconversion.
  • Both the short range module and the RF can have one or more antennas for signal transmission or reception.
  • Baseband is used for baseband communication and includes one or more communication modes for processing wireless communication protocols, covering protocol layers such as the physical layer (layer 1), medium access control (MAC, layer 2), and radio resource control (RRC, layer 3); it can support various cellular communication standards, such as Long Term Evolution (LTE).
  • the Sensor hub is an interface between the SOC and an external sensor for collecting and processing data of at least one external sensor.
  • the external sensors may be, for example, an accelerometer, a gyroscope, a control sensor, an image sensor, or the like.
  • The arithmetic processor may be a general-purpose processor, such as a central processing unit (CPU), or one or more integrated circuits, such as one or more application-specific integrated circuits (ASICs), one or more digital signal processors (DSPs), microprocessors, or one or more field-programmable gate arrays (FPGAs), and the like.
  • the arithmetic processor can include one or more cores and can selectively schedule other units.
  • the RAM can store intermediate data in some calculations or processes, such as CPU and baseband intermediate calculation data.
  • The ISP processes the data collected by the image sensor.
  • the I/O is used for the SOC to interact with various external interfaces, such as a universal serial bus (USB) interface for data transmission.
  • the memory can be one or a group of chips.
  • the display screen may be a touch screen and connected to the bus through a display interface.
  • The display interface performs data processing before image display, such as blending of multiple layers to be displayed, buffering of display data, or adjustment of screen brightness.
  • the mobile terminal 100 involved in the embodiment of the present application includes an image sensor, which can collect external signals such as light from the outside, and convert the external signal into a sensor signal, that is, an electrical signal.
  • the sensor signal can be a still image signal or a dynamic video image signal.
  • the image sensor can be, for example, a camera.
  • the mobile terminal 100 involved in the embodiment of the present application further includes an ISP.
  • the image sensor collects the sensor signal and transmits it to the image signal processor.
  • The ISP acquires the sensor signal and can process it into an image signal whose sharpness, color, brightness, and other aspects conform to the characteristics of the human eye.
  • the ISP processing the image signal may include the following aspects:
  • Correction and compensation: defective pixel correction (DPC), black level compensation (BLC), lens shading correction (LSC), and correction of distortion, stretching, offset, etc.
  • Geometric and tone correction: gamma correction, correction related to the perspective principle, etc.
  • Denoising and image enhancement: temporal and spatial filtering, hierarchical compensation filtering, removal of various kinds of noise, sharpening, suppression of ringing and banding artifacts, edge enhancement, brightness enhancement, and contrast enhancement.
  • Color and format conversion: color interpolation/demosaicing (raw → RGB), color space conversion (RGB → YUV, YCbCr, or YPbPr), tone mapping, chroma adjustment, color correction, saturation adjustment, scaling, rotation, and so on.
  • Adaptive processing: automatic white balance, automatic exposure, autofocus, and strobe (flicker) detection.
  • Visual recognition: face and gesture recognition.
  • Image processing in extreme environments: extreme environments include vibration, rapid movement, very dark or overly bright conditions, and so on. The processing involved generally includes deblurring, point spread function estimation, brightness compensation, motion detection, dynamic capture, image stabilization, and high-dynamic-range (HDR) processing.
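One of the conversions listed above, RGB → YCbCr, can be sketched concretely. The patent does not specify which standard the ISP uses; the full-range BT.601 coefficients below are an assumption for illustration:

```python
def rgb_to_ycbcr(r, g, b):
    """Full-range BT.601 RGB -> YCbCr conversion (illustrative choice of standard)."""
    y  =  0.299 * r + 0.587 * g + 0.114 * b          # luma
    cb = -0.168736 * r - 0.331264 * g + 0.5 * b + 128  # blue-difference chroma
    cr =  0.5 * r - 0.418688 * g - 0.081312 * b + 128  # red-difference chroma
    return y, cb, cr
```

Pure white (255, 255, 255) maps to maximum luma with neutral chroma, and pure black maps to zero luma with neutral chroma, as expected of a luma/chroma decomposition.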
  • the ISP involved in the embodiment of the present application may be one or a group of chips, that is, may be integrated or independent.
  • For example, the ISP included in the mobile terminal 100 may be an ISP chip integrated into the arithmetic processor.
  • The mobile terminal 100 involved in the embodiment of the present application has a function of taking photos or recording videos, and when it does so, the user expects to obtain high-quality photos or videos.
  • When the ISP processes the acquired sensor signal, it can combine the scene recognition function of the mobile terminal 100 to perform linear correction, noise removal, dead pixel repair, color interpolation, white balance correction, exposure correction, and the like, so that the sensor signal is processed into an image signal that conforms to the identified scene. However, when the mobile terminal 100 performs scene recognition, the scene recognition accuracy is relatively low.
  • Therefore, the embodiment of the present application provides an image signal processing method in which the scene to which an image signal belongs is identified separately, and, when the identified scene is determined to be accurate, the ISP enhances the image signal according to that scene to generate an enhanced image signal. This can improve the scene recognition accuracy and, to some extent, the quality of image signal processing.
  • The neural network is a network structure that imitates the behavioral characteristics of animal neural networks and is also referred to as an artificial neural network (ANN).
  • the neural network may be a recurrent neural network (RNN) or a Convolutional Neural Network (CNN).
  • the neural network structure is composed of a large number of nodes (or neurons) connected to each other, and the purpose of processing information is achieved by learning and training the input information based on a specific operation model.
  • a neural network includes an input layer, a hidden layer, and an output layer.
  • the input layer is responsible for receiving input signals
  • the output layer is responsible for outputting the calculation result of the neural network
  • The hidden layer is responsible for learning, training, and the like, and serves as the memory unit of the network; its memory function is characterized by a weight matrix, in which each weight typically corresponds to a weighting coefficient.
  • FIG. 2 is a schematic diagram of a neural network having N processing layers, where N ≥ 3 and N is a natural number. The first layer of the neural network is the input layer 101, which is responsible for receiving the input signal, and the last layer is the output layer 103, which outputs the processing result of the neural network. The layers other than the first and the last are intermediate layers 104, and these intermediate layers together form the hidden layer 102; each intermediate layer in the hidden layer can both receive an input signal and produce an output signal, and the hidden layer is responsible for processing the input signal.
  • Each layer represents a logical level of signal processing, through which multiple layers of data can be processed by multiple levels of logic.
  • The processing function may be a rectified linear unit (ReLU), a hyperbolic tangent function (tanh), or a sigmoid function.
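The three processing functions named above can be written out directly; this is a trivial illustrative sketch, since the patent does not prescribe any particular implementation:

```python
import math

def relu(x):
    # rectified linear unit: passes positive values, clamps negatives to 0
    return max(0.0, x)

def tanh(x):
    # hyperbolic tangent: squashes input into (-1, 1)
    return math.tanh(x)

def sigmoid(x):
    # logistic sigmoid: squashes input into (0, 1)
    return 1.0 / (1.0 + math.exp(-x))
```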
  • (x 1 , x 2 , x 3 ) is a one-dimensional input signal matrix
  • (h 1 , h 2 , h 3 ) is the output signal matrix
  • W ij represents the weight coefficient between the input x j and the output h i
  • the matrix formed by the weight coefficients is a weight matrix
  • The weight matrix W corresponding to the one-dimensional input signal matrix and the output signal matrix is shown in formula (1).
  • The relationship between the input signal and the output signal is shown in formula (2), where b_i is the bias value of the neural network processing function; the bias adjusts the input of the neural network so as to obtain an ideal output result.
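Formulas (1) and (2) are referenced above, but the equation images did not survive the text extraction. From the surrounding definitions (a three-element input (x_1, x_2, x_3), a three-element output (h_1, h_2, h_3), weights W_ij, bias b_i, and an activation f such as ReLU, tanh, or sigmoid), a plausible reconstruction of the standard form is:

```latex
% (1) weight matrix for the three-element input and output signals
W = \begin{pmatrix}
      W_{11} & W_{12} & W_{13} \\
      W_{21} & W_{22} & W_{23} \\
      W_{31} & W_{32} & W_{33}
    \end{pmatrix}

% (2) input-output relationship with bias b_i and processing function f
h_i = f\Bigl( \sum_{j=1}^{3} W_{ij}\, x_j + b_i \Bigr), \qquad i = 1, 2, 3
```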
  • the input signal of the neural network may be various forms of signals such as a voice signal, a text signal, an image signal, a temperature signal, and the like.
  • The processed image signal may be any of various sensor signals, such as a landscape signal captured by a camera (image sensor), an image signal of a community environment captured by a video monitoring device, or a facial image signal acquired by an access control system.
  • the input signals of the neural network include other various computer-processable engineering signals, which are not enumerated here.
  • the processing performed by the hidden layer 102 of the neural network may be processing such as recognizing a facial image signal of a face. If the neural network is used for deep learning of the image signal, the scene to which the image signal belongs can be relatively accurately identified. Therefore, in the embodiment of the present application, the mobile terminal may perform deep learning by using a neural network to identify a scene to which the image signal belongs.
  • According to a possible design, a neural network operation processor may be added to the mobile terminal; the neural network operation processor may be independent of the arithmetic processor shown in FIG. 1, or may be integrated into that arithmetic processor.
  • The neural network operation processor can also be understood as a special-purpose operation processor distinct from the arithmetic processor shown in FIG. 1.
  • the neural network computing processor may be a CPU running an operating system, or may be other types of computing devices, such as a dedicated hardware accelerated processor.
  • the neural network operation processor is independent of the operation processor as an example.
  • the accuracy of the scene identified by the neural network may be further determined.
  • the attribute information of the image signal can be used to determine whether the scene recognized by the neural network is accurate.
  • If the scene is determined to be accurate, the image signal is enhanced according to the scene recognized by the neural network to generate an enhanced image signal, thereby improving the accuracy of scene recognition and, in turn, the quality of the user's photos or recorded videos.
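The overall flow described in the preceding paragraphs — neural network proposes a scene, the attribute information validates it, enhancement is applied only when the scene is confirmed — can be sketched end to end. All three components are hypothetical stand-ins for the neural network operation processor, the attribute-based check, and the per-scene enhancement:

```python
def process_image_signal(image_signal, attributes,
                         recognize_scene, scene_is_accurate, enhance):
    """Hypothetical pipeline sketch of the method.

    recognize_scene(image_signal)       -> scene label (neural network step)
    scene_is_accurate(scene, attributes) -> bool (attribute-based verification)
    enhance(image_signal, scene)        -> enhanced image signal
    """
    scene = recognize_scene(image_signal)            # initial NN recognition
    if scene is not None and scene_is_accurate(scene, attributes):
        return enhance(image_signal, scene)          # scene confirmed: enhance
    return image_signal                              # scene rejected: pass through
```

The design point is that a scene the attribute check rejects leaves the image signal unchanged, so a misrecognition by the neural network does not trigger an inappropriate enhancement.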
  • FIG. 3 is a schematic structural diagram of an image signal processing apparatus 200 according to an embodiment of the present disclosure.
  • the image signal processing apparatus 200 is configured to perform an image signal processing method provided by an embodiment of the present application.
  • the image signal processing apparatus 200 includes an image signal processor 201, an arithmetic processor 202, and a neural network operation processor 203.
  • the image sensor, image signal processor 201, arithmetic processor 202, and neural network operation processor 203 can be connected by a bus.
  • The structural diagram of the image signal processing apparatus 200 shown in FIG. 3 is for illustrative purposes only and does not constitute a limitation; the image signal processing apparatus 200 may further include other components.
  • The image signal processing apparatus 200 shown in FIG. 3 may further include an image sensor for collecting an external signal and converting the external signal into a sensor signal.
  • the image signal processing apparatus 200 shown in FIG. 3 may further include a memory for storing program codes executed by the image signal processor 201, the arithmetic processor 202, and the neural network operation processor 203.
  • the image signal processing device 200 shown in FIG. 3 may further include a photographing or recording function control module for implementing a photographing or recording function, mainly for post processing of the image signal.
  • the photographing or recording function control module can be implemented by using software, hardware or a combination of software and hardware.
  • the photographing or recording function control module may be integrated in the arithmetic processor 202, or may be integrated in the image signal processor 201, or may be a separate functional component.
  • FIG. 4 is a flowchart of an image signal processing method according to an embodiment of the present application.
  • The execution body of the method shown in FIG. 4 may be the image signal processing device 200, or a component included in the image signal processing device 200, such as a chip or chipset.
  • the method includes:
  • S101: The neural network operation processor 203 acquires an image signal.
  • the image signal acquired by the neural network operation processor 203 in the embodiment of the present application is derived from the sensor signal collected by the image sensor.
  • the external signal collected by the image sensor is processed to obtain the sensor signal.
  • the image signal acquired by the embodiment of the present application is obtained according to the sensor signal collected by the image sensor.
  • For example, if the image sensor is a camera, the sensor signal collected by the camera is an optical signal; the optical signal can be converted into an electrical signal after being processed by the camera, and that electrical signal can be understood as the image signal.
  • The image signal processor 201 of the image signal processing device 200 can acquire the sensor signal and process it to obtain the image signal.
  • S102: Identify the scene to which the image signal belongs by using a neural network.
  • the neural network operation processor 203 is used to identify the scene to which the image signal belongs.
  • the image signal processed by the image signal processor 201 may be used for scene recognition by using a neural network.
  • The operation processor 202 acquires the image signal processed by the image signal processor 201, converts it into an image signal that the neural network operation processor 203 can recognize, and sends the converted image signal to the neural network operation processor 203.
  • Alternatively, the image signal processor 201 may itself convert the processed signal into an image signal that the neural network operation processor 203 can recognize, and send the converted image signal to the neural network operation processor 203.
  • The neural network used to identify the scene to which the image signal belongs may be a convolutional neural network (CNN), and the model used by the convolutional neural network for scene recognition may be at least one of AlexNet, VGG16, VGG19, ResNet, Inception-Net, and other models; this is not limited in the embodiments of this application.
  • For example, a convolutional neural network may be designed to perform deep learning on the image signal so as to identify stage scenes, night scenes, blue sky scenes, green plant scenes, flower scenes, food scenes, beach scenes, snow scenes, text scenes, and animal scenes (cats and dogs), which meet the daily needs of users when taking photos or recording videos.
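Although the application does not disclose the classifier head or any trained weights, the final step of mapping the network's raw outputs to one of the scene categories above can be sketched as follows. The category list, the example logits, and the softmax/argmax head are all illustrative assumptions, not part of the disclosure:

```python
import math

# Hypothetical scene categories supported by the convolutional neural network.
SCENES = ["stage", "night", "blue_sky", "green_plant", "flower",
          "food", "beach", "snow", "text", "animal"]

def softmax(logits):
    """Convert raw network outputs into a probability distribution."""
    m = max(logits)
    exps = [math.exp(x - m) for x in logits]
    total = sum(exps)
    return [e / total for e in exps]

def classify_scene(logits):
    """Return the scene label with the highest probability and its confidence."""
    probs = softmax(logits)
    best = max(range(len(probs)), key=lambda i: probs[i])
    return SCENES[best], probs[best]

# Example: raw scores produced by a (hypothetical) CNN for one image.
label, confidence = classify_scene([0.2, 3.1, 0.5, 0.1, 0.0,
                                    0.3, 0.2, 0.1, 0.0, 0.4])
```

Here the second logit dominates, so the sketch would report a night scene; in the method of this application that preliminary label is then cross-checked against the image signal's attribute information in step S103.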
  • S103: Determine whether the scene identified by the neural network is accurate. Specifically, in the embodiments of this application, the attribute information of the image signal may be used to determine whether the scene identified by the neural network is accurate.
  • the attribute information of the image signal obtained by processing the image signal by the image signal processor 201 may be used to determine whether the scene identified by the neural network is accurate.
  • the attribute information of the image signal may include at least one of light intensity information and foreground position information included in the image signal.
  • the light intensity information can reflect the brightness of the corresponding image.
  • the foreground position information may reflect the distance from the foreground to the image signal processing device 200 in the corresponding image.
  • For example, a blue sky scene usually has relatively strong illumination, a night scene usually has weak illumination, and a food scene is generally shot at close range.
  • A correspondence between the scene to which an image signal belongs and the attribute information of the image signal may be set, with different scenes matching different image signal attributes.
  • The light intensity threshold range and the distance threshold range corresponding to each scene may be preset, as shown in Table 1:

Table 1
  Stage: light intensity less than the first set light intensity threshold and within the first light intensity threshold range
  Blue sky: light intensity stronger than the second set light intensity threshold and within the second light intensity threshold range
  Night view: light intensity less than the third set light intensity threshold and within the third light intensity threshold range
  Green plant: none
  Flower: none
  Food: foreground position less than the first set distance threshold and within the first distance threshold range
  Beach: light intensity stronger than the fourth set light intensity threshold and within the fourth light intensity threshold range
  Fireworks: light intensity less than the fifth set light intensity threshold and within the fifth light intensity threshold range
  Cat, dog: none
  Text: none
  Snow scene: none
  • Each of the distance thresholds and light intensity thresholds in Table 1 above is set according to the actual scenario; the specific values are not limited here.
  • The terms "first", "second", and so on above are only used to distinguish different thresholds and do not necessarily describe a specific order. For example, the first, second, third, fourth, and fifth light intensity thresholds mentioned in the embodiments of this application are named merely for convenience of description and to distinguish different light intensity thresholds, and do not constitute a limitation on the light intensity thresholds. The thresholds so named are interchangeable where appropriate, so that the embodiments described here can be implemented in an order other than that illustrated or described.
  • When the attribute information of the image signal is used to determine whether the scene identified by the neural network is accurate, it may be determined whether the identified scene matches the attribute information of the image signal: if they match, the identified scene is determined to be accurate; if they do not match, the identified scene is determined to be inaccurate.
  • If the attribute information of the image signal includes light intensity information, the light intensity threshold range corresponding to each scene may be preset, and whether the light intensity of the image signal falls within the preset light intensity threshold range is judged according to the acquired light intensity information, so as to determine whether the identified scene is accurate.
  • For example, if the light intensity of the image signal is less than the first set light intensity threshold and within the first light intensity threshold range, the scene of the image signal can be determined to be a stage scene. If the scene identified by the neural network is a stage scene, it can then be determined that the identified scene is accurate; if the scene identified by the neural network is not a stage scene, it can be determined that the identified scene is inaccurate.
  • If the attribute information of the image signal includes foreground position information, the distance threshold range corresponding to each scene may be preset, and whether the foreground position of the image signal falls within the preset distance threshold range is judged according to the acquired foreground position information, so as to determine whether the identified scene is accurate.
  • The foreground position reflects the distance from the foreground to the current device, such as from the foreground to the terminal device or its sensor. For example, if the distance reflected by the foreground position of the image signal is smaller than the first set distance threshold and within the first distance threshold range, the scene of the image signal may be determined to be a food scene. If the scene identified by the neural network is a food scene, the identified scene may be determined to be accurate; if it is not a food scene, the identified scene may be determined to be inaccurate.
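The threshold-range checks described above can be sketched as a small lookup table keyed by scene. The concrete light intensity (lux) and distance (metre) ranges below are illustrative assumptions, since Table 1 leaves the specific values to be set per actual scenario:

```python
# Hypothetical attribute ranges per scene, in the spirit of Table 1.
# "light": (low, high) lux range; "distance": (low, high) metres to the foreground.
SCENE_ATTRIBUTES = {
    "stage":    {"light": (0.0, 50.0)},          # dim stage lighting
    "blue_sky": {"light": (10000.0, 100000.0)},  # strong daylight
    "night":    {"light": (0.0, 10.0)},          # very weak illumination
    "food":     {"distance": (0.0, 0.5)},        # close-range foreground
}

def scene_matches_attributes(scene, light=None, distance=None):
    """Return True if the identified scene is consistent with the measured
    attribute information (light intensity, foreground distance).
    Scenes with no registered ranges (e.g. flower, text) cannot be
    cross-checked and are treated as matching, as in Table 1 ("none")."""
    ranges = SCENE_ATTRIBUTES.get(scene)
    if not ranges:
        return True
    if "light" in ranges and light is not None:
        lo, hi = ranges["light"]
        if not (lo <= light <= hi):
            return False
    if "distance" in ranges and distance is not None:
        lo, hi = ranges["distance"]
        if not (lo <= distance <= hi):
            return False
    return True
```

For instance, a scene preliminarily labeled "night" with a measured light intensity of 500 lux would fail the check and be treated as inaccurate, triggering the fallback processing of step S105.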
  • The specific process of determining whether the scene identified by the neural network is accurate may be performed by the operation processor 202, by the image signal processor 201, or by the operation processor 202 and the image signal processor 201 cooperatively.
  • Take the case in which the operation processor 202 and the image signal processor 201 cooperate to determine whether the scene recognized by the neural network is accurate.
  • The neural network operation processor 203 sends the scene recognition result to the operation processor 202. The operation processor 202 obtains the recognition result, acquires the attribute information of the image signal from the image signal processor 201, and uses that attribute information to determine whether the scene to which the image signal belongs, as identified by the neural network operation processor 203, is accurate. If it is accurate, step S104 is performed: the operation processor 202 sends the accurate scene information to the image signal processor 201, and the image signal processor 201 enhances the image signal according to the accurate scene.
  • If it is inaccurate, step S105 may be performed: the operation processor 202 does not interact with the image signal processor 201 to correct the scene recognition result, and the image signal processor 201 processes the image signal in the original manner. The information interaction in this specific implementation process can be seen in FIG. 3.
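The overall flow of steps S101 to S105 can be summarized as the following control flow. The callback names are illustrative stand-ins for the roles played by the neural network operation processor 203 (identification), the operation processor 202 together with the image signal processor 201 (verification), and the image signal processor 201 (enhancement or original processing):

```python
def process_image_signal(image_signal, identify_scene, scene_is_accurate,
                         enhance, default_process):
    """Sketch of the S101-S105 flow: identify the scene with the neural
    network, verify it against the image signal's attribute information,
    then either enhance per scene (S104) or fall back to the original
    processing path (S105)."""
    scene = identify_scene(image_signal)          # S102: neural network recognition
    if scene_is_accurate(scene, image_signal):    # S103: attribute-based check
        return enhance(image_signal, scene)       # S104: scene-specific enhancement
    return default_process(image_signal)          # S105: original processing

# Example with stub callbacks (all hypothetical):
result = process_image_signal(
    {"light": 5.0},
    identify_scene=lambda sig: "night",
    scene_is_accurate=lambda scene, sig: sig["light"] < 10.0,
    enhance=lambda sig, scene: ("enhanced", scene),
    default_process=lambda sig: ("original", None),
)
```

With the stub attribute check, the dim signal confirms the "night" label, so the enhancement branch (S104) is taken.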
  • An enhancement algorithm used for enhancing the image signal in each scenario may be preset, and different scenarios may adopt different enhancement algorithms; these may all be algorithms already existing in the prior art.
  • The following describes, by way of example, the enhancement algorithms corresponding to a blue sky scene, a green plant scene, and a night scene in the embodiments of this application.
  • Enhancement algorithm corresponding to the blue sky scene: based on region segmentation, the brightness and color information of the blue sky region is gathered statistically, and the optimization intensity is adapted accordingly; the attributes of the blue sky are collected before color enhancement, and the optimization target is determined according to the statistical result.
  • Different weather and different mixes of cloud and blue sky require different adaptive optimization intensities, which avoids unnatural results from over-optimizing the blue sky.
  • During color optimization, saturation is correlated with brightness; the brightness of the blue sky region of the picture changes as the framing changes.
  • The optimized saturation target is therefore determined from the current brightness statistics.
  • The blue gamut range differs at different brightnesses, and the gamut limit is taken into account during enhancement.
  • Hue is correlated with saturation; the hue of the blue sky is mapped as a whole into the hue range of the memory color, while compensating for the difference in subjectively perceived hue at different saturations.
  • As a result, the color of the blue sky better matches the subjective memory color.
  • The color gamut of the blue sky is optimized so that the transition to adjacent gamuts is smooth and natural; according to the blue sky statistics, the gamut range enhanced for the blue sky scene is adaptively limited, and the enhancement amplitude transitions smoothly across the gamut boundaries.
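As a toy illustration of the brightness-adaptive saturation adjustment described above (the hue range, boost curve, and clamp value are assumptions; the application does not disclose concrete formulas), saturation can be boosted toward a target that depends on the current lightness and clamped so the enhanced color stays inside a fixed gamut limit:

```python
import colorsys

def enhance_blue_sky_pixel(r, g, b, boost=1.3, max_saturation=0.9):
    """Boost the saturation of a sky-blue pixel adaptively.
    Brighter pixels receive a smaller boost, and saturation is clamped to
    max_saturation so the enhanced color does not overflow the gamut.
    RGB components are floats in [0, 1]."""
    h, l, s = colorsys.rgb_to_hls(r, g, b)
    # Only treat hues around blue (roughly 200-250 degrees) as sky.
    if not (200 / 360 <= h <= 250 / 360):
        return r, g, b
    # Adapt the boost to brightness: a dimmer sky tolerates a stronger boost.
    adaptive = 1.0 + (boost - 1.0) * (1.0 - l)
    s_new = min(s * adaptive, max_saturation)
    return colorsys.hls_to_rgb(h, l, s_new)
```

Non-sky colors pass through unchanged, which mimics limiting the enhancement to the segmented blue sky region rather than the whole frame.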
  • Enhancement algorithm corresponding to the green plant scene: using the green plant recognition result, the interference of green plants with the light source information in the white balance statistics is effectively removed, improving white balance accuracy.
  • In the chromaticity space used for white balance statistics, it is difficult to separate the data corresponding to green plants from the data corresponding to the light source, whereas deep-learning recognition of green plants uses more information than color alone.
  • The white balance algorithm is optimized to estimate the chromaticity coordinates of the current light source, greatly improving white balance accuracy for green plant scenes in various illumination environments.
  • Color enhancement of the green gamut: enhancement of the gamut is controlled according to differences in brightness, hue, and saturation. Regions of low saturation and low brightness become more vivid after enhancement, while regions of high saturation do not overflow the gamut after enhancement and retain their color gradation.
  • The overall result is bright, delicate, and rich in layers.
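The idea of excluding green-plant pixels from the white balance statistics can be illustrated with a simplified sketch. The gray-world estimator and the color-based green mask below are stand-ins for the actual statistics and the deep-learning-based green plant recognition described above:

```python
def white_balance_gains(pixels, is_green_plant):
    """Estimate per-channel white balance gains under a gray-world
    assumption, excluding pixels flagged as green plants so that foliage
    does not bias the light source estimate.
    pixels: list of (r, g, b) floats; is_green_plant: predicate on a pixel."""
    kept = [p for p in pixels if not is_green_plant(p)]
    if not kept:
        kept = pixels  # nothing left after masking; fall back to all pixels
    n = len(kept)
    avg = [sum(p[c] for p in kept) / n for c in range(3)]
    gray = sum(avg) / 3.0
    return [gray / a if a else 1.0 for a in avg]

# Example: a neutral wall plus a strongly green foliage pixel.
gains = white_balance_gains(
    [(0.5, 0.5, 0.5), (0.5, 0.5, 0.5), (0.1, 0.8, 0.1)],
    is_green_plant=lambda p: p[1] > 2 * max(p[0], p[2]),
)
# With the foliage excluded, the neutral wall needs no correction: gains ≈ [1, 1, 1]
```

Without the mask, the foliage pixel would pull the green average up and the estimator would wrongly suppress green across the whole frame, which is exactly the interference the recognition result is used to remove.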
  • Enhancement algorithm corresponding to the night scene: for the brightness characteristics of night scenes, the synthesis algorithm and the control parameters of the subdivided algorithms in HDR mode are optimized. Unlike a uniform processing strategy, the HDR algorithm uses prior knowledge of the scene when processing high-dynamic-range scenes: the number of composite frames and the exposure strategy of the composite frames are optimized, the brightness and detail of dark regions are improved, overexposed areas are controlled, and contrast is enhanced.
  • The composite image is clear and has a high dynamic range.
  • The night scene can be further subdivided to optimize night sky noise and the overall contrast of the scene: for the sub-scene containing night sky, control parameters such as brightness and contrast are optimized so that the night sky is clean and the image as a whole is clear, highlighting the subject in the lower part of the frame and bringing out a strong night atmosphere.
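The scene-aware choice of HDR control parameters can be caricatured as below. The frame counts and exposure offsets are invented for illustration only; the application states merely that the number of composite frames and the exposure strategy are optimized per night sub-scene:

```python
def hdr_parameters(scene, dynamic_range_ev):
    """Pick the number of HDR composite frames and their exposure offsets
    (in EV) from the identified night sub-scene and the measured dynamic
    range. All concrete values here are illustrative assumptions."""
    if scene == "night_sky":
        # Favor a clean, low-noise sky: more frames, wider exposure bracket.
        frames = 6
        offsets = [-2.0, -1.0, 0.0, 0.0, 1.0, 2.0]
    elif scene == "night":
        # Generic night scene: lift shadows, keep highlights controlled.
        frames = 4 if dynamic_range_ev > 8.0 else 3
        offsets = [-2.0, 0.0, 1.0, 2.0][:frames]
    else:
        # Not a night scene: single frame, no bracketing.
        frames, offsets = 1, [0.0]
    return frames, offsets
```

The point of the sketch is the dispatch on the recognized sub-scene: a uniform strategy would apply the same bracket everywhere, whereas the scene prior lets a night-sky frame trade capture time for a cleaner sky.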
  • The enhancement algorithm used when enhancing the image signal according to the identified scene may include an algorithm for pre-enhancement processing of the image signal, an algorithm for post-enhancement processing of the image signal, or both.
  • The enhancement algorithm may be executed by the image signal processor 201, by the operation processor 202, or by the image signal processor 201 and the operation processor 202 together.
  • For example, the pre-enhancement processing may be performed by the image signal processor 201, and the operation processor 202 may perform the post-enhancement processing.
  • As another example, if the image signal processor 201 includes both a function module for pre-processing and the photographing or recording function control module for post-processing, the image signal processor 201 performs both the pre-enhancement and the post-enhancement processing.
  • The function module for pre-processing included in the image signal processor 201 may be implemented by software, hardware, or a combination of software and hardware. If the enhancement algorithm includes an algorithm for post-enhancement processing of the image signal and the photographing or recording function control module is integrated in the operation processor 202, the post-enhancement processing may be performed by the operation processor 202.
  • An enhancement algorithm corresponding to the scene that the neural network operation processor 203 recognizes for the image signal may be used to enhance the image signal.
  • For example, after the neural network operation processor 203 recognizes the scene to which the image signal belongs and the operation processor 202 confirms that the recognized scene is accurate, the accurately recognized scene information can be sent to the function module for pre-processing in the image signal processor 201, and that module applies the corresponding pre-enhancement algorithm.
  • The operation processor 202 can also send the recognized scene information to the photographing or recording function control module integrated in the image signal processor 201 to perform the post-processing of taking a photo or recording a video.
  • The photographing or recording function control module in the image signal processor 201 may then perform post-enhancement processing on the image signal that was pre-processed by the pre-processing function module in the image signal processor 201.
  • Alternatively, the operation processor 202 controls the photographing or recording function control module to perform the post-processing of taking a photo or recording a video.
  • Different optimization strategies may be set for different scenarios.
  • the optimization strategy may be set by using the following Table 2:
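Table 2 itself is not reproduced in this text. As a hypothetical stand-in, a per-scene optimization strategy can be organized as a dispatch table mapping each recognized scene to an ordered list of processing steps; every scene name and step name below is invented for illustration:

```python
# Hypothetical per-scene optimization strategies, in the spirit of Table 2
# (whose actual contents are not reproduced here).
OPTIMIZATION_STRATEGIES = {
    "blue_sky":    ["segment_sky_region", "adaptive_saturation", "gamut_limit"],
    "green_plant": ["white_balance_correction", "green_gamut_enhancement"],
    "night":       ["hdr_multi_frame", "dark_region_denoise", "contrast_boost"],
}

def optimization_plan(scene):
    """Return the ordered list of optimization steps for a scene.
    An empty list means the signal keeps the original processing path,
    as in step S105."""
    return OPTIMIZATION_STRATEGIES.get(scene, [])
```

Organizing the strategies as data rather than branching code makes it straightforward to preset and tune different strategies for different scenarios, as the text describes.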
  • S105: If it is determined that the scene identified by the neural network is inaccurate, the image signal is processed in the original manner; S105 is an optional step.
  • In the embodiments of this application, the scene of the image signal is identified by the neural network operation processor 203, and the operation processor 202 and the image signal processor 201 can assist in judging the accuracy of the scene recognized by the neural network operation processor 203, which improves scene recognition accuracy.
  • To implement the above functions, a hardware structure and/or a software module corresponding to each function is included. In combination with the examples and algorithm steps described in the embodiments disclosed in this application, the embodiments can be implemented in hardware or in a combination of hardware and computer software. Whether a function is implemented in hardware or in computer software driving hardware depends on the specific application and the design constraints of the solution. A person skilled in the art may use different methods to implement the described functions for each specific application, but such implementation should not be considered beyond the scope of the technical solutions of the embodiments of this application.
  • In the embodiments of this application, functional units of the image signal processing device may be divided according to the above method example: each functional unit may correspond to one function, or two or more functions may be integrated into one processing unit. The integrated unit may be implemented in the form of hardware or in the form of a software functional unit.
  • the division of the unit in the embodiment of the present application is schematic, and is only a logical function division. In actual implementation, there may be another division manner.
  • FIG. 5 is a schematic diagram showing the structure of an image signal processing apparatus 300 provided by an embodiment of the present application.
  • The image signal processing apparatus 300 can be used to perform the functions of the image signal processing device 200 described above. Referring to FIG. 5, the image signal processing apparatus 300 includes an acquisition unit 301, a neural network recognition unit 302, and an image signal processing unit 303.
  • The acquisition unit 301 can be used to perform the step of acquiring the image signal involved in the foregoing method embodiments.
  • the image signal may be an image signal obtained by processing the sensor signal collected by the image sensor by the image signal processing unit 303.
  • the various units in Figure 5 can be implemented in software, hardware, or a combination of software and hardware.
  • the neural network identification unit 302 can be used to perform the step of the neural network operation processor 203 involved in the above method embodiment to identify the scene to which the image signal belongs, for example, using the neural network to identify the scene to which the image signal acquired by the acquisition unit 301 belongs.
  • The image signal processing unit 303 is configured to use the attribute information of the image signal to determine whether the scene recognized by the neural network recognition unit 302 is accurate; if the scene is determined to be accurate, the image signal is enhanced according to the scene recognized by the neural network recognition unit 302 to generate an enhanced image signal.
  • the attribute information of the image signal may be attribute information of an image signal obtained by processing the sensor signal collected by the image sensor by the image signal processing unit 303.
  • the attribute information of the image signal may include at least one of light intensity information and foreground position information.
  • The image signal processing unit 303 may determine, according to the light intensity information, whether the light intensity of the image signal is within a preset light intensity threshold range, to determine whether the scene recognized by the neural network recognition unit 302 is accurate.
  • the image signal processing unit 303 may determine, according to the foreground position information, whether the foreground position of the image signal is within a preset distance threshold range, to determine whether the scene recognized by the neural network recognition unit 302 is accurate.
  • The image processing enhancement algorithm corresponding to each scene may be preset, and when enhancing the image signal according to the scene recognized by the neural network recognition unit 302, the image signal processing unit 303 may use the enhancement algorithm corresponding to that scene.
  • The image signal processing unit 303 in the embodiments of this application may be at least one of the image signal processor 201 and the operation processor 202 involved in the above embodiments. Therefore, the enhancement processing referred to in the foregoing embodiments may be performed by at least one of the image signal processor 201 and the operation processor 202. Further, the determination, using the attribute information of the image signal, of whether the scene recognized by the neural network recognition unit 302 is accurate may also be performed by at least one of the image signal processor 201 and the operation processor 202.
  • The image signal processing apparatus 300 provided by the embodiments of this application has all the functions of the image signal processing method involved in the foregoing method embodiments; for the specific implementation process, refer to the descriptions of the foregoing embodiments and the related drawings, which are not repeated here.
  • The image signal processing method, apparatus, and device provided by the embodiments of this application use a neural network to preliminarily identify a scene, and then use the attribute information of the image signal to assist in judging the accuracy of the preliminarily identified scene, thereby improving scene recognition accuracy and, in turn, the quality of image signal processing.
  • the embodiment of the present application further provides a computer readable storage medium having instructions stored thereon that, when executed on a computer, cause the computer to execute the image signal processing method according to the above embodiments.
  • The embodiments of this application further provide a computer program product containing instructions that, when run on a computer, cause the computer to execute the image signal processing method according to the above embodiments.
  • The embodiments of this application are described with reference to flowcharts and/or block diagrams of the methods, devices (systems), and computer program products according to the embodiments of this application. It should be understood that each flow and/or block of the flowcharts and/or block diagrams, and combinations of flows and/or blocks in the flowcharts and/or block diagrams, can be implemented by computer program instructions.
  • These computer program instructions may be provided to a processor of a general-purpose computer, a special-purpose computer, an embedded processor, or another programmable data processing device to produce a machine, so that the instructions executed by the processor of the computer or other programmable data processing device produce an apparatus for implementing the functions specified in one or more flows of the flowcharts and/or one or more blocks of the block diagrams.
  • These computer program instructions may also be stored in a computer-readable memory that can direct a computer or another programmable data processing device to operate in a particular manner, so that the instructions stored in the computer-readable memory produce an article of manufacture including an instruction apparatus that implements the functions specified in one or more flows of the flowcharts and/or one or more blocks of the block diagrams.
  • These computer program instructions may also be loaded onto a computer or another programmable data processing device, so that a series of operation steps is performed on the computer or other programmable device to produce computer-implemented processing, and the instructions executed on the computer or other programmable device provide steps for implementing the functions specified in one or more flows of the flowcharts and/or one or more blocks of the block diagrams.


Abstract

An image signal processing method, apparatus, and device. In the image signal processing method, an image signal is acquired, the image signal being derived from a sensor signal collected by an image sensor; a neural network is used to preliminarily identify the scene to which the image signal belongs; the attribute information of the image signal is then used to further determine whether the preliminarily identified scene is accurate; and if the scene is determined to be accurate, the image signal is enhanced according to the scene to generate an enhanced image signal. This can improve scene recognition accuracy and, in turn, the quality of image signal processing.

Description

Image signal processing method, apparatus, and device

This application claims priority to Chinese Patent Application No. 201710952413.3, filed with the Chinese Patent Office on October 13, 2017 and entitled "Image signal processing method, apparatus, and device", which is incorporated herein by reference in its entirety.

Technical Field

This application relates to the field of image processing technologies, and in particular, to an image signal processing method, apparatus, and device.

Background

With the development of science and technology, mobile terminals with photographing and video recording functions, such as mobile phones and tablet computers, have come into widespread use.

To enable users to take high-quality photos or record high-quality videos, a mobile terminal usually has a scene recognition function. After collecting a sensor signal through an image sensor, the mobile terminal processes the sensor signal into an image signal, uses the scene recognition function to identify the scene to which the image signal belongs, and uses an image signal processor (ISP) to process the image signal into an image signal matching the identified scene.

At present, a mobile terminal may rely on color channels or template matching for scene recognition, or may complete scene recognition with the aid of an additional light metering device. However, these conventional scene recognition methods have a relatively high misrecognition rate. For example, when color channels are used to identify a green plant scene, other green objects (not green plants) are easily identified as green plants. As another example, when a night scene is identified with the aid of an additional light metering device, the scene is misidentified as a night scene when the light metering device is blocked or in other dim-light conditions.

Therefore, identifying scenes with the existing scene recognition methods yields relatively low accuracy, which may degrade the quality of image signal processing and, in turn, the quality of the photos taken or videos recorded by the user.
发明内容
本申请实施例提供一种图像信号处理方法、装置及设备,利用神经网络初步识别出场景,然后利用图像信号的属性信息对初步识别的场景的准确性进行进一步判断,以提高场景识别准确率。若确定的场景是准确的,则按照识别出的场景对图像信号进行增强处理,以生成增强后的图像信号,进而提高图像信号处理的质量。
第一方面,提供一种图像信号处理方法,在该方法中,利用神经网络识别出图像信号所属的场景,并在确定神经网络识别出的场景准确的情况下,按照该识别出的场景对图像信号进行增强处理,以生成增强后的图像信号。
本申请实施例提供的图像信号处理方法,通过利用神经网络识别出图像信号所属的场景并进一步确定神经网络识别出的场景的准确性,可以提高场景识别准确率。并且按照识别出的准确场景对图像信号进行增强处理,生成增强后的图像信号,可在一定程度上提高图像信号处理的质量。
一种可能的设计中,神经网络所识别的图像信号来源于图像传感器采集的传感器信号,并且可利用图像信号的属性信息确定神经网络识别出的场景是否准确,若确定识别出的场景是准确的,则按照识别出的场景对图像信号进行增强处理,以生成增强 后的图像信号。
其中,本申请实施例中涉及的图像信号的属性信息可以是所述图像信号所包含的光强度信息和前景位置信息中的至少一项。
一种可能的示例中,图像信号的属性信息包括光强度信息。利用图像信号的属性信息确定神经网络识别出的场景是否准确时,可根据所述光强度信息判断所述图像信号的光强度是否在预设的光强度阈值范围内,以确定神经网络识别出的场景是否准确。
另一种可能的示例中,图像信号的属性信息包括前景位置信息。利用图像信号的属性信息确定神经网络识别出的场景是否准确时,可根据前景位置信息判断图像信号的前景位置是否在预设的距离阈值范围内,以确定神经网络识别出的场景是否准确。
进一步的,神经网络所识别的图像信号可以是通过图像信号处理器对传感器信号进行处理得到的图像信号。图像信号的属性信息是通过图像信号处理器对传感器信号进行处理得到的图像信号的属性信息。
另一种可能的设计中,本申请实施例中可预设各场景进行增强处理所用的增强算法,在按照识别出的场景对图像信号进行增强处理时,可采用与识别出的场景对应的增强算法,对图像信号进行增强处理。
又一种可能的设计中,本申请实施例中可以通过神经网络运算处理器识别图像信号所属的场景。
进一步的,通过图像信号处理器对图像信号进行增强处理,也可以通过运算处理器对图像信号进行增强处理,还可以通过图像信号处理器和运算处理器对图像信号进行增强处理。
更进一步的,本申请实施例中可通过图像信号处理器执行利用所述图像信号的属性信息确定所述场景是否准确的过程,也可以通过运算处理器执行利用所述图像信号的属性信息确定所述场景是否准确的过程,还可以通过图像信号处理器和运算处理器执行利用所述图像信号的属性信息确定所述场景是否准确的过程。
本申请实施例,通过上述神经网络运算处理器识别出图像信号的场景,并且通过运算处理器以及图像信号处理器可辅助判断神经网络运算处理器识别出的场景的准确性,提高了场景识别准确度。
第二方面,提供一种图像信号处理装置,该图像信号处理装置具有实现上述方法设计中进行图像信号处理的功能。这些功能可以通过硬件实现,也可以通过硬件执行相应的软件实现。所述硬件或软件包括一个或多个与上述功能相对应的单元。该图像信号处理装置可以应用于具有图像处理功能的电子设备。。
一种可能的设计中,图像信号处理装置包括获取单元、神经网络识别单元和图像信号处理单元,其中,获取单元、神经网络识别单元和图像信号处理单元的功能与可以和各方法步骤相对应,在此不予赘述。
第三方面,提供一种图像信号处理设备,该图像信号处理设备包括图像信号处理器、运算处理器和神经网络运算处理器。该图像信号处理设备中还可包括图像传感器,该图像传感器用于采集外界信号,将该外界信号进行处理转换成传感器信号。该图像信号处理设备还可包括存储器,该存储器,用于存储图像信号处理器、运算处理器和神经网络运算处理器执行的程序代码。该图像信号处理设备中还可包括拍照或录制功能控制模块,用于实现拍照或录制功能,并对图像信号进行后期处理。
在一个可能的设计中,图像信号处理器、运算处理器和神经网络运算处理器可执行上述第一方面或第一方面的任意一种可能的设计所提供的图像信号处理方法中的相应功能。例如,所述神经网络运算处理器,用于获取图像信号,所述图像信号来源于图像传感器采集的传感器信号,利用神经网络识别出所述图像信号所属的场景。所述图像信号处理器和所述运算处理器中的至少一个,用于利用所述图像信号的属性信息确定所述神经网络运算处理器识别的场景是否准确。若确定所述场景是准确的,则按照所述场景对所述图像信号进行增强处理,以生成增强后的图像信号。
第四方面,提供一种计算机可读存储介质,该计算机可读存储介质上存储有指令,当所述指令在计算机上运行时,使得计算机执行上述第一方面以及第一方面任意可能的设计中的图像信号处理方法。
第五方面，提供一种包含指令的计算机程序产品，当所述包含指令的计算机程序产品在计算机上运行时，使得计算机执行上述第一方面以及第一方面任意可能的设计中的图像信号处理方法。
本申请实施例提供的图像信号处理方法、装置及设备,利用神经网络初步识别出场景,然后利用图像信号的属性信息对初步识别的场景的准确性进行辅助判断,可以提高场景识别准确率,进而提高图像信号处理的质量。
附图说明
图1为本申请实施例涉及的一种移动终端的硬件结构示意图;
图2为本申请实施例提供的神经网络原理示意图;
图3为本申请实施例提供的一种图像信号处理设备结构示意图;
图4为本申请实施例提供的一种图像信号处理方法流程图;
图5为本申请实施例提供的一种图像信号处理装置结构示意图。
具体实施方式
下面将结合附图,对本申请实施例进行描述。
本申请实施例提供的图像信号处理方法及装置，可应用于电子设备。该电子设备可以是移动终端(mobile terminal)、移动台(mobile station,MS)、用户设备(user equipment,UE)等移动设备，也可以是固定设备，如固定电话、台式电脑等，还可以是视频监控器等。该电子设备是具有图像信号采集与处理功能的图像采集与处理设备，还可以选择性地具有无线连接功能，例如是向用户提供语音和/或数据连通性的手持式设备、或连接到无线调制解调器的其他处理设备，比如：该电子设备可以是移动电话(或称为"蜂窝"电话)、具有移动终端的计算机等，还可以是便携式、袖珍式、手持式、计算机内置的或者车载的移动装置，当然也可以是可穿戴设备(如智能手表、智能手环等)、平板电脑、个人电脑(personal computer,PC)、个人数字助理(personal digital assistant,PDA)、销售终端(point of sales,POS)等。本申请实施例中以下以电子设备为移动终端为例进行说明。
图1所示为本申请实施例涉及的移动终端100的一种可选的硬件结构示意图。
如图1所示，移动终端100主要包括芯片组和外设装置，其中，图1中实线框中的电源管理单元(power management unit,PMU)、语音codec、短距离模块和射频(radio frequency,RF)、运算处理器、随机存储器(random-access memory,RAM)、输入/输出(input/output,I/O)、显示接口、图像信号处理器(image signal processor,ISP)、传感器接口(Sensor hub)、基带通信模块等各部件组成芯片或芯片组。USB接口、存储器、显示屏、电池/市电、耳机/扬声器、天线、传感器(Sensor)等部件可以理解为是外设装置。芯片组内的运算处理器、RAM、I/O、显示接口、ISP、Sensor hub、基带等部件可组成片上系统(system-on-a-chip,SOC)，为芯片组的主要部分。SOC内的各部件可以全部集成为一个完整芯片，或者SOC内也可以是部分部件集成，另一部分部件不集成，比如SOC内的基带通信模块，可以与其他部分不集成在一起，成为独立部分。SOC中的各部件可通过总线或其他连接线互相连接。SOC外部的PMU、语音codec、RF等通常包括模拟电路部分，因此经常在SOC之外，彼此并不集成。
图1中，PMU用于外接市电或电池，为SOC供电，可以利用市电为电池充电。语音codec作为声音的编解码单元外接耳机或扬声器，实现自然的模拟语音信号与SOC可处理的数字语音信号之间的转换。短距离模块可包括无线保真(wireless fidelity,WiFi)和蓝牙，也可选择性包括红外、近距离无线通信(near field communication,NFC)、收音机(FM)或全球定位系统(Global Positioning System,GPS)模块等。RF与SOC中的基带通信模块连接，用来实现空口RF信号和基带信号的转换，即混频。对手机而言，接收是下变频，发送则是上变频。短距离模块和RF都可以有一个或多个用于信号发送或接收的天线。基带用来做基带通信，包括多种通信模式中的一种或多种，用于进行无线通信协议的处理，可包括物理层(层1)、媒体接入控制(medium access control,MAC)(层2)、无线资源控制(radio resource control,RRC)(层3)等各个协议层的处理，可支持各种蜂窝通信制式，例如长期演进(Long Term Evolution,LTE)通信。Sensor hub是SOC与外界传感器的接口，用来收集和处理外界至少一个传感器的数据，外界的传感器例如可以是加速计、陀螺仪、控制传感器、图像传感器等。运算处理器可以是通用处理器，例如中央处理器(central processing unit,CPU)，还可以是一个或多个集成电路，例如：一个或多个专用集成电路(application specific integrated circuit,ASIC)，或，一个或多个数字信号处理器(digital signal processor,DSP)，或微处理器，或，一个或者多个现场可编程门阵列(field programmable gate array,FPGA)等。运算处理器可包括一个或多个核，并可选择性调度其他单元。RAM可存储一些计算或处理过程中的中间数据，如CPU和基带的中间计算数据。ISP用于对图像传感器采集的数据进行处理。I/O用于SOC与外界各类接口进行交互，如可与用于数据传输的通用串行总线(universal serial bus,USB)接口进行交互等。存储器可以是一个或一组芯片。显示屏可以是触摸屏，通过显示接口与总线连接，显示接口可以进行图像显示前的数据处理，比如需要显示的多个图层的混叠、显示数据的缓存或对屏幕亮度的控制调整等。
本申请实施例中涉及的移动终端100中包括有图像传感器,该图像传感器可从外界采集光线等外界信号,将该外界信号进行处理转换成传感器信号,即电信号。该传感器信号可以是静态图像信号,也可以是动态的视频图像信号。其中,该图像传感器例如可以是摄像头。
本申请实施例中涉及的移动终端100还包括有ISP，图像传感器采集到传感器信号后将其传送给ISP。ISP获取到该传感器信号，可对该传感器信号进行处理，以得到清晰度、色彩、亮度等各方面均符合人眼特性的图像信号。
具体的,ISP对图像信号进行处理可以包括如下几方面:
1、校正及补偿：缺陷像素校正(defective pixel correction,DPC)，黑电平补偿(black level compensation,BLC)，镜头畸变校正(lens distortion correction,LDC)，针对扭曲、拉伸、偏移等进行的几何校正，伽马校正、与透视原理相关的校正等。
2、去噪及图像增强:时域、空域滤波、分级补偿滤波,各种噪声去除,锐化,抑制振铃效应和带状伪影,边缘增强,亮度增强,对比度增强。
3、颜色及格式转换:颜色插值Demosaic(raw->RGB),颜色空间转换RGB->YUV or YCbCr or YPbPr,色调映射,色度调整,颜色校正、饱和度调整、缩放,旋转等。
4、自适应处理:自动白平衡,自动曝光,自动聚焦,频闪检测等。
5、视觉识别(人脸、姿势识别)及极端环境下的图像处理。其中,极端环境包括震动、快速移动、较暗、过亮等。涉及的处理一般包括去模糊、点扩散函数估计,亮度补偿,运动检测,动态捕捉,图像稳定,高动态范围图像(High-Dynamic Range,HDR)处理等。
可以理解的是,本申请实施例中涉及的ISP可以是一个或一组芯片,即可以是集成的,也可以是独立的。例如,移动终端100中包括的ISP可以是集成在运算处理器中的集成ISP芯片。
本申请实施例中涉及的移动终端100具有拍摄照片或录制视频的功能。在移动终端100进行拍照或录制视频时，为使用户拍摄出高质量的照片或者录制出高质量的视频，ISP对获取的传感器信号进行处理时，可以结合移动终端100的场景识别功能，对传感器信号进行线性纠正、噪点去除、坏点修补、颜色插值、白平衡校正、曝光校正等处理，以将传感器信号处理为符合识别出的场景的图像信号。然而目前移动终端100进行场景识别时，场景识别准确度比较低。有鉴于此，本申请实施例提供一种图像信号处理方法，在该方法中可单独提供一种图像信号所属场景的识别方法，在确定识别的场景准确的情况下，使ISP按照该准确识别出的场景对图像信号进行增强处理生成增强后的图像信号，可以提高场景识别准确率，并且在一定程度上提高图像信号处理的质量。
其中,神经网络(neural network,NN),是一种模仿动物神经网络行为特征进行信息处理的网络结构,也简称为人工神经网络(artificial neural networks,ANN)。神经网络可以是循环神经网络(recurrent neural network,RNN),也可以是卷积神经网络(Convolutional neural network,CNN)。神经网络结构由大量的节点(或称神经元)相互联接构成,基于特定运算模型通过对输入信息进行学习和训练达到处理信息的目的。一个神经网络包括输入层、隐藏层及输出层,输入层负责接收输入信号,输出层负责输出神经网络的计算结果,隐藏层负责学习、训练等计算过程,是网络的记忆单元,隐藏层的记忆功能由权重矩阵来表征,通常每个神经元对应一个权重系数。
如图2所示,是一种神经网络的原理示意图,该神经网络100具有N个处理层,N≥3且N取自然数,该神经网络的第一层为输入层101,负责接收输入信号,该神经网络的最后一层为输出层103,输出神经网络的处理结果,除去第一层和最后一层的其他层为中间层104,这些中间层共同组成隐藏层102,隐藏层中的每一层中间层既可以接收输入信号,也可以输出信号,隐藏层负责输入信号的处理过程。每一层代表了信号处理的一个逻辑级别,通过多个层,数据信号可经过多级逻辑的处理。
为便于理解，下面对本申请实施例中神经网络的处理原理进行描述。神经网络的处理通常是非线性函数 $f(x_i)$，如 $f(x_i)=\max(0,x_i)$。在一些可行的实施例中，该处理函数可以是激活函数(rectified linear units,ReLU)、双曲正切函数(tanh)或S型函数(sigmoid)等。假设 $(x_1,x_2,x_3)$ 是一个一维输入信号矩阵，$(h_1,h_2,h_3)$ 是输出信号矩阵，$W_{ij}$ 表示输入 $x_j$ 与输出 $h_i$ 之间的权重系数，权重系数构成的矩阵为权重矩阵，则该一维输入信号矩阵与输出信号矩阵对应的权重矩阵 $W$ 如式(1)所示：

$$W=\begin{pmatrix}W_{11}&W_{12}&W_{13}\\W_{21}&W_{22}&W_{23}\\W_{31}&W_{32}&W_{33}\end{pmatrix}\tag{1}$$

输入信号与输出信号的关系如式(2)所示，其中 $b_i$ 为神经网络处理函数的偏置值，该偏置值对神经网络的输入进行调整从而得到理想的输出结果：

$$h_i=f\left(\sum_{j=1}^{3}W_{ij}x_j+b_i\right),\quad i=1,2,3\tag{2}$$
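式(1)与式(2)所描述的前向计算过程，可用如下仅依赖标准库的示意代码表示（其中的权重、偏置与输入数值均为假设的演示值，并非本申请的实际参数）：

```python
# 式(2)的示意实现：h_i = f(sum_j(W_ij * x_j) + b_i)，其中 f 取 ReLU，即 f(x) = max(0, x)

def relu(x):
    """激活函数 ReLU：f(x) = max(0, x)"""
    return max(0.0, x)

def forward(W, x, b):
    """按式(2)计算一维输入信号矩阵 x 经权重矩阵 W 与偏置 b 后的输出信号矩阵 h"""
    return [relu(sum(W[i][j] * x[j] for j in range(len(x))) + b[i])
            for i in range(len(W))]

# 假设的演示数值（非本申请的实际参数）
W = [[0.2, -0.5, 0.1],
     [0.4, 0.3, -0.2],
     [-0.1, 0.6, 0.5]]
x = [1.0, 2.0, 3.0]
b = [0.1, -0.3, 0.2]
h = forward(W, x, b)  # 负的加权和被 ReLU 置零，正的加权和原样输出
```

可以看到，隐藏层每个神经元的输出即是对应行权重与输入的加权和加偏置后再经非线性函数的结果。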
在一些可行的实施例中，该神经网络的输入信号可以是语音信号、文本信号、图像信号、温度信号等各种形式的信号。在本实施例中，被处理的图像信号可以是相机(图像传感器)拍摄的风景信号、监控设备捕捉的社区环境的图像信号以及门禁系统获取的人脸的面部信号等各类传感器信号，该神经网络的输入信号还可以包括其他各种计算机可处理的工程信号，在此不再一一列举。该神经网络的隐藏层102进行的处理可以是对人脸的面部图像信号进行识别等处理。若利用神经网络对图像信号进行深度学习，可相对较准确地识别出图像信号所属的场景。故，本申请实施例中移动终端可以利用神经网络进行深度学习，以识别出图像信号所属的场景。
本申请实施例中,为实现利用神经网络识别图像信号所属的场景,可在移动终端中新增神经网络运算处理器,该神经网络运算处理器可以是独立于图1中涉及的运算处理器的,也可以是集成在图1中涉及的运算处理器中的。该神经网络运算处理器,也可以理解为是一种区别于图1中涉及的运算处理器的特殊运算处理器。例如,该神经网络运算处理器可以是运行操作系统的CPU,也可以是其他类型的计算设备,如专用硬件加速处理器。本申请实施例中以神经网络运算处理器独立于运算处理器为例进行说明。
进一步的,本申请实施例中为了提高图像信号所属场景识别的准确性,可对利用神经网络识别出的场景的准确性进行进一步的判断。例如,可利用图像信号的属性信息确定神经网络识别出的场景是否准确。在确定神经网络识别出的场景准确的情况下,按照神经网络识别出的场景对图像信号进行增强处理,以生成增强后的图像信号,提高场景识别准确率,进而提高用户拍摄照片或录制视频的质量。
本申请实施例提供一种图像信号处理设备,该图像信号处理设备可以是上述实施例中涉及的移动终端100,当然也可以是其它具有图像信号处理功能的电子设备,如移动终端100中的芯片或芯片组等。图3所示为本申请实施例提供的一种图像信号处理设备200的结构示意图,该图像信号处理设备200可用于执行本申请实施例提供的图像信号处理方法。参阅图3所示,图像信号处理设备200包括图像信号处理器201、运算处理器202和神经网络运算处理器203。图像传感器、图像信号处理器201、运算处理器202和神经网络运算处理器203可通过总线连接。
可以理解的是，本申请实施例图3所示的图像处理设备200的结构示意图仅是进行示意性说明，并不以此为限，图像处理设备200还可包括其它部件。例如，图3所示的图像信号处理设备200中还可包括图像传感器，该图像传感器用于采集外界信号，将该外界信号进行处理转换成传感器信号。图3所示的图像信号处理设备200中还可包括存储器，该存储器用于存储图像信号处理器201、运算处理器202和神经网络运算处理器203执行的程序代码。图3所示的图像信号处理设备200中还可包括拍照或录制功能控制模块，用于实现拍照或录制功能，主要是对图像信号的后期处理。其中，拍照或录制功能控制模块可以采用软件、硬件或软件与硬件结合的方式实现。拍照或录制功能控制模块可以集成在运算处理器202中，也可以集成在图像信号处理器201中，当然，也可以是独立的功能部件。
以下将结合实际应用对本申请实施例提供的图像信号处理设备200执行图像信号处理方法的过程进行说明。
图4所示为本申请实施例提供的一种图像信号处理方法流程图,图4所示的方法执行主体可以是图像信号处理设备200,也可以是图像信号处理设备200内包括的部件,如芯片或芯片组。参阅图4所示,该方法包括:
S101:神经网络运算处理器203获取图像信号。
具体的,本申请实施例中神经网络运算处理器203获取的图像信号来源于图像传感器采集的传感器信号。图像传感器采集的外界信号经过处理后,可得到传感器信号。换言之,本申请实施例获取的图像信号是根据图像传感器采集的传感器信号得到的。例如,图像传感器为摄像头的情况下,摄像头采集的传感器信号为光信号,该光信号经过摄像机处理后可转变为电信号,可以理解为是图像信号。本申请实施例中,图像信号处理设备200的图像信号处理器201可获取到传感器信号,图像信号处理器201可对传感器信号进行处理得到所述图像信号。
S102:利用神经网络识别出图像信号所属的场景。本申请实施例中,利用神经网络运算处理器203识别出图像信号所属的场景。
本申请实施例中,为了增强图像信号处理器201对图像信号的处理,一种可能的实施方式中,可利用神经网络对图像信号处理器201处理后的图像信号进行场景识别。运算处理器202获取图像信号处理器201处理后的图像信号,将图像信号处理器201处理后的信号转换为神经网络运算处理器203可以识别的图像信号,并将转换后的图像信号发送给神经网络运算处理器203。当然,本申请实施例中也可以由图像信号处理器201将处理后的图像信号转换为神经网络运算处理器203可以识别的图像信号,并将转换后的图像信号发送给神经网络运算处理器203。
本申请实施例中识别图像信号所属场景所用的神经网络可以是卷积神经网络(convolutional neural network,CNN)，卷积神经网络进行场景识别时所用的模型可选用AlexNet、VGG16、VGG19、ResNet、Inception等模型中的至少一种，本申请实施例对此不做限定。
本申请实施例中可设计卷积神经网络进行图像信号学习的模型,以识别出舞台场景、夜景场景、蓝天场景、绿植场景、花朵场景、美食场景、沙滩场景、雪景场景、文字场景和动物场景(猫和狗),满足用户在拍照或录制视频时日常需要的场景。
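作为示意（并非本申请的正式实现），神经网络对上述各场景输出置信度后，可按softmax归一化并取最大者作为初步识别结果，其中的场景标签顺序与置信度数值均为假设：

```python
import math

# 本申请实施例列举的部分场景标签（顺序为假设，仅作演示）
SCENES = ["舞台", "夜景", "蓝天", "绿植", "花朵", "美食", "沙滩", "雪景", "文字", "动物"]

def softmax(logits):
    """将网络原始输出归一化为总和为1的置信度分布（减去最大值以保证数值稳定）"""
    m = max(logits)
    exps = [math.exp(v - m) for v in logits]
    s = sum(exps)
    return [e / s for e in exps]

def top1_scene(logits, scenes=SCENES):
    """返回置信度最高的场景标签及其归一化置信度，作为初步识别结果"""
    probs = softmax(logits)
    i = max(range(len(probs)), key=probs.__getitem__)
    return scenes[i], probs[i]

# 假设的网络输出：第3个分量最大，对应"蓝天"
scene, conf = top1_scene([0.1, 2.3, 4.0, 0.5, 0.2, 1.1, 0.3, 0.0, 0.4, 0.6])
```

该初步结果随后还需结合图像信号的属性信息进一步确认，这正是后文S103步骤的内容。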
S103：确定利用神经网络识别出的场景是否准确。具体的，本申请实施例中可利用图像信号的属性信息确定利用神经网络识别出的场景是否准确。
一种可能的实施方式中，在进行场景准确性确定时，可利用图像信号处理器201对图像信号进行处理得到的图像信号的属性信息，确定利用神经网络识别出的场景是否准确。
本申请实施例中，图像信号的属性信息可以包括图像信号所包含的光强度信息和前景位置信息中的至少一项。其中，光强度信息可以反映对应图像的亮度。前景位置信息可以反映对应图像中的前景至图像信号处理设备200的距离。
本申请实施例中，在确定利用神经网络识别出的场景是否准确时，可根据实际经验，例如蓝天场景通常是光照比较强、夜景通常是光照比较弱、美食场景一般在近距离等，预设图像信号所属场景与图像信号的属性信息之间的对应关系，不同的场景匹配不同的图像信号属性。例如，本申请实施例中可预设各个场景对应的光强度阈值范围以及距离阈值范围，如下表1所示：
表1
场景 图像信号的属性信息
舞台 光强小于第一光强度设定阈值，并在第一光强度阈值范围内
蓝天 光强大于第二光强度设定阈值,并在第二光强度阈值范围内
夜景 光强小于第三光强度设定阈值,并在第三光强度阈值范围内
绿植
花朵
美食 前景位置小于第一距离设定阈值，并在第一距离阈值范围内
沙滩 光强大于第四光强度设定阈值,并在第四光强度阈值范围内
烟花 光强小于第五光强度设定阈值,并在第五光强度阈值范围内
猫、狗
文字
雪景
上述表1中涉及的各距离阈值以及各光强度阈值,都是以实际场景进行设置的,具体设置的数值,本申请实施例在此不做限定。并且上述涉及的,“第一”、“第二”等仅是为了区分不同的阈值,而不必用于描述特定的顺序或先后次序,例如本申请实施例中上述涉及的第一光强度阈值、第二光强度阈值、第三光强度阈值、第四光强度阈值以及第五光强度阈值,仅是用于方便描述以及区分不同的光强度阈值,并不构成对光强度阈值的限定。应该理解这样使用的阈值在适当情况下可以互换,以便这里描述的本发明的实施例能够以除了在这里图示或描述的那些以外的顺序实施。
本申请实施例中，利用图像信号的属性信息确定利用神经网络识别出的场景是否准确时，可判断利用神经网络识别出的场景是否匹配图像信号的属性信息，若匹配，则确定识别出的场景是准确的，若不匹配则确定识别出的场景是不准确的。例如，本申请实施例中图像信号的属性信息包括光强度信息时，可预设各个场景对应的光强度阈值，根据获取到的图像信号的光强度信息判断图像信号的光强度是否在预设的光强度阈值范围内，以确定识别出的场景是否准确。例如，图像信号的光强度小于第一光强度设定阈值，并在第一光强度阈值范围内，可确定该图像信号的场景为舞台场景，若利用神经网络识别出的场景为舞台场景，则可确定利用神经网络识别出的场景准确，若利用神经网络识别出的场景不是舞台场景，则可确定利用神经网络识别出的场景不准确。再例如，本申请实施例中图像信号的属性信息包括前景位置信息时，可预设各个场景对应的距离阈值，根据获取到的图像信号的前景位置信息判断图像信号的前景位置是否在预设的距离阈值范围内，以确定识别出的场景是否准确。前景位置反映前景到当前设备，如前景到终端设备或传感器的距离。例如，图像信号的前景位置所反映的所述距离小于第一距离设定阈值，并在第一距离阈值范围内，可确定该图像信号的场景为美食场景，若利用神经网络识别出的场景为美食场景，则可确定利用神经网络识别出的场景准确，若利用神经网络识别出的场景不是美食场景，则可确定利用神经网络识别出的场景不准确。
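上述按属性信息匹配判断场景准确性的过程，可示意性地实现如下（各阈值范围均为假设的演示值，实际数值以具体场景设置为准，本申请对此不做限定）：

```python
# 场景 -> (属性名, 阈值范围[下限, 上限])，对应表1的示意；数值均为假设
SCENE_RULES = {
    "舞台": ("light", (10.0, 200.0)),      # 光强在第一光强度阈值范围内（假设值）
    "蓝天": ("light", (5000.0, 100000.0)),
    "夜景": ("light", (0.0, 50.0)),
    "美食": ("distance", (0.0, 0.5)),      # 前景位置在第一距离阈值范围内（假设值，单位：米）
}

def scene_is_accurate(scene, attrs):
    """利用图像信号的属性信息（光强度/前景位置）判断识别出的场景是否匹配。
    无对应规则的场景（如绿植、花朵）此处示意性地视为匹配。"""
    rule = SCENE_RULES.get(scene)
    if rule is None:
        return True
    key, (lo, hi) = rule
    value = attrs.get(key)
    if value is None:
        return False  # 缺少判断所需的属性信息，视为不匹配
    return lo <= value <= hi

ok_night = scene_is_accurate("夜景", {"light": 12.0})   # 光强较弱，匹配夜景
bad_sky = scene_is_accurate("蓝天", {"light": 30.0})    # 光强过弱，不匹配蓝天
```

匹配则进入S104按场景增强，不匹配则进入S105按原有方式处理。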
具体的,本申请实施例中上述确定利用神经网络识别出的场景是否准确的具体执行过程,可由运算处理器202执行,也可由图像信号处理器201执行,还可由运算处理器202和图像信号处理器201共同配合执行。
本申请实施例以下以运算处理器202和图像信号处理器201共同配合，确定利用神经网络识别出的场景是否准确的执行过程为例进行说明。本申请实施例中神经网络运算处理器203识别出图像信号所属的场景后，将场景识别结果发送给运算处理器202，运算处理器202获取到该场景识别结果，并从图像信号处理器201处获取图像信号的属性信息，利用图像信号的属性信息，判断神经网络运算处理器203识别出的图像信号所属的场景是否准确，若准确，则执行S104的步骤，运算处理器202将该准确的场景信息发送给图像信号处理器201，图像信号处理器201按照该准确的场景对图像信号进行增强处理。若不准确，则可执行S105的步骤，运算处理器202可不与图像信号处理器201交互场景识别准确与否的结果，图像信号处理器201按照原有的处理方式对图像信号进行处理。具体实现过程中的信息交互过程，可参阅图3所示。
S104:若确定利用神经网络识别出的场景是准确的,则按照识别出的场景对图像信号进行增强处理,以生成增强后的图像信号。
本申请实施例中,可预设各场景下对图像信号进行增强处理所用的增强算法,不同的场景可采用不同的增强算法。这些算法均可以采用现有技术中已经存在的算法。本申请实施例中以下以对蓝天场景、绿植场景、夜景场景对应的增强算法为例进行说明。
蓝天场景对应的增强算法：基于区域分割，统计蓝天区域的亮度色彩信息，自适应优化的强度；统计色彩增强前蓝天的属性，根据统计结果确定优化的目标。不同的天气，不同的云彩蓝天，需要自适应优化的强度，避免碧蓝的天空优化过度反而不自然。色彩优化过程中将饱和度与亮度关联起来；由于画面取景的变化，画面蓝天区域的亮度会变化。色彩优化时，根据当前亮度统计信息，确定优化的饱和度目标。不同的亮度下，蓝色的色域范围是不同的，在增强时，考虑色域的限制。色彩优化过程中将色调与饱和度关联起来；整体映射蓝天的色调到记忆色的色调范围，同时，补偿不同饱和度下主观视觉的色调差异。在色彩优化增强后，蓝天的色调更符合主观的记忆色。进一步的，在色彩优化时仅优化蓝天的色域，相邻色域的过渡效果平滑自然；根据蓝天的统计信息，自适应限制蓝天场景增强的色域范围。增强的幅度在色域的边界平滑过渡。
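上述"将饱和度与亮度关联"并限制蓝色色域的思路，可对单个像素粗略示意如下（其中的蓝色色域边界与增益系数均为假设值，并非本申请增强算法的实际参数）：

```python
import colorsys

def enhance_blue_sky_pixel(r, g, b, gain=0.2):
    """对单个RGB像素（各分量取值0~1）作示意性的蓝天增强：
    仅当色调落在假设的蓝色色域(0.5~0.72)内时提升饱和度，
    增强幅度与亮度关联（亮度越高，增强越保守），并限制饱和度不溢出。"""
    h, l, s = colorsys.rgb_to_hls(r, g, b)
    if 0.5 <= h <= 0.72:                          # 假设的蓝色色域边界
        s = min(1.0, s * (1.0 + gain * (1.0 - l)))  # 饱和度与亮度关联，且不超过1
    return colorsys.hls_to_rgb(h, l, s)

sky = enhance_blue_sky_pixel(0.4, 0.6, 0.9)    # 偏蓝像素：饱和度被增强
grass = enhance_blue_sky_pixel(0.3, 0.8, 0.3)  # 绿色像素：色调在色域之外，保持不变
```

色域之外的像素不受影响，体现了"仅优化蓝天的色域"的设计。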
绿植场景对应的增强算法：利用绿植识别结果，有效解决白平衡统计中绿植对光源的信息干扰，提升白平衡的准确性。白平衡统计值色度空间中难于普遍性地区分绿植还是光源对应的数据，而深度学习对绿植的识别会利用除颜色之外的更多信息。利用识别后的绿植作为记忆色的先验知识优化白平衡算法对当前光源的色度坐标估计，大幅提升各种光照环境下绿植场景的白平衡准确性。对绿植色域进行色彩增强，依据亮度、色调和饱和度的差异对色域进行增强控制。低饱和度和低亮度的色域，增强后更鲜艳，而高饱和度的色域增强后色彩不溢出，色彩层次仍在。整体增强后色彩鲜艳细腻，富有层次。
夜景场景对应的增强算法：针对夜景场景的亮度特点，专门优化HDR模式下的合成算法和细分算法控制参数；HDR算法处理高动态场景利用场景的先验知识，区别于统一的处理策略。优化合成帧数及合成帧的曝光策略，提升暗部的亮度和细节，控制过曝区域，同时增强对比度。合成的图像通透，高动态范围。进一步的，还可细分夜景，优化夜空的噪声和场景的整体对比度；针对夜景中有夜空的这一子类场景，专门优化亮度和对比度等控制参数，使得夜空干净，整体通透，突出夜景下主体部分，烘托出浓浓的夜的氛围。
可以理解的是,本申请实施例中在按照识别出的场景对图像信号进行增强处理时所采用的增强算法可以包括对图像信号进行增强前处理的算法,也可以包括对图像信号进行增强后处理的算法,还可以包括对图像信号进行增强前处理和增强后处理的算法。该增强算法可以由图像信号处理器201执行,也可以由运算处理器202执行,还可以由图像信号处理器201和运算处理器202共同执行。具体的,若增强算法包括对图像信号进行增强前处理和增强后处理的算法,且拍照或录制功能控制模块集成在运算处理器202中,则可由图像信号处理器201执行增强前处理,由运算处理器202执行增强后处理。若增强算法包括对图像信号进行增强前处理和增强后处理的算法,且拍照或录制功能控制模块集成在图像信号处理器201中,则图像信号处理器201可包括执行前处理的功能模块以及进行后处理的拍照或录制功能控制模块,图像信号处理器201执行增强前处理和增强后处理。其中,图像信号处理器201中包括的用于执行前处理的功能模块可以采用软件、硬件或软件与硬件结合的方式实现。若增强算法包括对图像信号进行增强后处理的算法,且拍照或录制功能控制模块集成在运算处理器202中,则可由运算处理器202执行增强后处理。
本申请实施例中,在按照识别出的场景对图像信号进行增强处理时,可采用与神经网络运算处理器203识别出图像信号所属的场景对应的增强算法,对图像信号进行增强处理。例如,应用本申请实施例提供的图像信号处理方法进行拍摄照片或录制视频时,神经网络运算处理器203识别出图像信号所属的场景后,运算处理器202确认神经网络运算处理器203识别出的场景是准确的,可以将该识别准确的场景信息发送给图像信号处理器201中用于执行前处理的功能模块,图像信号处理器201中用于执行前处理的功能模块采用增强前处理的算法对图像信号进行拍摄照片或录制视频的前处理。运算处理器202还可将该识别准确的场景信息发送给集成在图像信号处理器201中的拍照或录制功能控制模块,进行拍摄照片或录制视频的后处理。其中,图像信号处理器201中的拍照或录制功能控制模块进行增强后处理时可以是对图像信号处理器201中用于执行前处理的功能模块进行前处理后的图像信号进行增强后处理。
本申请实施例中,运算处理器202控制拍照或录制功能控制模块进行拍摄照片或录制视频的后处理可针对不同的场景设置不同的优化策略,例如可采用如下表2的方式设置优化策略:
表2
（表2中针对各场景的具体优化策略内容在原始公布文本中以附图形式给出）
S105:若确定利用神经网络识别出的场景是不准确的,则按照原有的处理方式对图像信号进行处理。其中,S105为可选步骤。
本申请实施例,通过上述神经网络运算处理器203可以识别出图像信号的场景,并且通过运算处理器202以及图像信号处理器201可辅助判断神经网络运算处理器203识别出的场景的准确性,提高了场景识别准确度。
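将S101至S105的流程串联起来，整体处理逻辑可示意如下（其中识别、判断与增强函数均为调用方注入的假设实现，仅用于说明流程）：

```python
def process_image_signal(image_signal, recognize, is_accurate, enhance, default_process):
    """S101~S105的示意流程：识别场景 -> 判断准确性 -> 按场景增强或按原方式处理。
    recognize/is_accurate/enhance/default_process均为调用方注入的假设实现。"""
    scene = recognize(image_signal)                   # S102：神经网络识别场景
    if is_accurate(scene, image_signal["attrs"]):     # S103：利用属性信息判断准确性
        return enhance(scene, image_signal)           # S104：按识别出的场景增强处理
    return default_process(image_signal)              # S105：按原有方式处理

# 假设的演示实现：光强较弱时夜景识别结果被判定为准确
signal = {"data": "raw", "attrs": {"light": 12.0}}
result = process_image_signal(
    signal,
    recognize=lambda s: "夜景",
    is_accurate=lambda scene, attrs: attrs["light"] < 50.0,  # 夜景光强较弱（假设阈值）
    enhance=lambda scene, s: {**s, "enhanced_for": scene},
    default_process=lambda s: {**s, "enhanced_for": None},
)
```

实际设备中，recognize对应神经网络运算处理器203，is_accurate与enhance由图像信号处理器201和/或运算处理器202承担。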
可以理解的是,在本发明实施例提到的一种示意性的图像信号处理设备中,其包含了执行各个功能相应的硬件结构和/或软件模块。结合本申请中所公开的实施例描述的各示例的单元及算法步骤,本申请实施例能够以硬件或硬件和计算机软件的结合形式来实现。某个功能究竟以硬件还是计算机软件驱动硬件的方式来执行,取决于技术方案的特定应用和设计约束条件。本领域技术人员可以对每个特定的应用来使用不同的方法来实现所描述的功能,但是这种实现不应认为超出本申请实施例的技术方案的范围。
本申请实施例可以根据上述方法示例对图像信号处理设备进行功能单元的划分，例如，可以对应各个功能划分各个功能单元，也可以将两个或两个以上的功能集成在一个处理单元中。上述集成的单元既可以采用硬件的形式实现，也可以采用软件功能单元的形式实现。需要说明的是，本申请实施例中对单元的划分是示意性的，仅仅为一种逻辑功能划分，实际实现时可以有另外的划分方式。在采用集成的单元的情况下，图5示出了本申请实施例提供的一种图像信号处理装置300的结构示意图，该图像信号处理装置300可用于执行上述图像信号处理设备200的功能。参阅图5所示，图像信号处理装置300包括获取单元301、神经网络识别单元302和图像信号处理单元303。其中，获取单元301可用于执行上述方法实施例中涉及的获取图像信号的执行步骤。其中，该图像信号可以是通过图像信号处理单元303对图像传感器采集的传感器信号进行处理得到的图像信号。图5中的各个单元可以采用软件、硬件或软件与硬件结合的方式实现。
神经网络识别单元302可用于执行上述方法实施例中涉及的神经网络运算处理器203识别图像信号所属场景的执行步骤,例如利用神经网络识别出所述获取单元301获取的图像信号所属的场景。
图像信号处理单元303,用于利用图像信号的属性信息确定神经网络识别单元302识别的场景是否准确,若确定神经网络识别单元302识别的场景是准确的,则按照神经网络识别单元302识别的场景对图像信号进行增强处理,以生成增强后的图像信号。其中,该图像信号的属性信息可以是通过图像信号处理单元303对图像传感器采集的传感器信号进行处理得到的图像信号的属性信息。图像信号的属性信息可以包括光强度信息和前景位置信息中的至少一项。
一种可能的实施方式中,图像信号处理单元303可根据光强度信息判断图像信号的光强度是否在预设的光强度阈值范围内,以确定神经网络识别单元302识别的场景是否准确。
另一种可能的实施方式中,图像信号处理单元303可根据前景位置信息判断图像信号的前景位置是否在预设的距离阈值范围内,以确定神经网络识别单元302识别的场景是否准确。
具体的,本申请实施例中可预设场景对应的图像处理增强算法,图像信号处理单元303按照神经网络识别单元302识别的场景对图像信号进行增强处理时可采用与神经网络识别单元302识别的场景对应的增强算法,对图像信号进行增强处理。
进一步的，本申请实施例中图像信号处理单元303可以为上述实施例中涉及的图像信号处理器201和运算处理器202中的至少一个。故，本申请实施例中上述涉及的增强处理可在图像信号处理器201和运算处理器202中的至少一个处理器中进行。更进一步的，本申请实施例中上述涉及的利用图像信号的属性信息，确定神经网络识别单元302识别的场景是否准确的执行过程，也可由图像信号处理器201和运算处理器202中的至少一个处理器执行。
需要说明的是,本申请实施例提供的图像信号处理装置300具有实现上述方法实施例中涉及的图像信号处理方法执行过程中的所有功能,其具体实现过程可参阅上述实施例及附图的相关描述,在此不再赘述。
本申请实施例提供的图像信号处理方法、装置及设备,利用神经网络初步识别出场景,然后利用图像信号的属性信息对初步识别的场景的准确性进行辅助判断,可以提高场景识别准确率,进而提高图像信号处理的质量。
本申请实施例还提供一种计算机可读存储介质,该计算机可读存储介质上存储有指令,当所述指令在计算机上运行时,使得计算机执行上述实施例涉及的图像信号处理方法。
本申请实施例还提供一种包含指令的计算机程序产品,当所述包含指令的计算机程序产品在计算机上运行时,使得计算机执行上述实施例涉及的图像信号处理方法。
本申请实施例是参照根据本申请实施例的方法、设备(系统)和计算机程序产品的流程图和/或方框图来描述的。应理解可由计算机程序指令实现流程图和/或方框图中的每一流程和/或方框、以及流程图和/或方框图中的流程和/或方框的结合。可提供这些计算机程序指令到通用计算机、专用计算机、嵌入式处理机或其他可编程数据处理设备的处理器以产生一个机器，使得通过计算机或其他可编程数据处理设备的处理器执行的指令产生用于实现在流程图一个流程或多个流程和/或方框图一个方框或多个方框中指定的功能的装置。
这些计算机程序指令也可存储在能引导计算机或其他可编程数据处理设备以特定方式工作的计算机可读存储器中,使得存储在该计算机可读存储器中的指令产生包括指令装置的制造品,该指令装置实现在流程图一个流程或多个流程和/或方框图一个方框或多个方框中指定的功能。
这些计算机程序指令也可装载到计算机或其他可编程数据处理设备上,使得在计算机或其他可编程设备上执行一系列操作步骤以产生计算机实现的处理,从而在计算机或其他可编程设备上执行的指令提供用于实现在流程图一个流程或多个流程和/或方框图一个方框或多个方框中指定的功能的步骤。
显然,本领域的技术人员可以对本申请实施例进行各种改动和变型而不脱离本申请的范围。这样,倘若本申请实施例的这些修改和变型属于本申请权利要求及其等同技术的范围之内,则本申请也意图包含这些改动和变型在内。

Claims (20)

  1. 一种图像信号处理方法,其特征在于,所述方法包括:
    获取图像信号,所述图像信号来源于图像传感器采集的传感器信号;
    利用神经网络识别出所述图像信号所属的场景;
    利用所述图像信号的属性信息确定所述场景是否准确;
    若确定所述场景是准确的,则按照所述场景对所述图像信号进行增强处理,以生成增强后的图像信号。
  2. 根据权利要求1所述的方法,其特征在于,所述图像信号的属性信息包括所述图像信号所包含的光强度信息和前景位置信息中的至少一项。
  3. 根据权利要求2所述的方法,其特征在于,所述图像信号的属性信息包括所述光强度信息;
    所述利用所述图像信号的属性信息确定所述场景是否准确,包括:
    根据所述光强度信息判断所述图像信号的光强度是否在预设的光强度阈值范围内,以确定所述场景是否准确。
  4. 根据权利要求2所述的方法,其特征在于,所述图像信号的属性信息包括所述前景位置信息;
    所述利用所述图像信号的属性信息确定所述场景是否准确,包括:
    根据所述前景位置信息判断所述图像信号的前景位置是否在预设的距离阈值范围内,以确定所述场景是否准确。
  5. 根据权利要求1至4任一项所述的方法,其特征在于,所述图像信号是通过图像信号处理器对所述传感器信号进行处理得到的图像信号;
    所述图像信号的属性信息是通过图像信号处理器对所述传感器信号进行处理得到的图像信号的属性信息。
  6. 根据权利要求1至5任一项所述的方法,其特征在于,按照所述场景对所述图像信号进行增强处理,包括:
    采用与所述场景对应的增强算法,对所述图像信号进行增强处理。
  7. 根据权利要求1至6任一项所述的方法,其特征在于,所述增强处理在图像信号处理器和运算处理器中的至少一个处理器中进行。
  8. 根据权利要求1至7任一项所述的方法,其特征在于,所述利用所述图像信号的属性信息确定所述场景是否准确由图像信号处理器和运算处理器中的至少一个处理器执行。
  9. 一种图像信号处理装置,其特征在于,包括:
    获取单元,用于获取图像信号,所述图像信号来源于图像传感器采集的传感器信号;
    神经网络识别单元,用于利用神经网络识别出所述获取单元获取的图像信号所属的场景;
    图像信号处理单元,用于利用所述图像信号的属性信息确定所述神经网络识别单元识别的场景是否准确,若确定所述场景是准确的,则按照所述场景对所述图像信号进行增强处理,以生成增强后的图像信号。
  10. 根据权利要求9所述的装置,其特征在于,所述图像信号的属性信息包括所述图像信号所包含的光强度信息和前景位置信息中的至少一项。
  11. 根据权利要求10所述的装置,其特征在于,所述图像信号的属性信息包括所述光强度信息;
    所述图像信号处理单元具体用于采用如下方式利用所述图像信号的属性信息确定所述场景是否准确:
    根据所述光强度信息判断所述图像信号的光强度是否在预设的光强度阈值范围内,以确定所述场景是否准确。
  12. 根据权利要求10所述的装置,其特征在于,所述图像信号的属性信息包括所述前景位置信息;
    所述图像信号处理单元具体用于采用如下方式利用所述图像信号的属性信息确定所述场景是否准确:
    根据所述前景位置信息判断所述图像信号的前景位置是否在预设的距离阈值范围内,以确定所述场景是否准确。
  13. 根据权利要求9至12任一项所述的装置,其特征在于,所述图像信号是通过所述图像信号处理单元对所述传感器信号进行处理得到的图像信号;
    所述图像信号的属性信息是通过图像信号处理单元对所述传感器信号进行处理得到的图像信号的属性信息。
  14. 根据权利要求9至13任一项所述的装置,其特征在于,所述图像信号处理单元,具体用于采用与所述场景对应的增强算法,对所述图像信号进行增强处理。
  15. 一种图像信号处理设备,其特征在于,包括图像信号处理器、运算处理器和神经网络运算处理器,其中:
    所述神经网络运算处理器,用于获取图像信号,所述图像信号来源于所述图像传感器采集的传感器信号,利用神经网络识别出所述图像信号所属的场景;
    所述图像信号处理器和所述运算处理器中的至少一个,用于利用所述图像信号的属性信息确定所述神经网络运算处理器识别的场景是否准确,若确定所述场景是准确的,则按照所述场景对所述图像信号进行增强处理,以生成增强后的图像信号。
  16. 根据权利要求15所述的图像信号处理设备,其特征在于,所述图像信号的属性信息包括所述图像信号所包含的光强度信息和前景位置信息中的至少一项。
  17. 根据权利要求16所述的图像信号处理设备,其特征在于,所述图像信号的属性信息包括所述光强度信息;
    所述图像信号处理器和所述运算处理器中的至少一个,具体用于:根据所述光强度信息判断所述图像信号的光强度是否在预设的光强度阈值范围内,以确定所述场景是否准确。
  18. 根据权利要求16所述的图像信号处理设备,其特征在于,所述图像信号的属性信息包括所述前景位置信息;
    所述图像信号处理器和所述运算处理器中的至少一个,具体用于:根据所述前景位置信息判断所述图像信号的前景位置是否在预设的距离阈值范围内,以确定所述场景是否准确。
  19. 根据权利要求15至18任一项所述的图像信号处理设备,其特征在于,所述图像信号处理器,还用于对所述图像传感器采集的传感器信号进行处理得到所述图像信号和所述图像信号的属性信息。
  20. 根据权利要求15至19任一项所述的图像信号处理设备，其特征在于，所述图像信号处理器和所述运算处理器中的至少一个，具体用于采用与所述场景对应的增强算法，对所述图像信号进行增强处理。
PCT/CN2018/104678 2017-10-13 2018-09-07 一种图像信号处理方法、装置及设备 WO2019072057A1 (zh)
