WO2024082863A1 - Image processing method and electronic device - Google Patents

Image processing method and electronic device

Info

Publication number
WO2024082863A1
Authority
WO
WIPO (PCT)
Prior art keywords
electronic device
film material
image
film
real time
Prior art date
Application number
PCT/CN2023/117623
Other languages
English (en)
French (fr)
Inventor
崔瀚涛
Original Assignee
Honor Device Co., Ltd. (荣耀终端有限公司)
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Honor Device Co., Ltd.
Publication of WO2024082863A1

Classifications

    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N5/00Details of television systems
    • H04N5/222Studio circuitry; Studio devices; Studio equipment
    • H04N5/262Studio circuits, e.g. for mixing, switching-over, change of character of image, other special effects ; Cameras specially adapted for the electronic generation of special effects
    • H04N5/2621Cameras specially adapted for the electronic generation of special effects during image pickup, e.g. digital cameras, camcorders, video cameras having integrated special effects capability

Definitions

  • the present application relates to the field of terminals, and in particular to an image processing method and an electronic device.
  • the present application provides an image processing method and an electronic device. After the electronic device processes the real-time captured image based on a selected film material, the image presents the corresponding film display effect, making the image richer and more interesting in terms of visual presentation.
  • the present application provides an image processing method, which is applied to an electronic device, including: the electronic device acquires one or more film materials, the one or more film materials include a first film material, and the one or more film materials are used to process the image collected by the camera on the electronic device in real time, so that the processed image presents a corresponding film display effect.
  • the electronic device collects images in real time through the camera and displays a shooting interface, which includes a preview window.
  • the electronic device processes the image collected in real time based on the first film material to generate a preview stream.
  • the electronic device displays the screen content of the preview stream in the preview window, and the screen content of the preview stream presents a first film display effect. In this way, when the electronic device processes the image collected in real time based on the first film material, the screen content of the preview stream can present a richer and more interesting visual effect.
  • the electronic device in response to a selection operation on the first film material, processes the image collected in real time based on the first film material and generates a preview stream, which specifically includes: in response to a selection operation on the first film material, the electronic device determines whether the image collected in real time includes a portrait. When the electronic device determines that the image collected in real time includes a portrait, the electronic device divides the skin area and the non-skin area in the image collected in real time. The electronic device processes the non-skin area based on the first film material, and processes the skin area based on the second film material generated by the first film material. The electronic device generates a preview stream based on the real-time collected image after being processed by the first film material and the second film material. In this way, the electronic device can process the skin area and the non-skin area of the image based on different film materials respectively, so that the image processing effect can be more refined.
  • the electronic device processes the non-skin area based on the first film material, and processes the skin area based on the second film material generated by the first film material, specifically including: the electronic device performs denoising and softening processing on the first film material to generate the second film material.
  • the electronic device processes the non-skin area based on the first film material, and processes the skin area based on the second film material. In this way, the skin area processed based on the second film material can be made more delicate than the non-skin area processed based on the first film material.
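The denoising and softening step above can be sketched as follows. This is an illustrative guess: a 1-D box blur stands in for the denoising/softening filter, which this excerpt does not specify, and the material is reduced to a strip of grain values for brevity.

```python
# Illustrative sketch: derive the second (softer) film material from the
# first. A box blur over a 1-D strip of grain values stands in for the
# actual denoising/softening filter, which is not specified here.

def soften(material, radius=1):
    """Box-blur a strip of film-grain values to produce a subtler material."""
    n = len(material)
    out = []
    for i in range(n):
        window = material[max(0, i - radius):min(n, i + radius + 1)]
        out.append(sum(window) / len(window))
    return out

first_material = [0.4, 0.6, 0.5, 0.45]
second_material = soften(first_material)
print(second_material)  # smoother (lower spread) than first_material
```

The blurred material has a smaller value spread than the original, which is why skin processed with it looks more delicate than the non-skin areas.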
  • the electronic device when the electronic device determines that the image acquired in real time does not include a portrait, the electronic device processes the image acquired in real time based on the first film material, thereby improving the efficiency of image processing.
  • the picture content of the preview stream also presents a second film display effect.
  • the one or more film materials are obtained by shooting an 18% gray card with one or more film stocks of different sensitivities, and then developing and scanning the exposed film.
  • the average brightness of each of the one or more film materials is the first average value.
  • the duration of each film material is 15 seconds, and each second includes 30 frames of film images. In this way, the obtained film materials can improve the efficiency of image processing and save storage space of the electronic device.
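As a rough illustration of preparing such a material, the sketch below normalizes one scanned frame so its average brightness equals a target mean (the "first average value"). The shift-and-clamp scheme is an assumption; the excerpt only states that each material's average brightness equals that value.

```python
# Hypothetical sketch: normalize one scanned film-material frame so that its
# average brightness equals a target mean. Pixel values are floats in [0, 1].
# A 15 s material at 30 fps would contain 15 * 30 = 450 such frames.

def normalize_mean(frame, target=0.5):
    """Shift every pixel so the frame's mean equals `target`, clamping to [0, 1]."""
    mean = sum(frame) / len(frame)
    offset = target - mean
    return [min(1.0, max(0.0, v + offset)) for v in frame]

scan = [0.42, 0.47, 0.55, 0.52]   # one scanned grain frame (made-up values)
norm = normalize_mean(scan)
print(sum(norm) / len(norm))      # average is now (approximately) 0.5
```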
  • the first film material includes a first pixel
  • the real-time collected image includes a second pixel
  • the first pixel corresponds to the second pixel.
  • the electronic device processes the real-time collected image based on the first film material to generate a preview stream, which specifically includes: in response to the selection operation on the first film material, the electronic device processes the real-time collected image based on the superposition formula and the first film material.
  • the electronic device generates a preview stream based on the real-time collected image processed by the first film material. In this way, when the brightness value of the pixel on the film material is relatively bright, the bright area in the image processed based on the pixel can be less affected, and the dark area in the image processed based on the pixel can be more affected. When the brightness value of the pixel on the film material is relatively dark, the bright area in the image processed based on the pixel can be more affected, and the dark area in the image processed based on the pixel can be less affected.
  • the first average value is 0.5.
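The behaviour described above (bright grain pixels mainly lifting dark areas, dark grain pixels mainly pulling down bright areas, with 0.5 as the neutral value) matches an overlay-style blend with the film-material pixel as the base layer. The sketch below is an illustrative assumption; the excerpt does not disclose the actual superposition formula.

```python
# One plausible superposition formula consistent with the behaviour described
# above: an overlay-style blend with the film-material (grain) pixel as the
# base layer. Illustrative assumption only; values are normalized to [0, 1].

def superpose(image_px, grain_px):
    """Blend one film-material pixel onto one image pixel.

    grain_px == 0.5 (the first average value) leaves the pixel unchanged;
    bright grain pixels mainly lift dark areas, dark grain pixels mainly
    pull down bright areas.
    """
    if grain_px <= 0.5:
        # Multiply-like branch: the change grows with image brightness,
        # so bright areas are affected more by dark grain pixels.
        return 2.0 * grain_px * image_px
    # Screen-like branch: the change grows as the image gets darker,
    # so dark areas are affected more by bright grain pixels.
    return 1.0 - 2.0 * (1.0 - grain_px) * (1.0 - image_px)

print(superpose(0.3, 0.5), superpose(0.9, 0.5))  # neutral grain: unchanged
```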
  • the method further includes: the electronic device receives a selection operation for a first filter material, and the first filter material corresponds to a first LUT.
  • the electronic device processes the real-time collected image based on the first film material and generates a preview stream, which specifically includes: in response to the selection operation for the first film material, the electronic device maps the RGB value of the pixel point in the real-time collected image to a new RGB value based on the first LUT.
  • the electronic device processes the real-time collected image after being processed by the first LUT based on the first film material and generates a preview stream. In this way, the visual effect presented after the image processing can be made richer.
  • the first LUT is a 2D LUT, or a 3D LUT.
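As an illustration of the LUT mapping described above, the sketch below maps an RGB value through a tiny 3D LUT. Nearest-node lookup is used for brevity; production code typically does trilinear interpolation, and the 2-node identity LUT here is a made-up example, not a real filter's data.

```python
# Illustration of mapping a pixel's RGB value to a new RGB value via a 3D LUT.
# Nearest-node lookup for brevity; real implementations usually interpolate.
# The 2-node identity LUT below is a made-up example.

def apply_3d_lut(rgb, lut, size):
    """lut[r_idx][g_idx][b_idx] -> (r', g', b'); channel values are in [0, 1]."""
    r, g, b = (min(size - 1, round(c * (size - 1))) for c in rgb)
    return lut[r][g][b]

size = 2
identity = [[[(r, g, b) for b in (0.0, 1.0)]
             for g in (0.0, 1.0)]
            for r in (0.0, 1.0)]
print(apply_3d_lut((0.9, 0.1, 0.2), identity, size))  # -> (1.0, 0.0, 0.0)
```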
  • the first film display effect includes one or more of the following: a grain effect, a scratch effect, and a light leakage effect.
  • the shooting interface further includes a first option of the first film material.
  • the electronic device processes the image acquired in real time based on the first film material to generate a preview stream, specifically including: the electronic device receives a touch operation on the first option.
  • the electronic device processes the image acquired in real time based on the first film material to generate a preview stream.
  • an embodiment of the present application provides an electronic device, comprising: one or more processors, one or more memories, one or more cameras and a display screen.
  • the one or more memories are coupled to the one or more processors, and the one or more memories are used to store computer program codes, and the computer program codes include computer instructions.
  • the electronic device executes the method in any possible implementation of the first aspect. In this way, when the electronic device processes the real-time captured image based on the selected film material, the image can present a corresponding film display effect, making the image richer and more interesting in terms of visual presentation.
  • an embodiment of the present application provides a computer-readable storage medium, including computer instructions, which, when executed on an electronic device, causes the electronic device to execute a method in any possible implementation of the first aspect.
  • when the electronic device processes the image acquired in real time based on the selected film material, the image can present a corresponding film display effect, making the image richer and more interesting in terms of visual presentation.
  • an embodiment of the present application provides a chip or a chip system, including a processing circuit and an interface circuit, the interface circuit is used to receive code instructions and transmit them to the processing circuit, and the processing circuit is used to run the code instructions to execute the method in any possible implementation of the first aspect.
  • when the processing circuit processes the image collected in real time based on the selected film material, the image can present a corresponding film display effect, making the image richer and more interesting in terms of visual presentation.
  • an embodiment of the present application provides a computer program product, which, when executed on an electronic device, enables the electronic device to execute the method in any possible implementation of the first aspect.
  • when the electronic device processes the image acquired in real time based on the selected film material, the image can present a corresponding film display effect, making the image richer and more interesting in terms of visual presentation.
  • FIG1A is a schematic diagram of the hardware structure of an electronic device 100 provided in an embodiment of the present application.
  • FIG1B is a schematic diagram of multiple image processing algorithms in an ISP provided in an embodiment of the present application.
  • 2A-2G are schematic diagrams of a group of user interfaces provided in an embodiment of the present application.
  • FIG3A is a schematic diagram of a specific flow chart of an image processing method provided in an embodiment of the present application.
  • FIG3B is a schematic diagram of a process for obtaining film materials according to an embodiment of the present application.
  • FIG3C is a schematic diagram of an image processing flow provided in an embodiment of the present application.
  • FIG3D is a schematic diagram of another image processing flow provided in an embodiment of the present application.
  • FIG. 4 is a schematic diagram of a software module applied to an electronic device 100 provided in an embodiment of the present application.
  • an electronic device 100 provided in an embodiment of the present application is introduced.
  • FIG. 1A exemplarily shows a hardware structure diagram of an electronic device 100 provided in an embodiment of the present application.
  • the electronic device 100 may be a mobile phone, a tablet computer, a desktop computer, a laptop computer, a handheld computer, an ultra-mobile personal computer (UMPC), a netbook, a cellular phone, a personal digital assistant (PDA), an augmented reality (AR) device, a virtual reality (VR) device, an artificial intelligence (AI) device, a wearable device, an in-vehicle device, a smart home device and/or a smart city device.
  • the electronic device 100 may include a processor 101, a memory 102, a wireless communication module 103, a display screen 104, a microphone 105, an audio module 106, a speaker 107, and a camera 108, wherein:
  • the processor 101 may include one or more processor units, for example, the processor 101 may include an application processor (application processor, AP), a modem processor, a graphics processor (graphics processing unit, GPU), an image signal processor (image signal processor, ISP), a controller, a video codec, a digital signal processor (digital signal processor, DSP), a baseband processor, and/or a neural network processor (neural-network processing unit, NPU), etc.
  • different processing units can be independent devices or integrated in one or more processors.
  • the controller can generate an operation control signal according to the instruction operation code and the timing signal to complete the control of fetching and executing instructions.
  • the processor 101 may also be provided with a memory for storing instructions and data.
  • the memory in the processor 101 is a cache memory.
  • the memory may store instructions or data that the processor 101 has just used or uses cyclically. If the processor 101 needs to use the instructions or data again, they can be called directly from this memory. This avoids repeated accesses, reduces the waiting time of the processor 101, and thus improves the efficiency of the system.
  • the processor 101 may include one or more interfaces.
  • the interfaces may include an inter-integrated circuit (I2C) interface, an inter-integrated circuit sound (I2S) interface, a pulse code modulation (PCM) interface, a universal asynchronous receiver/transmitter (UART) interface, a mobile industry processor interface (MIPI), a general-purpose input/output (GPIO) interface, a subscriber identity module (SIM) interface, and/or a universal serial bus (USB) interface, etc.
  • the ISP may include a variety of image processing algorithms, such as a deBayer algorithm, an electronic image stabilization (EIS) algorithm, a color correction matrix (CCM) algorithm, and a gamma correction algorithm.
  • the DeBayer algorithm may be used to reconstruct a full-color image from an incomplete color sample output by an image sensor covered with a color filter array;
  • the EIS algorithm may be used to reduce the vibration of the electronic device 100 to improve the clarity of the image;
  • the CCM algorithm may be used to correct the color error caused by the color penetration between the color blocks at the filter plate;
  • the gamma correction algorithm may be used to edit the gamma curve of the image: by detecting the dark and light parts of the image signal and increasing the ratio between them, it improves image contrast, adds more dark tonal levels, and performs nonlinear tone editing on the image.
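A minimal sketch of the gamma correction step described above, assuming a simple power-law curve (the exact curve used by the ISP is not given here):

```python
# Minimal gamma-correction sketch, assuming a power-law curve.
# The actual ISP curve is an implementation detail not disclosed here.

def gamma_correct(v, gamma=1.0 / 2.2):
    """Map a normalized pixel value v in [0, 1] through a gamma curve.

    With gamma < 1, dark tones are lifted, increasing visible shadow
    detail and changing the image's contrast."""
    return v ** gamma

print(gamma_correct(0.5))  # mid-gray is lifted toward white
```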
  • the memory 102 is coupled to the processor 101 and is used to store various software programs and/or multiple sets of instructions.
  • the memory 102 may include a volatile memory, such as a random access memory (RAM); it may also include a non-volatile memory, such as a read-only memory (ROM), a flash memory, a hard disk drive (HDD) or a solid state drive (SSD); the memory 102 may also include a combination of the above types of memory.
  • the memory 102 may also store some program codes so that the processor 101 can call the program codes stored in the memory 102 to implement the implementation method of the embodiment of the present application in the electronic device 100.
  • the memory 102 may store an operating system, such as an embedded operating system such as uCOS, VxWorks, RTLinux, etc.
  • the wireless communication module 103 can provide wireless communication solutions including wireless local area networks (WLAN) (such as wireless fidelity (Wi-Fi) network), bluetooth (BT), global navigation satellite system (GNSS), frequency modulation (FM), near field communication (NFC), infrared (IR) and the like applied to the electronic device 100.
  • the wireless communication module 103 can be one or more devices integrating at least one communication processing module.
  • the wireless communication module 103 receives electromagnetic waves via an antenna, modulates and filters the electromagnetic wave signals, and sends the processed signals to the processor 101.
  • the wireless communication module 103 can also receive the signal to be sent from the processor 101, modulate and amplify it, and convert it into electromagnetic waves for radiation through the antenna.
  • the electronic device 100 may also transmit signals through a Bluetooth module (not shown in FIG. 1A) and a WLAN module (not shown in FIG. 1A) in the wireless communication module 103 to detect or scan a device near the electronic device 100, and establish a wireless communication connection with the nearby device to transmit data.
  • the Bluetooth module may provide a solution including one or more Bluetooth communications in classic Bluetooth (basic rate/enhanced data rate, BR/EDR) or Bluetooth low energy (bluetooth low energy, BLE), and the WLAN module may provide a solution including one or more WLAN communications in Wi-Fi direct, Wi-Fi LAN or Wi-Fi softAP.
  • the display screen 104 can be used to display images, videos, etc.
  • the display screen 104 may include a display panel.
  • the display panel may be a liquid crystal display (LCD), an organic light-emitting diode (OLED), an active-matrix organic light-emitting diode (AMOLED), a flexible light-emitting diode (FLED), a mini-LED, a micro-LED, a micro-OLED, a quantum dot light-emitting diode (QLED), etc.
  • the electronic device 100 may include 1 or N display screens 104, where N is a positive integer greater than 1.
  • Microphone 105, which may also be called a "mic" or "sound transmitter", may be used to collect sound signals in the surrounding environment of the electronic device, convert the sound signals into electrical signals, and then subject the electrical signals to a series of processes, such as analog-to-digital conversion, to obtain an audio signal in digital form that can be processed by the processor 101 of the electronic device.
  • the user may speak by approaching the microphone 105 with his or her mouth to input the sound signal into the microphone 105.
  • the electronic device 100 may be provided with at least one microphone 105.
  • the electronic device 100 may be provided with two microphones 105, which, in addition to collecting sound signals, may also implement noise reduction functions.
  • the electronic device 100 may also be provided with three, four or more microphones 105 to collect sound signals, reduce noise, identify the source of sound, implement directional recording functions, and the like.
  • the audio module 106 can be used to convert digital audio information into analog audio signal output, and can also be used to convert analog audio input into digital audio signal.
  • the audio module 106 can also be used to encode and decode audio signals.
  • the audio module 106 can also be arranged in the processor 101, or some functional modules of the audio module 106 can be arranged in the processor 101.
  • the speaker 107, which may also be called a "horn", is used to convert an audio electrical signal into a sound signal.
  • the electronic device 100 can listen to music or a hands-free phone call through the speaker 107 .
  • the camera 108 is used to capture still images or videos.
  • the object generates an optical image through the lens and projects it onto the photosensitive element.
  • the photosensitive element can be a charge coupled device (CCD) or a complementary metal-oxide-semiconductor (CMOS) phototransistor.
  • the photosensitive element converts the optical signal into an electrical signal, and then passes the electrical signal to the image signal processor (ISP) to convert it into a digital image signal.
  • the ISP outputs the digital image signal to the digital signal processor (DSP) for processing.
  • the DSP converts the digital image signal into an image signal in a standard RGB, YUV or other format.
  • the electronic device 100 may include 1 or N cameras 108, where N is a positive integer greater than 1.
  • the electronic device 100 may also include a sensor module (not shown in FIG. 1A) and/or a touch sensor (not shown in FIG. 1A).
  • the touch sensor may also be referred to as a "touch control device”.
  • the touch sensor may be provided on the display screen 104, and the touch sensor and the display screen 104 form a touch screen, also referred to as a "touch control screen”.
  • the touch sensor may be used to detect touch operations acting on or near it.
  • the sensor module may also include a gyroscope sensor (not shown in FIG. 1A), an acceleration sensor (not shown in FIG. 1A), and the like. Among them, the gyroscope sensor may be used to determine the motion posture of the electronic device 100.
  • the electronic device 100 may determine the angular velocity of the electronic device 100 around three axes (i.e., x, y, and z axes) through the gyroscope sensor.
  • the acceleration sensor may be used to detect the magnitude of the acceleration of the electronic device 100 in each direction (generally x, y, and z axes), and may also detect the magnitude and direction of gravity when the electronic device 100 is stationary.
  • the electronic device 100 may further include a mobile communication module (not shown in FIG. 1A ).
  • the mobile communication module may provide a solution for wireless communications including 2G/3G/4G/5G etc. applied to the electronic device 100 .
  • the structure illustrated in the embodiment of the present application does not constitute a specific limitation on the electronic device 100.
  • the electronic device 100 may also include more or fewer components than shown in the figure, or combine some components, or split some components, or arrange the components differently.
  • the components shown in the figure may be implemented in hardware, software, or a combination of software and hardware.
  • the embodiment of the present application provides an image processing method applied to the above-mentioned electronic device 100.
  • the electronic device 100 may acquire one or more film materials, and the one or more film materials may include a first film material.
  • the one or more film materials may be used to process the image collected by the electronic device 100 in real time, so that the processed image can display the corresponding film display effect.
  • the image processed by the first film material may display the first film display effect
  • the image processed by the second film material may display the second film display effect.
  • the film display effect may include a grain effect, and/or a scratch effect, and/or a light leakage effect.
  • the electronic device 100 can process the image collected in real time according to the first film material. Specifically, the electronic device 100 can obtain one or more film materials, the one or more film materials include the first film material, and the one or more film materials are used to process the image collected in real time by the camera on the electronic device 100, so that the processed image presents a corresponding film display effect.
  • the electronic device 100 can collect images in real time through the camera and display a shooting interface, which includes a preview window.
  • the electronic device 100 can process the image collected in real time based on the first film material to generate a preview stream.
  • the electronic device 100 can display the screen content of the preview stream in the preview window, and the screen content of the preview stream can present a first film display effect.
  • the electronic device 100 may receive and respond to a camera startup operation, and collect images in real time through the camera on the electronic device 100.
  • the collected image here may refer to the imaging of the object captured by the camera after startup.
  • the electronic device 100 may process the image collected in real time. Specifically, the electronic device 100 may determine whether the image collected in real time includes a first area. If the first area is included, the electronic device 100 may generate a second film material based on the first film material. Then, the electronic device 100 processes the first area in the image based on the second film material, and processes the non-first area in the image based on the first film material.
  • the following description takes the first area being the skin area of a portrait as an example. It should be noted that in actual implementations, the first area may also be a sky area, a vegetation area, etc. in the image.
  • the process of the image processing method is described by taking the first area as the skin area of the portrait as an example.
  • the description of the electronic device 100 acquiring one or more film materials can refer to the above description, which will not be repeated here.
  • the electronic device 100 can receive and respond to the camera startup operation, and collect images in real time through the camera on the electronic device 100.
  • the electronic device 100 can process the image collected in real time. Specifically, the electronic device 100 can determine whether the image collected in real time includes a portrait. If a portrait is included, the electronic device 100 determines the skin area and the non-skin area in the image.
  • the electronic device 100 can generate a second film material based on the first film material.
  • the electronic device 100 can process the skin area in the image based on the second film material, and process the non-skin area in the image based on the first film material; if a portrait is not included, the electronic device 100 can process the image based on the first film material.
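The branch logic above can be sketched as follows. The detection, segmentation, blending, and softening helpers are hypothetical placeholders passed in as parameters, not APIs from the patent; the frame is flattened to a list of pixel values for brevity.

```python
# High-level sketch of the processing branch described above. All helper
# callables are hypothetical stand-ins supplied by the caller.

def process_frame(frame, first_material, detect_portrait, segment_skin,
                  blend, soften):
    """Apply the first material everywhere, except that skin pixels (when a
    portrait is present) are processed with the softened second material."""
    if detect_portrait(frame):
        skin_mask = segment_skin(frame)
        second_material = soften(first_material)
        return [
            blend(px, second_material[i] if skin_mask[i] else first_material[i])
            for i, px in enumerate(frame)
        ]
    # No portrait: process the whole frame with the first material.
    return [blend(px, first_material[i]) for i, px in enumerate(frame)]

# Toy usage with trivial stand-ins:
frame = [0.2, 0.8, 0.5]
grain = [0.4, 0.6, 0.5]
out = process_frame(
    frame, grain,
    detect_portrait=lambda f: True,
    segment_skin=lambda f: [False, True, False],
    blend=lambda p, g: (p + g) / 2,   # placeholder blend
    soften=lambda m: m,               # identity stand-in
)
print(out)
```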
  • the electronic device 100 processes the acquired image according to the film material, so that the image can present a richer visual effect.
  • the electronic device 100 may display a desktop 200.
  • the desktop 200 may display one or more application icons.
  • the one or more application icons may include a weather application icon, a stock application icon, a calculator application icon, a settings application icon, an email application icon, a video application icon, a calendar application icon, and a gallery application icon, etc.
  • the desktop 200 may also display a status bar, a page indicator, and a tray icon area.
  • the status bar may include one or more signal strength indicators of a mobile communication signal (also referred to as a cellular signal), a signal strength indicator of a wireless fidelity (Wi-Fi) signal, a battery status indicator, a time indicator, and the like.
  • the page indicator may be used to indicate the positional relationship between the currently displayed page and other pages.
  • the tray icon area includes a plurality of tray icons (e.g., a dialing application icon, a message application icon, a contact application icon, and a camera application icon 201, etc.), and the tray icons remain displayed when pages are switched.
  • the above page may also include a plurality of application icons and a page indicator, and the page indicator may not be part of the page but exist independently.
  • the above tray icon is also optional, and the embodiment of the present application does not limit this.
  • the electronic device 100 may receive a touch operation (also referred to as a camera start-up operation, for example, a click) on the camera application icon 201.
  • the electronic device 100 may start the camera to collect images in real time and display a shooting interface.
  • the shooting interface 210 and the shooting interface 220 in the subsequent diagrams may both be referred to as shooting interfaces.
  • the camera start-up operation may also be a voice command or a gesture operation, that is, the electronic device 100 may also receive and respond to the user's voice command or gesture operation to start the camera to collect images in real time and display a shooting interface.
  • the electronic device 100 may display a shooting interface 210.
  • the shooting interface 210 may include a shooting control 211, one or more shooting mode controls (e.g., a night scene mode control 212A, a video mode control 212B, a movie mode control 212C, a professional mode control 212D, and a more mode control 212E), a preview window 213, etc.
  • the movie mode control 212C has been selected, that is, the current shooting mode is the movie mode.
  • the electronic device 100 may display one or more filter material options (e.g., a "Azure Symphony" option, a "City of Joy” option, etc.) on the shooting interface 210.
  • LUT can be used to adjust the RGB values in the image, that is, to map a set of RGB values corresponding to a certain pixel in the image to another set of RGB values.
  • LUT can be divided into 1D LUT for adjusting image brightness, 2D LUT for adjusting image contrast, and 3D LUT for adjusting the overall color of the image.
  • the embodiment of the present application takes 3D LUT as an example for explanation. Based on the shooting environment suitable for each filter material, different filter materials can be associated with different film materials.
  • filter materials suitable for shooting environments under natural light can be associated with low-sensitivity film materials and/or medium-sensitivity film materials; filter materials suitable for shooting environments under night scenes and artificial lights can be associated with medium-sensitivity film materials and/or high-sensitivity film materials.
  • the “City of Joy” filter material is suitable for shooting under natural light, so it can be associated with low-sensitivity film material and/or medium-sensitivity film material, such as 50D and 250D;
  • the “Azure Symphony” filter material is suitable for shooting in night scenes, so it can be associated with medium-sensitivity film materials and/or high-sensitivity film materials, such as 250D and 500T.
  • the relevant description of the film material will be described in detail in the subsequent embodiments, and will not be repeated here.
  • the electronic device 100 has selected the “City of Joy” filter material to process the image collected in real time.
  • the electronic device 100 may receive a touch operation (e.g., click) on option 214 (also referred to as the first option).
  • Option 214 is associated with the "City of Joy” filter material, and is an option corresponding to the 250D specification film material (also referred to as the first film material).
  • the "City of Joy” filter material may be referred to as the first filter material
  • the LUT corresponding to the "City of Joy” filter material may be referred to as the first LUT
  • the first LUT may be a 2D LUT or a 3D LUT.
  • the electronic device 100 receives a touch operation (e.g., click) on the option 214 as shown in the above figure, the option 214 may be highlighted, indicating that the film material has been selected to process the image.
  • the picture in the preview window 213 may display the film display effect corresponding to the film material, such as a grain effect, and/or a scratch effect, and/or a light leakage effect.
  • the options corresponding to the film material may not be associated with the options corresponding to the filter material.
  • the electronic device 100 may receive a touch operation (eg, a click) on the shooting control 211 .
  • the electronic device 100 may display a shooting interface 220.
  • the shooting interface 220 may include a preview window 213 and a stop shooting control 221.
  • the electronic device 100 may process and encode the image collected in real time based on the "City of Joy” filter material and the 250D film material.
  • a prompt message may be displayed in the preview window 213, such as a text message "City of Joy-250D", to prompt the user that the electronic device 100 processes the image based on the "City of Joy" filter material and the 250D film material.
  • the picture in the preview window 213 may display the film display effect corresponding to the film material, such as a grain effect, and/or a scratch effect, and/or a light leakage effect.
  • the electronic device 100 can also receive and respond to the user's voice commands or gesture operations, display the shooting interface 220, and process and encode the real-time captured images based on the selected film material and/or filter material.
  • the electronic device 100 may receive a touch operation (e.g., a click) on the stop shooting control 221. In response to the touch operation, the electronic device 100 may stop video shooting, and then obtain and save the first video file.
  • the video screen in the first video file may present a film display effect corresponding to the first film material, such as a grain effect, and/or a scratch effect, and/or a light leakage effect.
  • the electronic device 100 can also receive and respond to the user's voice command or gesture operation to stop video shooting, and then obtain and save the first video file.
  • Figure 3A exemplarily shows a specific flow chart of an image processing method provided by an embodiment of the present application.
  • the specific flow of the method may include:
  • the electronic device 100 obtains one or more film materials.
  • the one or more film materials include a first film material.
  • the specifications of the film may include the sensitivity of the film, the type of film, and the like.
  • the sensitivity of the film may refer to: the speed at which the photosensitive emulsion in the film (e.g., silver chloride, silver bromide, or silver iodide) decomposes and forms an image when exposed to light.
  • the numerical value of the film sensitivity may be 50, 100, 150, and the like. The larger the value, the higher the sensitivity of the film, and the coarser the grain of the developed image.
  • a film sensitivity value below 100 may be referred to as low sensitivity, between 200 and 800 may be referred to as medium sensitivity, and above 800 may be referred to as high sensitivity.
  • the type of film may refer to: whether the film is a daylight (D) film or a tungsten (T) film.
  • the standard color temperature of daylight film is 5400 Kelvin (K)-5600 K, and the standard color temperature of tungsten film is 3200 K-3400 K. Therefore, the specifications of the film can be different specifications such as 50D, 150D, 250D or 250T, and this application does not limit this.
  • developers can use films of different specifications to shoot film images of 18% gray cards, and then expose and develop the film images. Developers can use a film scanner to scan the developed film images to obtain corresponding film materials.
  • the brightness average value of the obtained film materials is a specified value A2 (also referred to as a first average value, for example, 0.5, 0.6, etc.).
  • the developer can obtain one or more film materials based on the above processing.
  • the one or more film materials may also include the imaging of non-actual objects on the film (which may be called artifacts), such as particles, scratches, light leakage, etc. on the screen when the film is imaged.
  • Each film material may include film images with a duration of s seconds (for example, 15 seconds, 20 seconds, etc.) at t frames per second (for example, 20 frames, 30 frames, etc.). That is to say, each film material includes s×t frames of film images.
  • the specified value A2 can be taken as 0.5, which is conducive to improving the calculation efficiency of subsequent image processing; the duration s seconds is taken as 15 seconds, and the t frames per second is taken as 30 frames, so that the visual effect of the processed multiple images is more natural when playing, and the repetitiveness of the film display effect is not easy to be detected, and the storage resources of the electronic device 100 can also be saved.
  • the electronic device 100 can obtain the above-mentioned one or more film materials.
  • the one or more film materials may include a first film material.
  • the electronic device 100 can use bit depth compression to store film materials, for example, using 32 binary bits, 16 binary bits, 8 binary bits, 4 binary bits or 2 binary bits to compress and store the information of each pixel in the film material.
  • the embodiment of the present application can use low bit depth (for example, using 8 binary bits, 4 binary bits or 2 binary bits) to compress and store the information of each pixel in the film material, so that the storage space of the electronic device 100 can be saved, and the efficiency of the subsequent electronic device 100 processing the video based on the film material can be improved.
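Low-bit-depth storage of this kind can be sketched as simple uniform quantization (an assumption made for illustration; the application does not specify the exact compression scheme):

```python
import numpy as np

def quantize_bit_depth(channel, bits):
    """Quantize an 8-bit channel down to `bits` bits and expand back to 8-bit.

    With 2 bits only four distinct levels survive, which is the trade-off
    between storage space and precision described above.
    """
    levels = (1 << bits) - 1
    q = np.round(channel.astype(np.float64) / 255.0 * levels)  # compress
    return np.round(q / levels * 255.0).astype(np.uint8)       # expand

gray = np.array([0, 64, 128, 255], dtype=np.uint8)
two_bit = quantize_bit_depth(gray, 2)  # -> [0, 85, 170, 255]
```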
  • the electronic device 100 receives a camera start operation.
  • the camera activation operation may be a touch operation (e.g., click) on the camera application icon 201 as shown in FIG2A .
  • voice commands, gesture operations, etc. that can activate the camera may also be referred to as camera activation operations.
  • the electronic device 100 collects images in real time through the camera.
  • the electronic device 100 can capture images in real time through a front camera or a rear camera configured on the electronic device 100.
  • the electronic device 100 can obtain images sent by other electronic devices.
  • the electronic device 100 can obtain images sent by a cloud server. In other words, this application does not limit the way of obtaining images.
  • the electronic device 100 can process the images obtained by the electronic device 100 from other electronic devices and/or cloud servers based on the first film material according to subsequent processes, so that the images present a corresponding film display effect.
  • the electronic device 100 determines whether the image acquired in real time (the image acquired in real time may also be referred to as an image for short) includes a portrait.
  • the selection operation for the first film material may be the aforementioned touch operation (eg, click) on the option 214 as shown in FIG. 2D .
  • the electronic device 100 can determine whether the image collected in real time includes a portrait by using a re-identification (ReID) algorithm, an AdaBoost pedestrian detection algorithm based on dual-threshold motion region segmentation, or the like.
  • the electronic device 100 determines that the image includes a portrait
  • the electronic device 100 divides the image into a skin area and a non-skin area.
  • the electronic device 100 can divide the skin area and the non-skin area in the image by using algorithms such as a skin segmentation algorithm based on generalized Gaussian distribution and a skin segmentation method based on RealAdaBoost algorithm.
  • the present application does not limit how to divide the skin area and the non-skin area in the image.
  • the electronic device 100 processes the non-skin area in the image based on the first film material, and processes the skin area in the image based on the second film material generated from the first film material.
  • the electronic device 100 can perform denoising and softening processing on the first film material to generate a second film material.
  • the electronic device 100 can perform denoising and softening processing on the first film material by using a median filter method, a gradient model algorithm, or the like.
  • the electronic device 100 can perform softening processing on the first film material by taking an area of a specified size (e.g., 3×3) centered on each pixel, and using the RGB average value of the multiple pixels in that area as the RGB value of the central pixel.
  • the electronic device 100 can also perform denoising and softening processing on the first film material by other methods.
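The neighborhood-averaging step described above amounts to a box blur; a minimal sketch follows (grayscale input, 3×3 window per the example above; edge handling by clamping is an assumption):

```python
import numpy as np

def soften(img):
    """Soften a grayscale film frame by replacing each pixel with the mean
    of its 3x3 neighborhood (edges handled by clamping/replication)."""
    padded = np.pad(img.astype(np.float64), 1, mode="edge")
    out = np.zeros(img.shape, dtype=np.float64)
    for dy in (0, 1, 2):
        for dx in (0, 1, 2):
            out += padded[dy:dy + img.shape[0], dx:dx + img.shape[1]]
    return out / 9.0

frame = np.zeros((5, 5))
frame[2, 2] = 9.0        # a single bright "grain"
smooth = soften(frame)   # the grain's energy spreads evenly over 3x3 pixels
```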
  • An arbitrary frame of image collected in real time is taken as an example to illustrate the implementation method of this step.
  • the object can generate an optical image through the lens of the camera and project it onto the photosensitive element.
  • the photosensitive element converts the optical signal into an electrical signal, and then transmits the electrical signal to the ISP to convert it into a digital image signal to generate an image.
  • the electronic device 100 selects the filter material and the first film material to process the image collected in real time, the electronic device 100 processes the image based on the filter material, and then processes the image based on the first film material and the second film material. If the electronic device 100 does not select the filter material, the electronic device 100 directly processes the image collected in real time based on the first film material and the second film material.
  • Fig. 3C takes the electronic device 100 selecting the filter material and the first film material to process the real-time collected image as an example.
  • the object can generate an optical image through the lens of the camera and project it onto the photosensitive element.
  • the photosensitive element converts the light signal into an electrical signal, and then transmits the electrical signal to the ISP to convert it into a digital image signal to generate an image.
  • the electronic device 100 can map the RGB values of the pixels in the real-time collected image to new RGB values based on the aforementioned selected filter material (for example, the "Blue Symphony” filter material and the “City of Joy” filter material shown in the above figure), and adjust the ratio between the R value, G value and B value of each pixel in the M-th frame image according to the corresponding 3D LUT and SMPTE ST 2084 function.
  • M can be used to represent any value
  • the M-th frame image is any frame of real-time collected image.
  • the electronic device 100 can process the Mth frame image based on the film material (for example, the first film material, the second film material, etc.).
  • the R value, G value and B value of the corresponding pixel in the Mth frame image can be reduced while the ratio between the R value, G value and B value remains unchanged, that is, the brightness of the pixel becomes darker;
  • the R value, G value and B value of the corresponding pixel in the Mth frame image can be increased while the ratio between the R value, G value and B value remains unchanged, that is, the brightness of the pixel becomes brighter, so that the Mth frame image can present the corresponding grain effect, and/or scratch effect, and/or light leakage effect of the film material.
  • the electronic device 100 can superimpose the first film material on the non-skin area in the M-th frame image and the second film material on the skin area in the M-th frame image based on the superposition algorithm formula.
  • the M-th frame image presents the first film display effect in the non-skin area and the second film display effect in the skin area.
  • the superposition algorithm formula of the film material (e.g., the first film material, the second film material, etc.) can be as follows: when B ≤ 0.5, f(i1) = A × B ÷ 0.5; when B > 0.5, f(i1) = 1 − (1 − A) × (1 − B) ÷ 0.5.
  • A in the formula represents the RGB value of the pixel i1 to be processed in the Mth frame image (in the formula, the RGB value of the pixel i1 is normalized to between 0 and 1); B represents the brightness value of the pixel i2 in the film material, that is, the pixel in the Mth frame film image corresponding to the pixel i1; f(i1) represents the RGB value of the pixel i1 in the Mth frame image after the film material is superimposed.
  • the 0.5 in the formula is the specified value B1, which can be the same as the brightness average value of the aforementioned film material, that is, the specified value A2 (that is, the first average value).
  • the processing process can be: the first film material includes a first pixel, the real-time collected image includes a second pixel, and the first pixel corresponds to the second pixel.
  • the area with high brightness value in the image can be less affected by the artifacts with high brightness value in the film material (for example, particles, scratches, etc.), while the area with low brightness value in the image is more affected by the artifacts with high brightness value in the film material (for example, particles, scratches, etc.); the area with high brightness value in the image is more affected by the artifacts with low brightness value in the film material (for example, stains, etc.), while the area with low brightness value in the image is less affected by the artifacts with low brightness value in the film material (for example, stains, etc.).
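The superposition formula and the brightness behavior just described can be written directly in code (a sketch; names are illustrative, and `avg` stands for the first average value, taken as 0.5 per the application):

```python
def superimpose(a, b, avg=0.5):
    """Blend one image pixel with the corresponding film-material pixel.

    a:   normalized RGB value (0..1) of the image pixel to be processed
    b:   brightness value (0..1) of the corresponding film-material pixel
    avg: brightness average of the film material (the first average value)
    Where the material sits exactly at its average brightness, the image
    pixel passes through unchanged.
    """
    if b <= avg:
        return a * b / avg                    # darken toward the material
    return 1.0 - (1.0 - a) * (1.0 - b) / avg  # brighten toward the material
```

With `avg = 0.5` this is the classic overlay blend: bright artifacts in the material affect dark image areas most, and dark artifacts affect bright image areas most, matching the description above.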
  • the electronic device 100 can fuse the skin area and the non-skin area in the Mth frame image, and feather the adjacent edges of the skin area and the non-skin area so that the RGB values of the pixels at the adjacent edges of the skin area and the non-skin area can change smoothly.
  • the electronic device 100 determines that the image does not include a portrait
  • the electronic device 100 processes the image based on the first film material.
  • the object can generate an optical image through the lens of the camera and project it onto the photosensitive element.
  • the photosensitive element converts the optical signal into an electrical signal, and then transmits the electrical signal to the ISP to convert it into a digital image signal to generate an image.
  • the electronic device 100 selects the filter material and the first film material to process the image collected in real time, the electronic device 100 processes the image based on the filter material, and then processes the image based on the first film material. If the electronic device 100 does not select the filter material, the electronic device 100 directly processes the image collected in real time based on the first film material.
  • FIG3D takes the case where the electronic device 100 selects the filter material and the first film material to process the real-time collected image as an example.
  • the object can generate an optical image through the lens of the camera and project it onto the photosensitive element.
  • the photosensitive element converts the light signal into an electrical signal, and then transmits the electrical signal to the ISP to convert it into a digital image signal to generate an image.
  • the electronic device 100 can adjust the ratio between the R value, G value, and B value of each pixel in the M-th frame image based on the aforementioned selected filter material (for example, the aforementioned illustrated "Blue Symphony” filter material and “City of Joy” filter material), according to the corresponding 3D LUT and SMPTE ST 2084 function.
  • M can be used to represent any value
  • the M-th frame image is any frame of real-time collected image.
  • the electronic device 100 can overlay the first film material onto the Mth frame image based on the overlay algorithm formula, and the Mth frame image only presents the first film display effect.
  • the overlay method can refer to the description in S306 above, which will not be repeated here.
  • the electronic device 100 generates a preview stream based on the images collected in real time and processed by S304-S307.
  • the multiple frames of images collected by the electronic device 100 in real time in chronological order can be referred to as an image stream.
  • a preview stream can be obtained, and the screen content in the preview stream can be displayed in the preview window of the electronic device 100 (for example, the aforementioned preview window 213).
  • the screen content in the preview stream may include a first film display effect and a second film display effect; when the image stream does not include an image of a portrait, the screen content in the preview stream may only present the first film display effect.
  • the electronic device 100 generates a video stream based on the images collected in real time and processed by S304-S307, and performs corresponding processing on it.
  • the electronic device 100 can copy the image stream collected in real time into two image streams, namely image stream 1 and image stream 2.
  • a preview stream can be obtained.
  • the picture content in the preview stream can be displayed in the preview window of the electronic device 100 (for example, the aforementioned preview window 213) during the video shooting.
  • a video stream can be obtained.
  • the electronic device 100 can calculate the corresponding dynamic metadata for the above video stream through SMPTE ST 2094, and then encode the video stream through the encoder based on the dynamic metadata.
  • the electronic device 100 can also process the video stream in other ways.
  • the end shooting operation may be a touch operation (eg, click) on the stop shooting control 221 as shown in FIG. 2F .
  • the electronic device 100 can obtain and save a first video file in a specified format.
  • the electronic device 100 can obtain and save a first video file in HDR10+ format based on the aforementioned encoded video stream.
  • the electronic device 100 can also obtain and save a first video file in other formats (for example, HDR10, etc.), and this application does not limit this.
  • the first film material can be applied cyclically to process the images captured in real time by the electronic device 100.
  • the electronic device 100 can process the 31st frame image captured by the electronic device 100 according to the 1st frame image of the first film material.
  • the electronic device 100 can process the 32nd frame image captured by the electronic device 100 according to the 2nd frame image of the first film material, and so on.
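The cyclic mapping in this example (captured frame 31 processed with material frame 1, and so on) is plain modular indexing; a sketch assuming, per the example, a 30-frame material:

```python
def film_frame_index(image_frame_no, film_frame_count):
    """Return which film-material frame (1-based) is used to process a
    given captured image frame (1-based), cycling through the material."""
    return (image_frame_no - 1) % film_frame_count + 1

# With a 30-frame material, captured frame 31 wraps back to material
# frame 1, frame 32 to material frame 2, and so on.
```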
  • FIG. 4 exemplarily shows a schematic diagram of a software module applied to an electronic device 100 provided in an embodiment of the present application.
  • the electronic device 100 may include: a storage module 401, an image processing module 402 and an image encoding module 403, wherein:
  • the storage module 401 can be used to store one or more film materials and a first video file, wherein the one or more film materials include the first film material.
  • the storage module 401 can also store some program codes to implement the implementation method of the embodiment of the present application in the electronic device 100.
  • the specific implementation method can refer to the steps shown in the aforementioned flow chart, which will not be repeated here.
  • the image processing module 402 may be used to process the image captured in real time by the electronic device 100 based on the first film material and/or the filter material.
  • the specific implementation method may refer to the steps shown in the aforementioned flow chart, which will not be described in detail here.
  • the image encoding module 403 can be used to encode the image that has been processed by the image processing module 402 to obtain the first video file.
  • the specific implementation method can refer to the steps shown in the above flow chart, which will not be repeated here.
  • the term "when" may be interpreted to mean “if" or “after" or “in response to determining" or “in response to detecting", depending on the context.
  • the phrases “upon determining" or “if (the stated condition or event) is detected” may be interpreted to mean “if determining" or “in response to determining" or “upon detecting (the stated condition or event)” or “in response to detecting (the stated condition or event)", depending on the context.
  • the computer program product includes one or more computer instructions.
  • the computer can be a general-purpose computer, a special-purpose computer, a computer network, or other programmable device.
  • the computer instructions can be stored in a computer-readable storage medium, or transmitted from one computer-readable storage medium to another computer-readable storage medium.
  • the computer instructions can be transmitted from one website, computer, server, or data center to another by wired (e.g., coaxial cable, optical fiber, digital subscriber line) or wireless (e.g., infrared, radio, microwave) means.
  • the computer-readable storage medium can be any available medium that a computer can access, or a data storage device such as a server or data center that integrates one or more available media.
  • the available medium can be a magnetic medium (e.g., a floppy disk, a hard disk, or a tape), an optical medium (e.g., a DVD), or a semiconductor medium (e.g., a solid-state drive), etc.
  • the processes can be completed by computer programs to instruct related hardware, and the programs can be stored in computer-readable storage media.
  • the programs can include the processes of the above-mentioned method embodiments.
  • the aforementioned storage media include: ROM or random access memory RAM, magnetic disk or optical disk and other media that can store program codes.

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Studio Devices (AREA)
  • Image Processing (AREA)

Abstract

The present application discloses an image processing method and an electronic device, relating to the field of terminals. The method includes: the electronic device may obtain one or more film materials, which may include a first film material. When the electronic device receives and responds to a camera start-up operation, the electronic device 100 may collect images in real time through the camera. When the electronic device 100 receives and responds to a selection operation for the first film material, the electronic device 100 may process the images collected in real time according to the first film material, so that the images processed with the first film material present a first film display effect. The first film display effect may include: a grain effect, and/or a scratch effect, and/or a light leakage effect.

Description

Image processing method and electronic device
This application claims priority to the Chinese patent application No. 202211297222.5, entitled "Image processing method and electronic device", filed with the China National Intellectual Property Administration on October 21, 2022, the entire contents of which are incorporated herein by reference.
Technical Field
The present application relates to the field of terminals, and in particular to an image processing method and an electronic device.
Background
With the development of terminal technology, it has become a trend for users to shoot, produce, and share videos with electronic devices in daily life. A user may shoot videos with an electronic device, or download videos from the network to the electronic device. In daily life, for entertainment purposes, while an electronic device records a video, after capturing images through the camera it may further need to process the images so that the processed images present richer and more interesting visual effects. However, after current electronic devices process the images shot during video recording, the above images cannot present a visual grain effect.
Summary
The present application provides an image processing method and an electronic device, so that after the electronic device processes images collected in real time based on a selected film material, the images can present a corresponding film display effect, making the images richer and more interesting in terms of visual effect.
In a first aspect, the present application provides an image processing method applied to an electronic device, including: the electronic device obtains one or more film materials, the one or more film materials include a first film material, and the one or more film materials are used to process images collected in real time by a camera on the electronic device so that the processed images present a corresponding film display effect. In response to a camera start-up operation, the electronic device collects images in real time through the camera and displays a shooting interface, the shooting interface including a preview window. In response to a selection operation for the first film material, the electronic device processes the images collected in real time based on the first film material and generates a preview stream. The electronic device displays the picture content of the preview stream in the preview window, and the picture content of the preview stream presents a first film display effect. In this way, after the electronic device processes the images collected in real time based on the first film material, the picture content of the preview stream can present richer and more interesting visual effects.
In a possible implementation, in response to the selection operation for the first film material, the electronic device processes the images collected in real time based on the first film material and generates a preview stream, which specifically includes: in response to the selection operation for the first film material, the electronic device determines whether the image collected in real time includes a portrait. When the electronic device determines that the image collected in real time includes a portrait, the electronic device divides the image collected in real time into a skin area and a non-skin area. The electronic device processes the non-skin area based on the first film material, and processes the skin area based on a second film material generated from the first film material. The electronic device generates a preview stream based on the image collected in real time that has been processed with the first film material and the second film material. In this way, the electronic device can process the skin area and the non-skin area of the image based on different film materials, making the processing effect of the image more refined.
In a possible implementation, the electronic device processes the non-skin area based on the first film material and processes the skin area based on the second film material generated from the first film material, which specifically includes: the electronic device performs denoising and softening processing on the first film material to generate the second film material. The electronic device processes the non-skin area based on the first film material and processes the skin area based on the second film material. In this way, the skin area processed based on the second film material can be finer than the non-skin area processed based on the first film material.
In a possible implementation, when the electronic device determines that the image collected in real time does not include a portrait, the electronic device processes the image collected in real time based on the first film material. This can improve the efficiency of image processing.
In a possible implementation, the picture content of the preview stream also presents a second film display effect.
In a possible implementation, the one or more film materials are film materials obtained by shooting an 18% gray card with film of one or more different sensitivities, and then developing and scanning the film; the brightness average value of each of the one or more film materials is a first average value, the duration of each film material is 15 seconds, and each second includes 30 frames of film images. In this way, the obtained film materials can improve the efficiency of image processing and save the storage space of the electronic device.
In a possible implementation, the first film material includes a first pixel, the image collected in real time includes a second pixel, and the first pixel corresponds to the second pixel. In response to the selection operation for the first film material, the electronic device processes the image collected in real time based on a superposition formula and the first film material. When the brightness value of the first pixel is less than or equal to the first average value, the superposition formula is: new RGB value of the second pixel = brightness value of the first pixel × original RGB value of the second pixel ÷ first average value. When the brightness value of the first pixel in the first film material is greater than the first average value, the superposition formula is: new RGB value of the second pixel = 1 − (1 − original RGB value of the second pixel) × (1 − brightness value of the first pixel) ÷ first average value. The electronic device generates a preview stream based on the image collected in real time that has been processed with the first film material. In this way, when a pixel of the film material is relatively bright, bright areas of the image processed based on that pixel are less affected and dark areas are more affected; when a pixel of the film material is relatively dark, bright areas of the image processed based on that pixel are more affected and dark areas are less affected.
In a possible implementation, the first average value is 0.5.
In a possible implementation, before the electronic device, in response to the selection operation for the first film material, processes the images collected in real time based on the first film material and generates a preview stream, the method further includes: the electronic device receives a selection operation for a first filter material, the first filter material corresponding to a first LUT. In response to the selection operation for the first film material, the electronic device maps the RGB values of the pixels in the image collected in real time to new RGB values based on the first LUT. The electronic device then processes the image that has been processed with the first LUT based on the first film material, and generates a preview stream. In this way, the visual effects presented by the processed image can be richer.
In a possible implementation, the first LUT is a 2D LUT or a 3D LUT.
In a possible implementation, the first film display effect includes one or more of the following: a grain effect, a scratch effect, and a light leakage effect.
In a possible implementation, the shooting interface further includes a first option for the first film material. In response to the selection operation for the first film material, the electronic device processes the image collected in real time based on the first film material and generates a preview stream, which specifically includes: the electronic device receives a touch operation acting on the first option. In response to the touch operation, the electronic device processes the image collected in real time based on the first film material and generates a preview stream.
In a second aspect, an embodiment of the present application provides an electronic device, including: one or more processors, one or more memories, one or more cameras, and a display screen. The one or more memories are coupled to the one or more processors and are used to store computer program code, the computer program code including computer instructions; when the one or more processors execute the computer instructions, the electronic device is caused to perform the method in any possible implementation of the first aspect. In this way, after the electronic device processes images collected in real time based on the selected film material, the images can present a corresponding film display effect, making the images richer and more interesting in terms of visual effect.
In a third aspect, an embodiment of the present application provides a computer-readable storage medium including computer instructions; when the computer instructions run on an electronic device, the electronic device is caused to perform the method in any possible implementation of the first aspect. In this way, after the electronic device processes images collected in real time based on the selected film material, the images can present a corresponding film display effect, making the images richer and more interesting in terms of visual effect.
In a fourth aspect, an embodiment of the present application provides a chip or chip system, including a processing circuit and an interface circuit; the interface circuit is used to receive code instructions and transmit them to the processing circuit, and the processing circuit is used to run the code instructions to perform the method in any possible implementation of the first aspect. In this way, after the processing circuit processes images collected in real time based on the selected film material, the images can present a corresponding film display effect, making the images richer and more interesting in terms of visual effect.
In a fifth aspect, an embodiment of the present application provides a computer program product; when the computer program product runs on an electronic device, the electronic device is caused to perform the method in any possible implementation of the first aspect. In this way, after the electronic device processes images collected in real time based on the selected film material, the images can present a corresponding film display effect, making the images richer and more interesting in terms of visual effect.
Brief Description of the Drawings
FIG. 1A is a schematic diagram of the hardware structure of an electronic device 100 provided in an embodiment of the present application;
FIG. 1B is a schematic diagram of multiple image processing algorithms in an ISP provided in an embodiment of the present application;
FIG. 2A-FIG. 2G are schematic diagrams of a set of user interfaces provided in an embodiment of the present application;
FIG. 3A is a specific flow chart of an image processing method provided in an embodiment of the present application;
FIG. 3B is a schematic flow chart for obtaining film materials provided in an embodiment of the present application;
FIG. 3C is a schematic diagram of an image processing flow provided in an embodiment of the present application;
FIG. 3D is a schematic diagram of another image processing flow provided in an embodiment of the present application;
FIG. 4 is a schematic diagram of software modules applied to the electronic device 100 provided in an embodiment of the present application.
Detailed Description
The terms used in the following embodiments of the present application are only for the purpose of describing specific embodiments and are not intended to limit the present application. As used in the specification and the appended claims of the present application, the singular expressions "a", "an", "the", "the above", "said", and "this" are intended to also include plural expressions, unless the context clearly indicates otherwise. It should also be understood that the term "and/or" used in the present application refers to and includes any or all possible combinations of one or more of the listed items. In the embodiments of the present application, the terms "first" and "second" are used for descriptive purposes only and shall not be understood as indicating or implying relative importance or implicitly indicating the number of the technical features indicated; thus, a feature defined with "first" or "second" may explicitly or implicitly include one or more of that feature. In the description of the embodiments of the present application, unless otherwise specified, "multiple" means two or more.
First, an electronic device 100 provided in an embodiment of the present application is introduced.
Referring to FIG. 1A, FIG. 1A exemplarily shows a schematic diagram of the hardware structure of an electronic device 100 provided in an embodiment of the present application.
The electronic device 100 may be a mobile phone, a tablet computer, a desktop computer, a laptop computer, a handheld computer, a notebook computer, an ultra-mobile personal computer (UMPC), a netbook, a cellular phone, a personal digital assistant (PDA), an augmented reality (AR) device, a virtual reality (VR) device, an artificial intelligence (AI) device, a wearable device, a vehicle-mounted device, a smart home device, and/or a smart city device; the embodiment of the present application does not specially limit the specific type of the electronic device 100.
As shown in FIG. 1A, the electronic device 100 may include a processor 101, a memory 102, a wireless communication module 103, a display screen 104, a microphone 105, an audio module 106, a speaker 107, and a camera 108, wherein:
The processor 101 may include one or more processing units; for example, the processor 101 may include an application processor (AP), a modem processor, a graphics processing unit (GPU), an image signal processor (ISP), a controller, a video codec, a digital signal processor (DSP), a baseband processor, and/or a neural-network processing unit (NPU), etc. Different processing units may be independent components or may be integrated in one or more processors. The controller may generate operation control signals according to instruction opcodes and timing signals to complete the control of fetching and executing instructions.
A memory may also be provided in the processor 101 for storing instructions and data. In some embodiments, the memory in the processor 101 is a cache. This memory may store instructions or data that the processor 101 has just used or uses cyclically. If the processor 101 needs to use the instructions or data again, it can call them directly from this memory, which avoids repeated access, reduces the waiting time of the processor 101, and thus improves the efficiency of the system.
In some embodiments, the processor 101 may include one or more interfaces. The interfaces may include an inter-integrated circuit (I2C) interface, an inter-integrated circuit sound (I2S) interface, a pulse code modulation (PCM) interface, a universal asynchronous receiver/transmitter (UART) interface, a mobile industry processor interface (MIPI), a general-purpose input/output (GPIO) interface, a subscriber identity module (SIM) interface, and/or a USB interface, etc.
In the embodiment of the present application, as shown in FIG. 1B, the ISP may include multiple image processing algorithms, such as a DeBayer algorithm, an electronic image stabilization (EIS) algorithm, a color correction matrix (CCM) algorithm, and a Gamma correction algorithm. The DeBayer algorithm can be used to reconstruct a full-color image from the incomplete color samples output by an image sensor covered with a color filter array; the EIS algorithm can be used to reduce the vibration of the electronic device 100 and thus improve image clarity; the CCM algorithm can be used to correct color errors caused by color penetration between color blocks at the filter plate; the Gamma correction algorithm can be used to edit the gamma curve of the image, detect the dark and light parts in the image signal and increase the ratio between them, thereby improving image contrast and adding more dark tones, so as to perform nonlinear tone editing on the image.
The memory 102 is coupled to the processor 101 and is used to store various software programs and/or multiple sets of instructions. In a specific implementation, the memory 102 may include volatile memory, such as random access memory (RAM), and may also include non-volatile memory, such as ROM, flash memory, a hard disk drive (HDD), or a solid-state drive (SSD); the memory 102 may also include a combination of the above types of memory. The memory 102 may also store some program codes so that the processor 101 can call the program codes stored in the memory 102 to implement the implementation methods of the embodiments of the present application in the electronic device 100. The memory 102 may store an operating system, such as an embedded operating system like uCOS, VxWorks, or RTLinux.
The wireless communication module 103 may provide solutions for wireless communication applied to the electronic device 100, including wireless local area networks (WLAN) (such as wireless fidelity (Wi-Fi) networks), Bluetooth (BT), global navigation satellite system (GNSS), frequency modulation (FM), near field communication (NFC), infrared (IR), etc. The wireless communication module 103 may be one or more devices integrating at least one communication processing module. The wireless communication module 103 receives electromagnetic waves via an antenna, performs frequency modulation and filtering on the electromagnetic wave signals, and sends the processed signals to the processor 101. The wireless communication module 103 may also receive signals to be sent from the processor 101, perform frequency modulation and amplification on them, and convert them into electromagnetic waves via the antenna for radiation. In some embodiments, the electronic device 100 may also transmit signals through the Bluetooth module (not shown in FIG. 1A) or the WLAN module (not shown in FIG. 1A) in the wireless communication module 103 to detect or scan devices near the electronic device 100, and establish a wireless communication connection with a nearby device to transmit data. The Bluetooth module may provide one or more Bluetooth communication solutions including classic Bluetooth (basic rate/enhanced data rate, BR/EDR) or Bluetooth low energy (BLE), and the WLAN module may provide one or more WLAN communication solutions including Wi-Fi direct, Wi-Fi LAN, or Wi-Fi softAP.
The display screen 104 may be used to display images, videos, etc. The display screen 104 may include a display panel. The display panel may use a liquid crystal display (LCD), an organic light-emitting diode (OLED), an active-matrix organic light-emitting diode (AMOLED), a flexible light-emitting diode (FLED), a MiniLED, a MicroLED, a Micro-OLED, quantum dot light-emitting diodes (QLED), etc. In some embodiments, the electronic device 100 may include 1 or N display screens 104, where N is a positive integer greater than 1.
The microphone 105, also called a "mic" or "mouthpiece", may be used to collect sound signals in the environment around the electronic device, convert the sound signals into electrical signals, and then subject the electrical signals to a series of processing, such as analog-to-digital conversion, to obtain audio signals in digital form that the processor 101 of the electronic device can process. When making a call or sending a voice message, the user can speak with the mouth close to the microphone 105 to input a sound signal into the microphone 105. The electronic device 100 may be provided with at least one microphone 105. In other embodiments, the electronic device 100 may be provided with two microphones 105, which can implement a noise reduction function in addition to collecting sound signals. In still other embodiments, the electronic device 100 may be provided with three, four, or more microphones 105 to collect sound signals, reduce noise, identify sound sources, implement directional recording functions, etc.
The audio module 106 may be used to convert digital audio information into an analog audio signal output, and may also be used to convert an analog audio input into a digital audio signal. The audio module 106 may also be used to encode and decode audio signals. In some embodiments, the audio module 106 may be provided in the processor 101, or some functional modules of the audio module 106 may be provided in the processor 101.
The speaker 107, also called a "horn", is used to convert audio electrical signals into sound signals. The electronic device 100 can play music or take a hands-free call through the speaker 107.
The camera 108 is used to capture still images or videos. An object generates an optical image through the lens and projects it onto the photosensitive element. The photosensitive element may be a charge coupled device (CCD) or a complementary metal-oxide-semiconductor (CMOS) phototransistor. The photosensitive element converts the optical signal into an electrical signal, and then transmits the electrical signal to the image signal processor (ISP) to convert it into a digital image signal. The ISP outputs the digital image signal to the digital signal processor (DSP) for processing. The DSP converts the digital image signal into an image signal in a standard format such as RGB or YUV. In some embodiments, the electronic device 100 may include 1 or N cameras 108, where N is a positive integer greater than 1.
The electronic device 100 may also include a sensor module (not shown in FIG. 1A) and/or a touch sensor (not shown in FIG. 1A). The touch sensor may also be called a "touch device". The touch sensor may be provided on the display screen 104, and the touch sensor and the display screen 104 form a touch screen, also called a "touch-control screen". The touch sensor may be used to detect touch operations acting on or near it. Optionally, the sensor module may also include a gyroscope sensor (not shown in FIG. 1A), an acceleration sensor (not shown in FIG. 1A), and the like. The gyroscope sensor may be used to determine the motion posture of the electronic device 100; in some embodiments, the angular velocity of the electronic device 100 around three axes (i.e., the x, y, and z axes) may be determined through the gyroscope sensor. The acceleration sensor may be used to detect the magnitude of the acceleration of the electronic device 100 in various directions (generally the x, y, and z axes), and may also detect the magnitude and direction of gravity when the electronic device 100 is stationary.
The electronic device 100 may also include a mobile communication module (not shown in FIG. 1A). The mobile communication module may provide solutions for wireless communication including 2G/3G/4G/5G applied to the electronic device 100.
It can be understood that the structure illustrated in the embodiment of the present application does not constitute a specific limitation on the electronic device 100. In other embodiments of the present application, the electronic device 100 may include more or fewer components than shown, or combine some components, or split some components, or have a different component arrangement. The illustrated components may be implemented in hardware, software, or a combination of software and hardware.
本申请实施例提供了一种应用于上述电子设备100的图像处理方法。
具体的,在该图像处理方法中,电子设备100可以获取到一个或多个胶片素材,该一个或多个胶片素材中可以包括第一胶片素材。其中,该一个或多个胶片素材可以用于,对电子设备100实时采集到的图像进行处理,使得处理后的图像可以显示出对应的胶片显示效果,例如,经过第一胶片素材处理后的图像可以显示出第一胶片显示效果,经过第二胶片素材处理后的图像可以显示出第二胶片显示效果。其中,胶片显示效果可以包括有颗粒效果,和/或划痕效果,和/或漏光效果。
当电子设备100接收并响应于针对第一胶片素材的选择操作时,电子设备100可以根据第一胶片素材对实时采集到的图像进行处理。具体的,电子设备100可以获取到一个或多个胶片素材,该一个或多个胶片素材包括第一胶片素材,该一个或多个胶片素材用于对电子设备100上的摄像头实时采集到的图像进行处理,使得处理后的图像呈现出对应的胶片显示效果。响应于摄像头启动操作,电子设备100可以通过该摄像头实时采集图像,显示出拍摄界面,该拍摄界面包括预览窗口。然后,响应于针对该第一胶片素材的选择操作,电子设备100可以基于该第一胶片素材,对实时采集到的图像进行处理,生成预览流。接下来,电子设备100可以在预览窗口中,显示出该预览流的画面内容,该预览流的画面内容可以呈现出第一胶片显示效果。
在一些应用场景中,电子设备100可以接收并响应于摄像头启动操作,通过电子设备100上的摄像头实时采集图像。这里的采集图像可以指的是,摄像头在启动之后所捕获到的物体成像。当电子设备100接收并响应于针对第一胶片素材的选择操作时,电子设备100可以对实时采集到的图像进行处理。具体的,电子设备100可以确定出实时采集到的图像是否包括第一区域。若包括第一区域,则电子设备100可以基于第一胶片素材,生成第二胶片素材。然后,电子设备100基于第二胶片素材对该图像中的第一区域进行处理,基于第一胶片素材对该图像中的非第一区域进行处理。后续实施例中,以第一区域为人像的皮肤区域为例进行说明。需要说明的是,在实际的实现方式中,第一区域也可以是图像中的天空区域、植被区域等。
示例性的,以第一区域为人像的皮肤区域为例,说明本图像处理方法的流程。电子设备100获取一个或多个胶片素材的说明可以参考前述描述,在此不再赘述。电子设备100可以接收并响应于摄像头启动操作,通过电子设备100上的摄像头实时采集图像。当电子设备100接收并响应于针对第一胶片素材的选择操作时,电子设备100可以对实时采集到的图像进行处理。具体的,电子设备100可以确定出实时采集到的图像是否包括人像。若包括人像,则电子设备100确定出该图像中的皮肤区域,以及,非皮肤区域。电子设备100可以基于第一胶片素材生成第二胶片素材。然后,电子设备100可以基于第二胶片素材对该图像中的皮肤区域进行处理,基于第一胶片素材对该图像中的非皮肤区域进行处理;若不包括人像,则电子设备100可以基于第一胶片素材,对该图像进行处理。
从上述图像处理方法的流程中可以看出,电子设备100根据胶片素材对获取到的图像进行处理,可以使得图像呈现出更加丰富的视觉效果。
下面,结合图2A-图2G所示的用户界面,介绍本申请实施例提供的一种图像处理方法的应用场景。
如图2A所示,电子设备100可以显示出桌面200。桌面200可以显示有一个或多个应用图标。其中,该一个或多个应用图标可以包括天气应用图标、股票应用图标、计算器应用图标、设置应用图标、邮件应用图标、视频应用图标、日历应用图标和图库应用图标等等。可选的,桌面200还可以显示有状态栏、页面指示符和托盘图标区域。其中,状态栏可以包括移动通信信号(又可以称为蜂窝信号)的一个或多个信号强度指示符、无线保真(wireless fidelity,Wi-Fi)信号的信号强度指示符、电池状态指示符、时间指示符等等。页面指示符可以用于表明当前显示的页面与其他页面的位置关系。托盘图标区域包括有多个托盘图标(例如拨号应用图标、信息应用图标、联系人应用图标和相机应用图标201等等),托盘图标在页面切换时保持显示。上述页面也可以包括多个应用图标和页面指示符,页面指示符可以不是页面的一部分而单独存在,上述托盘图标也是可选的,本申请实施例对此不作限制。
电子设备100可以接收到作用于相机应用图标201上的触摸操作(也可以被称为摄像头启动操作,例如,点击)。响应于上述摄像头启动操作,电子设备100可以启动摄像头实时采集图像,显示出拍摄界面。后续图示中的拍摄界面210和拍摄界面220都可以被称为拍摄界面。在一些示例中,不限于上述作用在相机应用图标201上的触摸操作,摄像头启动操作也可以是语音指令或手势操作,也即是说,电子设备100也可以接收并响应于用户的语音指令或手势操作,启动摄像头实时采集图像,显示出拍摄界面。
如图2B所示,电子设备100可以显示出拍摄界面210。该拍摄界面210可以包括拍摄控件211、一个或多个拍摄模式控件(例如,夜景模式控件212A、录像模式控件212B、电影模式控件212C、专业模式控件212D和更多模式控件212E)、预览窗口213等。当前,电影模式控件212C已被选中,也即是说,当前的拍摄模式为电影模式。在电影模式下,电子设备100可以在拍摄界面210上显示出一个或多个滤镜素材选项(例如,“蔚蓝交响”选项、“欢乐之城”选项等等)。其中,不同的滤镜可以对应不同的查找表(look up table,LUT),例如“蔚蓝交响”滤镜可以对应LUT1,“欢乐之城”滤镜可以对应LUT2。其中,LUT可以用于调节图像中的RGB值,也即是将图像中某个像素点对应的一组RGB值映射为另一组RGB值,LUT可以分为用于调节图像亮度的1D LUT、用于调节图像对比度的2D LUT和用于调节图像整体色彩的3D LUT。本申请实施例以3D LUT为例进行说明。基于各滤镜素材适合的拍摄环境,不同的滤镜素材可以与不同的胶片素材相关联。例如,适合自然光线下拍摄环境的滤镜素材,可以关联低感光度的胶片素材和/或中感光度的胶片素材;适合夜景环境下、人造灯光下拍摄环境的滤镜素材,可以关联中感光度的胶片素材和/或高感光度的胶片素材。示例性的,如图2B所示,“欢乐之城”滤镜素材适合自然光线下的拍摄环境,因此可以关联低感光度的胶片素材和/或中感光度的胶片素材,例如50D和250D;如图2C所示,“蔚蓝交响”滤镜素材适合夜景环境下的拍摄环境,因此可以关联中感光度的胶片素材和/或高感光度的胶片素材,例如250D和500T。关于胶片素材的相关说明,后续实施例中将详细描述,在此先不赘述。
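上文提到的LUT将图像中某个像素点的一组RGB值映射为另一组RGB值,其查表过程可以用如下Python示意代码理解。这里假设3D LUT以N×N×N网格存储,并采用最近邻取样;网格尺寸、函数名与取样方式均为本示例的假设,实际实现通常会采用三线性插值等方式:

```python
import numpy as np

def apply_3d_lut(image, lut):
    """将图像中每个像素点的RGB值通过3D LUT映射为新RGB值(最近邻取样)。

    image: (H, W, 3) 的 float 数组,取值范围 [0, 1]
    lut:   (N, N, N, 3) 的 float 数组,lut[r, g, b] 给出映射后的RGB值
    """
    n = lut.shape[0]
    # 将 [0, 1] 内的 RGB 值量化为 LUT 的网格下标
    idx = np.clip((image * (n - 1)).round().astype(int), 0, n - 1)
    return lut[idx[..., 0], idx[..., 1], idx[..., 2]]

# 构造一个恒等 LUT:映射前后 RGB 值不变,用于验证查表逻辑
n = 17
grid = np.linspace(0.0, 1.0, n)
identity_lut = np.stack(np.meshgrid(grid, grid, grid, indexing="ij"), axis=-1)

img = np.array([[[0.0, 0.5, 1.0]]])
out = apply_3d_lut(img, identity_lut)
```

实际的滤镜LUT会在各网格点上存放经过调色的RGB值,从而整体改变图像的色彩风格。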
如图2D所示,电子设备100此时已选中“欢乐之城”滤镜素材对实时采集到的图像进行处理。电子设备100可以接收到作用于选项214(也可以被称为第一选项)上的触摸操作(例如,点击)。其中,该选项214是与“欢乐之城”滤镜素材相关联的,250D规格胶片素材(也可以被称为第一胶片素材)所对应的选项。其中,“欢乐之城”滤镜素材可以被称为第一滤镜素材,“欢乐之城”滤镜素材对应的LUT可以被称为第一LUT,第一LUT可以是2D LUT,也可以是3D LUT。
如图2E所示,当电子设备100如前述图示接收到作用于选项214上的触摸操作(例如,点击)之后,该选项214可以高亮显示,表示此时已选中该胶片素材对图像进行处理。此时,预览窗口213中的画面可以显示出该胶片素材对应的胶片显示效果,例如颗粒效果,和/或划痕效果,和/或漏光效果。
需要说明的是,胶片素材(例如,第一胶片素材等)可以不与滤镜素材相关联,因此,胶片素材对应的选项可以不与滤镜素材对应的选项相关联。
如图2F所示,电子设备100可以接收到作用于拍摄控件211上的触摸操作(例如,点击)。
如图2G所示,响应于作用在拍摄控件211上的触摸操作(例如,点击),电子设备100可以显示出拍摄界面220。该拍摄界面220可以包括预览窗口213和停止拍摄控件221。电子设备100可以对实时采集到的图像,基于“欢乐之城”滤镜素材和250D胶片素材,进行处理并编码。此时,预览窗口213中可以显示出提示信息,例如文本信息“欢乐之城-250D”,以用于提示用户电子设备100基于“欢乐之城”滤镜素材和250D胶片素材对图像进行处理。预览窗口213中的画面可以显示出该胶片素材对应的胶片显示效果,例如颗粒效果,和/或划痕效果,和/或漏光效果。
在一些示例中,不限于上述作用在拍摄控件211上的触摸操作,电子设备100也可以接收并响应于用户的语音指令或手势操作,显示出拍摄界面220,并基于所选择的胶片素材和/或滤镜素材对实时采集到的图像进行处理并编码。
电子设备100可以接收到作用于停止拍摄控件221上的触摸操作(例如,点击)。响应于该触摸操作,电子设备100可以停止视频拍摄,然后,获取并保存第一视频文件。其中,第一视频文件中的视频画面可以呈现出第一胶片素材对应的胶片显示效果,例如颗粒效果,和/或划痕效果,和/或漏光效果。
在一些示例中,不限于上述作用在停止拍摄控件221上的触摸操作,电子设备100也可以接收并响应于用户的语音指令或手势操作,停止视频拍摄,然后,获取并保存第一视频文件。
可以理解的是,前述用户界面仅仅用于示例性解释本申请实施例,并不构成对本申请的具体限制。
下面,介绍本申请实施例提供的一种图像处理方法的具体流程。
请参考图3A,图3A示例性示出了本申请实施例提供的一种图像处理方法的具体流程示意图。如图3A所示,该方法具体流程可以包括:
S301.电子设备100获取到一个或多个胶片素材。其中,一个或多个胶片素材包括第一胶片素材。
首先,如图3B所示,开发人员可以使用不同规格的胶片得到不同的胶片素材。其中,胶片的规格可以包括胶片的感光度、胶片的类型等等。胶片的感光度可以指的是:胶片中的感光乳剂(例如,氯化银、溴化银或碘化银等)见光分解成像的速度。胶片感光度的数值可以是50、100、150等等,其数值越大,胶片的感光度越高,胶片成像时画面的颗粒越粗大。在本申请实施例中,胶片感光度数值在100以下可以被称为低感光度,在200-800间可以被称为中感光度,在800以上可以被称为高感光度。不限于上述划分,高、中、低感光度的数值阈值也可以是其他划分方式,本申请对此不作限制。胶片的类型可以指的是:胶片是日光型(daylight,D)胶片还是灯光型(tungsten,T)胶片。日光型胶片的标准色温为5400开尔文(kelvins,K)-5600K,灯光型胶片的标准色温为3200K-3400K。因此,胶片的规格可以有50D、150D、250D或250T等不同的规格,本申请对此不作限制。
具体的,在开发人员使用不同规格的胶片得到不同的胶片素材这一过程中,开发人员可以利用不同规格的胶片,拍摄18%灰卡的胶片图像,然后对该胶片图像进行曝光与冲洗。开发人员可以使用胶片扫描仪对冲洗后的胶片图像进行扫描,获取到对应的胶片素材。获取到的胶片素材的亮度平均值为指定数值A2(也可以被称为第一平均值,例如,0.5、0.6等)。
然后,开发人员可以基于上述处理过程得到一个或多个胶片素材。其中,在该一个或多个胶片素材中,除了18%灰卡的图像外,还可以包括非实际物体在胶片上的成像(可以被称为伪像),例如胶片成像时画面上的颗粒、划痕、漏光等等。各胶片素材可以包括时长为s秒(例如,15秒、20秒等),每秒t帧(例如,20帧、30帧等)的胶片图像。也即是说,每一种胶片素材包括s×t帧的胶片图像。优选的,指定数值A2可以取值为0.5,这样有利于提升后续对图像处理的计算效率;时长s秒取值为15秒,每秒t帧取值为30帧,这样使得处理后的多张图像在播放时视觉效果比较自然,不易察觉胶片显示效果的重复性,同时也可以节约电子设备100的存储资源。
电子设备100可以获取到上述一个或多个胶片素材。其中,该一个或多个胶片素材可以包括第一胶片素材。电子设备100可以采用比特位深(bit depth)压缩存储胶片素材,例如,采用32个二进制位、16个二进制位、8个二进制位、4个二进制位或2个二进制位来压缩存储胶片素材中每一个像素点的信息。优选的,本申请实施例可以采用低比特位深(例如,采用8个二进制位、4个二进制位或2个二进制位)压缩存储胶片素材中每一个像素点的信息,这样,可以节省电子设备100的存储空间,提升后续电子设备100基于胶片素材对视频进行处理的效率。
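上述比特位深压缩存储,可以用如下示意代码理解。这里以均匀量化为例,将 [0, 1] 内的像素信息量化到指定位深;量化方式与函数名均为本示例的假设:

```python
def quantize(value, bits):
    """将 [0, 1] 内的像素值均匀量化为指定比特位深的整数编码(示意性实现)。"""
    levels = (1 << bits) - 1  # 该位深可表示的最大级数,例如 8 比特为 255
    return round(value * levels)

def dequantize(code, bits):
    """将整数编码还原为 [0, 1] 内的近似像素值。"""
    levels = (1 << bits) - 1
    return code / levels

# 用 8 个二进制位存储一个像素点的信息:相比 32 位浮点可明显节省存储空间,
# 代价是引入有限的量化误差
code = quantize(0.5, 8)
restored = dequantize(code, 8)
```

位深越低,存储开销越小,但量化误差越大;本申请实施例优选的低比特位深(如8、4或2位)即是在存储空间与胶片显示效果精度之间的折中。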
S302.电子设备100接收到摄像头启动操作。
示例性的,摄像头启动操作可以是前述如图2A所示的作用于相机应用图标201上的触摸操作(例如,点击)。不限于上述触摸操作,语音指令、手势操作等可以启动摄像头的操作也可以被称为摄像头启动操作。
S303.响应于摄像头启动操作,电子设备100通过摄像头实时采集图像。
具体的,电子设备100可以通过配置在电子设备100上的前置摄像头或者后置摄像头,实时采集图像。在一些示例中,电子设备100可以获取到其他电子设备发送的图像。在另一些示例中,电子设备100可以获取到云服务器发送的图像。也即是说,对于图像的获取途径,本申请并不作限制。电子设备100可以根据后续的流程,基于第一胶片素材对上述电子设备100从其他电子设备和/或云服务器获取到的图像,进行处理,使得图像呈现出对应的胶片显示效果。
S304.当电子设备100接收并响应于针对第一胶片素材的选择操作时,电子设备100确定实时采集到的图像(该实时采集到的图像也可以简称为图像)是否包括人像。
示例性的,针对第一胶片素材的选择操作可以是前述如图2D所示的作用于选项214上的触摸操作(例如,点击)。
具体的,电子设备100可以通过重识别(re-identification,ReID)算法、基于双阈值运动区域分割的AdaBoost行人检测算法等算法,确定实时采集到的图像是否包括人像。也即是说,对于如何确定实时采集到的图像是否包括人像,本申请并不作限制。
S305.当电子设备100确定图像中包括人像,电子设备100划分出图像中的皮肤区域,和,非皮肤区域。
具体的,电子设备100可以通过基于广义高斯分布的皮肤分割算法、基于RealAdaBoost算法的皮肤分割方法等算法,划分出图像中的皮肤区域和非皮肤区域。也即是说,对于如何划分图像中的皮肤区域和非皮肤区域,本申请并不作限制。
S306.电子设备100基于第一胶片素材对图像中的非皮肤区域进行处理,基于第一胶片素材生成的第二胶片素材对图像中的皮肤区域进行处理。
具体的,电子设备100可以对第一胶片素材进行去噪处理和柔化处理,生成第二胶片素材。其中,电子设备100对第一胶片素材进行去噪处理的方式可以为:基于中值滤波器方法、基于梯度模型算法等方式。电子设备100对第一胶片素材进行柔化处理的方式可以为:以任意像素点为中心,取指定大小(3×3)的区域,以该区域中多个像素点的RGB平均值作为该中心像素点的RGB值。不限于上述方式,电子设备100还可以通过其他方法对第一胶片素材进行去噪处理和柔化处理。
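上述3×3邻域均值柔化处理,可以用如下Python示意代码表达。边缘像素按复制方式填充,这是本示例的假设;去噪所用的中值滤波等方法此处从略:

```python
import numpy as np

def soften(material):
    """3×3 邻域均值柔化:以每个像素点为中心取 3×3 区域,
    以区域内像素值的平均值作为该中心像素点的新值(边缘按复制填充)。"""
    padded = np.pad(material, 1, mode="edge")
    h, w = material.shape
    out = np.empty_like(material, dtype=float)
    for i in range(h):
        for j in range(w):
            # 取以 (i, j) 为中心的 3×3 区域求均值
            out[i, j] = padded[i:i + 3, j:j + 3].mean()
    return out

# 孤立的亮点(可理解为素材中较“硬”的颗粒)被柔化后亮度摊薄
m = np.array([[0.0, 0.0, 0.0],
              [0.0, 0.9, 0.0],
              [0.0, 0.0, 0.0]])
softened = soften(m)
```

柔化后胶片素材的颗粒、划痕等伪像更加柔和,将其用于皮肤区域即可避免伪像在人像皮肤上显得突兀。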
以实时采集到的任意一帧图像为例,来说明本步骤的实现方式。
具体的,物体可以通过摄像头的镜头生成光学图像投射到感光元件。感光元件把光信号转换成电信号,之后将电信号传递给ISP转换成数字图像信号,生成图像。然后,若电子设备100选择了滤镜素材和第一胶片素材对实时采集到的图像进行处理,则电子设备100在对图像基于滤镜素材处理后,再对图像基于第一胶片素材和第二胶片素材对图像进行处理。若电子设备100没有选择滤镜素材,则电子设备100直接基于第一胶片素材和第二胶片素材对实时采集到的图像进行处理。
示例性的,图3C以电子设备100选择了滤镜素材和第一胶片素材对实时采集到的图像进行处理为例。如图3C所示,物体可以通过摄像头的镜头生成光学图像投射到感光元件。 感光元件把光信号转换成电信号,之后将电信号传递给ISP转换成数字图像信号,生成图像。然后,电子设备100可以基于前述选择的滤镜素材(例如,前述图示的“蔚蓝交响”滤镜素材、“欢乐之城”滤镜素材),按照对应的3D LUT和SMPTE ST 2084函数,将实时采集到的图像中像素点的RGB值,映射为新RGB值,调整第M帧图像中每个像素点的R数值、G数值和B数值之间的比例。其中,M可以用于表示任意数值,第M帧图像即是任意一帧实时采集到的图像。
接下来,电子设备100可以基于胶片素材(例如,第一胶片素材、第二胶片素材等)对第M帧图像进行处理。当胶片素材中任意像素点的亮度值小于或等于指定数值B1时,可以使得第M帧图像中对应位置像素点在R数值、G数值和B数值间比例不变的情况下,减小该R数值、G数值和B数值,也即是使该像素点的亮度变暗;当胶片素材中任意像素点的亮度值大于指定数值B1时,可以使得第M帧图像中对应位置像素点在R数值、G数值和B数值间比例不变的情况下,增大该R数值、G数值和B数值,也即是使该像素点的亮度变亮,这样,可以使得第M帧图像呈现出该胶片素材相应的颗粒效果,和/或划痕效果,和/或漏光效果。
电子设备100可以基于叠加算法公式,将第一胶片素材叠加至第M帧图像中的非皮肤区域,将第二胶片素材叠加至第M帧图像中的皮肤区域。此时,第M帧图像在非皮肤区域呈现出第一胶片显示效果,在皮肤区域呈现出第二胶片显示效果。示例性的,胶片素材(例如,第一胶片素材、第二胶片素材等)的叠加算法公式可以如下:
A、当胶片素材任意像素点的亮度值小于或等于指定数值B1(例如,0.5)时:
f(i1)=A×B÷0.5
其中,该公式中的A可以表示第M帧图像中待处理像素点i1的RGB数值(在该公式中,该像素点i1的RGB数值可以被归一化到0-1之间),B可以表示i1像素点在胶片素材中第M帧胶片图像所对应的像素点i2(也即条件中的任意像素点)的亮度值,f(i1)可以表示在胶片素材叠加之后,第M帧图像中像素点i1的RGB数值。公式中的0.5即是指定数值B1。该指定数值B1可以和前述胶片素材的亮度平均值即指定数值A2(也即第一平均值)相同。
B、当胶片素材任意像素点的亮度值大于指定数值B1(例如,0.5)时:
f(i1)=1-(1-A)×(1-B)÷0.5
其中,该公式中的A、B和f(i1)的说明,可以参考前述描述,在此不再赘述。
示例性的,当电子设备100基于第一胶片素材对实时采集到的图像进行处理时,基于上述两个公式,其处理过程可以为:该第一胶片素材包括第一像素点,该实时采集到的图像包括第二像素点,第一像素点对应第二像素点。当该第一像素点的亮度值小于或等于第一平均值时,该叠加公式为该第二像素点的新RGB值=该第一像素点的亮度值×该第二像素点的原RGB值÷第一平均值;当该第一胶片素材中第一像素点的亮度值大于第一平均值时,该叠加公式为该第二像素点的新RGB值=1-(1-该第二像素点的原RGB值)×(1-该第一像素点的亮度值)÷第一平均值。
需要说明的是,上述公式仅仅用于示例性说明本申请,并不对本申请构成任何限制。
可以看出,基于上述叠加方法对图像进行处理,可以使得图像中亮度值高的区域受胶片素材中亮度值较高的伪像(例如,颗粒、划痕等)的影响较小,而图像中亮度值低的区域受胶片素材中亮度值较高的伪像(例如,颗粒、划痕等)的影响较大;图像中亮度值高的区域受胶片素材中亮度值较低的伪像(例如,污渍等)的影响较大,而图像中亮度值低的区域受胶片素材中亮度值较低的伪像(例如,污渍等)的影响较小。
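上述两条叠加公式(以B1=0.5为例)可以直接写成如下示意代码;其形式与图像编辑中常见的“叠加(overlay)”混合模式一致,函数名为本示例的假设:

```python
def overlay(a, b, b1=0.5):
    """将胶片素材像素亮度 b 叠加到图像像素值 a 上(a、b 均归一化到 [0, 1])。

    b <= b1 时:f = a × b ÷ b1,素材偏暗处整体压暗;
    b >  b1 时:f = 1 - (1-a) × (1-b) ÷ b1,素材偏亮处整体提亮。
    """
    if b <= b1:
        return a * b / b1
    return 1.0 - (1.0 - a) * (1.0 - b) / b1

# 素材亮度恰为平均值 0.5 时,图像像素值保持不变
unchanged = overlay(0.6, 0.5)
darker = overlay(0.6, 0.3)   # 素材偏暗(如污渍) → 像素变暗
lighter = overlay(0.6, 0.8)  # 素材偏亮(如颗粒、划痕) → 像素变亮
```

可以看到,当素材亮度等于亮度平均值(第一平均值)时图像不受影响,偏离平均值越多,对图像的压暗或提亮越明显,这正是颗粒、划痕、漏光等伪像得以“印”到图像上的原因。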
随后,电子设备100可以融合第M帧图像中的皮肤区域和非皮肤区域,将皮肤区域和非皮肤区域的邻接边缘进行羽化处理,使得皮肤区域和非皮肤区域的邻接边缘的像素点RGB值可以平滑变化。
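皮肤区域与非皮肤区域的融合及邻接边缘羽化,可以用如下示意代码理解。这里以对皮肤掩膜做3×3均值模糊来近似羽化,模糊方式与次数均为本示例的假设:

```python
import numpy as np

def feather_mask(mask, iterations=1):
    """对二值皮肤掩膜做 3×3 均值模糊,使权重在邻接边缘处由 1 平滑过渡到 0。
    边缘按复制方式填充,模糊次数越多,过渡带越宽。"""
    m = mask.astype(float)
    h, w = m.shape
    for _ in range(iterations):
        padded = np.pad(m, 1, mode="edge")
        m = sum(padded[i:i + h, j:j + w]
                for i in range(3) for j in range(3)) / 9.0
    return m

def composite(skin_img, nonskin_img, mask, iterations=1):
    """皮肤区域取 skin_img,非皮肤区域取 nonskin_img,
    邻接边缘按羽化后的权重线性混合,使像素值平滑变化。"""
    weight = feather_mask(mask, iterations)
    return weight * skin_img + (1.0 - weight) * nonskin_img

# 一行像素的示例:左半为皮肤区域,右半为非皮肤区域
mask = np.array([[1, 1, 0, 0]])
skin = np.full((1, 4), 1.0)      # 经第二胶片素材处理后的结果(示意)
nonskin = np.zeros((1, 4))       # 经第一胶片素材处理后的结果(示意)
blended = composite(skin, nonskin, mask)
```

羽化后,邻接边缘处的像素值在两路处理结果之间线性过渡,避免出现生硬的分界线。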
S307.当电子设备100确定图像不包括人像时,电子设备100基于第一胶片素材对图像进行处理。
以采集到的任意一帧图像为例,来说明本步骤的实现方式。
具体的,物体可以通过摄像头的镜头生成光学图像投射到感光元件。感光元件把光信号转换成电信号,之后将电信号传递给ISP转换成数字图像信号,生成图像。然后,若电子设备100选择了滤镜素材和第一胶片素材对实时采集到的图像进行处理,则电子设备100在对图像基于滤镜素材处理后,再对图像基于第一胶片素材对图像进行处理。若电子设备100没有选择滤镜素材,则电子设备100直接基于第一胶片素材对实时采集到的图像进行处理。
示例性的,图3D以电子设备100选择了滤镜素材和第一胶片素材对实时采集到的图像进行处理为例,如图3D所示,物体可以通过摄像头的镜头生成光学图像投射到感光元件。感光元件把光信号转换成电信号,之后将电信号传递给ISP转换成数字图像信号,生成图像。然后,可选的,电子设备100可以基于前述选择的滤镜素材(例如,前述图示的“蔚蓝交响”滤镜素材、“欢乐之城”滤镜素材),按照对应的3D LUT和SMPTE ST 2084函数,调整第M帧图像中每个像素点的R数值、G数值和B数值之间的比例。其中,M可以用于表示任意数值,第M帧图像即是任意一帧实时采集到的图像。
接下来,电子设备100可以基于叠加算法公式,将第一胶片素材叠加至第M帧图像上,此时,第M帧图像仅呈现出第一胶片显示效果。其叠加方法可以参考上述S306中的说明,在此不再赘述。
S308.电子设备100基于实时采集且经过S304-S307处理后的图像,生成预览流。
具体的,电子设备100按照时间顺序实时采集到的多帧图像可以被称为图像流。该图像流中的每一帧图像经过上述S304-S307处理后,可以得到预览流,该预览流中的画面内容可以显示于电子设备100的预览窗口(例如,前述预览窗口213)中。当该图像流中的一帧或多帧图像包括人像时,预览流中的画面内容可以包括第一胶片显示效果和第二胶片显示效果;当该图像流中没有包括人像的图像时,预览流中的画面内容可以仅呈现出第一胶片显示效果。
S309.当电子设备100接收并响应于视频拍摄操作时,电子设备100基于实时采集且经过S304-S307处理后的图像,生成录像流并进行相应处理。
具体的,当电子设备100接收并响应于视频拍摄操作时,电子设备100可以将实时采集到的图像流复制为两路图像流,分别为图像流1和图像流2。图像流1中的每一帧图像经过上述S304-S307处理后,可以得到预览流,该预览流中的画面内容可以在视频拍摄期间,显示于电子设备100的预览窗口(例如,前述预览窗口213)中。图像流2中的每一帧图像经过上述S304-S307处理后,可以得到录像流,电子设备100可以基于上述录像流,通过SMPTE ST 2094计算出对应的动态元数据,然后基于上述动态元数据和录像流通过编码器进行编码。在一些示例中,不限于上述处理方式,电子设备100也可以通过其他方式对录像流进行处理。
S310.当电子设备100接收并响应于结束拍摄操作时,电子设备100获取并保存第一视频文件。
示例性的,该结束拍摄操作可以是前述如图2G所示的作用于停止拍摄控件221上的触摸操作(例如,点击)。
具体的,电子设备100可以获取并保存指定格式的第一视频文件,例如,电子设备100可以基于前述编码后的录像流,获取并保存HDR10+格式的第一视频文件。在具体的实现方式中,电子设备100也可以获取并保存其他格式(例如,HDR10等)的第一视频文件,本申请对此不作限制。
需要说明的是,当第一胶片素材的总帧数小于电子设备100通过摄像头采集图像的帧数时,第一胶片素材可以循环处理电子设备100实时采集到的图像。
示例性的,若第一胶片素材的总帧数为30帧。当电子设备100采集到第31帧图像,则电子设备100可以根据第一胶片素材的第1帧胶片图像,处理电子设备100采集到的第31帧图像。当电子设备100此时采集到第32帧图像,则电子设备100可以根据第一胶片素材的第2帧胶片图像,处理电子设备100采集到的第32帧图像,以此类推。
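上述循环复用胶片素材帧的对应关系,可以写成如下示意代码(帧序号从1开始计数,函数名为本示例的假设):

```python
def film_frame_index(image_frame_no, total_film_frames):
    """返回处理第 image_frame_no 帧图像时应使用的胶片素材帧序号。

    图像帧与素材帧均从 1 开始计数;当图像帧数超过素材总帧数时,
    素材帧按模运算循环复用。
    """
    return (image_frame_no - 1) % total_film_frames + 1

# 素材共 30 帧:第 30 帧图像用素材第 30 帧,
# 第 31 帧图像回到素材第 1 帧,第 32 帧用素材第 2 帧,以此类推
i30 = film_frame_index(30, 30)
i31 = film_frame_index(31, 30)
i32 = film_frame_index(32, 30)
```

这样,无论录制时长多长,有限帧数的胶片素材都能持续作用于实时采集的图像流。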
接下来,介绍本申请实施例提供的一种应用于电子设备100的软件模块。
请参考图4,图4示例性示出了本申请实施例提供的一种应用于电子设备100的软件模块示意图。
如图4所示,电子设备100可以包括:存储模块401、图像处理模块402和图像编码模块403,其中:
存储模块401可以用于存储一个或多个胶片素材和第一视频文件,一个或多个胶片素材中包括第一胶片素材。存储模块401还可以存储一些程序代码,以实现本申请实施例在电子设备100中的实现方法。具体的实现方式可以参考前述流程图所示的步骤,在此不再赘述。
图像处理模块402可以用于基于第一胶片素材和/或滤镜素材处理电子设备100实时采集到的图像。具体的实现方式可以参考前述流程图所示的步骤,在此不再赘述。
图像编码模块403可以用于将已经过图像处理模块402处理过的图像进行编码,以获取到第一视频文件。具体的实现方式可以参考前述流程图所示的步骤,在此不再赘述。
上述实施例中所用,根据上下文,术语“当…时”可以被解释为意思是“如果…”或“在…后”或“响应于确定…”或“响应于检测到…”。类似地,根据上下文,短语“在确定…时”或“如果检测到(所陈述的条件或事件)”可以被解释为意思是“如果确定…”或“响应于确定…”或“在检测到(所陈述的条件或事件)时”或“响应于检测到(所陈述的条件或事件)”。
在上述实施例中,可以全部或部分地通过软件、硬件、固件或者其任意组合来实现。当使用软件实现时,可以全部或部分地以计算机程序产品的形式实现。所述计算机程序产品包括一个或多个计算机指令。在计算机上加载和执行所述计算机程序指令时,全部或部分地产生按照本申请实施例所述的流程或功能。所述计算机可以是通用计算机、专用计算机、计算机网络、或者其他可编程装置。所述计算机指令可以存储在计算机可读存储介质中,或者从一个计算机可读存储介质向另一个计算机可读存储介质传输,例如,所述计算机指令可以从一个网站站点、计算机、服务器或数据中心通过有线(例如同轴电缆、光纤、数字用户线)或无线(例如红外、无线、微波等)方式向另一个网站站点、计算机、服务器或数据中心进行传输。所述计算机可读存储介质可以是计算机能够存取的任何可用介质或者是包含一个或多个可用介质集成的服务器、数据中心等数据存储设备。所述可用介质可以是磁性介质,(例如,软盘、硬盘、磁带)、光介质(例如DVD)、或者半导体介质(例如固态硬盘)等。
本领域普通技术人员可以理解实现上述实施例方法中的全部或部分流程,该流程可以由计算机程序来指令相关的硬件完成,该程序可存储于计算机可读取存储介质中,该程序在执行时,可包括如上述各方法实施例的流程。而前述的存储介质包括:ROM或随机存储记忆体RAM、磁碟或者光盘等各种可存储程序代码的介质。

Claims (16)

  1. 一种图像处理方法,其特征在于,应用于电子设备,包括:
    所述电子设备获取到一个或多个胶片素材,所述一个或多个胶片素材包括第一胶片素材,所述一个或多个胶片素材用于对所述电子设备上的摄像头实时采集到的图像进行处理,使得处理后的图像呈现出对应的胶片显示效果;
    响应于摄像头启动操作,所述电子设备通过所述摄像头实时采集图像,显示出拍摄界面,所述拍摄界面包括预览窗口;
    响应于针对所述第一胶片素材的选择操作,所述电子设备基于所述第一胶片素材,对实时采集到的图像进行处理,生成预览流;
    所述电子设备在所述预览窗口中,显示出所述预览流的画面内容,所述预览流的画面内容呈现出第一胶片显示效果。
  2. 根据权利要求1所述的方法,其特征在于,响应于针对所述第一胶片素材的选择操作,所述电子设备基于所述第一胶片素材,对实时采集到的图像进行处理,生成预览流,具体包括:
    响应于针对所述第一胶片素材的选择操作,所述电子设备确定所述实时采集到的图像是否包括人像;
    当所述电子设备确定所述实时采集到的图像包括人像时,所述电子设备划分出所述实时采集到的图像中的皮肤区域和非皮肤区域;
    所述电子设备基于所述第一胶片素材对所述非皮肤区域进行处理,基于所述第一胶片素材生成的第二胶片素材对所述皮肤区域进行处理;
    所述电子设备基于经过所述第一胶片素材和所述第二胶片素材处理后的所述实时采集到的图像,生成预览流。
  3. 根据权利要求2所述的方法,其特征在于,所述电子设备基于所述第一胶片素材对所述非皮肤区域进行处理,基于所述第一胶片素材生成的第二胶片素材对所述皮肤区域进行处理,具体包括:
    所述电子设备对所述第一胶片素材进行去噪处理和柔化处理,生成所述第二胶片素材;
    所述电子设备基于所述第一胶片素材对所述非皮肤区域进行处理,基于所述第二胶片素材对所述皮肤区域进行处理。
  4. 根据权利要求2所述的方法,其特征在于,所述方法还包括:
    当所述电子设备确定所述实时采集到的图像不包括人像时,所述电子设备基于所述第一胶片素材处理所述实时采集到的图像。
  5. 根据权利要求2或3所述的方法,其特征在于,所述预览流的画面内容还呈现出第二胶片显示效果。
  6. 根据权利要求1所述的方法,其特征在于,所述一个或多个胶片素材为基于一种或多种不同感光度的胶卷拍摄18%灰卡,并进行冲洗、扫描后所获得的胶片素材,所述一个或多个胶片素材中各胶片素材的亮度平均值为第一平均值,所述各胶片素材的时长为15秒,每秒包括30帧胶片图像。
  7. 根据权利要求6所述的方法,其特征在于,所述第一胶片素材包括第一像素点,所述实时采集到的图像包括第二像素点,所述第一像素点对应所述第二像素点;
    响应于针对所述第一胶片素材的选择操作,所述电子设备基于所述第一胶片素材,对实时采集到的图像进行处理,生成预览流,具体包括:
    响应于针对所述第一胶片素材的选择操作,所述电子设备基于叠加公式和所述第一胶片素材,对所述实时采集到的图像进行处理;其中,当所述第一像素点的亮度值小于或等于第一平均值时,所述叠加公式为所述第二像素点的新RGB值=所述第一像素点的亮度值×所述第二像素点的原RGB值÷第一平均值;当所述第一胶片素材中第一像素点的亮度值大于第一平均值时,所述叠加公式为所述第二像素点的新RGB值=1-(1-所述第二像素点的原RGB值)×(1-所述第一像素点的亮度值)÷第一平均值;
    所述电子设备基于经过所述第一胶片素材处理后的所述实时采集到的图像,生成预览流。
  8. 根据权利要求7所述的方法,其特征在于,所述第一平均值为0.5。
  9. 根据权利要求1所述的方法,其特征在于,在响应于针对所述第一胶片素材的选择操作,所述电子设备基于所述第一胶片素材,对实时采集到的图像进行处理,生成预览流之前,所述方法还包括:
    所述电子设备接收到针对于第一滤镜素材的选择操作,所述第一滤镜素材对应第一LUT;
    响应于针对所述第一胶片素材的选择操作,所述电子设备基于所述第一胶片素材,对实时采集到的图像进行处理,生成预览流,具体包括:
    响应于针对所述第一胶片素材的选择操作,所述电子设备基于所述第一LUT,将所述实时采集到的图像中像素点的RGB值,映射为新RGB值;
    所述电子设备基于所述第一胶片素材,对经过所述第一LUT处理后的所述实时采集到的图像进行处理,生成预览流。
  10. 根据权利要求9所述的方法,其特征在于,所述第一LUT为2D LUT,或者,3D LUT。
  11. 根据权利要求1所述的方法,其特征在于,所述第一胶片显示效果包括以下的一种或多种:
    颗粒效果、划痕效果和漏光效果。
  12. 根据权利要求1所述的方法,其特征在于,所述拍摄界面还包括所述第一胶片素材的第一选项;
    响应于针对所述第一胶片素材的选择操作,所述电子设备基于所述第一胶片素材,对实时采集到的图像进行处理,生成预览流,具体包括:
    所述电子设备接收到作用于所述第一选项上的触摸操作;
    响应于所述触摸操作,所述电子设备基于所述第一胶片素材,对所述实时采集到的图像进行处理,生成预览流。
  13. 一种电子设备,其特征在于,包括:一个或多个处理器、一个或多个存储器、一个或多个摄像头和显示屏;所述一个或多个存储器与一个或多个处理器耦合,所述一个或多个存储器用于存储计算机程序代码,计算机程序代码包括计算机指令,当所述一个或多个处理器执行所述计算机指令时,使得所述电子设备执行如权利要求1-12中的任一项所述的方法。
  14. 一种计算机可读存储介质,其特征在于,包括计算机指令,当所述计算机指令在电子设备上运行时,使得所述电子设备执行如权利要求1-12中的任一项所述的方法。
  15. 一种芯片或芯片系统,其特征在于,包括处理电路和接口电路,所述接口电路用于接收代码指令并传输至所述处理电路,所述处理电路用于运行所述代码指令以执行如权利要求1-12中任一项所述的方法。
  16. 一种计算机程序产品,其特征在于,当所述计算机程序产品在电子设备上运行时,使得所述电子设备执行如权利要求1-12中的任一项所述的方法。
PCT/CN2023/117623 2022-10-21 2023-09-08 图像处理方法及电子设备 WO2024082863A1 (zh)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN202211297222.5 2022-10-21
CN202211297222.5A CN116723416B (zh) 2022-10-21 2022-10-21 图像处理方法及电子设备


