CN111179282A - Image processing method, image processing apparatus, storage medium, and electronic device - Google Patents

Image processing method, image processing apparatus, storage medium, and electronic device

Info

Publication number
CN111179282A
CN111179282A (application CN201911373483.9A)
Authority
CN
China
Prior art keywords
image
sky
region
mask
processor
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN201911373483.9A
Other languages
Chinese (zh)
Other versions
CN111179282B (en)
Inventor
颜海强
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Guangdong Oppo Mobile Telecommunications Corp Ltd
Original Assignee
Guangdong Oppo Mobile Telecommunications Corp Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.): 2019-12-27
Filing date: 2019-12-27
Publication date: 2020-05-19
Application filed by Guangdong Oppo Mobile Telecommunications Corp Ltd
Priority to CN201911373483.9A
Publication of CN111179282A
Application granted; publication of CN111179282B
Legal status: Active (current)

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/10Segmentation; Edge detection
    • G06T7/11Region-based segmentation
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T3/00Geometric image transformation in the plane of the image
    • G06T3/40Scaling the whole image or part thereof
    • G06T3/4038Scaling the whole image or part thereof for image mosaicing, i.e. plane images composed of plane sub-images
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/10Segmentation; Edge detection
    • G06T7/194Segmentation; Edge detection involving foreground-background segmentation
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/10Image acquisition modality
    • G06T2207/10004Still image; Photographic image
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/20Special algorithmic details
    • G06T2207/20081Training; Learning
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/20Special algorithmic details
    • G06T2207/20084Artificial neural networks [ANN]

Abstract

The disclosure provides an image processing method, an image processing apparatus, a storage medium, and an electronic device, and relates to the technical field of image processing. The image processing method comprises the following steps: identifying a sky region in a first image based on a segmentation model to obtain a mask image corresponding to the sky region; segmenting a foreground region image from the first image and a background region image from a second image through the mask image; and stitching the foreground region image and the background region image to obtain a target image. The method and apparatus realize "sky replacement" processing of an image and can automatically segment the sky region in the image, requiring no manual cutout by the user and being convenient to use.

Description

Image processing method, image processing apparatus, storage medium, and electronic device
Technical Field
The present disclosure relates to the field of image processing technologies, and in particular, to an image processing method, an image processing apparatus, a computer-readable storage medium, and an electronic device.
Background
With the development of image processing technology, a function of replacing the image background has appeared in some image processing and beautification software; for example, the background can be replaced with a solid-color backdrop, a landscape, or other special effects to meet users' diversified demands.
In the related art, replacing an image background generally requires the user to manually cut out the foreground part of the image before the remaining background part can be replaced. Some software can automatically recognize the portrait in an image and replace the background outside the portrait, but it cannot be applied to non-portrait content and may mistake non-portrait foreground for background. Therefore, to replace the background in a non-portrait image, for example to replace the sky background in a landscape image, the related art requires the user to manually cut out the foreground, which is inconvenient.
It is to be noted that the information disclosed in the above background section is only for enhancement of understanding of the background of the present disclosure, and thus may include information that does not constitute prior art known to those of ordinary skill in the art.
Disclosure of Invention
The present disclosure provides an image processing method, an image processing apparatus, a computer-readable storage medium, and an electronic device, thereby solving, at least to some extent, the problem in the related art that a user must manually extract the foreground.
Additional features and advantages of the disclosure will be set forth in the detailed description which follows, or in part will be obvious from the description, or may be learned by practice of the disclosure.
According to a first aspect of the present disclosure, there is provided an image processing method including: identifying a sky region in the first image based on a segmentation model to obtain a Mask image (Mask) corresponding to the sky region; segmenting a foreground area image from the first image and segmenting a background area image from the second image through the mask image; and splicing the foreground area image and the background area image to obtain a target image.
According to a second aspect of the present disclosure, there is provided an image processing apparatus comprising: a sky identification module, configured to identify a sky region in the first image, and obtain a mask image corresponding to the sky region; the image segmentation module is used for segmenting a foreground region image from the first image and segmenting a background region image from the second image through the mask image; and the image splicing module is used for splicing the foreground area image and the background area image to obtain a target image.
According to a third aspect of the present disclosure, there is provided a computer-readable storage medium having stored thereon a computer program which, when executed by a processor, implements the above-described image processing method.
According to a fourth aspect of the present disclosure, there is provided an electronic device comprising: a processor; and a memory for storing executable instructions of the processor; wherein the processor is configured to perform the above-described image processing method via execution of the executable instructions.
The technical scheme of the disclosure has the following beneficial effects:
According to the image processing method, the image processing apparatus, the storage medium, and the electronic device, a sky region in a first image is first identified by a segmentation model to obtain a mask image corresponding to the sky region; then a foreground region image is segmented from the first image and a background region image is segmented from the second image using the mask image; finally, the foreground region image and the background region image are stitched to obtain a target image. On the one hand, based on the segmentation model's recognition of the sky region and the mask-image processing, the sky region can be accurately segmented from the image without manual cutout by the user; the degree of intelligence is high, the method is convenient to use, and the user experience is good. On the other hand, the scheme realizes "sky replacement" processing of the image: the sky region in the first image is replaced with the corresponding part of the second image, which is highly engaging and can meet users' diversified demands.
It is to be understood that both the foregoing general description and the following detailed description are exemplary and explanatory only and are not restrictive of the disclosure.
Drawings
The accompanying drawings, which are incorporated in and constitute a part of this specification, illustrate embodiments consistent with the present disclosure and together with the description, serve to explain the principles of the disclosure. It is apparent that the drawings in the following description are only some embodiments of the present disclosure, and that other drawings can be obtained from those drawings without inventive effort for a person skilled in the art.
FIG. 1 shows a schematic diagram of a system architecture of the present exemplary embodiment;
fig. 2 shows a schematic diagram of an electronic device of the present exemplary embodiment;
fig. 3 shows a flowchart of an image processing method of the present exemplary embodiment;
FIG. 4 illustrates a sub-flowchart of one image processing method of the present exemplary embodiment;
fig. 5 shows a sub-flowchart of another image processing method of the present exemplary embodiment;
fig. 6 shows a schematic flow of image processing in the present exemplary embodiment;
fig. 7 shows a block diagram of a configuration of an image processing apparatus of the present exemplary embodiment;
fig. 8 shows a schematic diagram of a computer-readable storage medium of the present exemplary embodiment.
Detailed Description
Example embodiments will now be described more fully with reference to the accompanying drawings. Example embodiments may, however, be embodied in many different forms and should not be construed as limited to the examples set forth herein; rather, these embodiments are provided so that this disclosure will be thorough and complete, and will fully convey the concept of example embodiments to those skilled in the art. The described features, structures, or characteristics may be combined in any suitable manner in one or more embodiments. In the following description, numerous specific details are provided to give a thorough understanding of embodiments of the disclosure. One skilled in the relevant art will recognize, however, that the subject matter of the present disclosure can be practiced without one or more of the specific details, or with other methods, components, devices, steps, and the like. In other instances, well-known technical solutions have not been shown or described in detail to avoid obscuring aspects of the present disclosure.
Furthermore, the drawings are merely schematic illustrations of the present disclosure and are not necessarily drawn to scale. The same reference numerals in the drawings denote the same or similar parts, and thus their repetitive description will be omitted. Some of the block diagrams shown in the figures are functional entities and do not necessarily correspond to physically or logically separate entities. These functional entities may be implemented in the form of software, or in one or more hardware modules or integrated circuits, or in different networks and/or processor devices and/or microcontroller devices.
Fig. 1 shows a schematic diagram of a system architecture of an exemplary embodiment of the present disclosure. As shown in fig. 1, the system architecture 100 may include: terminal 110, network 120, and server 130. The terminal 110 may be any of various electronic devices having an image capturing function, including but not limited to a mobile phone, a tablet computer, a digital camera, a personal computer, and the like. The medium used by network 120 to provide communication links between terminals 110 and server 130 may include various connection types, such as wired or wireless communication links, or fiber optic cables. It should be understood that the number of terminals, networks, and servers in fig. 1 is merely illustrative. There may be any number of terminals, networks, and servers, as desired for an implementation. For example, the server 130 may be a server cluster composed of a plurality of servers, and the like.
The image processing method provided by the embodiment of the present disclosure may be executed by the terminal 110, for example, after the terminal 110 captures an image, the image is processed; the server 130 may also execute, for example, the terminal 110 captures an image, uploads the image to the server 130, and causes the server 130 to process the image. The present disclosure is not limited thereto.
An exemplary embodiment of the present disclosure provides an electronic device for implementing an image processing method, which may be the terminal 110 or the server 130 in fig. 1. The electronic device comprises at least a processor and a memory for storing executable instructions of the processor, the processor being configured to perform the image processing method via execution of the executable instructions.
The electronic device may be implemented in various forms, and may include, for example, a mobile device such as a mobile phone, a tablet computer, a notebook computer, a Personal Digital Assistant (PDA), a navigation device, a wearable device, an unmanned aerial vehicle, and a stationary device such as a desktop computer and a smart television. The following takes the mobile terminal 200 in fig. 2 as an example, and exemplifies the configuration of the electronic device. It will be appreciated by those skilled in the art that the configuration of figure 2 can also be applied to fixed type devices, in addition to components specifically intended for mobile purposes. In other embodiments, mobile terminal 200 may include more or fewer components than shown, or some components may be combined, some components may be split, or a different arrangement of components. The illustrated components may be implemented in hardware, software, or a combination of software and hardware. The interfacing relationship between the components is only schematically illustrated and does not constitute a structural limitation of the mobile terminal 200. In other embodiments, the mobile terminal 200 may also interface differently than shown in fig. 2, or a combination of multiple interfaces.
As shown in fig. 2, the mobile terminal 200 may specifically include: a processor 210, an internal memory 221, an external memory interface 222, a Universal Serial Bus (USB) interface 230, a charging management module 240, a power management module 241, a battery 242, an antenna 1, an antenna 2, a mobile communication module 250, a wireless communication module 260, an audio module 270, a speaker 271, a receiver 272, a microphone 273, an earphone interface 274, a sensor module 280, a display screen 290, a camera module 291, an indicator 292, a motor 293, keys 294, and a Subscriber Identity Module (SIM) card interface 295. The sensor module 280 may include a depth sensor 2801, a pressure sensor 2802, a gyroscope sensor 2803, a barometric pressure sensor 2804, and the like.
Processor 210 may include one or more processing units, such as: the Processor 210 may include an Application Processor (AP), a modem Processor, a Graphics Processor (GPU), an Image Signal Processor (ISP), a controller, a video codec, a Digital Signal Processor (DSP), a baseband Processor, and/or a Neural-Network Processing Unit (NPU), and the like. The different processing units may be separate devices or may be integrated into one or more processors.
The controller can generate an operation control signal according to the instruction operation code and the timing signal to complete the control of instruction fetching and instruction execution.
A memory may also be provided in processor 210 for storing instructions and data. The memory may store instructions for implementing six modular functions: detection instructions, connection instructions, information management instructions, analysis instructions, data transmission instructions, and notification instructions, and execution is controlled by processor 210. In some embodiments, the memory in processor 210 is a cache memory. The memory may hold instructions or data that have just been used or recycled by processor 210. If the processor 210 needs to use the instruction or data again, it can be called directly from the memory. Avoiding repeated accesses reduces the latency of the processor 210, thereby increasing the efficiency of the system.
In some implementations, the processor 210 may include one or more interfaces. The interfaces may include an Inter-Integrated Circuit (I2C) interface, an Inter-IC Sound (I2S) interface, a Pulse Code Modulation (PCM) interface, a Universal Asynchronous Receiver/Transmitter (UART) interface, a Mobile Industry Processor Interface (MIPI), a General-Purpose Input/Output (GPIO) interface, a Subscriber Identity Module (SIM) interface, and/or a Universal Serial Bus (USB) interface, etc. Connections are made with other components of the mobile terminal 200 through these different interfaces.
The USB interface 230 is an interface conforming to the USB standard specification, and may specifically be a Mini USB interface, a Micro USB interface, a USB Type-C interface, or the like. The USB interface 230 may be used to connect a charger to charge the mobile terminal 200, to connect earphones for audio playback, or to connect the mobile terminal 200 to other electronic devices, such as a computer or a peripheral device.
The charge management module 240 is configured to receive a charging input from a charger. The charger may be a wireless charger or a wired charger. In some wired charging embodiments, the charging management module 240 may receive charging input from a wired charger via the USB interface 230. In some wireless charging embodiments, the charging management module 240 may receive a wireless charging input through a wireless charging coil of the mobile terminal 200. The charging management module 240 may also supply power to the electronic device through the power management module 241 while charging the battery 242.
The power management module 241 is used for connecting the battery 242, the charging management module 240 and the processor 210. The power management module 241 receives the input of the battery 242 and/or the charging management module 240, and supplies power to the processor 210, the internal memory 221, the display screen 290, the camera module 291, the wireless communication module 260, and the like. The power management module 241 may also be used to monitor parameters such as battery capacity, battery cycle number, battery state of health (leakage, impedance), etc. In other embodiments, the power management module 241 may also be disposed in the processor 210. In other embodiments, the power management module 241 and the charging management module 240 may be disposed in the same device.
The wireless communication function of the mobile terminal 200 may be implemented by the antenna 1, the antenna 2, the mobile communication module 250, the wireless communication module 260, a modem processor, a baseband processor, and the like.
The antennas 1 and 2 are used for transmitting and receiving electromagnetic wave signals. Each antenna in mobile terminal 200 may be used to cover a single or multiple communication bands. Different antennas can also be multiplexed to improve the utilization of the antennas. For example: the antenna 1 may be multiplexed as a diversity antenna of a wireless local area network. In other embodiments, the antenna may be used in conjunction with a tuning switch.
The mobile communication module 250 may provide a solution including 2G/3G/4G/5G wireless communication applied on the mobile terminal 200. The mobile communication module 250 may include at least one filter, a switch, a power Amplifier, a Low Noise Amplifier (LNA), and the like. The mobile communication module 250 may receive the electromagnetic wave from the antenna 1, filter, amplify, etc. the received electromagnetic wave, and transmit the electromagnetic wave to the modem processor for demodulation. The mobile communication module 250 may also amplify the signal modulated by the modem processor, and convert the signal into electromagnetic wave through the antenna 1 to radiate the electromagnetic wave. In some embodiments, at least some of the functional modules of the mobile communication module 250 may be disposed in the processor 210. In some embodiments, at least some of the functional modules of the mobile communication module 250 may be disposed in the same device as at least some of the modules of the processor 210.
The modem processor may include a modulator and a demodulator. The modulator is used for modulating a low-frequency baseband signal to be transmitted into a medium-high frequency signal. The demodulator is used for demodulating the received electromagnetic wave signal into a low-frequency baseband signal. The demodulator then passes the demodulated low frequency baseband signal to a baseband processor for processing. The low frequency baseband signal is processed by the baseband processor and then transferred to the application processor. The application processor outputs a sound signal through an audio device (not limited to the speaker 271, the receiver 272, etc.) or displays an image or video through the display screen 290. In some embodiments, the modem processor may be a stand-alone device. In other embodiments, the modem processor may be separate from the processor 210, and may be disposed in the same device as the mobile communication module 250 or other functional modules.
The Wireless Communication module 260 may provide solutions for Wireless Communication applied to the mobile terminal 200, including Wireless Local Area Networks (WLANs) (e.g., Wireless Fidelity (Wi-Fi) Networks), Bluetooth (BT), Global Navigation Satellite System (GNSS), Frequency Modulation (FM), Near Field Communication (NFC), Infrared (IR), and the like. The wireless communication module 260 may be one or more devices integrating at least one communication processing module. The wireless communication module 260 receives electromagnetic waves via the antenna 2, performs frequency modulation and filtering processing on electromagnetic wave signals, and transmits the processed signals to the processor 210. The wireless communication module 260 may also receive a signal to be transmitted from the processor 210, frequency-modulate and amplify the signal, and convert the signal into electromagnetic waves via the antenna 2 to radiate the electromagnetic waves.
In some embodiments, antenna 1 of the mobile terminal 200 is coupled to the mobile communication module 250 and antenna 2 is coupled to the wireless communication module 260, such that the mobile terminal 200 can communicate with networks and other devices through wireless communication technologies. The wireless communication technologies may include Global System for Mobile communications (GSM), General Packet Radio Service (GPRS), Code Division Multiple Access (CDMA), Wideband Code Division Multiple Access (WCDMA), Time Division-Synchronous Code Division Multiple Access (TD-SCDMA), Long Term Evolution (LTE), New Radio (NR), BT, GNSS, WLAN, NFC, FM, and/or IR technologies, etc. The GNSS may include the Global Positioning System (GPS), the Global Navigation Satellite System (GLONASS), the BeiDou Navigation Satellite System (BDS), the Quasi-Zenith Satellite System (QZSS), and/or a Satellite-Based Augmentation System (SBAS).
The mobile terminal 200 implements a display function through the GPU, the display screen 290, the application processor, and the like. The GPU is a microprocessor for image processing, and is connected to the display screen 290 and an application processor. The GPU is used to perform mathematical and geometric calculations for graphics rendering. Processor 210 may include one or more GPUs that execute program instructions to generate or alter display information.
The display screen 290 is used to display images, video, etc. The display screen 290 includes a display panel. The display panel may be a Liquid Crystal Display (LCD), an Organic Light-Emitting Diode (OLED), an Active-Matrix Organic Light-Emitting Diode (AMOLED), a Flexible Light-Emitting Diode (FLED), a MiniLED, a MicroLED, a Micro-OLED, a Quantum-dot Light-Emitting Diode (QLED), or the like. In some embodiments, the mobile terminal 200 may include 1 or N display screens 290, N being a positive integer greater than 1.
The mobile terminal 200 may implement a photographing function through the ISP, the camera module 291, the video codec, the GPU, the display screen 290, the application processor, and the like.
The ISP is used to process data fed back by the camera module 291. For example, when a photo is taken, the shutter is opened, light is transmitted to the camera photosensitive element through the lens, the optical signal is converted into an electrical signal, and the camera photosensitive element transmits the electrical signal to the ISP for processing and converting into an image visible to naked eyes. The ISP can also carry out algorithm optimization on the noise, brightness and skin color of the image. The ISP can also optimize parameters such as exposure, color temperature and the like of a shooting scene. In some embodiments, the ISP may be provided in the camera module 291.
The camera module 291 is used to capture still images or videos. The object generates an optical image through the lens and projects the optical image to the photosensitive element. The photosensitive element may be a Charge Coupled Device (CCD) or a Complementary Metal-Oxide-Semiconductor (CMOS) phototransistor. The light sensing element converts the optical signal into an electrical signal, which is then passed to the ISP where it is converted into a digital image signal. And the ISP outputs the digital image signal to the DSP for processing. The DSP converts the digital image signal into image signal in standard RGB, YUV and other formats. In some embodiments, the mobile terminal 200 may include 1 or N camera modules 291, where N is a positive integer greater than 1, and if the mobile terminal 200 includes N cameras, one of the N cameras is a main camera.
The digital signal processor is used for processing digital signals, and can process digital image signals and other digital signals. For example, when the mobile terminal 200 selects a frequency point, the digital signal processor is used to perform fourier transform or the like on the frequency point energy.
Video codecs are used to compress or decompress digital video. The mobile terminal 200 may support one or more video codecs. In this way, the mobile terminal 200 may play or record video in a variety of encoding formats, such as: moving Picture Experts Group (MPEG) 1, MPEG2, MPEG3, MPEG4, and the like.
The external memory interface 222 may be used to connect an external memory card, such as a Micro SD card, to extend the memory capability of the mobile terminal 200. The external memory card communicates with the processor 210 through the external memory interface 222 to implement a data storage function. For example, files such as music, video, etc. are saved in an external memory card.
Internal memory 221 may be used to store computer-executable program code, including instructions. The internal memory 221 may include a program storage area and a data storage area. The storage program area may store an operating system, an application program (such as a sound playing function, an image playing function, etc.) required by at least one function, and the like. The storage data area may store data (e.g., audio data, a phonebook, etc.) created during use of the mobile terminal 200, and the like. In addition, the internal memory 221 may include a high-speed random access memory, and may further include a nonvolatile memory, such as at least one magnetic disk Storage device, a Flash memory device, a Universal Flash Storage (UFS), and the like. The processor 210 executes various functional applications of the mobile terminal 200 and data processing by executing instructions stored in the internal memory 221 and/or instructions stored in a memory provided in the processor.
The mobile terminal 200 may implement an audio function through the audio module 270, the speaker 271, the receiver 272, the microphone 273, the earphone interface 274, the application processor, and the like. Such as music playing, recording, etc.
Audio module 270 is used to convert digital audio information into an analog audio signal output and also to convert an analog audio input into a digital audio signal. Audio module 270 may also be used to encode and decode audio signals. In some embodiments, the audio module 270 may be disposed in the processor 210, or some functional modules of the audio module 270 may be disposed in the processor 210.
The speaker 271, also called "horn", is used to convert the audio electrical signal into a sound signal. The mobile terminal 200 can listen to music through the speaker 271 or listen to a hands-free call.
The receiver 272, also called "earpiece", is used to convert the electrical audio signal into an acoustic signal. When the mobile terminal 200 receives a call or voice information, it is possible to receive voice by placing the receiver 272 close to the human ear.
The microphone 273, also called a "mic," is used to convert sound signals into electrical signals. When making a call or sending voice information, the user can input a sound signal to the microphone 273 by speaking close to it. The mobile terminal 200 may be provided with at least one microphone 273. In other embodiments, the mobile terminal 200 may be provided with two microphones 273, which can implement a noise reduction function in addition to collecting sound signals. In still other embodiments, the mobile terminal 200 may be provided with three, four, or more microphones 273 to collect sound signals, reduce noise, identify sound sources, and implement directional recording functions.
The earphone interface 274 is used to connect wired earphones. The headset interface 274 may be the USB interface 230, a 3.5mm Open Mobile Terminal Platform (OMTP) standard interface, or a Cellular Telecommunications Industry Association of the USA (CTIA) standard interface.
The depth sensor 2801 is used to acquire depth information of a scene. In some embodiments, a depth sensor may be provided to the camera module 291.
The pressure sensor 2802 is used to sense a pressure signal and convert the pressure signal into an electrical signal. In some embodiments, pressure sensor 2802 may be disposed on display screen 290. Pressure sensor 2802 can be of a wide variety, such as a resistive pressure sensor, an inductive pressure sensor, a capacitive pressure sensor, and the like.
The gyro sensor 2803 may be used to determine the motion posture of the mobile terminal 200. In some embodiments, the angular velocities of the mobile terminal 200 about three axes (i.e., the x, y, and z axes) may be determined by the gyro sensor 2803. The gyro sensor 2803 may be used for anti-shake photography. Illustratively, when the shutter is pressed, the gyro sensor 2803 detects the shake angle of the mobile terminal 200, calculates the compensation distance for the lens module according to the shake angle, and lets the lens counteract the shake of the mobile terminal 200 through reverse motion, thereby achieving anti-shake. The gyro sensor 2803 may also be used in navigation and somatosensory gaming scenarios.
The air pressure sensor 2804 is used to measure air pressure. In some embodiments, mobile terminal 200 may calculate altitude, aid in positioning and navigation, from barometric pressure values measured by barometric pressure sensor 2804.
In addition, other functional sensors, such as a magnetic sensor, an acceleration sensor, a distance sensor, a proximity light sensor, a fingerprint sensor, a temperature sensor, a touch sensor, an ambient light sensor, a bone conduction sensor, etc., may be disposed in the sensor module 280 according to actual needs.
The keys 294 include a power key, volume keys, and the like. The keys 294 may be mechanical keys or touch keys. The mobile terminal 200 may receive key input and generate key signal input related to user settings and function control of the mobile terminal 200.
The motor 293 may generate a vibration prompt, such as a vibration prompt for incoming call, alarm clock, receiving information, etc., and may also be used for touch vibration feedback, such as touch operations applied to different applications (e.g., photographing, game, audio playing, etc.), or touch operations applied to different areas of the display screen 290, which may correspond to different vibration feedback effects. The touch vibration feedback effect may support customization.
Indicator 292 may be an indicator light that may be used to indicate a state of charge, a change in charge, or may be used to indicate a message, missed call, notification, etc.
The SIM card interface 295 is used to connect a SIM card. The SIM card can be attached to and detached from the mobile terminal 200 by being inserted into the SIM card interface 295 or being pulled out of the SIM card interface 295. The mobile terminal 200 may support 1 or N SIM card interfaces, where N is a positive integer greater than 1. The SIM card interface 295 may support a Nano SIM card, a Micro SIM card, a SIM card, etc. Multiple cards can be inserted into the same SIM card interface 295 at the same time. The types of the plurality of cards may be the same or different. The SIM card interface 295 may also be compatible with different types of SIM cards. The SIM card interface 295 may also be compatible with external memory cards. The mobile terminal 200 interacts with the network through the SIM card to implement functions such as communication and data communication. In some embodiments, the mobile terminal 200 employs eSIM, namely: an embedded SIM card. The eSIM card may be embedded in the mobile terminal 200 and may not be separated from the mobile terminal 200.
The following specifically describes an image processing method and an image processing apparatus according to exemplary embodiments of the present disclosure.
Fig. 3 shows a flow of an image processing method in the present exemplary embodiment, including the following steps S310 to S330:
step S310, a sky area in the first image is identified based on the segmentation model, and a mask image corresponding to the sky area is obtained.
The first image is a to-be-processed image containing sky, and may be an original image captured by a camera when the user takes a photo. The segmentation model is a machine learning model trained in advance that performs feature processing on an image and identifies the sky region in it. For example, object detection neural networks such as YOLO (a real-time object detection framework with multiple versions, v1, v2, v3, etc., any of which may be used), SSD (Single Shot multibox Detector), and R-CNN (Region-based Convolutional Neural Network, or improved versions such as Fast R-CNN) can be used in the present disclosure to detect the sky region in the first image. After the position of the sky region in the first image is obtained, the mask image corresponding to the sky region may be obtained by setting the pixels inside the sky region to 1 (white) and the pixels outside it to 0 (black).
In an alternative embodiment, referring to fig. 4, step S310 may include the following steps S401 to S403:
step S401, carrying out normalization processing on the pixel value of the first image to obtain a normalized image;
step S402, processing the normalized image based on a pre-trained full convolution neural network to obtain a response spectrum of the normalized image to the sky;
step S403, performing binarization processing on the response spectrum to obtain a mask image corresponding to the sky region in the first image.
A Fully Convolutional Network (FCN) is an image processing network for semantic segmentation: it extracts local features by convolving and downsampling the image, then restores the original image size by deconvolution and upsampling to achieve pixel-level classification. Fully convolutional networks include various improved variants, such as U-Net (a segmentation model). The following takes U-Net as an example to explain the training process of the network:
according to actual requirements, a single-channel input or three-channel input Unet is set, wherein the single channel is used for inputting a gray image, and the three channels are used for inputting an RGB color image. Taking three channels as an example, a sample image is obtained, manual cutout can be performed through image processing software, and a sky area in the sample image is marked to be used as a label corresponding to the sample image. The RGB pixel values of the sample image are respectively processed with 255 for normalization processing, then three channels of Unet are respectively input, and parameters of Unet are adjusted for training by calculating errors between output data and labels. And when the accuracy of the Unet in the test set reaches a certain standard, finishing training to obtain the available Unet.
In practical application, the pixel values of the first image can be normalized according to the requirements of the input channels of the fully convolutional network to obtain a normalized image; the normalized image is then fed to the network, which outputs its response spectrum to the sky. In the response spectrum, the value at each pixel position represents the probability that the pixel belongs to the sky region. The response spectrum is then binarized: each pixel is classified as 0 or 1 using a manually set or adaptively computed threshold, yielding a binarized image, i.e., the mask image corresponding to the sky region in the first image.
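As a concrete sketch of this inference path (the function name and the fixed 0.5 threshold are illustrative assumptions; the threshold could equally be computed adaptively, as noted above):

```python
import numpy as np
import torch

def segment_sky(img_rgb: np.ndarray, model: torch.nn.Module, thresh: float = 0.5) -> np.ndarray:
    """Return a binary sky mask (1 = sky, 0 = non-sky) for an H x W x 3 RGB image."""
    x = torch.from_numpy(img_rgb).permute(2, 0, 1).float() / 255.0  # normalized image
    with torch.no_grad():
        response = model(x.unsqueeze(0))[0, 0]   # response spectrum: per-pixel sky probability
    return (response.numpy() > thresh).astype(np.uint8)  # binarization into the mask image
```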
In step S320, a foreground region image is segmented from the first image and a background region image is segmented from the second image by the mask image.
The second image is the image used to replace the sky background of the first image, and may be a pre-configured template image whose main content is sky material. For example, template images containing sky may be downloaded from the network; when the user wants to change the background of an image, a selection interface of template images is displayed so that the user can select one as the second image.
After the mask image is obtained, the portion other than the sky region is extracted from the first image using the mask image, yielding the foreground region image; conversely, the portion corresponding to the sky region is extracted from the second image, yielding the background region image.
In an alternative embodiment, step S320 may include:
multiplying the reverse mask image of the mask image with the first image to segment a foreground region image from the first image;
the mask image is multiplied by the second image to segment the background area image from the second image.
Let the first image be IMG1, the second image be IMG2, the mask image be Mask, and the reverse mask image be Max - Mask, where Max is the maximum pixel value, typically 1 or 255. As shown in the following equations (1) and (2):
IMG_F = IMG1 × (Max - Mask) / Max; (1)
IMG_B = IMG2 × Mask / Max; (2)
where IMG_F is the foreground region image and IMG_B is the background region image. In the mask image, the sky region is white and the non-sky region is black; in the reverse mask image, the sky region is black and the non-sky region is white. The reverse mask image can be obtained by inverting the mask image. Multiplying the reverse mask image with the first image is equivalent to keeping the partial image other than the sky, i.e., the foreground region image; multiplying the mask image with the second image keeps the partial image corresponding to the sky region, i.e., the background region image.
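A NumPy sketch of equations (1) and (2), assuming Max = 1 so the mask holds float values in [0, 1], might look as follows; the function name and the assumption that both images and the mask share the same spatial size are illustrative.

```python
import numpy as np

def split_regions(img1: np.ndarray, img2: np.ndarray, mask: np.ndarray):
    """Equations (1) and (2) with Max = 1; images are float H x W x 3, mask is float H x W."""
    inv_mask = 1.0 - mask                    # reverse mask image: Max - Mask
    img_f = img1 * inv_mask[..., None]       # (1) foreground: everything except the sky
    img_b = img2 * mask[..., None]           # (2) background: sky pixels taken from img2
    return img_f, img_b
```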
In step S330, the foreground region image and the background region image are stitched to obtain a target image.
The sky portion of the target image is thus the corresponding portion of the second image rather than of the first image, realizing the "sky replacement" effect.
In an alternative embodiment, the foreground area image and the background area image may be added to obtain the target image. As shown in the following equation (3):
IMG_T = IMG_F + IMG_B; (3)
where IMG_T is the target image. Since the pixel values of the black parts are 0, after the addition the foreground region image and the background region image complement each other, giving the complete target image.
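Continuing the sketch above, equation (3) becomes a single array addition; the final clip-and-cast to 8-bit is an illustrative display detail, not part of the disclosure.

```python
img_f, img_b = split_regions(img1, img2, mask)     # equations (1) and (2)
img_t = img_f + img_b                              # equation (3): zero-valued parts complement
img_t = np.clip(img_t, 0, 255).astype(np.uint8)    # back to a displayable 8-bit image
```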
In an optional implementation, after the target image is obtained, edge smoothing may be applied to it. The aim is to create a gradual color transition at the stitched edge between the foreground and background regions, mitigating jagged segmentation artifacts and reducing the visual abruptness of the seam. Generally, the position coordinates of the edge can be determined while the foreground region image and the background region image are stitched; then, in the target image, the edge is expanded to a certain extent, for example by taking each edge pixel as a circle center and dilating it by a preset radius (related to the size of the target image, e.g., 5 pixels), forming the edge region to be smoothed. Any smoothing method may then be used, for example Gaussian smoothing: based on a two-dimensional normal distribution over the target image, a normal-distribution-based smoothing kernel is constructed around the center of the edge region to be smoothed; a 3x3, 5x5, or higher-order weight matrix is computed and convolved with the target image; and the number of iterations, i.e., convolutions, is set as needed, finally yielding a target image with smooth edge transitions.
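An OpenCV sketch of this edge smoothing is given below; the 5-pixel dilation radius and 5x5 Gaussian kernel mirror the examples in the text, while the function name and the single-pass (non-iterated) blur are illustrative simplifications.

```python
import cv2
import numpy as np

def smooth_seam(target: np.ndarray, mask: np.ndarray, radius: int = 5) -> np.ndarray:
    """Gaussian-smooth only a band around the boundary of the uint8 0/1 sky mask."""
    # Boundary pixels of the mask (morphological gradient = dilation - erosion).
    edges = cv2.morphologyEx(mask, cv2.MORPH_GRADIENT, np.ones((3, 3), np.uint8))
    # Expand each boundary pixel into a circle of the preset radius.
    kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (2 * radius + 1, 2 * radius + 1))
    band = cv2.dilate(edges, kernel).astype(bool)   # edge region to be smoothed
    blurred = cv2.GaussianBlur(target, (5, 5), 0)   # 5x5 normal-distribution weight matrix
    out = target.copy()
    out[band] = blurred[band]                       # keep the blur only inside the band
    return out
```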
Since the first image and the second image may have a large difference in color tone, after the foreground region image and the background region image are stitched together, two color tones with large contrast may exist in the obtained target image. Color migration can be performed to blend the hues of the first and second images to make the target image visually more realistic and natural. Referring to fig. 5, color migration may be achieved by the following steps S501 and S502.
Step S501, obtaining the mean value and the variance of the second image in Lab color space;
step S502, adjusting the target image on a Lab color space according to the mean value and the variance so as to perform color migration.
The Lab color space is a color model that fits human visual perception. L denotes luminance, while a and b are two color channels: a runs from dark green (low values) through gray (mid values) to bright pinkish red (high values), and b runs from bright blue (low values) through gray (mid values) to yellow (high values). In color migration, it is often desirable to change one color attribute without affecting the others. Since the three channels of the RGB color space are highly correlated while the channels of the Lab color space have low correlation, color migration is performed in the Lab color space.
First, the second image and the target image are converted from the RGB color space to the Lab color space; then the Lab channel values of each pixel in the second image are computed, and the mean and variance (or standard deviation; the two indicators carry essentially the same information and this disclosure does not specially distinguish them) of each channel are calculated. The pixels of the target image are then color-adjusted according to this mean and variance: the Lab channel values of the target image are shifted as a whole according to the difference between the Lab channel means of the target image and the second image, and their distribution is rescaled according to the Lab channel variance of the second image, thereby blending the color characteristics of the second image into the target image.
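This is essentially a Reinhard-style color transfer. A compact OpenCV/NumPy sketch follows; it relies on OpenCV scaling all three Lab channels to 0-255 for 8-bit images, and the function name is illustrative.

```python
import cv2
import numpy as np

def transfer_color(target_bgr: np.ndarray, ref_bgr: np.ndarray) -> np.ndarray:
    """Match the Lab-channel mean and variance of target_bgr to those of ref_bgr."""
    t = cv2.cvtColor(target_bgr, cv2.COLOR_BGR2LAB).astype(np.float32)
    r = cv2.cvtColor(ref_bgr, cv2.COLOR_BGR2LAB).astype(np.float32)
    for c in range(3):  # L, a, b channels, adjusted independently (low inter-channel correlation)
        t_mean, t_std = t[..., c].mean(), t[..., c].std()
        r_mean, r_std = r[..., c].mean(), r[..., c].std()
        # Shift by the mean difference and rescale the spread to the reference variance.
        t[..., c] = (t[..., c] - t_mean) * (r_std / max(t_std, 1e-6)) + r_mean
    return cv2.cvtColor(np.clip(t, 0, 255).astype(np.uint8), cv2.COLOR_LAB2BGR)
```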
Edge smoothing and color migration are two modes of post-processing of the target image, and both aim to eliminate defects possibly introduced during image processing and to improve the quality and visual realism of the image. Other post-processing methods can therefore also be adopted, such as filtering the whole target image, optimizing brightness and contrast, and eliminating image distortion.
Fig. 6 shows a schematic flow of image processing in the present exemplary embodiment. As shown in fig. 6, after the first image is acquired, the sky region in the first image is segmented by the segmentation model to obtain the mask image; the mask image is inverted, i.e., each pixel value is subtracted from 1, to obtain the reverse mask image; the first image is multiplied by the reverse mask image to extract the foreground region image; the mask image is multiplied by the second image to extract the background region image; the foreground region image and the background region image are then stitched to obtain the initial target image (target image 1); finally, post-processing such as edge smoothing and color migration is applied to obtain the final target image (target image 2).
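Chaining the sketches above reproduces this flow end to end; the file paths, the template resize, and the reuse of the earlier hypothetical `model` are all illustrative assumptions.

```python
import cv2
import numpy as np

img1 = cv2.imread("photo.jpg")                     # first image (hypothetical path)
img2 = cv2.imread("sky_template.jpg")              # second image (hypothetical path)
img2 = cv2.resize(img2, (img1.shape[1], img1.shape[0]))   # align the template to the first image

mask = segment_sky(cv2.cvtColor(img1, cv2.COLOR_BGR2RGB), model)  # segmentation + binarization
img_f, img_b = split_regions(img1.astype(np.float32),
                             img2.astype(np.float32),
                             mask.astype(np.float32))
target1 = np.clip(img_f + img_b, 0, 255).astype(np.uint8)   # initial target image (target image 1)
target2 = transfer_color(smooth_seam(target1, mask), img2)  # post-processed target (target image 2)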
In summary, with the image processing method of the present exemplary embodiment, on the one hand, the sky region can be accurately segmented from the image based on the segmentation model's recognition of the sky region and the mask-image processing, without manual cutout by the user; the degree of intelligence is high, the method is convenient to use, and the user experience is good. On the other hand, the scheme realizes "sky replacement" processing of the image: the sky region in the first image is replaced with the corresponding part of the second image, which is highly engaging and can meet users' diversified demands.
Fig. 7 shows an image processing apparatus 700 of the present exemplary embodiment, which may include the following modules:
a sky identification module 710, configured to identify a sky region in the first image, and obtain a mask image corresponding to the sky region;
an image segmentation module 720, configured to segment a foreground region image from the first image and segment a background region image from the second image through the mask image;
and the image stitching module 730 is configured to stitch the foreground region image and the background region image to obtain a target image.
In an alternative embodiment, the sky identification module 710 is configured to obtain the mask image by performing the following steps:
carrying out normalization processing on the pixel value of the first image to obtain a normalized image;
processing the normalized image based on a pre-trained full convolution neural network to obtain a response spectrum of the normalized image to the sky;
and carrying out binarization processing on the response spectrum to obtain a mask image corresponding to the sky area in the first image.
In an alternative embodiment, the image segmentation module 720 is configured to multiply the inverse mask image of the mask image with the first image to segment the foreground region image from the first image, and multiply the mask image with the second image to segment the background region image from the second image.
In an optional implementation, the image stitching module 730 is configured to add the foreground region image and the background region image to obtain the target image.
In an alternative embodiment, the image processing apparatus 700 further comprises: and the post-processing module is used for performing edge smoothing processing on the target image after the target image is obtained.
In an alternative embodiment, the image processing apparatus 700 further comprises: and the post-processing module is used for adjusting the target image on the Lab color space according to the mean value and the variance of the second image on the Lab color space after the target image is obtained so as to perform color migration.
In an alternative embodiment, the first image may be an original image captured by a camera, and the second image may be a pre-configured template image.
The specific details of each module in the above apparatus have been described in detail in the method section, and details that are not disclosed may refer to the method section, and thus are not described again.
As will be appreciated by one skilled in the art, aspects of the present disclosure may be embodied as a system, method or program product. Accordingly, various aspects of the present disclosure may be embodied in the form of: an entirely hardware embodiment, an entirely software embodiment (including firmware, microcode, etc.) or an embodiment combining hardware and software aspects that may all generally be referred to herein as a "circuit," module "or" system.
Exemplary embodiments of the present disclosure also provide a computer-readable storage medium having stored thereon a program product capable of implementing the above-described method of the present specification. In some possible embodiments, various aspects of the disclosure may also be implemented in the form of a program product including program code for causing a terminal device to perform the steps according to various exemplary embodiments of the disclosure described in the "exemplary methods" section above of this specification, when the program product is run on the terminal device, for example, any one or more of the steps in fig. 3, fig. 4 or fig. 5 may be performed.
Referring to fig. 8, a program product 800 for implementing the above method according to an exemplary embodiment of the present disclosure is described, which may employ a portable compact disc read only memory (CD-ROM) and include program code, and may be run on a terminal device, such as a personal computer. However, the program product of the present disclosure is not limited thereto, and in this document, a readable storage medium may be any tangible medium that can contain, or store a program for use by or in connection with an instruction execution system, apparatus, or device.
The program product may employ any combination of one or more readable media. The readable medium may be a readable signal medium or a readable storage medium. A readable storage medium may be, for example, but not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any combination of the foregoing. More specific examples (a non-exhaustive list) of the readable storage medium include: an electrical connection having one or more wires, a portable disk, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing.
A computer readable signal medium may include a propagated data signal with readable program code embodied therein, for example, in baseband or as part of a carrier wave. Such a propagated data signal may take many forms, including, but not limited to, electro-magnetic, optical, or any suitable combination thereof. A readable signal medium may also be any readable medium that is not a readable storage medium and that can communicate, propagate, or transport a program for use by or in connection with an instruction execution system, apparatus, or device.
Program code embodied on a readable medium may be transmitted using any appropriate medium, including but not limited to wireless, wireline, optical fiber cable, RF, etc., or any suitable combination of the foregoing.
Program code for carrying out operations of the present disclosure may be written in any combination of one or more programming languages, including an object-oriented programming language such as Java or C++ as well as conventional procedural programming languages, such as the "C" programming language or similar programming languages. The program code may execute entirely on the user's computing device, partly on the user's device, as a stand-alone software package, partly on the user's computing device and partly on a remote computing device, or entirely on the remote computing device or server. In the case of a remote computing device, the remote computing device may be connected to the user's computing device through any kind of network, including a Local Area Network (LAN) or a Wide Area Network (WAN), or may be connected to an external computing device (e.g., through the internet using an internet service provider).
Other embodiments of the disclosure will be apparent to those skilled in the art from consideration of the specification and practice of the disclosure disclosed herein. This disclosure is intended to cover any variations, uses, or adaptations of the disclosure following, in general, the principles of the disclosure and including such departures from the present disclosure as come within known or customary practice within the art to which the disclosure pertains. It is intended that the specification and examples be considered as exemplary only, with a true scope and spirit of the disclosure being indicated by the following claims.
It will be understood that the present disclosure is not limited to the precise arrangements described above and shown in the drawings and that various modifications and changes may be made without departing from the scope thereof. The scope of the present disclosure is to be limited only by the terms of the appended claims.

Claims (10)

1. An image processing method, comprising:
identifying a sky region in the first image based on a segmentation model to obtain a mask image corresponding to the sky region;
segmenting a foreground area image from the first image and segmenting a background area image from the second image through the mask image;
and splicing the foreground area image and the background area image to obtain a target image.
2. The method of claim 1, wherein identifying the sky region in the first image based on the segmentation model to obtain the mask image corresponding to the sky region comprises:
normalizing the pixel values of the first image to obtain a normalized image;
processing the normalized image with a pre-trained fully convolutional neural network to obtain a sky response map for the normalized image; and
binarizing the response map to obtain the mask image corresponding to the sky region in the first image.
3. The method of claim 1, wherein segmenting the foreground region image from the first image and the background region image from the second image through the mask image comprises:
multiplying an inverse mask image of the mask image by the first image to segment the foreground region image from the first image; and
multiplying the mask image by the second image to segment the background region image from the second image.
4. The method of claim 3, wherein stitching the foreground region image and the background region image to obtain the target image comprises:
adding the foreground region image and the background region image to obtain the target image.
5. The method of claim 1, wherein after obtaining the target image, the method further comprises:
performing edge smoothing on the target image.
6. The method of claim 1, wherein after obtaining the target image, the method further comprises:
obtaining the mean and variance of the second image in the Lab color space; and
adjusting the target image in the Lab color space according to the mean and variance to perform color transfer.
7. The method of any one of claims 1 to 6, wherein the first image is an original image captured by a camera and the second image is a pre-configured template image.
8. An image processing apparatus, comprising:
a sky identification module configured to identify a sky region in a first image and obtain a mask image corresponding to the sky region;
an image segmentation module configured to segment a foreground region image from the first image and a background region image from a second image through the mask image; and
an image stitching module configured to stitch the foreground region image and the background region image to obtain a target image.
9. A computer-readable storage medium having a computer program stored thereon, wherein the computer program, when executed by a processor, implements the method of any one of claims 1 to 7.
10. An electronic device, comprising:
a processor; and
a memory for storing executable instructions of the processor;
wherein the processor is configured to perform the method of any one of claims 1 to 7 via execution of the executable instructions.
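
For readers who want a concrete picture of the claimed pipeline, the sketch below walks through claims 1 to 6 in Python with NumPy and OpenCV. It is an illustrative approximation under stated assumptions, not the patented implementation: the segmentation network is stubbed out as a hypothetical callable `model`, the 0.5 binarization threshold is an assumption (the claims do not fix one), the color adjustment uses the common standard-deviation form of mean/variance matching, and claim 5's edge smoothing is omitted for brevity.

import cv2
import numpy as np

def sky_mask(first_image, model, threshold=0.5):
    # Claim 2 sketch: normalize pixel values to [0, 1], run a pre-trained
    # fully convolutional network to get a per-pixel sky response map, then
    # binarize it. `model` is a hypothetical callable standing in for that
    # network and is expected to return values in [0, 1].
    normalized = first_image.astype(np.float32) / 255.0
    response = model(normalized)
    return (response > threshold).astype(np.float32)

def composite(first_image, second_image, mask):
    # Claims 3-4 sketch: the inverse mask cuts the foreground out of the
    # first image, the mask cuts the sky background out of the second image,
    # and adding the two segmented images yields the target image.
    mask3 = mask[..., np.newaxis]  # broadcast the mask over color channels
    foreground = (1.0 - mask3) * first_image.astype(np.float32)
    background = mask3 * second_image.astype(np.float32)
    return np.clip(foreground + background, 0, 255).astype(np.uint8)

def color_transfer(target, reference):
    # Claim 6 sketch: match the target's per-channel statistics in the Lab
    # color space to those of the reference (second) image. Both inputs are
    # assumed to be 8-bit BGR images as loaded by cv2.imread.
    t = cv2.cvtColor(target, cv2.COLOR_BGR2LAB).astype(np.float32)
    r = cv2.cvtColor(reference, cv2.COLOR_BGR2LAB).astype(np.float32)
    t_mean, t_std = t.mean(axis=(0, 1)), t.std(axis=(0, 1)) + 1e-6
    r_mean, r_std = r.mean(axis=(0, 1)), r.std(axis=(0, 1))
    adjusted = np.clip((t - t_mean) / t_std * r_std + r_mean, 0, 255)
    return cv2.cvtColor(adjusted.astype(np.uint8), cv2.COLOR_LAB2BGR)

# Hypothetical usage; `fcn` stands in for the pre-trained segmentation model
# and the file names are placeholders.
first = cv2.imread("first.jpg")    # captured image whose sky is replaced
second = cv2.imread("second.jpg")  # template image supplying the new sky
target = composite(first, second, sky_mask(first, fcn))
target = color_transfer(target, second)

Note the mask convention follows claim 3: the mask marks sky pixels, so its inverse (1 - mask) keeps the non-sky foreground of the first image.
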
CN201911373483.9A 2019-12-27 Image processing method, image processing device, storage medium and electronic apparatus Active CN111179282B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201911373483.9A CN111179282B (en) 2019-12-27 Image processing method, image processing device, storage medium and electronic apparatus

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201911373483.9A CN111179282B (en) 2019-12-27 Image processing method, image processing device, storage medium and electronic apparatus

Publications (2)

Publication Number Publication Date
CN111179282A (en) 2020-05-19
CN111179282B (en) 2024-04-23

Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103745456A (en) * 2013-12-23 2014-04-23 深圳先进技术研究院 Image segmentation method and apparatus
US20150348249A1 (en) * 2014-05-27 2015-12-03 Fuji Xerox Co., Ltd. Image processing apparatus, and non-transitory computer readable medium
US20170236287A1 (en) * 2016-02-11 2017-08-17 Adobe Systems Incorporated Object Segmentation, Including Sky Segmentation
CN108171677A (en) * 2017-12-07 2018-06-15 腾讯科技(深圳)有限公司 A kind of image processing method and relevant device
CN109961446A (en) * 2019-03-27 2019-07-02 深圳视见医疗科技有限公司 CT/MR three-dimensional image segmentation processing method, device, equipment and medium
CN110136161A (en) * 2019-05-31 2019-08-16 苏州精观医疗科技有限公司 Image characteristics extraction analysis method, system and device
CN110288614A (en) * 2019-06-24 2019-09-27 睿魔智能科技(杭州)有限公司 Image processing method, device, equipment and storage medium

Cited By (17)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111598903B (en) * 2020-05-21 2023-09-29 Oppo广东移动通信有限公司 Portrait segmentation method, device, storage medium and electronic equipment
CN111598903A (en) * 2020-05-21 2020-08-28 Oppo广东移动通信有限公司 Portrait segmentation method, portrait segmentation device, storage medium and electronic equipment
CN111709873A (en) * 2020-05-27 2020-09-25 北京百度网讯科技有限公司 Training method and device of image conversion model generator
CN111968134A (en) * 2020-08-11 2020-11-20 影石创新科技股份有限公司 Object segmentation method and device, computer readable storage medium and computer equipment
CN111968134B (en) * 2020-08-11 2023-11-28 影石创新科技股份有限公司 Target segmentation method, device, computer readable storage medium and computer equipment
CN112241941A (en) * 2020-10-20 2021-01-19 北京字跳网络技术有限公司 Method, device, equipment and computer readable medium for acquiring image
CN112241941B (en) * 2020-10-20 2024-03-22 北京字跳网络技术有限公司 Method, apparatus, device and computer readable medium for acquiring image
CN112561847A (en) * 2020-12-24 2021-03-26 Oppo广东移动通信有限公司 Image processing method and device, computer readable medium and electronic device
CN112561847B (en) * 2020-12-24 2024-04-12 Oppo广东移动通信有限公司 Image processing method and device, computer readable medium and electronic equipment
CN113099127A (en) * 2021-02-24 2021-07-09 影石创新科技股份有限公司 Video processing method, filter, device and medium for making stealth special effect
CN113099127B (en) * 2021-02-24 2024-02-02 影石创新科技股份有限公司 Video processing method, device, equipment and medium for making stealth special effects
CN113096069A (en) * 2021-03-08 2021-07-09 北京达佳互联信息技术有限公司 Image processing method, image processing device, electronic equipment and storage medium
CN113034648A (en) * 2021-04-30 2021-06-25 北京字节跳动网络技术有限公司 Image processing method, device, equipment and storage medium
CN113256499A (en) * 2021-07-01 2021-08-13 北京世纪好未来教育科技有限公司 Image splicing method, device and system
WO2023071810A1 (en) * 2021-10-29 2023-05-04 上海商汤智能科技有限公司 Image processing
CN114494004A (en) * 2022-04-15 2022-05-13 北京美摄网络科技有限公司 Sky image processing method and device
CN114494004B (en) * 2022-04-15 2022-08-05 北京美摄网络科技有限公司 Sky image processing method and device

Similar Documents

Publication Publication Date Title
US11759143B2 (en) Skin detection method and electronic device
CN113810601B (en) Terminal image processing method and device and terminal equipment
CN110706310B (en) Image-text fusion method and device and electronic equipment
CN113810600B (en) Terminal image processing method and device and terminal equipment
CN111552451B (en) Display control method and device, computer readable medium and terminal equipment
CN113810764B (en) Video editing method and video editing device
CN111741303B (en) Deep video processing method and device, storage medium and electronic equipment
CN112954251B (en) Video processing method, video processing device, storage medium and electronic equipment
CN113744257A (en) Image fusion method and device, terminal equipment and storage medium
WO2022148319A1 (en) Video switching method and apparatus, storage medium, and device
CN110138999B (en) Certificate scanning method and device for mobile terminal
WO2022022319A1 (en) Image processing method, electronic device, image processing system and chip system
CN112188094B (en) Image processing method and device, computer readable medium and terminal equipment
CN106982327A (en) Image processing method and device
CN113436576A (en) OLED display screen dimming method and device applied to two-dimensional code scanning
CN115546858B (en) Face image processing method and electronic equipment
CN111626931B (en) Image processing method, image processing device, storage medium and electronic apparatus
CN115631250B (en) Image processing method and electronic equipment
CN113096022B (en) Image blurring processing method and device, storage medium and electronic device
CN111179282B (en) Image processing method, image processing device, storage medium and electronic apparatus
CN111179282A (en) Image processing method, image processing apparatus, storage medium, and electronic device
CN111294905B (en) Image processing method, image processing device, storage medium and electronic apparatus
CN111738107A (en) Video generation method, video generation device, storage medium, and electronic apparatus
CN115801987A (en) Video frame insertion method and device
CN117519555A (en) Image processing method, electronic equipment and system

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant