CN111179282B - Image processing method, image processing device, storage medium and electronic apparatus - Google Patents

Image processing method, image processing device, storage medium and electronic apparatus

Info

Publication number
CN111179282B
CN111179282B (application CN201911373483.9A)
Authority
CN
China
Prior art keywords
image
region
sky
mask
target
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201911373483.9A
Other languages
Chinese (zh)
Other versions
CN111179282A (en)
Inventor
颜海强
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Guangdong Oppo Mobile Telecommunications Corp Ltd
Original Assignee
Guangdong Oppo Mobile Telecommunications Corp Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Guangdong Oppo Mobile Telecommunications Corp Ltd
Priority to CN201911373483.9A
Publication of CN111179282A
Application granted
Publication of CN111179282B
Legal status: Active

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/10Segmentation; Edge detection
    • G06T7/11Region-based segmentation
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T3/00Geometric image transformations in the plane of the image
    • G06T3/40Scaling of whole images or parts thereof, e.g. expanding or contracting
    • G06T3/4038Image mosaicing, e.g. composing plane images from plane sub-images
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/10Segmentation; Edge detection
    • G06T7/194Segmentation; Edge detection involving foreground-background segmentation
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/10Image acquisition modality
    • G06T2207/10004Still image; Photographic image
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/20Special algorithmic details
    • G06T2207/20081Training; Learning
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/20Special algorithmic details
    • G06T2207/20084Artificial neural networks [ANN]

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Image Processing (AREA)
  • Image Analysis (AREA)

Abstract

The disclosure provides an image processing method, an image processing apparatus, a storage medium and an electronic device, and relates to the technical field of image processing. The image processing method comprises the following steps: identifying a sky region in a first image based on a segmentation model to obtain a mask image corresponding to the sky region; segmenting a foreground region image from the first image and a background region image from a second image by using the mask image; and stitching the foreground region image and the background region image to obtain a target image. The method and apparatus realize sky-replacement processing of an image, can automatically separate the sky region in the image, require no manual matting by the user, and are convenient to use.

Description

Image processing method, image processing device, storage medium and electronic apparatus
Technical Field
The present disclosure relates to the field of image processing technologies, and in particular, to an image processing method, an image processing apparatus, a computer readable storage medium, and an electronic device.
Background
With the development of image processing technology, functions for replacing the background of an image have appeared in some image processing and beautification software; for example, the background can be replaced with a solid-color backdrop, a landscape or other special effects to meet users' diverse needs.
In the related art, replacing the background of an image generally requires the user to manually cut out the foreground portion of the image and then replace the remaining background portion. Some software can automatically identify people in an image and replace the background around them, but it cannot be applied to non-portrait images, and foreground objects other than people may be mistakenly treated as background. Therefore, to replace the background of a non-portrait image, for example to replace the sky background in a landscape image, the related art still requires the user to matte the foreground manually, which is inconvenient to use.
It should be noted that the information disclosed in the above background section is only for enhancing understanding of the background of the present disclosure and thus may include information that does not constitute prior art known to those of ordinary skill in the art.
Disclosure of Invention
The present disclosure provides an image processing method, an image processing apparatus, a computer-readable storage medium and an electronic device, so as to solve, at least to some extent, the problem in the related art that a user needs to manually matte the foreground.
Other features and advantages of the present disclosure will be apparent from the following detailed description, or may be learned in part by the practice of the disclosure.
According to a first aspect of the present disclosure, there is provided an image processing method including: identifying a sky region in a first image based on a segmentation model to obtain a mask image (Mask) corresponding to the sky region; segmenting a foreground region image from the first image and a background region image from a second image by using the mask image; and stitching the foreground region image and the background region image to obtain a target image.
According to a second aspect of the present disclosure, there is provided an image processing apparatus including: the sky identification module is used for identifying a sky area in the first image and obtaining a mask image corresponding to the sky area; an image segmentation module, configured to segment a foreground region image from the first image and segment a background region image from the second image through the mask image; and the image stitching module is used for stitching the foreground region image and the background region image to obtain a target image.
According to a third aspect of the present disclosure, there is provided a computer-readable storage medium having stored thereon a computer program which, when executed by a processor, implements the above-described image processing method.
According to a fourth aspect of the present disclosure, there is provided an electronic device comprising: a processor; and a memory for storing executable instructions of the processor; wherein the processor is configured to perform the above-described image processing method via execution of the executable instructions.
The technical scheme of the present disclosure has the following beneficial effects:
According to the image processing method, the image processing apparatus, the storage medium and the electronic device, a sky region in a first image is first identified by a segmentation model to obtain a mask image corresponding to the sky region; then, a foreground region image is segmented from the first image and a background region image is segmented from the second image using the mask image; finally, the foreground region image and the background region image are stitched to obtain a target image. On the one hand, based on the recognition of the sky region by the segmentation model and the processing with the mask image, the sky region can be accurately segmented from the image without manual matting by the user, offering a high degree of intelligence, convenient use and a good user experience. On the other hand, the method realizes sky-replacement processing of images, replacing the sky region in the first image with the corresponding part of the second image, which is highly engaging and can meet users' diverse needs.
It is to be understood that both the foregoing general description and the following detailed description are exemplary and explanatory only and are not restrictive of the disclosure.
Drawings
The accompanying drawings, which are incorporated in and constitute a part of this specification, illustrate embodiments consistent with the disclosure and together with the description, serve to explain the principles of the disclosure. It will be apparent to those of ordinary skill in the art that the drawings in the following description are merely some embodiments of the present disclosure and that other drawings may be derived from these drawings without undue effort.
Fig. 1 shows a schematic diagram of a system architecture of the present exemplary embodiment;
fig. 2 shows a schematic diagram of an electronic device of the present exemplary embodiment;
fig. 3 shows a flowchart of an image processing method of the present exemplary embodiment;
fig. 4 shows a sub-flowchart of an image processing method of the present exemplary embodiment;
Fig. 5 shows a sub-flowchart of another image processing method of the present exemplary embodiment;
fig. 6 shows a schematic flow of the image processing of the present exemplary embodiment;
Fig. 7 shows a block diagram of the structure of an image processing apparatus of the present exemplary embodiment;
fig. 8 shows a schematic diagram of a computer-readable storage medium of the present exemplary embodiment.
Detailed Description
Example embodiments will now be described more fully with reference to the accompanying drawings. However, the exemplary embodiments may be embodied in many forms and should not be construed as limited to the examples set forth herein; rather, these embodiments are provided so that this disclosure will be thorough and complete, and will fully convey the concept of the example embodiments to those skilled in the art. The described features, structures, or characteristics may be combined in any suitable manner in one or more embodiments. In the following description, numerous specific details are provided to give a thorough understanding of embodiments of the present disclosure. One skilled in the relevant art will recognize, however, that the aspects of the disclosure may be practiced without one or more of the specific details, or with other methods, components, devices, steps, etc. In other instances, well-known technical solutions have not been shown or described in detail to avoid obscuring aspects of the present disclosure.
Furthermore, the drawings are merely schematic illustrations of the present disclosure and are not necessarily drawn to scale. The same reference numerals in the drawings denote the same or similar parts, and thus a repetitive description thereof will be omitted. Some of the block diagrams shown in the figures are functional entities and do not necessarily correspond to physically or logically separate entities. These functional entities may be implemented in software or in one or more hardware modules or integrated circuits or in different networks and/or processor devices and/or microcontroller devices.
Fig. 1 shows a schematic diagram of a system architecture of an exemplary embodiment of the present disclosure. As shown in fig. 1, the system architecture 100 may include: terminal 110, network 120, and server 130. The terminal 110 may be various electronic devices having an image photographing function, including, but not limited to, a mobile phone, a tablet computer, a digital camera, a personal computer, etc. The medium used by network 120 to provide a communication link between terminal 110 and server 130 may include various connection types, such as wired, wireless communication links, or fiber optic cables. It should be understood that the number of terminals, networks and servers in fig. 1 is merely illustrative. There may be any number of terminals, networks, and servers, as desired for implementation. For example, the server 130 may be a server cluster formed by a plurality of servers.
The image processing method provided by the embodiment of the present disclosure may be performed by the terminal 110, for example, after the terminal 110 captures an image, the image is processed; or may be executed by the server 130, for example, after the terminal 110 captures an image, the image may be uploaded to the server 130, so that the server 130 processes the image. The present disclosure is not limited in this regard.
Exemplary embodiments of the present disclosure provide an electronic device for implementing an image processing method, which may be the terminal 110 or the server 130 in fig. 1. The electronic device comprises at least a processor and a memory for storing executable instructions of the processor, the processor being configured to perform the image processing method via execution of the executable instructions.
The electronic device may be implemented in various forms, and may include mobile devices such as a mobile phone, a tablet computer, a notebook computer, a personal digital assistant (Personal Digital Assistant, PDA), a navigation device, a wearable device and a drone, as well as fixed devices such as a desktop computer and a smart television. The configuration of the electronic device will be exemplarily described below using the mobile terminal 200 of fig. 2 as an example. It will be appreciated by those skilled in the art that, apart from components specifically intended for mobile use, the configuration of fig. 2 can also be applied to fixed-type devices. In other embodiments, the mobile terminal 200 may include more or fewer components than illustrated, certain components may be combined or split, or the components may be arranged differently. The illustrated components may be implemented in hardware, software, or a combination of software and hardware. The interfacing relationship between the components is shown schematically only and does not constitute a structural limitation of the mobile terminal 200. In other embodiments, the mobile terminal 200 may also employ interfaces different from those of fig. 2, or a combination of multiple interfaces.
As shown in fig. 2, the mobile terminal 200 may specifically include: processor 210, internal memory 221, external memory interface 222, universal serial bus (Universal Serial Bus, USB) interface 230, charge management module 240, power management module 241, battery 242, antenna 1, antenna 2, mobile communication module 250, wireless communication module 260, audio module 270, speaker 271, receiver 272, microphone 273, headset interface 274, sensor module 280, display screen 290, camera module 291, indicator 292, motor 293, keys 294, and subscriber identity module (Subscriber Identification Module, SIM) card interface 295, among others. Wherein the sensor module 280 may include a depth sensor 2801, a pressure sensor 2802, a gyroscope sensor 2803, a barometric pressure sensor 2804, and the like.
Processor 210 may include one or more processing units. For example, the processor 210 may include an application processor (Application Processor, AP), a modem processor, a graphics processing unit (Graphics Processing Unit, GPU), an image signal processor (Image Signal Processor, ISP), a controller, a video codec, a digital signal processor (Digital Signal Processor, DSP), a baseband processor and/or a neural-network processing unit (Neural-Network Processing Unit, NPU), and the like. The different processing units may be separate devices or may be integrated in one or more processors.
The controller can generate operation control signals according to instruction operation codes and timing signals, so as to control instruction fetching and instruction execution.
A memory may also be provided in the processor 210 for storing instructions and data. The memory may store instructions for implementing six modular functions: detection instructions, connection instructions, information management instructions, analysis instructions, data transfer instructions, and notification instructions, whose execution is controlled by the processor 210. In some implementations, the memory in the processor 210 is a cache memory. The memory may hold instructions or data that the processor 210 has just used or repeatedly uses. If the processor 210 needs to use the instruction or data again, it can be retrieved directly from this memory, avoiding repeated accesses and reducing the waiting time of the processor 210, thereby improving the efficiency of the system.
In some implementations, the processor 210 may include one or more interfaces. The interfaces may include an inter-integrated circuit (Inter-Integrated Circuit, I2C) interface, an inter-integrated circuit sound (Inter-Integrated Circuit Sound, I2S) interface, a pulse code modulation (Pulse Code Modulation, PCM) interface, a universal asynchronous receiver/transmitter (Universal Asynchronous Receiver/Transmitter, UART) interface, a mobile industry processor interface (Mobile Industry Processor Interface, MIPI), a general-purpose input/output (General-Purpose Input/Output, GPIO) interface, a subscriber identity module (Subscriber Identity Module, SIM) interface, and/or a universal serial bus (Universal Serial Bus, USB) interface, among others. Connections with other components of the mobile terminal 200 are made through the different interfaces.
The USB interface 230 is an interface conforming to the USB standard specification, and may specifically be a Mini USB interface, a Micro USB interface, a USB Type-C interface, or the like. The USB interface 230 may be used to connect a charger to charge the mobile terminal 200, to connect a headset to play audio, or to connect the mobile terminal 200 to other electronic devices such as a computer or a peripheral device.
The charge management module 240 is configured to receive a charge input from a charger. The charger can be a wireless charger or a wired charger. In some wired charging embodiments, the charge management module 240 may receive a charging input of a wired charger through the USB interface 230. In some wireless charging embodiments, the charge management module 240 may receive wireless charging input through a wireless charging coil of the mobile terminal 200. The charging management module 240 may also provide power to the electronic device through the power management module 241 while charging the battery 242.
The power management module 241 is used for connecting the battery 242, the charge management module 240 and the processor 210. The power management module 241 receives input from the battery 242 and/or the charge management module 240 and provides power to the processor 210, the internal memory 221, the display 290, the camera module 291, the wireless communication module 260, and the like. The power management module 241 may also be configured to monitor battery capacity, battery cycle times, battery health (leakage, impedance), and other parameters. In other embodiments, the power management module 241 may also be disposed in the processor 210. In other embodiments, the power management module 241 and the charge management module 240 may be disposed in the same device.
The wireless communication function of the mobile terminal 200 may be implemented by the antenna 1, the antenna 2, the mobile communication module 250, the wireless communication module 260, a modem processor, a baseband processor, and the like.
The antennas 1 and 2 are used for transmitting and receiving electromagnetic wave signals. Each antenna in mobile terminal 200 may be configured to cover a single or multiple communication bands. Different antennas may also be multiplexed to improve the utilization of the antennas. For example: the antenna 1 may be multiplexed into a diversity antenna of a wireless local area network. In other embodiments, the antenna may be used in conjunction with a tuning switch.
The mobile communication module 250 may provide a solution including 2G/3G/4G/5G wireless communication applied on the mobile terminal 200. The mobile communication module 250 may include at least one filter, switch, power amplifier, low noise amplifier (Low Noise Amplifier, LNA), or the like. The mobile communication module 250 may receive electromagnetic waves from the antenna 1, perform processes such as filtering and amplifying the received electromagnetic waves, and transmit the electromagnetic waves to the modem processor for demodulation. The mobile communication module 250 can amplify the signal modulated by the modem processor, and convert the signal into electromagnetic waves through the antenna 1 to radiate. In some embodiments, at least some of the functional modules of the mobile communication module 250 may be disposed in the processor 210. In some embodiments, at least some of the functional modules of the mobile communication module 250 may be provided in the same device as at least some of the modules of the processor 210.
The modem processor may include a modulator and a demodulator. The modulator is used for modulating the low-frequency baseband signal to be transmitted into a medium-high frequency signal. The demodulator is used for demodulating the received electromagnetic wave signal into a low-frequency baseband signal. The demodulator then transmits the demodulated low frequency baseband signal to the baseband processor for processing. The low frequency baseband signal is processed by the baseband processor and then transferred to the application processor. The application processor outputs sound signals through an audio device (not limited to the speaker 271, the receiver 272, etc.), or displays images or videos through the display screen 290. In some embodiments, the modem processor may be a stand-alone device. In other embodiments, the modem processor may be provided in the same device as the mobile communication module 250 or other functional module, independent of the processor 210.
The wireless communication module 260 may provide solutions for wireless communication applied on the mobile terminal 200, including wireless local area networks (Wireless Local Area Networks, WLAN) (e.g., a wireless fidelity (Wi-Fi) network), Bluetooth (BT), the global navigation satellite system (Global Navigation Satellite System, GNSS), frequency modulation (Frequency Modulation, FM), near field communication (Near Field Communication, NFC), infrared (IR), and the like. The wireless communication module 260 may be one or more devices integrating at least one communication processing module. The wireless communication module 260 receives electromagnetic waves via the antenna 2, frequency-modulates and filters the electromagnetic wave signals, and transmits the processed signals to the processor 210. The wireless communication module 260 may also receive a signal to be transmitted from the processor 210, frequency-modulate and amplify it, and convert it to electromagnetic waves for radiation via the antenna 2.
In some embodiments, antenna 1 and the mobile communication module 250 of the mobile terminal 200 are coupled, and antenna 2 and the wireless communication module 260 are coupled, so that the mobile terminal 200 may communicate with networks and other devices through wireless communication technologies. The wireless communication technologies may include the global system for mobile communications (Global System for Mobile communications, GSM), general packet radio service (General Packet Radio Service, GPRS), code division multiple access (Code Division Multiple Access, CDMA), wideband code division multiple access (Wideband Code Division Multiple Access, WCDMA), time-division synchronous code division multiple access (TD-Synchronous Code Division Multiple Access, TD-SCDMA), long term evolution (Long Term Evolution, LTE), new radio (New Radio, NR), BT, GNSS, WLAN, NFC, FM and/or IR technologies, among others. The GNSS may include the global positioning system (Global Positioning System, GPS), the global navigation satellite system (GLONASS), the BeiDou navigation satellite system (BeiDou Navigation Satellite System, BDS), the quasi-zenith satellite system (Quasi-Zenith Satellite System, QZSS) and/or satellite-based augmentation systems (Satellite Based Augmentation Systems, SBAS).
The mobile terminal 200 implements display functions through a GPU, a display screen 290, an application processor, and the like. The GPU is a microprocessor for image processing, and is connected to the display screen 290 and the application processor. The GPU is used to perform mathematical and geometric calculations for graphics rendering. Processor 210 may include one or more GPUs that execute program instructions to generate or change display information.
The display screen 290 is used for displaying images, videos, and the like. The display screen 290 includes a display panel. The display panel may employ a liquid crystal display (Liquid Crystal Display, LCD), an organic light-emitting diode (Organic Light-Emitting Diode, OLED), an active-matrix organic light-emitting diode (Active-Matrix Organic Light-Emitting Diode, AMOLED), a flexible light-emitting diode (Flexible Light-Emitting Diode, FLED), a Mini-LED, a Micro-LED, a Micro-OLED, quantum dot light-emitting diodes (Quantum Dot Light-Emitting Diodes, QLED), or the like. In some embodiments, the mobile terminal 200 may include 1 or N display screens 290, N being a positive integer greater than 1.
The mobile terminal 200 may implement a photographing function through an ISP, a camera module 291, a video codec, a GPU, a display screen 290, an application processor, and the like.
The ISP is used to process the data fed back by the camera module 291. For example, when photographing, the shutter is opened, light is transmitted to the camera photosensitive element through the lens, the optical signal is converted into an electric signal, and the camera photosensitive element transmits the electric signal to the ISP for processing and is converted into an image visible to naked eyes. ISP can also optimize the noise, brightness and skin color of the image. The ISP can also optimize parameters such as exposure, color temperature and the like of a shooting scene. In some implementations, an ISP may be provided in the camera module 291.
The camera module 291 is used for capturing still images or video. The object generates an optical image through the lens and projects the optical image onto the photosensitive element. The photosensitive element may be a charge coupled device (Charge Coupled Device, CCD) or a Complementary Metal Oxide Semiconductor (CMOS) phototransistor. The photosensitive element converts the optical signal into an electrical signal, which is then transferred to the ISP to be converted into a digital image signal. The ISP outputs the digital image signal to the DSP for processing. The DSP converts the digital image signal into an image signal in a standard RGB, YUV, or the like format. In some embodiments, the mobile terminal 200 may include 1 or N camera modules 291, where N is a positive integer greater than 1, and if the mobile terminal 200 includes N cameras, one of the N cameras is a master camera.
The digital signal processor is used for processing digital signals, and can process other digital signals in addition to digital image signals. For example, when the mobile terminal 200 selects a frequency bin, the digital signal processor is used to perform a Fourier transform on the frequency bin energy, and so on.
Video codecs are used to compress or decompress digital video. The mobile terminal 200 may support one or more video codecs. In this way, the mobile terminal 200 may play or record video in a variety of encoding formats, such as moving picture experts group (Moving Picture Experts Group, MPEG)-1, MPEG-2, MPEG-3, MPEG-4, etc.
The external memory interface 222 may be used to connect an external memory card, such as a Micro SD card, to enable expansion of the memory capabilities of the mobile terminal 200. The external memory card communicates with the processor 210 via an external memory interface 222 to implement data storage functions. For example, files such as music, video, etc. are stored in an external memory card.
Internal memory 221 may be used to store computer-executable program code, which includes instructions. The internal memory 221 may include a program storage area and a data storage area. The program storage area may store an operating system and application programs required for at least one function (such as a sound playing function or an image playing function), and the like. The data storage area may store data created during use of the mobile terminal 200 (e.g., audio data, a phonebook, etc.), and the like. In addition, the internal memory 221 may include a high-speed random access memory, and may further include a nonvolatile memory such as at least one magnetic disk storage device, a flash memory device, a universal flash storage (Universal Flash Storage, UFS), and the like. The processor 210 performs the various functional applications and data processing of the mobile terminal 200 by executing instructions stored in the internal memory 221 and/or instructions stored in a memory provided in the processor.
The mobile terminal 200 may implement audio functions through an audio module 270, a speaker 271, a receiver 272, a microphone 273, an earphone interface 274, an application processor, and the like. Such as music playing, recording, etc.
The audio module 270 is used to convert digital audio information into an analog audio signal output and also to convert an analog audio input into a digital audio signal. The audio module 270 may also be used to encode and decode audio signals. In some implementations, the audio module 270 may be disposed in the processor 210, or some functional modules of the audio module 270 may be disposed in the processor 210.
A speaker 271, also called "horn", is used to convert the audio electrical signal into a sound signal. The mobile terminal 200 can listen to music through the speaker 271 or listen to hands-free calls.
A receiver 272, also referred to as an "earpiece", is used to convert the audio electrical signal into a sound signal. When the mobile terminal 200 receives a telephone call or a voice message, the voice can be heard by placing the receiver 272 close to the ear.
A microphone 273, also called a "mic", is used to convert sound signals into electrical signals. When making a call or sending voice information, the user can speak close to the microphone 273 to input a sound signal into it. The mobile terminal 200 may be provided with at least one microphone 273. In other embodiments, the mobile terminal 200 may be provided with two microphones 273, which can implement a noise reduction function in addition to collecting sound signals. In other embodiments, the mobile terminal 200 may further be provided with three, four or more microphones 273 to collect sound signals, reduce noise, identify sound sources, implement directional recording functions, etc.
The earphone interface 274 is used to connect a wired earphone. The earphone interface 274 may be a USB interface 230, a 3.5 mm open mobile terminal platform (Open Mobile Terminal Platform, OMTP) standard interface, or a Cellular Telecommunications Industry Association of the USA (CTIA) standard interface.
The depth sensor 2801 is used to acquire depth information of a scene. In some embodiments, a depth sensor may be provided to the camera module 291.
The pressure sensor 2802 is used to sense a pressure signal, and may convert the pressure signal into an electrical signal. In some embodiments, pressure sensor 2802 may be disposed on display 290. The pressure sensor 2802 is of various types, such as a resistive pressure sensor, an inductive pressure sensor, a capacitive pressure sensor, and the like.
The gyro sensor 2803 may be used to determine the motion posture of the mobile terminal 200. In some embodiments, the angular velocity of the mobile terminal 200 about three axes (i.e., the x, y and z axes) may be determined by the gyro sensor 2803. The gyro sensor 2803 can be used for photographing anti-shake. For example, when the shutter is pressed, the gyro sensor 2803 detects the angle at which the mobile terminal 200 shakes, calculates the distance the lens module needs to compensate according to the angle, and lets the lens counteract the shake of the mobile terminal 200 through reverse motion, thereby realizing anti-shake. The gyro sensor 2803 can also be used for navigation and motion-sensing game scenarios.
The air pressure sensor 2804 is used to measure air pressure. In some embodiments, the mobile terminal 200 calculates altitude from barometric pressure values measured by the barometric pressure sensor 2804, aiding in positioning and navigation.
In addition, sensors for other functions, such as magnetic sensors, acceleration sensors, distance sensors, proximity sensors, fingerprint sensors, temperature sensors, touch sensors, ambient light sensors, bone conduction sensors, etc., may be provided in the sensor module 280 according to actual needs.
The keys 294 include a power on key, a volume key, etc. The keys 294 may be mechanical keys. Or may be a touch key. The mobile terminal 200 may receive key inputs, generating key signal inputs related to user settings and function controls of the mobile terminal 200.
The motor 293 may generate vibration cues, such as vibration cues of a call, an alarm clock, a received message, etc., and may also be used for touch vibration feedback, such as touch operations on different applications (e.g., photographing, gaming, audio playing, etc.), or touch operations on different areas of the display screen 290, which may correspond to different vibration feedback effects. The touch vibration feedback effect may support customization.
The indicator 292 may be an indicator light, which may be used to indicate a state of charge, a change in power, a message indicating a missed call, a notification, etc.
The SIM card interface 295 is for interfacing with a SIM card. The SIM card may be inserted into the SIM card interface 295 or removed from the SIM card interface 295 to make contact with or separate from the mobile terminal 200. The mobile terminal 200 may support 1 or N SIM card interfaces, N being a positive integer greater than 1. The SIM card interface 295 may support Nano SIM cards, Micro SIM cards, and the like. The same SIM card interface 295 may be used to insert multiple cards simultaneously. The types of the multiple cards may be the same or different. The SIM card interface 295 may also be compatible with different types of SIM cards. The SIM card interface 295 may also be compatible with external memory cards. The mobile terminal 200 interacts with the network through the SIM card to realize functions such as calls and data communication. In some implementations, the mobile terminal 200 employs an eSIM, i.e., an embedded SIM card. The eSIM card may be embedded in the mobile terminal 200 and cannot be separated from the mobile terminal 200.
An image processing method and an image processing apparatus according to exemplary embodiments of the present disclosure are specifically described below.
Fig. 3 shows a flow of an image processing method in the present exemplary embodiment, including the following steps S310 to S330:
step S310, a sky area in the first image is identified based on the segmentation model, and a mask image corresponding to the sky area is obtained.
The first image is an image to be processed that contains sky, and may be an original image acquired by a camera when the user takes a photograph. The segmentation model is a pre-trained machine learning model that performs feature processing on an image and identifies the sky region in it; for example, YOLO (You Only Look Once, an algorithm framework for real-time object detection with multiple versions such as v1, v2 and v3), SSD (Single Shot MultiBox Detector) or R-CNN (Region-based Convolutional Neural Network, or improved versions such as Fast R-CNN) may be used to detect the sky region in the first image. After the position of the sky region in the first image is obtained, the pixels inside the sky region may be set to 1 (white) and the pixels outside the sky region set to 0 (black), thereby obtaining a mask image corresponding to the sky region.
In an alternative embodiment, referring to fig. 4, step S310 may include the following steps S401 to S403:
Step S401, carrying out normalization processing on pixel values of the first image to obtain a normalized image;
Step S402, processing the normalized image based on a pre-trained full convolution neural network to obtain a response spectrum of the normalized image to sky;
step S403, binarizing the response spectrum to obtain a mask image corresponding to the sky area in the first image.
A fully convolutional network (Fully Convolutional Network, FCN) is an image processing network for semantic segmentation: local features of an image are extracted through convolution and downsampling, and the feature maps are then restored to the original image size through deconvolution and upsampling, realizing pixel-level classification. Fully convolutional networks have many improved variants, such as Unet (a segmentation model). The training process of the network is described below, taking Unet as an example:
According to actual needs, the Unet is configured with single-channel input (for grayscale images) or three-channel input (for RGB color images). Taking three channels as an example, sample images are obtained, and the sky area in each sample image can be manually matted with image processing software to produce the label corresponding to that sample image. The RGB pixel values of a sample image are each normalized by 255 and then fed into the three input channels of the Unet, and the parameters of the Unet are adjusted by computing the error between the output and the label so as to train the network. When the accuracy of the Unet on the test set reaches a certain level, training is completed and a usable Unet is obtained.
In practical application, the pixel values of the first image can be normalized according to the input-channel requirements of the fully convolutional network to obtain a normalized image; the normalized image is then fed into the network, which outputs the response spectrum of the normalized image to the sky. In the response spectrum, the value at each pixel location represents the probability that the pixel belongs to the sky region. The response spectrum is then binarized: a threshold is set manually or computed adaptively, and each pixel is classified as 0 or 1 using this threshold, yielding a binarized image corresponding to the sky region in the first image, namely the mask image.
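For illustration only, the following is a minimal sketch of steps S401 to S403. It assumes a trained fully convolutional model is available as a callable that maps a normalized H×W×3 image to an H×W response spectrum; the function name, the callable interface and the fixed threshold of 0.5 are illustrative assumptions rather than part of the disclosure.

```python
import numpy as np

def compute_sky_mask(first_image: np.ndarray, model, threshold: float = 0.5) -> np.ndarray:
    """Return a binary mask image (1 = sky, 0 = non-sky) for an RGB image."""
    # Step S401: normalize pixel values to [0, 1] to obtain the normalized image
    normalized = first_image.astype(np.float32) / 255.0

    # Step S402: the fully convolutional network outputs the response spectrum,
    # i.e. the per-pixel probability of belonging to the sky region
    response = model(normalized)

    # Step S403: binarize the response spectrum with the threshold to get the mask
    mask = (response >= threshold).astype(np.uint8)
    return mask
```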
In step S320, a foreground region image is segmented from the first image and a background region image is segmented from the second image by the mask image.
The second image is the image used to replace the sky background in the first image. It may be a pre-configured template image whose main content is sky material. For example, template images containing sky may be downloaded from the network; when the user needs to change the image background, a selection interface for the template images is displayed so that the user can select one of them as the second image.
After the mask image is obtained, the mask image can be used for extracting the part except the sky area from the first image to obtain a foreground area image; and conversely, extracting a part corresponding to the sky area from the second image to obtain a background area image.
In an alternative embodiment, step S320 may include:
Multiplying the inverse mask image of the mask image with the first image to segment the foreground region image from the first image;
The mask image is multiplied with the second image to separate the background area image from the second image.
Let the first image be IMG1, the second image IMG2, the mask image Mask, and the inverse mask image Max − Mask, where Max is the maximum pixel value (typically 1 or 255). As shown in the following formulas (1) and (2):
IMG_F = IMG1 × (Max − Mask) / Max;  (1)
IMG_B = IMG2 × Mask / Max;  (2)
where IMG_F is the foreground region image and IMG_B is the background region image. In the mask image, the sky region is white and the non-sky region is black; in the inverse mask image, the sky region is black and the non-sky region is white, so the inverse mask image can be obtained by inverting the mask image. Multiplying the inverse mask image by the first image is equivalent to keeping the part of the image outside the sky region, i.e. the foreground region image; multiplying the mask image by the second image keeps the partial image corresponding to the sky region, i.e. the background region image.
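A short numpy sketch of formulas (1) and (2), taking Max = 1 and a single-channel binary mask broadcast over the three color channels; the function and variable names and the float convention are illustrative assumptions.

```python
import numpy as np

def split_regions(img1: np.ndarray, img2: np.ndarray, mask: np.ndarray):
    """Apply formulas (1) and (2) with Max = 1.

    img1, img2: H x W x 3 float images in [0, 1];
    mask: H x W binary array where 1 marks the sky region of img1.
    """
    mask3 = mask[..., None].astype(np.float32)  # broadcast the mask to 3 channels
    inverse_mask = 1.0 - mask3                  # Max - Mask with Max = 1
    img_f = img1 * inverse_mask                 # formula (1): foreground region image
    img_b = img2 * mask3                        # formula (2): background region image
    return img_f, img_b
```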
Step S330, the foreground region image and the background region image are stitched to obtain a target image.
Compared with the first image, the sky portion of the target image is the corresponding partial image from the second image, thereby realizing the sky-replacement effect.
In an alternative embodiment, the foreground region image and the background region image may be added to obtain the target image. As shown in the following formula (3):
IMG_T = IMG_F + IMG_B;  (3)
where IMG_T is the target image. Since the pixel values of the black portions are 0, after addition the foreground region image and the background region image are exactly complementary, yielding a complete target image.
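Continuing the illustrative sketch above (split_regions is the hypothetical helper introduced earlier), formula (3) then reduces to a per-pixel addition:

```python
import numpy as np

def stitch(img_f: np.ndarray, img_b: np.ndarray) -> np.ndarray:
    """Formula (3): the zero-valued (black) regions of the two images are
    complementary, so per-pixel addition yields the complete target image."""
    return img_f + img_b

# Illustrative usage:
# img_f, img_b = split_regions(img1, img2, mask)
# img_t = stitch(img_f, img_b)
```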
In an alternative embodiment, after the target image is obtained, edge smoothing may also be performed on it. The aim is to create a color transition along the edge where the foreground region and the background region are stitched, alleviating the jagged artifacts caused by segmentation and reducing the sense of visual abruptness. In general, when the foreground region image and the background region image are stitched, the position coordinates of the edge portion can be determined; then, in the target image, the edge portion is expanded to a certain extent, for example by taking each pixel of the edge as a circle center and expanding it circularly by a preset radius (related to the size of the target image; 5 pixels may be used, for example) to form the edge region to be smoothed; any smoothing method may then be applied. For example, Gaussian smoothing may be used: taking the points of the edge region to be smoothed as centers, a smoothing formula based on a two-dimensional normal (Gaussian) distribution is constructed; a 3x3, 5x5 or higher-order weight matrix is computed and convolved with the target image; and the number of iterations, i.e. the number of convolutions, is set as required, finally yielding a target image with smooth edge transitions.
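One possible realization of the described edge smoothing is sketched below with OpenCV; the edge-detection step, the structuring-element shape, the 5-pixel radius and the 5x5 Gaussian kernel are illustrative choices, not the only way the embodiment could be implemented.

```python
import cv2
import numpy as np

def smooth_seam(target: np.ndarray, mask: np.ndarray,
                radius: int = 5, iterations: int = 2) -> np.ndarray:
    """Smooth the stitching edge between foreground and background.

    target: H x W x 3 uint8 target image; mask: H x W binary mask (1 = sky).
    """
    # Locate the boundary between the sky and non-sky regions of the mask
    edges = cv2.Canny((mask * 255).astype(np.uint8), 100, 200)

    # Expand each edge pixel circularly by the preset radius to form the
    # edge region to be smoothed
    kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE,
                                       (2 * radius + 1, 2 * radius + 1))
    edge_region = cv2.dilate(edges, kernel) > 0

    smoothed = target.copy()
    for _ in range(iterations):
        # Gaussian smoothing: convolve with a normal-distribution weight matrix
        blurred = cv2.GaussianBlur(smoothed, (5, 5), 0)
        smoothed[edge_region] = blurred[edge_region]
    return smoothed
```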
Since the first image and the second image may differ greatly in hue, after the foreground region image and the background region image are stitched, the resulting target image may contain two strongly contrasting hues. Color migration can therefore be performed to blend the hues of the first image and the second image so that the target image looks more real and natural. Referring to fig. 5, color migration can be achieved through the following steps S501 and S502.
Step S501, the mean value and the variance of the second image in the Lab color space are obtained;
step S502, adjusting the target image on the Lab color space according to the mean and the variance so as to perform color migration.
The Lab color space is a color model consistent with human visual perception: L represents luminance, and a and b are two color channels, where a ranges from dark green (low values) through gray (middle values) to bright pink (high values), and b ranges from bright blue (low values) through gray (middle values) to yellow (high values). In color migration, it is generally desirable to change one color attribute without affecting the others. Since the three channels of the RGB color space are highly correlated while the channels of the Lab color space are weakly correlated, color migration is performed in the Lab color space.
First, the second image and the target image are converted from the RGB color space to the Lab color space. Then, the Lab channel values of each pixel in the second image are collected, and the mean and the variance (or standard deviation; the two indices convey essentially the same information and are not particularly distinguished in this disclosure) of each channel are calculated. Next, the pixels of the target image are color-adjusted according to this mean and variance: the Lab channel values of the target image are shifted as a whole according to the difference between the Lab channel means of the target image and the second image, and the distribution of the Lab channel values of the target image is adjusted according to the Lab channel variances of the second image, so that the color characteristics of the second image are blended into the target image.
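A compact sketch of steps S501 and S502 (essentially a Reinhard-style mean/variance transfer); the OpenCV conversion codes, the use of standard deviation in place of variance and the small epsilon are illustrative assumptions.

```python
import cv2
import numpy as np

def color_transfer(target_rgb: np.ndarray, second_rgb: np.ndarray) -> np.ndarray:
    """Adjust the target image in Lab space using the statistics of the second image."""
    # Convert both images from the RGB color space to the Lab color space
    tgt = cv2.cvtColor(target_rgb, cv2.COLOR_RGB2LAB).astype(np.float32)
    ref = cv2.cvtColor(second_rgb, cv2.COLOR_RGB2LAB).astype(np.float32)

    # Step S501: per-channel mean and spread of the second image (and of the target)
    ref_mean, ref_std = ref.mean(axis=(0, 1)), ref.std(axis=(0, 1))
    tgt_mean, tgt_std = tgt.mean(axis=(0, 1)), tgt.std(axis=(0, 1))

    # Step S502: shift the Lab channel values as a whole and rescale their distribution
    adjusted = (tgt - tgt_mean) * (ref_std / (tgt_std + 1e-6)) + ref_mean

    adjusted = np.clip(adjusted, 0, 255).astype(np.uint8)
    return cv2.cvtColor(adjusted, cv2.COLOR_LAB2RGB)
```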
The above edge smoothing and color migration are two ways of post-processing the target image; their purpose is to eliminate possible defects introduced during image processing and to improve the quality and visual realism of the image. Other post-processing methods may therefore also be adopted, such as filtering the whole target image, optimizing its brightness and contrast, or removing image distortion.
Fig. 6 shows a schematic flow of image processing in the present exemplary embodiment. As shown in fig. 6, after the first image is acquired, the sky region in the first image is segmented by the segmentation model to extract the mask image from the first image; the mask image is inverted, i.e. each pixel value is subtracted from 1, to obtain the inverse mask image; the first image is multiplied by the inverse mask image to extract the foreground region image from the first image; the second image is multiplied by the mask image to extract the background region image from the second image; the foreground region image and the background region image are then stitched to obtain an initial target image (target image 1); finally, the target image is post-processed by edge smoothing, color migration and the like to obtain the final target image (target image 2).
In summary, based on the above image processing method in the present exemplary embodiment, on the one hand, through the recognition of the sky region by the segmentation model and the processing with the mask image, the sky region can be accurately segmented from the image without manual matting by the user, offering a high degree of intelligence, convenient use and a good user experience. On the other hand, the method realizes sky-replacement processing of images, replacing the sky region in the first image with the corresponding part of the second image, which is highly engaging and can meet users' diverse needs.
Fig. 7 shows an image processing apparatus 700 of the present exemplary embodiment, which may include the following modules:
The sky identification module 710 is configured to identify a sky area in the first image, and obtain a mask image corresponding to the sky area;
An image segmentation module 720 for segmenting the foreground region image from the first image and the background region image from the second image by using the mask image;
the image stitching module 730 is configured to stitch the foreground region image and the background region image to obtain a target image.
In an alternative embodiment, the sky identification module 710 is configured to obtain the mask image by performing the following method:
normalizing the pixel value of the first image to obtain a normalized image;
processing the normalized image based on a pre-trained full convolution neural network to obtain a response spectrum of the normalized image to the sky;
and performing binarization processing on the response spectrum to obtain a mask image corresponding to the sky area in the first image.
In an alternative embodiment, the image segmentation module 720 is configured to multiply the inverse mask image of the mask image with the first image to segment the foreground region image from the first image, and multiply the mask image with the second image to segment the background region image from the second image.
In an alternative embodiment, the image stitching module 730 is configured to add the foreground region image and the background region image to obtain the target image.
In an alternative embodiment, the image processing apparatus 700 further includes: and the post-processing module is used for carrying out edge smoothing processing on the target image after the target image is obtained.
In an alternative embodiment, the image processing apparatus 700 further includes: and the post-processing module is used for adjusting the target image on the Lab color space according to the mean value and the variance of the second image on the Lab color space after the target image is obtained so as to carry out color migration.
In an alternative embodiment, the first image may be a raw image captured by a camera and the second image may be a pre-configured template image.
The specific details of each module in the above apparatus are already described in the method section, and the details that are not disclosed can be referred to the embodiment of the method section, so that they will not be described in detail.
Those skilled in the art will appreciate that the various aspects of the present disclosure may be implemented as a system, method, or program product. Accordingly, various aspects of the disclosure may be embodied in the following forms, namely: an entirely hardware embodiment, an entirely software embodiment (including firmware, micro-code, etc.), or an embodiment combining hardware and software aspects, which may be referred to herein as a "circuit", "module" or "system".
Exemplary embodiments of the present disclosure also provide a computer-readable storage medium having stored thereon a program product capable of implementing the method described above in the present specification. In some possible implementations, various aspects of the disclosure may also be implemented in the form of a program product comprising program code for causing a terminal device to carry out the steps according to the various exemplary embodiments of the disclosure as described in the "exemplary methods" section of this specification, when the program product is run on the terminal device, e.g. any one or more of the steps of fig. 3,4 or 5 may be carried out.
Referring to fig. 8, a program product 800 for implementing the above-described method according to an exemplary embodiment of the present disclosure is described, which may employ a portable compact disc read only memory (CD-ROM) and include program code, and may be run on a terminal device, such as a personal computer. However, the program product of the present disclosure is not limited thereto, and in this document, a readable storage medium may be any tangible medium that can contain, or store a program for use by or in connection with an instruction execution system, apparatus, or device.
The program product may employ any combination of one or more readable media. The readable medium may be a readable signal medium or a readable storage medium. The readable storage medium can be, for example, but is not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or a combination of any of the foregoing. More specific examples (a non-exhaustive list) of the readable storage medium would include the following: an electrical connection having one or more wires, a portable disk, a hard disk, random Access Memory (RAM), read-only memory (ROM), erasable programmable read-only memory (EPROM or flash memory), optical fiber, portable compact disk read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing.
The computer readable signal medium may include a data signal propagated in baseband or as part of a carrier wave with readable program code embodied therein. Such a propagated data signal may take any of a variety of forms, including, but not limited to, electro-magnetic, optical, or any suitable combination of the foregoing. A readable signal medium may also be any readable medium that is not a readable storage medium and that can communicate, propagate, or transport a program for use by or in connection with an instruction execution system, apparatus, or device.
Program code embodied on a readable medium may be transmitted using any appropriate medium, including but not limited to wireless, wireline, optical fiber cable, RF, etc., or any suitable combination of the foregoing.
Program code for carrying out operations of the present disclosure may be written in any combination of one or more programming languages, including an object oriented programming language such as Java, C++ or the like and conventional procedural programming languages, such as the "C" programming language or similar programming languages. The program code may execute entirely on the user's computing device, partly on the user's device, as a stand-alone software package, partly on the user's computing device, partly on a remote computing device, or entirely on the remote computing device or server. In the case of remote computing devices, the remote computing device may be connected to the user computing device through any kind of network, including a Local Area Network (LAN) or a Wide Area Network (WAN), or may be connected to an external computing device (e.g., connected via the Internet using an Internet service provider).
Other embodiments of the disclosure will be apparent to those skilled in the art from consideration of the specification and practice of the disclosure disclosed herein. This disclosure is intended to cover any adaptations, uses, or adaptations of the disclosure following the general principles of the disclosure and including such departures from the present disclosure as come within known or customary practice within the art to which the disclosure pertains. It is intended that the specification and examples be considered as exemplary only, with a true scope and spirit of the disclosure being indicated by the following claims.
It is to be understood that the present disclosure is not limited to the precise arrangements and instrumentalities shown in the drawings, and that various modifications and changes may be effected without departing from the scope thereof. The scope of the present disclosure is limited only by the appended claims.

Claims (8)

1. An image processing method, comprising:
identifying a sky region in a first image based on a segmentation model to obtain a mask image corresponding to the sky region;
segmenting, through the mask image, a foreground region image other than the sky region from the first image, and segmenting a background region image corresponding to the sky region from a second image;
stitching the foreground region image and the background region image to obtain a target image;
acquiring a mean and a variance of the second image in Lab color space;
shifting the Lab channel values of the target image as a whole according to a difference between the mean of the target image in Lab color space and the mean of the second image in Lab color space;
adjusting the distribution of the Lab channel values of the target image according to the variance of the second image in Lab color space; and
determining an edge portion between the foreground region image and the background region image in the target image; expanding the edge portion to form an edge region to be smoothed; selecting a center point of the edge region to be smoothed based on a two-dimensional normal distribution of the target image, constructing a smoothing formula based on the normal distribution, and calculating a weight matrix; and performing iterative convolution on the target image with the weight matrix according to a set number of iterations.
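As an illustrative sketch of the Lab post-processing recited in claim 1 (not a definitive implementation), the snippet below shifts the target image's Lab channels by the difference of the means and rescales their spread using the second image's statistics. The claim only fixes that the mean and variance are used; the Reinhard-style transfer formula, the OpenCV conversion calls, and the assumption of 8-bit BGR inputs are choices made here for illustration.

import cv2
import numpy as np

def match_lab_statistics(target_bgr, second_bgr):
    # Convert both 8-bit BGR images to Lab and work in float for the statistics.
    target_lab = cv2.cvtColor(target_bgr, cv2.COLOR_BGR2LAB).astype(np.float32)
    second_lab = cv2.cvtColor(second_bgr, cv2.COLOR_BGR2LAB).astype(np.float32)

    for c in range(3):  # L, a, b channels
        t_mean, t_std = target_lab[..., c].mean(), target_lab[..., c].std()
        s_mean, s_std = second_lab[..., c].mean(), second_lab[..., c].std()
        # Shift the channel as a whole by the difference of the means, then
        # rescale its spread toward the second image's standard deviation.
        target_lab[..., c] = (target_lab[..., c] - t_mean) * (s_std / (t_std + 1e-6)) + s_mean

    target_lab = np.clip(target_lab, 0, 255).astype(np.uint8)
    return cv2.cvtColor(target_lab, cv2.COLOR_LAB2BGR)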
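The edge-smoothing step of claim 1 can be sketched in a similar spirit, assuming the sky mask from the segmentation step is available. A Gaussian kernel stands in for the weight matrix derived from a two-dimensional normal distribution, the region to be smoothed is obtained by dilating the mask boundary, and the parameter values (kernel size, sigma, band width, iteration count) are illustrative rather than taken from the patent.

import cv2
import numpy as np

def smooth_seam(target_bgr, mask, kernel_size=5, sigma=1.0, iterations=3, dilate_px=4):
    # Edge portion between foreground and background: the mask boundary.
    edge = cv2.morphologyEx(mask.astype(np.uint8), cv2.MORPH_GRADIENT,
                            np.ones((3, 3), np.uint8))
    # Expand the edge into the region to be smoothed.
    band = cv2.dilate(edge, np.ones((2 * dilate_px + 1, 2 * dilate_px + 1), np.uint8))

    # Weight matrix from a 2D normal distribution (outer product of 1D Gaussians).
    g = cv2.getGaussianKernel(kernel_size, sigma)
    weights = g @ g.T

    # Iterative convolution of the target image with the weight matrix.
    smoothed = target_bgr.astype(np.float32)
    for _ in range(iterations):
        smoothed = cv2.filter2D(smoothed, -1, weights)

    # Replace only the edge band with the smoothed pixels.
    band3 = band[..., None].astype(np.float32)
    out = target_bgr.astype(np.float32) * (1 - band3) + smoothed * band3
    return np.clip(out, 0, 255).astype(np.uint8)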
2. The method of claim 1, wherein the identifying, based on the segmentation model, a sky region in the first image to obtain a mask image corresponding to the sky region comprises:
normalizing pixel values of the first image to obtain a normalized image;
processing the normalized image based on a pre-trained fully convolutional neural network to obtain a response map of the normalized image to the sky; and
performing binarization on the response map to obtain the mask image corresponding to the sky region in the first image.
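A minimal sketch of the mask-generation pipeline in claim 2 is given below. The fully convolutional network itself is not specified beyond being pre-trained, so `sky_model` is a hypothetical callable returning a per-pixel sky response map; the 0.5 binarization threshold is likewise an assumption.

import numpy as np

def sky_mask_from_response(first_image, sky_model, threshold=0.5):
    # Normalize pixel values to [0, 1].
    normalized = first_image.astype(np.float32) / 255.0

    # Response map of the normalized image to the sky (per-pixel confidence).
    response = sky_model(normalized)

    # Binarize: 1 inside the sky region, 0 elsewhere.
    mask = (response >= threshold).astype(np.uint8)
    return mask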
3. The method of claim 1, wherein the segmenting the foreground region image from the first image and the background region image from the second image by the mask image comprises:
multiplying an inverse mask image of the mask image with the first image to segment the foreground region image from the first image; and
multiplying the mask image with the second image to segment the background region image from the second image.
4. The method according to claim 3, wherein the stitching the foreground region image and the background region image to obtain a target image comprises:
adding the foreground region image and the background region image to obtain the target image.
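Claims 3 and 4 together amount to a mask-weighted split followed by pixel-wise addition, sketched below under the assumptions that the mask is a single-channel 0/1 array and that the second (template) image has already been resized to the first image's dimensions; all variable names are illustrative.

import numpy as np

def replace_sky(first_image, second_image, mask):
    mask3 = mask[..., None].astype(np.float32)   # broadcast the mask over the color channels
    inverse_mask3 = 1.0 - mask3                  # inverse mask image

    # Foreground region image: everything except the sky in the first image.
    foreground = first_image.astype(np.float32) * inverse_mask3
    # Background region image: the sky region taken from the second image.
    background = second_image.astype(np.float32) * mask3

    # Stitch by pixel-wise addition to obtain the target image.
    target = foreground + background
    return np.clip(target, 0, 255).astype(np.uint8)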
5. The method of any one of claims 1 to 4, wherein the first image is a raw image captured by a camera and the second image is a pre-configured template image.
6. An image processing apparatus, comprising:
a sky identification module, configured to identify a sky region in a first image based on a segmentation model to obtain a mask image corresponding to the sky region;
an image segmentation module, configured to segment, through the mask image, a foreground region image other than the sky region from the first image, and to segment a background region image corresponding to the sky region from a second image;
an image stitching module, configured to stitch the foreground region image and the background region image to obtain a target image;
a post-processing module, configured to acquire a mean and a variance of the second image in Lab color space, shift the Lab channel values of the target image as a whole according to a difference between the mean of the target image in Lab color space and the mean of the second image in Lab color space, and adjust the distribution of the Lab channel values of the target image according to the variance of the second image in Lab color space;
the post-processing module being further configured to determine an edge portion between the foreground region image and the background region image in the target image; expand the edge portion to form an edge region to be smoothed; select a center point of the edge region to be smoothed based on a two-dimensional normal distribution of the target image, construct a smoothing formula based on the normal distribution, and calculate a weight matrix; and perform iterative convolution on the target image with the weight matrix according to a set number of iterations.
7. A computer-readable storage medium on which a computer program is stored, characterized in that the computer program, when executed by a processor, implements the method of any one of claims 1 to 5.
8. An electronic device, comprising:
a processor; and
a memory for storing executable instructions of the processor;
wherein the processor is configured to perform the method of any one of claims 1 to 5 via execution of the executable instructions.
CN201911373483.9A 2019-12-27 2019-12-27 Image processing method, image processing device, storage medium and electronic apparatus Active CN111179282B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201911373483.9A CN111179282B (en) 2019-12-27 2019-12-27 Image processing method, image processing device, storage medium and electronic apparatus

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201911373483.9A CN111179282B (en) 2019-12-27 2019-12-27 Image processing method, image processing device, storage medium and electronic apparatus

Publications (2)

Publication Number Publication Date
CN111179282A CN111179282A (en) 2020-05-19
CN111179282B true CN111179282B (en) 2024-04-23

Family

ID=70650383

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201911373483.9A Active CN111179282B (en) 2019-12-27 2019-12-27 Image processing method, image processing device, storage medium and electronic apparatus

Country Status (1)

Country Link
CN (1) CN111179282B (en)

Families Citing this family (11)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111598903B (en) * 2020-05-21 2023-09-29 Oppo广东移动通信有限公司 Portrait segmentation method, device, storage medium and electronic equipment
CN111709873B (en) * 2020-05-27 2023-06-20 北京百度网讯科技有限公司 Training method and device for image conversion model generator
CN111968134B (en) * 2020-08-11 2023-11-28 影石创新科技股份有限公司 Target segmentation method, device, computer readable storage medium and computer equipment
CN112241941B (en) * 2020-10-20 2024-03-22 北京字跳网络技术有限公司 Method, apparatus, device and computer readable medium for acquiring image
CN112561847B (en) * 2020-12-24 2024-04-12 Oppo广东移动通信有限公司 Image processing method and device, computer readable medium and electronic equipment
CN113099127B (en) * 2021-02-24 2024-02-02 影石创新科技股份有限公司 Video processing method, device, equipment and medium for making stealth special effects
CN113096069A (en) * 2021-03-08 2021-07-09 北京达佳互联信息技术有限公司 Image processing method, image processing device, electronic equipment and storage medium
CN113034648A (en) * 2021-04-30 2021-06-25 北京字节跳动网络技术有限公司 Image processing method, device, equipment and storage medium
CN113256499B (en) * 2021-07-01 2021-10-08 北京世纪好未来教育科技有限公司 Image splicing method, device and system
CN113920032A (en) * 2021-10-29 2022-01-11 上海商汤智能科技有限公司 Image processing method, image processing device, electronic equipment and storage medium
CN114494004B (en) * 2022-04-15 2022-08-05 北京美摄网络科技有限公司 Sky image processing method and device

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103745456A (en) * 2013-12-23 2014-04-23 深圳先进技术研究院 Image segmentation method and apparatus
CN108171677A (en) * 2017-12-07 2018-06-15 腾讯科技(深圳)有限公司 A kind of image processing method and relevant device
CN109961446A (en) * 2019-03-27 2019-07-02 深圳视见医疗科技有限公司 CT/MR three-dimensional image segmentation processing method, device, equipment and medium
CN110136161A (en) * 2019-05-31 2019-08-16 苏州精观医疗科技有限公司 Image characteristics extraction analysis method, system and device
CN110288614A (en) * 2019-06-24 2019-09-27 睿魔智能科技(杭州)有限公司 Image processing method, device, equipment and storage medium

Family Cites Families (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP6349962B2 (en) * 2014-05-27 2018-07-04 富士ゼロックス株式会社 Image processing apparatus and program
US9858675B2 (en) * 2016-02-11 2018-01-02 Adobe Systems Incorporated Object segmentation, including sky segmentation

Also Published As

Publication number Publication date
CN111179282A (en) 2020-05-19

Similar Documents

Publication Publication Date Title
CN111179282B (en) Image processing method, image processing device, storage medium and electronic apparatus
CN111050269B (en) Audio processing method and electronic equipment
CN108594997B (en) Gesture skeleton construction method, device, equipment and storage medium
US11759143B2 (en) Skin detection method and electronic device
US20220319077A1 (en) Image-text fusion method and apparatus, and electronic device
CN113810601B (en) Terminal image processing method and device and terminal equipment
CN111552451B (en) Display control method and device, computer readable medium and terminal equipment
CN110138999B (en) Certificate scanning method and device for mobile terminal
CN113810600A (en) Terminal image processing method and device and terminal equipment
CN112954251B (en) Video processing method, video processing device, storage medium and electronic equipment
CN113542580B (en) Method and device for removing light spots of glasses and electronic equipment
US20220245778A1 (en) Image bloom processing method and apparatus, and storage medium
CN113810764B (en) Video editing method and video editing device
WO2022148319A1 (en) Video switching method and apparatus, storage medium, and device
CN113810603A (en) Point light source image detection method and electronic equipment
CN115129410B (en) Desktop wallpaper configuration method and device, electronic equipment and readable storage medium
CN113744257A (en) Image fusion method and device, terminal equipment and storage medium
WO2022022319A1 (en) Image processing method, electronic device, image processing system and chip system
CN116206100A (en) Image processing method based on semantic information and electronic equipment
CN113096022B (en) Image blurring processing method and device, storage medium and electronic device
CN115546858B (en) Face image processing method and electronic equipment
CN111626931B (en) Image processing method, image processing device, storage medium and electronic apparatus
CN115686182B (en) Processing method of augmented reality video and electronic equipment
CN115706869A (en) Terminal image processing method and device and terminal equipment
CN111294905B (en) Image processing method, image processing device, storage medium and electronic apparatus

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant