WO2020078027A1 - Image processing method, apparatus, and device - Google Patents

Image processing method, apparatus, and device

Info

Publication number
WO2020078027A1
Authority
WO
WIPO (PCT)
Prior art keywords
color processing
processing method
image
target area
images
Prior art date
Application number
PCT/CN2019/091717
Other languages
English (en)
French (fr)
Inventor
李宇
王提政
Original Assignee
华为技术有限公司
Priority date
Filing date
Publication date
Application filed by 华为技术有限公司
Priority to JP2021521025A, patent JP7226851B2/ja
Priority to KR1020217014480A, patent KR20210073568A/ko
Priority to BR112021007094-0A, patent BR112021007094A2/pt
Priority to EP19873919.5A, patent EP3859670A4/en
Priority to CN201980068271.1A, patent CN112840376B/zh
Priority to MX2021004295A, patent MX2021004295A/es
Priority to AU2019362347A, patent AU2019362347B2/en
Publication of WO2020078027A1, patent WO2020078027A1/zh
Priority to US17/230,169, patent US20210241432A1/en

Classifications

    • G06T5/94
    • G06T7/11 Region-based segmentation
    • G06F18/214 Generating training patterns; Bootstrap methods, e.g. bagging or boosting
    • G06N3/08 Learning methods
    • G06T5/20 Image enhancement or restoration by the use of local operators
    • G06T5/60
    • G06T5/70
    • G06T7/194 Segmentation; Edge detection involving foreground-background segmentation
    • G06T7/90 Determination of colour characteristics
    • G06V20/35 Categorising the entire scene, e.g. birthday party or wedding scene
    • H04N23/61 Control of cameras or camera modules based on recognised objects
    • H04N5/2621 Cameras specially adapted for the electronic generation of special effects during image pickup, e.g. digital cameras, camcorders, video cameras having integrated special effects capability
    • G06T2207/10016 Video; Image sequence
    • G06T2207/10024 Color image
    • G06T2207/20084 Artificial neural networks [ANN]

Definitions

  • the present invention relates to the technical field of terminals, and in particular, to an image processing method, device, and equipment.
  • Video has quickly become the main component of traffic on the network, and is expected to account for 80% to 90% of all traffic in the next few years.
  • the terminal recording function is generally monotonous.
  • Existing video shooting usually can only provide regular shooting, but cannot achieve some personalized effects.
  • The present invention provides an image processing method that determines a target area and a background area in an image through image segmentation; by applying different color processing methods to the target area and the background area, the brightness or chroma of the target area is made higher than that of the background area, so that the subject corresponding to the target area is displayed more prominently. This gives the end user a movie-like special effect when taking pictures or shooting video, and enhances the user's photography experience.
  • an embodiment of the present invention provides an image processing method.
  • the method is applied to a process of recording a video.
  • The method includes: collecting N1 images in a first period and collecting N2 images in a second period, where the first period and the second period are adjacent periods and N1 and N2 are both positive integers; for each of the N1 images, determining a first target area and a first background area in the image, the first background area being the part of the image other than the first target area, where the first target area in each of the N1 images corresponds to a first object; and for each of the N2 images, determining a second target area and a second background area in the image, the second background area being the part of the image other than the second target area, where the second target area in each of the N2 images corresponds to a second object.
  • The first target area is processed using a first color processing method, the first background area using a second color processing method, the second target area using a third color processing method, and the second background area using a fourth color processing method, so as to obtain the target video.
  • an embodiment of the present invention provides an image processing apparatus.
  • the apparatus is used in the process of shooting a video.
  • The apparatus includes: a shooting module for acquiring N1 images in a first period and N2 images in a second period, where the first period and the second period are adjacent periods and N1 and N2 are both positive integers;
  • a determination module for determining, in each of the N1 images, a first target area and a first background area, the first background area being the part of the image other than the first target area, where the first target area in each of the N1 images corresponds to a first object; and for determining, in each of the N2 images, a second target area and a second background area, the second background area being the part of the image other than the second target area, where the second target area in each of the N2 images corresponds to a second object;
  • and a color processing module for processing the first target area using the first color processing method, the first background area using the second color processing method, the second target area using the third color processing method, and the second background area using the fourth color processing method.
  • In a possible design, the first object and the second object are the same object.
  • In another possible design, the first object and the second object may be the same object or different objects.
  • the first object or the second object includes at least one individual among people, animals, or plants.
  • The first object and the second object are determined by a user's selection instruction.
  • In the first frame of the first period, the first object is determined according to the user's selection instruction, and all images in the first period use the first object as the target object; similarly, in the first frame of the second period, the second object is determined according to the user's selection instruction, and all images in the second period use the second object as the target object.
  • the first object and the second object are respectively determined by the terminal according to the contents of the two images at preset time intervals.
  • In the first frame image of the first period, the first object is determined, and all images in the first period take the first object as the target object; similarly, in the first frame image of the second period, the second object is determined, and all images in the second period take the second object as the target object.
  • the first object is determined in the first frame image of the first period, and the second object is determined in the first frame image of the second period, including but not limited to one of the following ways:
  • If the image is segmented into two segmentation templates, an object template and a background template, the image area corresponding to the object template is determined as the target area and the area corresponding to the background template as the background area; the object corresponding to the object template is the first object or the second object; or,
  • If the image is segmented into k segmentation templates and k0 of the object templates each contain more pixels than a preset threshold, the image areas corresponding to those k0 object templates are determined as the target area and the image areas corresponding to the remaining segmentation templates as the background area; correspondingly, the objects corresponding to those object templates are the first object or the second object, where k0 is a non-negative integer less than k; or,
  • The image area corresponding to the segmentation template containing the largest number of pixels among the k segmentation templates is determined as the target area, and the image areas corresponding to the remaining segmentation templates as the background area; correspondingly, the object corresponding to that object template is the first object or the second object; or,
  • The target template is determined among the k segmentation templates according to the preset priority of the object categories; the image area corresponding to the target template is determined as the target area, and the image areas corresponding to the remaining segmentation templates as the background area; correspondingly, the object corresponding to the target template is the first object or the second object; or,
  • The target template is determined among the k segmentation templates according to the user's selection instruction; the image area corresponding to the target template is determined as the target area, and the image areas corresponding to the remaining segmentation templates as the background area; correspondingly, the object corresponding to the target template is the first object or the second object.
  • the method may be specifically executed by the determination module.
  • In a possible design, the first color processing manner is the same as the third color processing manner, and the second color processing manner is the same as the fourth color processing manner.
  • In a possible design, the first color processing manner is the same as the third color processing manner, and the second color processing manner is different from the fourth color processing manner.
  • In a possible design, the first color processing manner is different from the third color processing manner, and the second color processing manner is the same as the fourth color processing manner.
  • In a possible design, the first color processing manner is different from the third color processing manner, and the second color processing manner is different from the fourth color processing manner.
  • the first color processing method or the third color processing method includes: one of retaining color or color enhancement.
  • the second color processing method or the fourth color processing method includes: one of black and white, darkening, blurring, or retro.
  • an embodiment of the present invention provides a terminal device including a camera, a memory, a processor, and a bus; the camera, the memory, and the processor are connected by a bus; the camera is used to collect images, and the memory is used to store computer programs and instructions, The processor is used to call computer programs and instructions stored in the memory and the collected images, and is also specifically used to cause the terminal device to execute any of the possible design methods as described above.
  • the terminal device further includes an antenna system, and under the control of the processor, the antenna system transmits and receives wireless communication signals to realize wireless communication with the mobile communication network;
  • The mobile communication network includes one or more of the following: GSM, CDMA, 3G, 4G, 5G, FDMA, TDMA, PDC, TACS, AMPS, WCDMA, TDSCDMA, WiFi, and LTE networks.
  • In the prior art, shooting videos and images adds no distinction between individuals and no color distinction within the image, so the special effects are not rich enough.
  • With the present invention, different regions in the image can be differentiated by color, enhancing the special effects of photos or videos, highlighting the subject and target in the image, and making the main characters more prominent.
  • In addition, the present invention can provide more color changes and subject changes, improving user stickiness.
  • FIG. 1 is a schematic structural diagram of a terminal according to an embodiment of the present invention.
  • FIG. 2 is a flowchart of an image processing method according to an embodiment of the present invention.
  • FIG. 3 is an example of a segmentation template identification in an embodiment of the present invention.
  • FIG. 4 is another example of segmentation template identification in an embodiment of the present invention.
  • FIG. 5 is a schematic diagram of determining a target template in an embodiment of the present invention.
  • FIG. 6 is a schematic diagram of another target template determination according to an embodiment of the present invention.
  • FIG. 7 is a schematic diagram of another target template determination according to an embodiment of the present invention.
  • FIG. 8 is a schematic diagram of another target template determination according to an embodiment of the present invention.
  • FIG. 9 is a schematic diagram of an image processing device according to an embodiment of the present invention.
  • FIG. 10 is a schematic diagram of another image processing device in an embodiment of the present invention.
  • The terminal may be a device that provides users with video shooting and/or data connectivity, a handheld device with a wireless connection function, or another processing device connected to a wireless modem, such as a digital camera, a SLR camera, a mobile phone (or "cellular" phone), a smartphone, a portable, pocket-sized, handheld, or wearable device (such as a smart watch), a tablet, a personal computer (PC), a PDA (Personal Digital Assistant), an in-vehicle computer, a drone, an aerial camera, and so on.
  • FIG. 1 shows a schematic diagram of an optional hardware structure of the terminal 100.
  • The terminal 100 may include a radio frequency unit 110, a memory 120, an input unit 130, a display unit 140, a camera 150, an audio circuit 160 (including a speaker 161 and a microphone 162), a processor 170, an external interface 180, a power supply 190, and other components.
  • FIG. 1 is only an example of a smart terminal or multi-function device and does not constitute a limitation; the device may include more or fewer components than shown, combine certain components, or use different components.
  • at least the memory 120, the processor 170, and the camera 150 exist.
  • the camera 150 is used to collect images or videos, and can be triggered by an application program instruction to realize a photo or video function.
  • the camera may include components such as an imaging lens, a filter, and an image sensor. The light emitted or reflected by the object enters the imaging lens, passes through the filter, and finally converges on the image sensor.
  • the imaging lens is mainly used for converging imaging of light emitted or reflected from all objects in the camera's viewing angle (also known as the scene to be photographed, the object to be photographed, the target scene or the target object, and can also be understood as the scene image that the user expects to shoot) ;
  • the filter is mainly used to filter out excess light waves (such as light waves other than visible light, such as infrared) in the light;
  • The image sensor is mainly used to photoelectrically convert the received light signal into an electrical signal and input it to the processor 170 for subsequent processing.
  • The camera can be located on the front or the back of the terminal device. The specific number and arrangement of cameras can be flexibly determined according to the designer's requirements or the manufacturer's strategy, which is not limited in this application.
  • the input unit 130 may be used to receive input numeric or character information, and generate key signal input related to user settings and function control of the portable multi-function device.
  • the input unit 130 may include a touch screen 131 and / or other input devices 132.
  • The touch screen 131 can collect the user's touch operations on or near it (such as operations on or near the touch screen using a finger, a joint, a stylus, or any other suitable object) and drive the corresponding connection device according to a preset program.
  • The touch screen can detect the user's touch action on it, convert the touch action into a touch signal, send the signal to the processor 170, and receive and execute commands sent by the processor 170; the touch signal includes at least touch-point coordinate information.
  • the touch screen 131 may provide an input interface and an output interface between the terminal 100 and the user.
  • the touch screen can be implemented in various types such as resistive, capacitive, infrared, and surface acoustic waves.
  • the input unit 130 may include other input devices.
  • other input devices 132 may include, but are not limited to, one or more of a physical keyboard, function keys (such as volume control keys 132, switch keys 133, etc.), trackball, mouse, joystick, and the like.
  • the display unit 140 may be used to display information input by the user or provided to the user, various menus, interactive interfaces of the terminal 100, file display, and / or playback of any kind of multimedia file.
  • The display unit is also used to display images/videos acquired by the device using the camera 150, which may include preview images/videos in certain shooting modes, the initial images/videos captured, and the target images/videos produced by certain algorithmic processing after shooting.
  • The touch screen 131 may cover the display panel 141; when the touch screen 131 detects a touch operation on or near it, the operation is transmitted to the processor 170 to determine the type of touch event, and the processor 170 then provides corresponding visual output on the display panel 141 according to the type of touch event.
  • the touch screen and the display unit can be integrated into one component to realize the input, output, and display functions of the terminal 100; for ease of description, the embodiment of the present invention uses a touch display to represent the set of functions of the touch screen and the display unit; In some embodiments, the touch screen and the display unit can also be used as two independent components.
  • the memory 120 may be used to store instructions and data.
  • the memory 120 may mainly include a storage instruction area and a storage data area.
  • The storage data area may store various data, such as multimedia files and text.
  • The storage instruction area may store at least one functional unit such as an operating system, applications, and instructions required by functions, or a subset or extension of them. The memory may also include non-volatile random access memory, providing the processor 170 with management of the hardware, software, and data resources of the computing and processing device, and supporting control software and applications; it is also used to store multimedia files and to store running programs and applications.
  • The processor 170 is the control center of the terminal 100; it uses various interfaces and lines to connect the parts of the entire mobile phone, and performs the various functions of the terminal 100 and processes data by running or executing instructions stored in the memory 120 and calling data stored in the memory 120, thereby controlling the phone as a whole.
  • the processor 170 may include one or more processing units; preferably, the processor 170 may integrate an application processor and a modem processor, where the application processor mainly processes an operating system, a user interface, and application programs, etc.
  • the modem processor mainly handles wireless communication. It can be understood that the above-mentioned modem processor may not be integrated into the processor 170.
  • the processor and the memory can be implemented on a single chip.
  • The processor 170 may also be used to generate corresponding operation control signals, send them to corresponding components of the computing and processing device, and read and process data in software, especially the data and programs in the memory 120, so that each functional module performs its corresponding function, thereby controlling the corresponding component to act as required by the instructions.
  • The radio frequency unit 110 may be used for receiving and sending information or receiving and sending signals during a call; for example, it receives downlink information from the base station and passes it to the processor 170 for processing, and sends uplink data to the base station.
  • the RF circuit includes but is not limited to an antenna, at least one amplifier, a transceiver, a coupler, a low noise amplifier (LNA), a duplexer, and the like.
  • the radio frequency unit 110 can also communicate with network devices and other devices through wireless communication.
  • The wireless communication may use any communication standard or protocol, including but not limited to Global System for Mobile Communications (GSM), General Packet Radio Service (GPRS), Code Division Multiple Access (CDMA), Wideband Code Division Multiple Access (WCDMA), Long Term Evolution (LTE), e-mail, Short Message Service (SMS), etc.
  • the audio circuit 160, the speaker 161, and the microphone 162 may provide an audio interface between the user and the terminal 100.
  • The audio circuit 160 can convert received audio data into an electrical signal and transmit it to the speaker 161, which converts it into a sound signal for output; conversely, the microphone 162 collects sound signals and converts them into electrical signals, which the audio circuit 160 receives and converts into audio data.
  • The audio data is processed by the audio data output processor 170 and then either sent to another terminal via the radio frequency unit 110 or output to the memory 120 for further processing.
  • The audio circuit may also include a headphone jack 163 for providing a connection interface between the audio circuit and headphones.
  • the specific number and arrangement of speakers and microphones can be flexibly determined according to the requirements of the designer or manufacturer's strategy, and this application is not limited.
  • the terminal 100 further includes a power supply 190 (such as a battery) that supplies power to various components.
  • the power supply can be logically connected to the processor 170 through a power management system, so as to realize functions such as charging, discharging, and power consumption management through the power management system.
  • The terminal 100 also includes an external interface 180, which can be a standard Micro USB interface or a multi-pin connector, and which can be used to connect the terminal 100 to other devices for communication, or to connect a charger to charge the terminal 100.
  • the terminal 100 may further include a flashlight, a wireless fidelity (WiFi) module, a Bluetooth module, sensors with different functions, and so on, which will not be repeated here. Part or all of the methods described below can be applied to the terminal shown in FIG. 1.
  • The invention can be applied to terminal devices with a shooting function (at least one of taking photos or recording video).
  • The implemented product may take the form of a smart terminal, such as a mobile phone, tablet, DV, video camera, still camera, portable computer, notebook computer, or intelligent robot, or of a device with a camera, such as a TV, a security system, or a drone.
  • Specifically, the functional module of the present invention may be deployed on a DSP chip of a related device, as an application program or software therein; the present invention is deployed on the terminal device, provided through software installation or upgrade, and provides the image processing function in coordination with hardware calls.
  • the invention is mainly applied to a scene where a terminal device takes a picture or a video. People's requirements for video shooting are getting higher and higher, and they hope to complete the special effects processing of the video while shooting to realize the WYSIWYG video shooting experience.
  • the invention can divide the main body of the picture or video, and adjust the color of different areas to realize the real-time special effects of the picture.
  • FIG. 2 is a flowchart of an image processing method according to an embodiment of the present invention. This method takes place in the process of taking pictures.
  • the terminal can configure a certain photographing mode; in this photographing mode, the method can include the following steps:
  • Step 21: Acquire (which can also be understood as shoot or collect) an image.
  • the corresponding preview stream will also be displayed on the screen.
  • the preview image may refer to an image in the preview stream.
  • The captured image is obtained, with a size of, for example but not limited to, 1920×1080.
  • Step 22: Determine the target area and the background area in the image according to the content of the captured image (which can be understood as scene semantics). More specifically, the target area and the background area can be determined in the image according to the types of objects in the image, where the background area is the part of the image other than the target area. The target area corresponds to the target object in the image, that is, the object the user wants to highlight, which may be related to a user's interactive selection or to system settings.
  • step 22 may include s221-s224.
  • s221: Downsample the original image and convert it to a lower-resolution image. Computing on the small image reduces the amount of calculation.
  • Specifically, the original size (such as m0×n0) can be downsampled to a size of m×n; the smaller the values of m and n, the smaller the amount of subsequent calculation, but if m and n are too small, the resolution of subsequent pixel-level results decreases.
  • A reasonable value interval for m and n is [128, 512], more specifically [256, 300]; m and n may or may not be equal. For example, a 1920×1080 image can be downsampled to 256×256.
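  • To make the step concrete, a minimal sketch of s221 in Python follows, assuming OpenCV is available and using the 256×256 size from the example above; the function and variable names are illustrative:

```python
import cv2

def downsample(image, size=(256, 256)):
    # Shrink the original image so the segmentation network runs on a
    # small input; bilinear interpolation is a common choice for shrinking.
    return cv2.resize(image, size, interpolation=cv2.INTER_LINEAR)

original = cv2.imread("frame.jpg")  # hypothetical 1920x1080 capture
small = downsample(original)        # 256x256 input for the network
```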
  • s222: Input the downsampled m×n image into the neural network for semantic segmentation, and determine an image segmentation template (Mask).
  • Semantic segmentation refers to pixel-level segmentation of the objects in the image, so that each pixel indicates which type of object it belongs to; parts with no indicated category are marked as "background".
  • semantic segmentation can use CNN (Convolutional Neural Networks) -based deep learning algorithms.
  • the CNN-based network model is described as follows:
  • the value of z and the multiple relationship can be determined according to the algorithm performance and design requirements.
  • If the terminal can recognize k object categories (for example, at least one of person, animal, plant, other preset object, and background), k score maps can be obtained; each pixel in the image gets a score for each category, and the higher the score, the higher the probability that the pixel belongs to that category.
  • Once the category of any pixel is determined, the pixel can be labeled, for example 1 for people, 2 for vehicles, 3 for animals, 4 for plants, and 0 for background.
  • the user can arbitrarily design the classification number, type and identification method according to the design requirements.
  • In FIG. 3, the pixel area where the vehicle is located is classified as a car by the neural network and labeled 1, while the pixel area of the surrounding background is classified as background and labeled 0.
  • the regions of the same type of objects have the same label, for example, the background label is 0, the cat label is 1, and the skateboard label is 2.
  • the same color can also be used to represent the same category of tags, such as people, horses, and backgrounds are respectively marked with different colors.
  • Mask is the result of the semantic segmentation algorithm.
  • the pixels in the image that belong to a certain type of object are marked with a certain color or logo, and the background is also marked with a certain color or logo.
  • The picture obtained after this processing is called a Mask, which visually displays the results of the segmentation.
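  • As an illustration of how the per-category score maps can be turned into such a Mask, the following sketch assumes a hypothetical `scores` array of shape (k, m, n) produced by the network, with channel 0 as background:

```python
import numpy as np

def scores_to_mask(scores):
    # Label each pixel with the category whose score map is highest there,
    # e.g. 0 = background, 1 = person, 2 = vehicle, ...
    return np.argmax(scores, axis=0).astype(np.uint8)

# scores: assumed network output of shape (k, m, n)
mask = scores_to_mask(scores)  # (m, n) label image, i.e. the Mask
```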
  • the content of the image may include the main body and the background.
  • the image segmentation template may correspondingly include the main body template and the background template.
  • The subject template can correspond to the subject identified by the segmentation method, including the individual the user wants to highlight in the photo or captured image, such as a person, animal, plant, or specific object (cup, table, clothes, decoration, etc.).
  • the background template corresponds to other areas of the image that are not recognized as subject templates
  • the image segmentation template corresponds to the entire image.
  • the recognition ability of the subject template is related to the performance of the neural network.
  • some neural networks can only recognize people and background; some neural networks can recognize people, vehicles and background; some neural networks can only recognize cars and background; some neural networks Can recognize people, animals and backgrounds; some neural networks can only recognize animals and backgrounds, some neural networks can recognize animals, plants and backgrounds ...
  • Deep neural network training needs to use a large amount of segmentation training data.
  • the training data set includes a large number of images containing segmentation categories, including input images and segmentation template images.
  • the training set can cover various typical application scenarios of segmented objects, and has a variety of data.
  • The input images and segmentation template maps in the training set are used to train the network to obtain good network parameters, that is, segmentation performance that satisfies the user; the obtained network parameters are then used as the final computation parameters of the neural network.
  • s223: Determine the target template according to the segmentation templates.
  • the terminal may further determine which of the segmentation templates correspond to the objects that need to be highlighted and displayed prominently, that is, the target template needs to be determined.
  • The determination of the target template includes, but is not limited to, the following methods (a consolidated code sketch follows the list below).
  • Method 1: If the segmentation result has only one subject template and the background template, determine the subject template as the target template.
  • the image area corresponding to the object template is determined as the target area, and the area corresponding to the background template is determined as the background area.
  • the segmentation template of the image output by the neural network has only the subject template A1 and the background template, then A1 can be determined as the target template.
  • Method 2: If there are multiple subject templates and the background template in the segmentation result, any subject template containing more pixels than a certain threshold is determined as a target template; any subject template containing fewer pixels than the threshold is re-labeled, i.e. identified as background.
  • the number of pixels contained in the subject template may refer to the number of pixels contained in the connected image area of the individual.
  • Specifically, the image is semantically segmented to obtain k segmentation templates corresponding to different object categories; if k is greater than 2 and k0 of the object templates among the k segmentation templates each contain more pixels than a preset threshold, the image regions corresponding to those k0 object templates are determined as the target region, and the image regions corresponding to the remaining segmentation templates as the background region, where k0 is a non-negative integer less than k.
  • For example, the segmentation templates output by the neural network include subject templates A1 and A2 and a background template. If the number of pixels in A1 is greater than the preset threshold and the number of pixels in A2 is not, A1 is determined as the target template and the subject template A2 is re-labeled as a background template; the re-labeled template can be as shown in FIG. 5.
  • If the numbers of pixels in both A1 and A2 are greater than the preset threshold, both A1 and A2 are determined as target templates. If neither A1 nor A2 contains more pixels than the preset threshold, both are re-labeled as background, that is, there is no subject template in the image.
  • A1 and A2 may be the same category or different categories.
  • Method 3: If there are multiple subject templates and the background template in the segmentation result, select the subject template containing the largest number of pixels as the target template, and re-label the other subject templates as background templates.
  • Specifically, the image is semantically segmented to obtain k segmentation templates corresponding to different object categories; if k is greater than 2, the image area corresponding to the segmentation template containing the largest number of pixels among the k segmentation templates is determined as the target area, and the image areas corresponding to the remaining segmentation templates as the background area.
  • the segmentation template of the image output by the neural network includes the subject template A1, A2 and the background template, then A1 with the largest number of pixels is determined as the target template, and the subject template A2 is re-identified as the background template.
  • The re-labeled template can be as shown in FIG. 5.
  • A1 and A2 may be the same category or different categories.
  • Method 4: If there are multiple subject templates of multiple categories in the segmentation result along with the background template, the target template is determined according to the priority of the categories. For example, if the person template has a higher priority than the vehicle template, the person template is the target template and the vehicle template can be re-labeled as background. As another example, if the system sets the priority order as person above animal above plant, and both the person and animal templates are above the priority cut-off for subject templates, then the person template and the animal template are both target templates, and the plant template can be re-labeled as background. It should be understood that one or more individuals may belong to the same category template.
  • Specifically, the image is semantically segmented to obtain k segmentation templates corresponding to different object categories; if k is greater than 2, the target template is determined among the k segmentation templates according to the preset priority of the object categories; the image area corresponding to the target template is determined as the target area, and the image areas corresponding to the remaining segmentation templates as the background area.
  • For example, the segmentation templates output by the neural network include subject templates A1 and B1 and a background template, where A1 and B1 are of different categories and the priority of A1 is higher than that of B1. If the system is set so that subject templates with priority at or above that of B1 can be target templates, then A1 and B1 are both target templates; if the system is set so that only subject templates with priority above that of B1 can be target templates, A1 is determined as the target template and B1 is re-labeled as a background template.
  • Method 5: If there are multiple subject templates and the background template in the segmentation result, the target template can be determined according to a selection operation input by the user.
  • the input methods include but are not limited to selection instructions such as touch screen and voice.
  • The template corresponding to the individual selected by the user is the target template.
  • the image is semantically segmented to obtain k segmentation templates; wherein, the k segmentation templates correspond to different object categories; if k is greater than 2, the target template is determined from the k segmentation templates according to the user's selection instruction; The image area corresponding to the target template is determined as the target area, and the image areas corresponding to the remaining segmentation templates are determined as the background area.
  • the segmentation templates of the image output by the neural network include subject templates A1, B1 and background templates. If the user touches the individual corresponding to A1 during the photographing process, A1 is determined as the target template, and B1 is re-identified as the background template. If the user touches the individual corresponding to B1 during the photographing process, B1 is determined as the target template; and A1 is re-identified as the background template.
  • Method 6: If there are multiple subject templates of multiple categories in the segmentation result along with the background template, the target template can be determined according to a selection operation input by the user; the input methods include but are not limited to selection instructions such as touch screen and voice. All subject templates of the category corresponding to the individual the user selects are the target templates.
  • the image is semantically segmented to obtain k segmentation templates; wherein, the k segmentation templates correspond to different object categories; if k is greater than 2, the target template is determined from the k segmentation templates according to the user's selection instruction; The image area corresponding to the target template is determined as the target area, and the image areas corresponding to the remaining segmentation templates are determined as the background area.
  • the image segmentation templates output by the neural network include subject templates A1, A2, B1, B2, and background templates, where A1, A2 are in the same category, and B1, B2 are in the same category. If the user touches the individual corresponding to A1 during the photographing process, A1 and A2 of the same category are determined as target templates, and B1 and B2 are re-identified as background templates. If the user touches the individual corresponding to B2 during the photographing process, B1 and B2 of the same category are determined as the target template; and A1 and A2 are re-identified as the background template.
  • Through any of the above methods, the target template can be obtained.
  • It should be understood that the target template may also contain multiple categories, and each category may contain one or more individuals; the displayed results are related to the rules for determining the target template set by the terminal system and to the user's input.
  • An image may also contain only the background template.
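  • The following sketch consolidates Methods 2 and 3 above under assumed data structures: the Mask is a label image in which 0 marks background, and the pixel-count threshold is an illustrative value, not one given in the text:

```python
import numpy as np

def choose_target_labels(mask, min_pixels=2000):
    labels, counts = np.unique(mask, return_counts=True)
    # Method 2: keep every subject label whose area exceeds the threshold.
    target = [l for l, c in zip(labels, counts) if l != 0 and c > min_pixels]
    if not target:
        # Method 3 fallback: take the single largest subject region.
        subjects = [(c, l) for l, c in zip(labels, counts) if l != 0]
        if subjects:
            target = [max(subjects)[1]]
    # Re-label everything outside the target templates as background (0).
    cleaned = np.where(np.isin(mask, target), mask, 0)
    return target, cleaned
```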
  • s224: Determine the target area and the background area in the original image.
  • Specifically, the target template and the background template in the segmentation template are upsampled back to the original image size. The area composed of all pixels of the original image corresponding to the upsampled target template is the target area, and the area composed of all pixels corresponding to the upsampled background template is the background area.
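  • A sketch of s224 under the same assumptions: the low-resolution Mask is upsampled back to the original size with nearest-neighbor interpolation (so the discrete labels are not blended), and the target and background areas are read off as pixel sets:

```python
import cv2

def split_regions(original, small_mask):
    h, w = original.shape[:2]
    # Nearest-neighbor keeps the mask values as discrete category labels.
    mask = cv2.resize(small_mask, (w, h), interpolation=cv2.INTER_NEAREST)
    target_pixels = mask != 0       # pixels covered by the target template
    background_pixels = mask == 0   # all remaining pixels
    return target_pixels, background_pixels
```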
  • Step 23: Process the target area and the background area of the image with different color processing methods to obtain the target image; the different color processing methods make the chroma of the target area greater than the chroma of the background area, or the brightness of the target area greater than the brightness of the background area. That is, in the target image, the chroma of the target area is greater than the chroma of the background area, or the brightness of the target area is greater than the brightness of the background area.
  • Specifically, the first color processing method and the second color processing method are applied to the target area and the background area of the image, respectively, including but not limited to the following methods:
  • the first color processing method is to retain color
  • The second color processing method is applying a filter, such as converting the color of the background area to black and white; typical filters include any of black and white, darkening, retro, film, and blurring.
  • For example, the black-and-white filter maps each pixel value to a grayscale value to achieve the black-and-white effect; as another example, the darkening filter reduces the brightness of each pixel value to achieve a darkening effect.
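  • As a sketch of these two example filters combined with the region split above (the grayscale conversion and the brightness factor are assumed implementations, not formulas given in the text):

```python
import cv2
import numpy as np

def color_target_gray_background(image, target_pixels):
    # Background goes black and white while the target keeps its color,
    # so the target area ends up with higher chroma than the background.
    gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)
    result = cv2.cvtColor(gray, cv2.COLOR_GRAY2BGR)
    result[target_pixels] = image[target_pixels]
    return result

def darken(image, factor=0.5):
    # Darkening filter: scale every pixel's brightness down.
    return np.clip(image.astype(np.float32) * factor, 0, 255).astype(np.uint8)
```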
  • the first color processing method is the first filter method
  • the second color processing method is the second filter method
  • the first filter method is different from the second filter method.
  • The first filter method yields higher chroma than the second filter method.
  • the first color processing method is the third filter method
  • the second color processing method is the fourth filter method
  • the third filter method is different from the fourth filter method.
  • The third filter method yields higher brightness than the fourth filter method.
  • color processing includes processing of brightness and / or chroma.
  • The filter can include adjustment of chroma, brightness, and hue, and can also include superimposed textures, etc.
  • By adjusting chroma and hue, a certain color family can be made darker or lighter, or have its hue changed, while the other colors remain unchanged.
  • the filter can also be understood as a kind of pixel-to-pixel mapping. The pixel value of the input image is mapped to the pixel value of the target pixel through a preset mapping table, so as to achieve a special effect.
  • the filter may be a preset parameter template, and these color-related parameters may be parameters in a filter template known in the industry, or may be parameters independently designed by the user.
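  • A sketch of the mapping-table interpretation of a filter described above; the 256-entry table here is an invented example of a mild tone curve, since the text does not specify mapping values:

```python
import numpy as np

# Hypothetical preset mapping table: lift shadows and compress highlights
# slightly, a crude retro-style tone curve over pixel values 0..255.
lut = np.clip(np.arange(256) * 0.9 + 20, 0, 255).astype(np.uint8)

def apply_filter(image, table=lut):
    # Pixel-to-pixel mapping: each input pixel value indexes the table.
    return table[image]
```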
  • After step 23, the method further includes step 24: saving the image processed in step 23.
  • In the process of taking pictures, the terminal can determine the target individual and the background according to the content of the picture and apply different color processing to each, so that the picture taken by the user highlights the subject more prominently and has a cinematic, blockbuster feel.
  • The image processing methods for video recording and for photographing in the present invention are similar; the difference is that the object of photographing is a single image, while the object of video processing is continuous video frames, that is, multiple consecutive images, which may be a complete video, a segment of a complete video, or a user-defined video clip within a certain period.
  • For the processing flow of each frame in the video or video clip, reference may be made to the processing method in Example 1 above.
  • the image processing method in shooting video may include the following steps:
  • Step 31: Obtain N captured frames of images, where N is a positive integer, and perform steps 32-33 for each frame, where the N frames may be adjacent video frames whose sum can be understood as a piece of video; the N frames may also be non-adjacent.
  • Step 32: An optional implementation may be the same as step 22.
  • Step 33: An optional implementation may be the same as step 23.
  • In addition to the implementation of step 23, step 33 can have richer implementations.
  • For example, any of the subject-confirmation methods in s223 can be applied with a delay: the person and the background are determined in frame L1, and through pixel-label and template comparison, frames L1+1 through L1+L0 can still take the person in those images as the subject and the corresponding area as the target area; it is not necessary to determine the subject and background in every frame.
  • the time for confirming the subject each time can be defined by the user autonomously, and the subject can also be confirmed periodically, such as but not limited to determining every 2s or every 10s, etc.
  • the method of determining the subject every time includes but is not limited to the 6 ways in s223.
  • Step 34: Save the video composed of the N frames of color-processed images.
  • With the present invention, during video recording the target individual and the background can be determined according to the content of the video, and different color processing applied to each, so that the video shot by the user highlights the subject more prominently; the footage gains a cinematic feel, bringing a cool effect to the video and enhancing the user experience.
  • The image processing methods for video recording and for photographing in the present invention are similar, except that the object of photographing is a single image, while the object of video processing is continuous video frames, that is, multiple consecutive images; therefore, the processing flow of each frame can refer to the processing method in Example 1 above.
  • In video shooting, some areas in an image may be mis-detected. If the same area is marked as target in one frame and as background in an adjacent frame, then processing it according to the color methods above gives the same area different colors in adjacent frames; this color change of the same area across adjacent frames is perceived as flicker, so flicker must be judged and eliminated during processing. Flicker can be understood as a misjudgment of the object category.
  • One method of judging whether flicker occurs in the video is to process the previous frame's segmentation template based on the optical flow to obtain an optical-flow segmentation template, and compare the optical-flow segmentation template with the current frame's segmentation template: when the degree of coincidence between them exceeds a certain proportion, it is judged that no flicker occurs; otherwise, flicker is judged to occur. In addition, it should be understood that judging flicker is a continuous process.
  • a specific method for determining whether there is flicker is as follows:
  • The optical flow indicates the displacement relationship of pixels between the preceding and following frames (frame t-1 and frame t).
  • If, among the first N0 (a positive integer greater than 2) images before the current image, the number of adjacent image groups in which the same object is judged to belong to different categories is greater than a preset threshold, it can be determined that the current frame needs flicker-abnormality processing; if that number is not greater than the preset threshold, it can be determined that the current frame does not need flicker-abnormality processing.
  • For example, among a predetermined number of historical adjacent frames, if more than half are judged to flicker (for example, 3 of the 5 frames adjacent to the current video frame are judged to flicker), it can be determined that the current frame needs flicker-abnormality processing; if no more than half are judged to flicker (for example, only 1 of the 5 frames before the current video frame is judged to flicker), it can be determined that the current frame does not need flicker-abnormality processing.
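  • A sketch of the optical-flow flicker check described above, assuming OpenCV's Farneback dense flow; the warping direction and the overlap threshold are illustrative assumptions:

```python
import cv2
import numpy as np

def flickers(prev_gray, curr_gray, prev_mask, curr_mask, min_overlap=0.9):
    # Dense optical flow from the previous frame to the current frame.
    flow = cv2.calcOpticalFlowFarneback(prev_gray, curr_gray, None,
                                        0.5, 3, 15, 3, 5, 1.2, 0)
    h, w = prev_mask.shape
    grid_x, grid_y = np.meshgrid(np.arange(w), np.arange(h))
    # Warp the previous frame's segmentation template along the flow.
    map_x = (grid_x - flow[..., 0]).astype(np.float32)
    map_y = (grid_y - flow[..., 1]).astype(np.float32)
    warped = cv2.remap(prev_mask, map_x, map_y, cv2.INTER_NEAREST)
    # Low agreement with the current template is judged as flicker.
    return np.mean(warped == curr_mask) < min_overlap
```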
  • the current video image can be understood as the image being recorded at a certain moment, and a certain moment here can be understood as a generalized moment in some scenes; it can also be understood as some specific moments in some scenes, Such as the latest moment, or the moment when the user is interested.
  • the image processing method in the captured video in this example may include the following steps:
  • Step 41: Obtain N captured frames of images, where N is a positive integer, and perform the following steps 42-45 for each frame, where the N frames may be adjacent video frames whose sum can be understood as a piece of video; the N frames may also be non-adjacent.
  • Step 42: Determine whether the number of adjacent image groups in which flicker occurs within the previous N0 frames of the current frame (the current image) is greater than a preset threshold.
  • the N0 and the threshold here may be set by the user.
  • N0 is the number of selected historical video frame samples
  • the threshold may be 1/2 or 2/3 of N0, which is not limited by way of example.
  • Step 43: An optional implementation may be the same as step 32.
  • Step 44: An optional implementation may be the same as step 33.
  • That is, if the judgment result is not greater than the preset threshold, the operations of steps 43 and 44 are performed on the currently captured or collected image.
  • Step 45: If the judgment result is greater than the preset threshold, use the same color processing method for all image areas of the current frame to obtain the target image.
  • the same color processing method may be the same as the color processing method of the background area in the previous frame, or may be the same as the color processing method of the target area in the previous frame, or may be the same as the color processing method of the entire image in the previous frame .
  • the same color processing method as that for the background area in step 33 (23) may be used for the entire image; or the same color processing method as for the target area in step 33 (23) may be used for the entire image.
  • For example, the entire image remains in color, or the entire image becomes black and white, or the entire image adopts the first or second color processing method (including but not limited to the color processing methods mentioned in Example 1 above).
  • the template segmentation process as in step 22 may exist or may be omitted, which is not limited in this example.
  • Step 46: Save the video composed of the N frames of color-processed images.
  • N is a positive integer.
  • With the present invention, during video recording the target individual and the background can be determined according to the content of the video, and different color processing applied to each, so that the video shot by the user highlights the subject more prominently; the footage gains a cinematic feel, bringing a cool effect to the video and enhancing the user experience.
  • The content of the picture taken by the user often changes, so the main body of the picture often changes.
  • The user also wants to freely choose the color processing method for the main body in different pictures, to independently control the video style.
  • the image processing method in the process of shooting video may include the following steps:
  • Step 51: Obtain video frames.
  • Step 52: For any video frame acquired in the video, determine the subject area and the background area in the frame.
  • Step 53: Any color processing method can be used for the subject area at any time, and any color processing method can be used for the background area at any time; however, it must be ensured that, for any image, the brightness or chroma of the subject area after color processing is higher than that of the background area after color processing; or, for any image, the color processing method applied to the subject area yields higher chroma or brightness in the resulting image than the method applied to the background area.
  • The content of the picture taken by the user often changes, so the main body of the picture often changes.
  • The user also wants to freely choose the color processing method for the main body in different pictures, to independently control the video style; in particular, to change the color in different time periods.
  • the image processing method in the process of shooting video may include the following steps:
  • Step 61: Collect N1 images in a first period and N2 images in a second period, where the first period and the second period are adjacent periods and N1 and N2 are both positive integers.
  • The first period and the second period may each be a duration over which the user can distinguish the image change with the naked eye; N1 and N2 are determined by the recording frame rate and the duration of each period, which are not limited in the present invention.
  • Step 62: for each of the N1 images, determine a first target region and a first background region in the image, the first background region being the part of the image other than the first target region; the first target region in each of the N1 images corresponds to a first object (which may contain at least one object). For each of the N2 images, determine a second target region and a second background region in the image, the second background region being the part of the image other than the second target region; the second target region in each of the N2 images corresponds to a second object (which may contain at least one object).
  • Step 63: process the first target region with a first color processing method, the first background region with a second color processing method, the second target region with a third color processing method, and the second background region with a fourth color processing method, to obtain the target video. In the target video, the chroma of the first target region is greater than that of the first background region, or the brightness of the first target region is greater than that of the first background region; and the chroma of the second target region is greater than that of the second background region, or the brightness of the second target region is greater than that of the second background region.
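  • A minimal illustrative Python sketch of steps 61-63, assuming each frame is a numpy array with a precomputed boolean target mask; method1 through method4 are hypothetical per-frame color processing functions.

    def process_two_periods(frames1, masks1, frames2, masks2,
                            method1, method2, method3, method4):
        out = []
        for frame, mask in zip(frames1, masks1):   # first period: N1 images
            res = method2(frame)                   # second method on first background
            res[mask] = method1(frame)[mask]       # first method on first target
            out.append(res)
        for frame, mask in zip(frames2, masks2):   # second period: N2 images
            res = method4(frame)                   # fourth method on second background
            res[mask] = method3(frame)[mask]       # third method on second target
            out.append(res)
        return out                                 # frames of the target video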
  • In some application scenarios, the picture content captured by the user often changes, so the subject of the picture often changes as well.
  • The user may also want to freely select, in different pictures, which target subject to highlight. For example, the image region corresponding to a first object is determined as the target region in a first period, and the image region corresponding to a second object is determined as the target region in a second period, where the first object and the second object are different objects, individuals, or categories.
  • In this scenario, the image processing method during video shooting may include the following steps:
  • Step 71: an optional implementation may be the same as step 61.
  • Step 72: determine the first target region and the first background region in any one of the N1 images according to the image content, and determine the second target region and the second background region in any one of the N2 images according to the image content.
  • The object corresponding to the second target region differs from the object corresponding to the first target region, either as an individual or as a category, which allows the system or the user to independently select the target subject and the target region of the image.
  • An image is composed of a subject and a background; correspondingly, it is composed of a target region and a background region.
  • For example, the first object is a person and the second object is an animal; or the first object is person A and the second object is person B; or the first object is two people and the second object is a dog and two cats, and so on. All remaining regions that are not recognized are marked as background.
  • This method may determine the image segmentation mask by the methods of s221 and s222 described above, but the subsequent steps are not limited to determining the target object from the segmentation mask in every frame of image.
  • Optionally, based on the image segmentation mask, the user may select freely: the first object and the second object are determined by the user's selection instruction. For example, when the user taps an individual on the screen, the system recognizes the pixel corresponding to the input instruction, further identifies which individual(s) (at least one) or which category(ies) (at least one) of the mask the user has selected, and determines that individual, or all individuals under that category, as the first object.
  • The first object, or its corresponding image region, is determined as the first target region and may be maintained for a period of time; that is, in the next several frames, the regions corresponding to the segmentation masks of the first object are all the first target region, until the user selects another individual at a later moment, at which point the region corresponding to the new individual is determined as the second target region by the similar method described above.
  • In any image, the image region other than the first target region or the second target region is the background region. That is, in the first period the region corresponding to the segmentation mask of the first object is the first target region, and in the second period the region corresponding to the segmentation mask of the second object is the second target region.
  • Optionally, the system may determine a target mask for the images within each time span at a preset time interval (such as, but not limited to, 1 s or 2 s) or every preset number of frames (such as, but not limited to, 50 or 100 frames).
  • For example, a first target mask is determined at frame 101; for each of the following frames 102-200, the segmentation mask of the same category or the same individual as the first target mask of frame 101 is used as the first target mask, until a second target mask is determined at frame 201; for each of the following frames 202-300, the segmentation mask of the same category or the same individual as the second target mask of frame 201 is used as the second target mask.
  • The numbers exemplified above may be defined in advance by the user or the system. That is, a target mask is determined at a certain moment, and the mask of that category or of that individual is then applied continuously for a period of time, as in the sketch below.
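  • A minimal illustrative Python sketch of this periodic refresh, under the assumption that segmentation yields, per frame, a dict mapping a category/individual label to a boolean mask; pick_target stands in for any of the six ways of s223 and all names are examples only.

    def track_target_masks(per_frame_masks, refresh_every, pick_target):
        targets, label = [], None
        for i, masks in enumerate(per_frame_masks):
            if i % refresh_every == 0 or label not in masks:
                label = pick_target(masks)    # re-determine the target template
            targets.append(masks[label])      # reuse the same label in between
        return targets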
  • The method for determining the initial first target mask and the initial second target mask may refer to, but is not limited to, any one of the six ways in step s223. Therefore, the first target mask and the second target mask may be of the same category or the same individual, or of different categories or different individuals; this depends on the recognition capability of the network, changes in the scene, and the user's input commands.
  • The first target region and the first background region, and the second target region and the second background region, are then further determined according to a method such as s224; this example does not repeat the details.
  • Step 73: an optional implementation may be the same as step 63.
  • In addition, because this example processes different time periods, many combinations of color processing methods are possible. For example: the first color processing method is the same as the third color processing method, and the second color processing method is the same as the fourth color processing method. This combination has good consistency over time.
  • For example: the first color processing method is the same as the third color processing method, but the second color processing method differs from the fourth color processing method. This combination keeps the color of the target subject consistent while the background color changes, making the overall look more striking.
  • For example: the first color processing method differs from the third color processing method, and the second color processing method is the same as the fourth color processing method. This combination keeps the background color consistent while the color of the target subject changes, making the target subject more prominent.
  • For example: the first color processing method differs from the third color processing method, and the second color processing method differs from the fourth color processing method. This combination provides more color transformations and can offer more color pairings for the requirements of different scenes.
  • The first or the third color processing method includes filters such as color retention or color enhancement; the second or the fourth color processing method includes filters such as black-and-white, darkening, retro, film, blur, or bokeh.
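  • Minimal illustrative sketches of three of these filters, assuming Python with numpy and BGR uint8 images (the grayscale weights follow the common BT.601 convention; all names are examples only).

    import numpy as np

    def black_and_white(img_bgr):
        # Black-and-white filter: map every pixel value to its gray value.
        gray = 0.114 * img_bgr[..., 0] + 0.587 * img_bgr[..., 1] + 0.299 * img_bgr[..., 2]
        return np.repeat(gray[..., None], 3, axis=2).astype(np.uint8)

    def darken(img_bgr, factor=0.6):
        # Darkening filter: lower the brightness of every pixel value.
        return (img_bgr.astype(np.float32) * factor).astype(np.uint8)

    def color_retention(img_bgr):
        # Color retention: leave the pixel values of the region unchanged.
        return img_bgr.copy()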
  • For the color processing methods applied to the target region and the background region of the same image, refer to step 23. (For the N2 images, the third and the fourth color processing methods are analogous to the first and the second color processing methods, respectively.)
  • Through the above solutions, in some scenarios the user can freely choose the color processing method of the background in different pictures, achieving different background treatments; in other scenarios the user can freely choose the color processing method of the subject, setting off the subject to different degrees or in different forms.
  • It should be understood that, across the different examples of the present invention, signals denoted by the same reference signs may have different sources or may be obtained in different ways; this does not constitute a limitation.
  • In addition, in the step references across different examples, "same as step xx" emphasizes that the signal-processing logic of the two steps is similar; it is not limited to the inputs and outputs of the two being exactly identical, nor to the two method flows being completely equivalent. Reasonable references and variations that a person skilled in the art can derive from them shall fall within the protection scope of the present invention.
  • The present invention provides an image processing method that determines the target region and the background region in an image through mask segmentation; by applying different color processing methods to the target region and the background region, the brightness or chroma of the target region is made higher than that of the background region, so that the subject corresponding to the target region is displayed more prominently, achieving a cinematic effect.
  • Based on the image processing method provided by the foregoing embodiments, an embodiment of the present invention provides an image processing apparatus 900. The apparatus may be applied to a variety of terminal devices and may be any implementation form of the terminal 100, such as a terminal including a camera function. Referring to Figure 9, the apparatus includes:
  • The shooting module 901 is configured to obtain an image, which may be a photo or a video. This module is specifically configured to perform the methods mentioned in step 21, step 31, step 51, step 61, or step 71 of the above examples, or equivalently replaceable methods; it may be implemented by the processor invoking the corresponding program instructions in the memory to control the camera to capture images.
  • The determining module 902 is configured to determine the target region and the background region in the image according to the image content. This module is specifically configured to perform the methods mentioned in step 22, step 32, step 52, step 62, or step 72 of the above examples, or equivalently replaceable methods; it may be implemented by the processor invoking the corresponding program instructions in the memory to execute the corresponding algorithm.
  • The color processing module 903 is configured to apply different color processing methods to the target region and the background region in the image to obtain the target image or target video, making the chroma of the target region greater than that of the background region, or the brightness of the target region greater than that of the background region. This module is specifically configured to perform the methods mentioned in step 23, step 33, step 53, step 63, or step 73 of the above examples, or equivalently replaceable methods; it may be implemented by the processor invoking the corresponding program instructions in the memory through a certain algorithm.
  • In addition, the apparatus may further include a saving module 904 configured to store color-processed images or videos.
  • The present invention provides an image processing apparatus that determines the target region and the background region in an image according to the image content through mask segmentation; by applying different color processing methods to the target region and the background region, the brightness or chroma of the target region is made higher than that of the background region, so that the subject corresponding to the target region is displayed more prominently, achieving a cinematic effect.
  • Based on the image processing method provided by the foregoing embodiments, an embodiment of the present invention further provides an image processing apparatus 1000. The apparatus may be applied to a variety of terminal devices and may be any implementation form of the terminal 100, such as a terminal including a camera function. Referring to FIG. 10, the apparatus includes:
  • The shooting module 1001 is configured to obtain an image, which may be a photo or a video. This module is specifically configured to perform the methods mentioned in step 21, step 31, step 51, step 61, or step 71 of the above examples, or equivalently replaceable methods; it may be implemented by the processor invoking the corresponding program instructions in the memory to control the camera to capture images.
  • The judging module 1002 is configured to judge whether the number of flickering frames among the previous N0 frames of the current frame is greater than a preset threshold. If the judgment result is not greater than the preset threshold, the judging module 1002 triggers the determining module 1003 and the color processing module 1004 to perform their related functions; if the judgment result is greater than the preset threshold, the judging module 1002 triggers the flicker repair module 1005 to perform its related function. The module 1002 is specifically configured to perform the method mentioned in step 42 of the above examples, or an equivalently replaceable method; it may be implemented by the processor invoking the corresponding program instructions in the memory to execute the corresponding algorithm.
  • The determining module 1003 is configured to, when the judging module 1002 judges that the number of flickering frames among the previous N0 frames of the current frame is not greater than the preset threshold, determine the target region and the background region in the image according to the image content. This module is specifically configured to perform the methods mentioned in step 22, step 32, step 43, step 52, step 62, or step 72 of the above examples, or equivalently replaceable methods; it may be implemented by the processor invoking the corresponding program instructions in the memory to execute the corresponding algorithm.
  • The color processing module 1004 is configured to apply different color processing methods to the target region and the background region in the image, making the chroma of the target region greater than that of the background region, or the brightness of the target region greater than that of the background region. This module is specifically configured to perform the methods mentioned in step 23, step 33, step 44, step 53, step 63, or step 73 of the above examples, or equivalently replaceable methods; it may be implemented by the processor invoking the corresponding program instructions in the memory through a certain algorithm.
  • The flicker repair module 1005 is configured to, when the judging module 1002 judges that the number of flickering frames among the previous N0 frames of the current frame is greater than the preset threshold, apply the same color processing method to all image regions of the current frame. The same color processing method may be identical to the color processing method of the background region in the previous frame, or to that of the target region in the previous frame. This module is specifically configured to perform the method mentioned in step 45 of the above examples, or an equivalently replaceable method; it may be implemented by the processor invoking the corresponding program instructions in the memory through a certain algorithm.
  • In addition, the apparatus 1000 may further include a saving module 1006 configured to store color-processed images or videos.
  • The present invention provides an image processing apparatus that determines the target region and the background region in an image according to the image content through mask segmentation; by applying different color processing methods to the target region and the background region, the brightness or chroma of the target region is made higher than that of the background region, so that the subject corresponding to the target region is displayed more prominently, achieving a cinematic effect.
  • It should be understood that the division of the above apparatus into modules is merely a division of logical functions; in an actual implementation, the modules may be fully or partially integrated into one physical entity, or may be physically separated. Each of the above modules may be a separately established processing element, may be integrated in a chip of the terminal, or may be stored in a storage element of the controller in the form of program code, with a processing element of the processor invoking and executing the functions of the modules. The modules may be integrated together or implemented independently.
  • The processing element described herein may be an integrated circuit chip with signal processing capability. In implementation, each step of the above method, or each of the above modules, may be completed by an integrated logic circuit of hardware in the processing element or by instructions in the form of software.
  • The processing element may be a general-purpose processor, such as a central processing unit (CPU), or one or more integrated circuits configured to implement the above method, for example one or more application-specific integrated circuits (ASICs), one or more digital signal processors (DSPs), or one or more field-programmable gate arrays (FPGAs).
  • Those skilled in the art should understand that the embodiments of the present invention may be provided as a method, a system, or a computer program product. Therefore, the present invention may take the form of an entirely hardware embodiment, an entirely software embodiment, or an embodiment combining software and hardware aspects. Moreover, the present invention may take the form of a computer program product implemented on one or more computer-usable storage media (including but not limited to disk storage, CD-ROM, and optical storage) containing computer-usable program code.
  • The present invention is described with reference to flowcharts and/or block diagrams of the method, device (system), and computer program product according to the embodiments of the present invention. It should be understood that each flow and/or block in the flowcharts and/or block diagrams, and combinations of flows and/or blocks therein, may be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general-purpose computer, a special-purpose computer, an embedded processor, or another programmable data processing device to produce a machine, so that the instructions executed by the processor produce an apparatus for implementing the functions specified in one or more flows of the flowcharts and/or one or more blocks of the block diagrams.
  • These computer program instructions may also be stored in a computer-readable memory that can direct a computer or another programmable data processing device to work in a specific manner, so that the instructions stored in the computer-readable memory produce an article of manufacture including an instruction apparatus, and the instruction apparatus implements the functions specified in one or more flows of the flowcharts and/or one or more blocks of the block diagrams.
  • These computer program instructions may also be loaded onto a computer or another programmable data processing device, so that a series of operation steps are performed on the computer or the other programmable device to produce computer-implemented processing; the instructions executed on the computer or the other programmable device thereby provide steps for implementing the functions specified in one or more flows of the flowcharts and/or one or more blocks of the block diagrams.

Abstract

本发明提供了一种图像处理方法,通过对图像进行模板分割,在图像中确定出目标区域和背景区域;通过对目标区域和背景区域施加不同的色彩处理方式,使得目标区域的亮度或色度高于背景区域的亮度和色度,使得目标区域对应的主题更加显著地突出显示,实现终端用户拍照或拍摄视频时具有电影特效,提升用户拍照体验。此外,本发明还提供了一种分时段确认目标区域以及分时段变换颜色处理方式的方法,使得视频内容的主体变化和颜色变化更加灵活自主,加强人机交互。

Description

一种图像处理方法、装置与设备 技术领域
本发明涉及终端技术领域,尤其涉及一种图像处理方法、装置与设备。
背景技术
拍摄,是用摄影机﹑录像机把人﹑物的形象记录下来。不同的场景有不同的拍摄技巧,有夜景拍摄、雨景拍摄、建筑物拍摄、人像拍摄等,电影动态艺术拍摄同样是拍摄的一类,但都要遵循一定的原则。随着科技的进步,拍摄也变得越来越简单,越来越符合大众化。
随着网络带宽的提升以及终端处理能力的增强,视频和图像的拍摄和分享越来越便捷,视频消费已经成为新的全民生活方式。视频已经快速成为网络上流量的主要集中地,在未来几年中预期会占到80%~90%的流量。
日常生活中,拍摄已经成为人们展示自己和探寻美好的主要途径,人们都希望自己拍出更多有趣的风格;例如在拍摄的同时,完成图像或者视频的特效处理,实现所见即所得的拍摄体验。因此,对于非专业人士,终端中需要集成更多的新颖的图像处理技术。
目前终端录像功能普遍比较单调。现有的视频拍摄通常只能提供常规拍摄,而无法实现一些个性化的效果。
发明内容
本发明提供了一种图像处理方法,通过对图像进行模板分割,在图像中确定出目标区域和背景区域;通过对目标区域和背景区域施加不同的色彩处理方式,使得目标区域的亮度或色度高于背景区域的亮度和色度,使得目标区域对应的主题更加显著地突出显示,实现终端用户拍照或拍摄视频时具有电影特效,提升用户拍照体验。
本发明实施例提供的具体技术方案如下:
第一方面,本发明实施例提供一种图像处理方法,方法应用于录制视频的过程中,方法包括:在第一时段内采集N1个图像;在第二时段内采集N2个图像;其中,所述第一时段与所述第二时段为相邻时段,N1和N2均为正整数;对于N1个图像中的每个图像,在图像中确定出第一目标区域和第一背景区域;第一背景区域为图像中除第一目标区域以外的部分;其中,所述N1个图像中每个图像中的第一目标区域都对应第一物体;对于N2个图像中的每个图像,在图像中确定出第二目标区域和第二背景区域;第二背景区域为图像中除第二目标区域以外的部分;其中,所述N2个图像中每个图像中的第二目标区域都对应第二物体;对第一目标区域采用第一颜色处理方式进行处理,对第一背景区域采用第二颜色处理方式进行处理,对第二目标区域采用第 三颜色处理方式进行处理,对第二背景区域采用第四颜色处理方式进行处理,得到目标视频;其中,在目标视频中,第一目标区域的色度大于第一背景区域的色度,或第一目标区域的亮度大于第一背景区域的亮度;并且,第二目标区域的色度大于第二背景区域的色度,或第二目标区域的亮度大于第二背景区域的亮度。
第二方面,本发明实施例提供一种图像处理装置,装置以用于拍摄视频的过程中,装置包括:拍摄模块,用于在第一时段内采集N1个图像,在第二时段内采集N2个图像;其中,第一时段与第二时段为相邻时段,N1和N2均为正整数;确定模块,用于对于N1个图像中的每个图像,在图像中确定出第一目标区域和第一背景区域;第一背景区域为图像中除第一目标区域以外的部分;其中,N1个图像中每个图像中的第一目标区域都对应第一物体;对于N2个图像中的每个图像,在图像中确定出第二目标区域和第二背景区域;第二背景区域为图像中除第二目标区域以外的部分;其中,N2个图像中每个图像中的第二目标区域都对应第二物体;颜色处理模块,用于对第一目标区域采用第一颜色处理方式进行处理,对第一背景区域采用第二颜色处理方式进行处理,对第二目标区域采用第三颜色处理方式进行处理,对第二背景区域采用第四颜色处理方式进行处理,得到目标视频;其中,在目标视频中,第一目标区域的色度大于第一背景区域的色度,或第一目标区域的亮度大于第一背景区域的亮度;并且,第二目标区域的色度大于第二背景区域的色度,或第二目标区域的亮度大于第二背景区域的亮度。
根据第一方面或者第二方面,在一种可能的设计中,第一物体和第二物体对应的是相同的物体。
根据第一方面或者第二方面,在一种可能的设计中,第一物体和第二物体对应的是相同的物体或者是不同的物体。
根据第一方面或者第二方面,在一种可能的设计中,第一物体或第二物体包括人物、动物或植物中的至少一个个体。
根据第一方面或者第二方面,在一种可能的设计中,第一物体和第二物体的确定是由用户的选择指令决定的。
具体地,例如在第一时段的第一帧图像中,根据用户的选择指令确定出第一物体,并且第一时段内所有的图像都以第一物体作为目标物体;同理,在第二时段的第一帧图像中,根据用户的选择指令确定出第二物体,并且第二时段内所有的图像都以第二物体作为目标物体。如可以对图像进行语义分割,得到k个分割模板;其中,k个分割模板对应于不同的物体类别,用户输入的选择指令对应哪个或哪些分割模板,哪个或哪些分割模板对应的物体即目标物体(包括第一目标物体或第二目标物体)。
根据第一方面或者第二方面,在一种可能的设计中,第一物体和第二物体是由终端根据预设时间间隔的两个图像的内容分别确定出来的。
具体地,例如在第一时段的第一帧图像中,确定出第一物体,并且第一时段内所有的图像都以第一物体作为目标物体;同理,在第二时段的第一帧图像中,确定出第二物体,并且第二时段内所有的图像都以第二物体作为目标物体。在第一时段的第一帧图像中确定出第一物体,以及在第二时段的第一帧图像中确定出第二物体包括但不限于下面的方式之一:
对图像进行语义分割,得到k个分割模板;其中,k个分割模板对应于不同的物体类别;
若k=2,且这2个分割模板中包含1个物体模板和1个背景模板,则将物体模板对应的图像区域确定为目标区域;将背景模板对应的区域确定为背景区域;相应的,物体模板对应的物体即为第一物体或第二物体;或者,
若k大于2,且k个分割模板当中有k0个物体模板包含的像素数量大于预设阈值,则将k0个物体模板对应的图像区域确定为目标区域,将其余的分割模板对应的图像区域确定为背景区域;相应的,物体模板对应的物体即为第一物体或第二物体;其中,k0为小于k的非负整数;或者,
若k大于2,将k个分割模板当中包含像素数量最多的分割模板对应的图像区域确定为目标区域,将其余的分割模板对应的图像区域确定为背景区域;相应的,物体模板对应的物体即为第一物体或第二物体;或者,
若k大于2,则根据预先设置的物体类别的优先级在k个分割模板中确定出目标模板;将目标模板对应的图像区域确定为目标区域,将其余的分割模板对应的图像区域确定为背景区域;相应的,物体模板对应的物体即为第一物体或第二物体;或者,
若k大于2,则根据用户的选择指令在k个分割模板中确定出目标模板;将目标模板对应的图像区域确定为目标区域,将其余的分割模板对应的图像区域确定为背景区域;相应的,物体模板对应的物体即为第一物体或第二物体。该方法可以由确定模块具体执行。
根据第一方面或者第二方面,在一种可能的设计中,第一颜色处理方式与第三颜色处理方式相同,且第二颜色处理方式与第四颜色处理方式相同。
根据第一方面或者第二方面,在一种可能的设计中,第一颜色处理方式与第三颜色处理方式相同,且第二颜色处理方式与第四颜色处理方式不相同。
根据第一方面或者第二方面,在一种可能的设计中,第一颜色处理方式与第三颜色处理方式不相同,且第二颜色处理方式与所述第四颜色处理方式相同。
根据第一方面或者第二方面,在一种可能的设计中,第一颜色处理方式与第三颜色处理方式不相同,且第二颜色处理方式与所述第四颜色处理方式不相同。
根据第一方面或者第二方面,在一种可能的设计中,第一颜色处理方式或第三颜色处理方式包括:保留色彩,或色彩增强中的一种。
根据第一方面或者第二方面,在一种可能的设计中,第二颜色处理方式或第四颜色处理方式包括:黑白、变暗、模糊、或复古中的一种。
第三方面,本发明实施例提供一种终端设备,包含摄像头、存储器、处理器、总线;摄像头、存储器、以及处理器通过总线相连;摄像头用于采集图像,存储器用于存储计算机程序和指令,处理器用于调用存储器中存储的计算机程序和指令以及采集的图像,还具体用于使终端设备执行如上述任何一种可能的设计方法。
根据第三方面,在一种可能的设计中,终端设备还包括天线系统、天线系统在处理器的控制下,收发无线通信信号实现与移动通信网络的无线通信;移动通信网络包括以下的一种或多种:GSM网络、CDMA网络、3G网络、4G网络、5G网络、FDMA、 TDMA、PDC、TACS、AMPS、WCDMA、TDSCDMA、WIFI以及LTE网络。
对于上述任何一种可能的设计中的技术方案,在不违背自然规律的前提下,可以进行方案之间的组合。
现有技术中拍摄视频和图像,不加任何图像中个体的区分以及颜色的区分,特效效果不够丰富,通过本发明,能够将图像中的不同区域通过颜色进行差异性区分,使得照片或者视频的特效增强,更能突显图像中的主体和目标,使得主要角色更加突出。同时本发明还能提供更多的颜色变换和主体变化,提升用户的使用粘性。
附图说明
图1为本发明实施例中一种终端结构示意图;
图2为本发明实施例中一种图像处理方法流程图;
图3为本发明实施例中一种分割模板标识示例;
图4为本发明实施例中另一种分割模板标识示例;
图5为本发明实施例中一种确定目标模板的示意图;
图6为本发明实施例中另一种确定目标模板的示意图;
图7为本发明实施例中另一种确定目标模板的示意图;
图8为本发明实施例中另一种确定目标模板的示意图;
图9为本发明实施例中一种图像处理装置示意图;
图10为本发明实施例中另一种图像处理装置示意图。
具体实施方式
下面将结合本发明实施例中的附图,对本发明实施例中的技术方案进行清楚、完整地描述,显然,所描述的实施例仅仅是本发明一部分实施例,并不是全部的实施例。基于本发明中的实施例,本领域普通技术人员在没有做出创造性劳动前提下所获得的所有其他实施例,都属于本发明保护的范围。
本发明实施例中,终端,可以是向用户提供拍摄视频和/或数据连通性的设备,具有无线连接功能的手持式设备、或连接到无线调制解调器的其他处理设备,比如:数码相机、单反相机、移动电话(或称为“蜂窝”电话)、智能手机,可以是便携式、袖珍式、手持式、可穿戴设备(如智能手表等)、平板电脑、个人电脑(PC,Personal Computer)、PDA(Personal Digital Assistant,个人数字助理)、车载电脑、无人机、航拍器等。
图1示出了终端100的一种可选的硬件结构示意图。
参考图1所示,终端100可以包括射频单元110、存储器120、输入单元130、显示单元140、摄像头150、音频电路160(包含扬声器161、麦克风162)、处理器170、外部接口180、电源190等部件。本领域技术人员可以理解,图1仅仅是智能终端或 多功能设备的举例,并不构成对智能终端或多功能设备的限定,可以包括比图示更多或更少的部件,或者组合某些部件,或者不同的部件。如,至少存在存储器120、处理器170、摄像头150。
摄像头150用于采集图像或视频,可以通过应用程序指令触发开启,实现拍照或者摄像功能。摄像头可以包括成像镜头,滤光片,图像传感器等部件。物体发出或反射的光线进入成像镜头,通过滤光片,最终汇聚在图像传感器上。成像镜头主要是用于对拍照视角中的所有物体(也可称为待拍摄场景、待拍摄对象、目标场景或目标对象,也可以理解为用户期待拍摄的场景图像)发出或反射的光汇聚成像;滤光片主要是用于将光线中的多余光波(例如除可见光外的光波,如红外)滤去;图像传感器主要是用于对接收到的光信号进行光电转换,转换成电信号,并输入到处理器170进行后续处理。其中,摄像头可以位于终端设备的前面,也可以位于终端设备的背面,摄像头具体个数以及排布方式可以根据设计者或厂商策略的需求灵活确定,本申请不做限定。
输入单元130可用于接收输入的数字或字符信息,以及产生与所述便携式多功能装置的用户设置以及功能控制有关的键信号输入。具体地,输入单元130可包括触摸屏131和/或其他输入设备132。所述触摸屏131可收集用户在其上或附近的触摸操作(比如用户使用手指、关节、触笔等任何适合的物体在触摸屏上或在触摸屏附近的操作),并根据预先设定的程序驱动相应的连接装置。触摸屏可以检测用户对触摸屏的触摸动作,将所述触摸动作转换为触摸信号发送给所述处理器170,并能接收所述处理器170发来的命令并加以执行;所述触摸信号至少包括触点坐标信息。所述触摸屏131可以提供所述终端100和用户之间的输入界面和输出界面。此外,可以采用电阻式、电容式、红外线以及表面声波等多种类型实现触摸屏。除了触摸屏131,输入单元130还可以包括其他输入设备。具体地,其他输入设备132可以包括但不限于物理键盘、功能键(比如音量控制按键132、开关按键133等)、轨迹球、鼠标、操作杆等中的一种或多种。
所述显示单元140可用于显示由用户输入的信息或提供给用户的信息、终端100的各种菜单、交互界面、文件显示和/或任意一种多媒体文件的播放。在本发明实施例中,显示单元还用于显示设备利用摄像头150获取到的图像/视频,可以包括某些拍摄模式下的预览图像/视频、拍摄的初始图像/视频以及拍摄后经过一定算法处理后的目标图像/视频。
进一步的,触摸屏131可覆盖显示面板141,当触摸屏131检测到在其上或附近的触摸操作后,传送给处理器170以确定触摸事件的类型,随后处理器170根据触摸事件的类型在显示面板141上提供相应的视觉输出。在本实施例中,触摸屏与显示单元可以集成为一个部件而实现终端100的输入、输出、显示功能;为便于描述,本发明实施例以触摸显示屏代表触摸屏和显示单元的功能集合;在某些实施例中,触摸屏与显示单元也可以作为两个独立的部件。
所述存储器120可用于存储指令和数据,存储器120可主要包括存储指令区和存储数据区,存储数据区可存储各种数据,如多媒体文件、文本等;存储指令区可存储操作系统、应用、至少一个功能所需的指令等软件单元,或者他们的子集、扩展集。 还可以包括非易失性随机存储器;向处理器170提供包括管理计算处理设备中的硬件、软件以及数据资源,支持控制软件和应用。还用于多媒体文件的存储,以及运行程序和应用的存储。
处理器170是终端100的控制中心,利用各种接口和线路连接整个手机的各个部分,通过运行或执行存储在存储器120内的指令以及调用存储在存储器120内的数据,执行终端100的各种功能和处理数据,从而对手机进行整体控制。可选的,处理器170可包括一个或多个处理单元;优选的,处理器170可集成应用处理器和调制解调处理器,其中,应用处理器主要处理操作系统、用户界面和应用程序等,调制解调处理器主要处理无线通信。可以理解的是,上述调制解调处理器也可以不集成到处理器170中。在一些实施例中,处理器、存储器、可以在单一芯片上实现,在一些实施例中,他们也可以在独立的芯片上分别实现。处理器170还可以用于产生相应的操作控制信号,发给计算处理设备相应的部件,读取以及处理软件中的数据,尤其是读取和处理存储器120中的数据和程序,以使其中的各个功能模块执行相应的功能,从而控制相应的部件按指令的要求进行动作。
所述射频单元110可用于收发信息或通话过程中信号的接收和发送,例如,将基站的下行信息接收后,给处理器170处理;另外,将设计上行的数据发送给基站。通常,RF电路包括但不限于天线、至少一个放大器、收发信机、耦合器、低噪声放大器(Low Noise Amplifier,LNA)、双工器等。此外,射频单元110还可以通过无线通信与网络设备和其他设备通信。所述无线通信可以使用任一通信标准或协议,包括但不限于全球移动通讯系统(Global System of Mobile communication,GSM)、通用分组无线服务(General Packet Radio Service,GPRS)、码分多址(Code Division Multiple Access,CDMA)、宽带码分多址(Wideband Code Division Multiple Access,WCDMA)、长期演进(Long Term Evolution,LTE)、电子邮件、短消息服务(Short Messaging Service,SMS)等。
音频电路160、扬声器161、麦克风162可提供用户与终端100之间的音频接口。音频电路160可将接收到的音频数据转换为电信号,传输到扬声器161,由扬声器161转换为声音信号输出;另一方面,麦克风162用于收集声音信号,还可以将收集的声音信号转换为电信号,由音频电路160接收后转换为音频数据,再将音频数据输出处理器170处理后,经射频单元110以发送给比如另一终端,或者将音频数据输出至存储器120以便进一步处理,音频电路也可以包括耳机插孔163,用于提供音频电路和耳机之间的连接接口。扬声器、麦克风的具体个数以及排布方式可以根据设计者或厂商策略的需求灵活确定,本申请不做限定。
终端100还包括给各个部件供电的电源190(比如电池),优选的,电源可以通过电源管理系统与处理器170逻辑相连,从而通过电源管理系统实现管理充电、放电、以及功耗管理等功能。
终端100还包括外部接口180,所述外部接口可以是标准的Micro USB接口,也可以使多针连接器,可以用于连接终端100与其他装置进行通信,也可以用于连接充电器为终端100充电。
尽管未示出,终端100还可以包括闪光灯、无线保真(wireless fidelity,WiFi)模 块、蓝牙模块、不同功能的传感器等,在此不再赘述。下文中描述的部分或全部方法均可以应用在如图1所示的终端中。
本发明可应用于具有拍摄(至少包括拍照或摄像中的一个)功能的终端设备,落地产品形态可以是智能终端,如手机、平板、DV、摄像机、照相机、便携电脑、笔记本电脑、智能机器人、电视、安防系统、无人机等安装有摄像头的产品。具体地,本发明的功能模块可以部署在相关设备的DSP芯片上,具体的可以是其中的应用程序或软件;本发明部署在终端设备上,通过软件安装或升级,通过硬件的调用配合,提供图像处理功能。
本发明主要应用在终端设备拍摄图片或者拍摄视频的场景。人们对视频拍摄的要求也越来越高,希望在拍摄的同时,完成视频的特效处理,实现所见即所得的视频拍摄体验。本发明可以将图片或视频进行主体分割,并对不同的区域进行颜色调整,以实现画面的实时特效。
下面以示例的方式对本发明进行说明。
示例1
具体地,请参阅图2,图2为本发明实施例中一种图像处理方法流程图。该方法发生在拍摄图片的过程中,在具体实现过程中,终端可以配置某一种拍照模式;在该拍照模式下该方法可以包括以下步骤:
步骤21:获取(也可以理解为拍摄或采集)图像。
具体地,当用户拍照时,屏幕中也会显示相应的预览流,预览图像可以泛指预览流中的一个图像,当用户点击快门时,获取到拍摄到的图像,尺寸例如但不限于1920*1080。
步骤22:根据拍摄到的图像中的内容(可以理解为场景语义)在该图像中确定出目标区域和背景区域,更具体的,可以根据图像中物体的类别在图像中确定出目标区域和背景区域;其中,背景区域为图像中除了目标区域以外的部分;目标区域对应于图像中的目标物体或目标对象,即用户想在图像中突出显示的物体或对象,可以与用户的交互选择或者系统设置有关。具体地,步骤22可以包括s221-s224。
s221:图像预处理。
将拍摄到的原始尺寸的图像进行下采样,转换为分辨率更小的图像。基于小图进行计算,可以降低运算量。具体实现过程中,可以将原始尺寸(如m0*n0)下采样到m*n的尺寸;其中m、n的值越小,后续运算量也就越小;但m、n如果过小则会导致后续的像素分辨能力下降。实验表明,m、n的合理取值区间为[128,512],更具体地,[256,300],m和n可以相等也可以不相等。例如,可以将1920*1080的图像下采样到256*256。
s222:将上述经下采样后的m*n的图像输入到神经网络进行语义分割,确定图像分割模板(Mask)。
语义分割是指对图像中物体进行像素级的分割,每个像素都可以标明属于哪类物体;对没有标明类别的部分,则标为“背景”。
具体地,语义分割可以采用基于CNN(Convolutional Neural Networks)的深度学习算法。基于CNN的网络模型,具体描述如下:
1）根据上述m*n的图像进行下采样和卷积；如下采样到m1*n1、m2*n2、……、mz*nz，逐层提取图片语义特征，得到m1*n1的特征图，m2*n2的特征图，……，mz*nz的特征图，即多尺度语义特征；其中m1、m2、……、mz成倍数关系且小于m；n1、n2、……、nz成倍数关系且小于n。例如，m=2·m1=4·m2=……=2^z·mz；n=2·n1=4·n2=……=2^z·nz。z的取值以及倍数关系可以根据算法性能和设计需求来确定。
2)根据m1*n1、m2*n2、……、mz*nz的特征图进行卷积和上采样,对多尺度语义特征进行融合。
对于上述提到的卷积、下采样、上采样的方法,可以采用业界公知的技术,本发明中不予以限定和列举。
3)确定出图像需要识别的类别,计算出各个类别在每个像素上的分值,取分值最大的物体类别(可简称类别)作为该像素的分类结果,最终得到Mask图,即分割模板。
例如,如果终端可以识别k个物体类别(如:人、动物、植物、其他预设物体、背景等等中的至少一个),则可以得到k张图;图像中每个像素都会得到一个属于某个类别的分数值,分数越高表示该像素属于该类别的概率越高。
对任意一个像素,一旦确定了类别,就可以进行标识,如用1表示人物,用2表示车辆,用3表示动物,用4表示植物,0表示背景等。仅作举例,不构成任何限定。用户可以根据设计需求任意设计分类数目、种类以及标识方法。一种具体实例可以如图3所示,车辆所在像素区域均被神经网络分类为车,且标识为1,而周围的背景部分的像素区域都神经网络分类为背景,且被标识为0。再例如,神经网络输出的分割模板中,同一类物体的区域具有相同的标签,比如背景的标签为0,猫的标签为1,滑板的标签为2。在如图4中的分割模板中,还可以使用同一种颜色表示相同类别的标签,如人、马、背景分别用不同的颜色标识。
Mask是语义分割算法的结果,将图像中属于某一类物体的像素都标注为某一种颜色或标识,背景也标注为某一种颜色或标识,这样处理后所得到的图,称为Mask,以便于直观地显示分割的结果。
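（以下为一段示意性的Python代码，为基于numpy的假设性实现，函数名与变量名均为举例，仅用于说明"逐像素取分值最大的类别得到Mask"的思路，并非对本发明实现方式的限定。）

    import numpy as np

    def scores_to_mask(scores):
        # scores：形状为 (k, m, n) 的数组，scores[c, i, j] 表示像素 (i, j)
        # 属于第 c 类的分数；逐像素取分值最大的类别作为该像素的分类结果。
        return np.argmax(scores, axis=0)  # 形状为 (m, n)，元素为类别标识，如 0 表示背景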
图像的内容可以包含主体和背景,为了便于叙述,相对应地,图像分割模板中可以包含主体模板和背景模板。主体模板可以对应于通过分割方法识别出来的主体,包括用户在拍照或者拍摄的图像中更想突出显示的个体,例如人物、动物、植物、或某种特定物体(杯子、桌子、衣服、装饰……)等;背景模板对应于图像中没有被识别为主体模板的其他区域;图像分割模板对应于整个图像。主体模板的识别能力与神经网络的性能有关,例如有些神经网络仅能识别出人和背景;有些神经网络能够识别出人、车和背景;有些神经网络仅能识别出车和背景;有些神经网络能够识别出人、动物和背景;有些神经网络仅能识别出动物和背景,有些神经网络能够识别出动物、植物和背景……
应理解,一个图片中也可以只有主体,也可以只有背景,当只有主体的时候也可 以被标识为背景,这些设置可以由用户灵活设计和决定。
深度神经网络的训练需要采用大量的分割训练数据,训练数据集包括含有分割类别的大量图像,包含输入图像和分割模板图。训练集可以覆盖分割对象的各种典型应用场景,且具有数据的多样性。用训练集中的输入图像和分割模板图对网络进行训练,得到优良的网络参数,即得到用户满意的分割性能;并将得到的网络参数作为最终的使用的神经网络的计算参数。
s223:根据分割模板确定目标模板。
对于不同的图像,不同能力的神经网络,可能会得到多种多样的分割模板,终端还可以进一步地确定分割模板中哪些模板对应最需要突出、显著显示的物体,即需要确定目标模板。目标模板的确定包括但不限于以下几种方式。
方式1:如果分割模板仅有一个主体模板和背景模板,则将该主体模板确定为目标模板。
具体地,假设对图像进行语义分割得到k个分割模板;其中,k个分割模板对应于不同的物体类别;若k=2,且这2个分割模板中包含1个物体模板和1个背景模板,则将物体模板对应的图像区域确定为目标区域,将背景模板对应的区域确定为背景区域。
如图5所示,神经网络输出的图像的分割模板仅有主体模板A1和背景模板,则A1可以确定为目标模板。
方式2:如果分割模板中存在多个主体模板和背景模板的时候,若任意一个主体模板包含的像素数量大于一定阈值时,则将该主体模板确定为目标主体;若任意一个主体模板包含的像素数量小于一定阈值时,则将该主体模板进行重新标识,也标识为背景。主体模板包含的像素数量可以指该个体的图像连通区域包含的像素的数量。
具体地,假设对图像进行语义分割得到k个分割模板;其中,k个分割模板对应于不同的物体类别;若k大于2,且k个分割模板当中有k0个物体模板包含的像素数量大于预设阈值,则将k0个物体模板对应的图像区域确定为目标区域,将其余的分割模板对应的图像区域确定为背景区域;其中,k0为小于k的非负整数。
如图6所示,神经网络输出的图像的分割模板中有主体模板A1、A2和背景模板。若其中A1包含的像素数量大于预设阈值,A2包含的像素数量不大于预设阈值,则将A1确定为目标模板,主体模板A2重新标识为背景模板,重新标识后的模板可以如图5所示。若其中A1包含的像素数量大于预设阈值,A2包含的像素数量也大于预设阈值,则将A1、A2均确定为目标模板。若其中A1、A2包含的像素数量均不大于预设阈值,则将A1、A2重新标识,标识为背景模板,即该图像中没有主体模板。
应理解,在具体实现过程中,A1、A2可以是同一类别也可以是不同类别。
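（以下为方式2的一段示意性Python代码，为基于numpy的假设性实现，masks、min_pixels等命名均为举例，仅用于说明按像素数量筛选目标模板的思路。）

    def pick_target_masks(masks, min_pixels):
        # masks：{类别或个体标识: 布尔掩码}，其中 0 表示背景模板；
        # 像素数量大于阈值的主体模板保留为目标模板，其余重新标识为背景。
        return {k: m for k, m in masks.items() if k != 0 and m.sum() > min_pixels}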
方式3:如果分割模板中存在多个主体模板和背景模板的时候,选择包含像素数量最多的主体模板作为目标模板;并将其他主体模板进行重新标识,也标识为背景模板。
具体地,假设对图像进行语义分割得到k个分割模板;其中,k个分割模板对应于不同的物体类别;若k大于2,将k个分割模板当中包含像素数量最多的分割模板对应的图像区域确定为目标区域,将其余的分割模板对应的图像区域确定为背景区域。
如图6所示,神经网络输出的图像的分割模板中有主体模板A1、A2和背景模板,则将含有像素数量最多的A1确定为目标模板,主体模板A2重新标识为背景模板,重新标识后的模板可以如图5所示。
应理解,在具体实现过程中,A1、A2可以是同一类别也可以是不同类别。
方式4:如果分割模板中存在多个主体模板和背景模板的时候,且多个主体模板中存在多个类别时,则按照类别的优先级来确定目标模板。例如,人物模板高于车辆模板的优先级,那么人物模板为目标模板,车辆模板可以被重新标识为背景。例如,人物模板高于动物模板高于植物模板,如果系统设置的优先级为高于植物模板的均为主题模板,那么人物模板和动物模板均为目标模板,植物模板可以被重新标识为背景。应理解,同属统一类别模板的个体可以是一个或者多个。
具体地,假设对图像进行语义分割得到k个分割模板;其中,k个分割模板对应于不同的物体类别;若k大于2,则根据预先设置的物体类别的优先级在k个分割模板中确定出目标模板;将目标模板对应的图像区域确定为目标区域,将其余的分割模板对应的图像区域确定为背景区域。
如图7所示,神经网络输出的图像的分割模板中有主体模板A1、B1和背景模板,A1和B1为不同类别,且A1的优先级高于B1的优先级;如果系统设置包括B1以及B1以上优先级的主题模板都可以作为目标模板,则A1、B1均为目标模板;如果系统设置B1以上优先级的主题模板可以作为目标模板,则将确定A1为目标模板,并将B1进行重新标识,标识为背景模板。
方式5:如果分割模板中存在多个主体模板和背景模板的时候,可以根据用户的输入的选择操作来确定目标模板,输入的方式包括但不限于触屏、语音等选择指令。用户选择了哪个个体,哪个个体对应的主体模板就是目标模板。
具体地,假设对图像进行语义分割得到k个分割模板;其中,k个分割模板对应于不同的物体类别;若k大于2,则根据用户的选择指令在k个分割模板中确定出目标模板;将目标模板对应的图像区域确定为目标区域,将其余的分割模板对应的图像区域确定为背景区域。
如图7所示,神经网络输出的图像的分割模板中有主体模板A1、B1和背景模板。如果用户在拍照过程中触屏点击了A1对应的个体,则将A1确定为目标模板,并将B1进行重新标识,标识为背景模板。如果用户在拍照过程中触屏点击了B1对应的个体,则将B1确定为目标模板;并将A1进行重新标识,标识为背景模板。
方式6:如果分割模板中存在多个主体模板和背景模板的时候,且多个主体模板中存在多个类别时,可以根据用户的输入的选择操作来确定目标模板,输入的方式包括但不限于触屏、语音等选择指令。用户选择了哪个个体,哪个个体对应类别的所有主体模板就是目标模板。
具体地,假设对图像进行语义分割得到k个分割模板;其中,k个分割模板对应于不同的物体类别;若k大于2,则根据用户的选择指令在k个分割模板中确定出目标模板;将目标模板对应的图像区域确定为目标区域,将其余的分割模板对应的图像区域确定为背景区域。
如图8所示,神经网络输出的图像的分割模板中有主体模板A1、A2、B1、B2和 背景模板,其中A1、A2为同一类别,B1、B2为同一类别。如果用户在拍照过程中触屏点击了A1对应的个体,则将同一类别的A1、A2确定为目标模板,并将B1、B2进行重新标识,标识为背景模板。如果用户在拍照过程中触屏点击了B2对应的个体,则将同一类别的B1、B2确定为目标模板;并将A1、A2进行重新标识,标识为背景模板。
应理解,上述方式的示例仅为举例不应构成限定,上述方式在不违背逻辑的情况下可以自由组合,因此图像在分个模板后,可以得到一个或者多个目标模板,这些目标模板可以只有一个类别,也可以包含多个类别,每个类别下还可以有一个或多个个体;显示的结果与终端系统的设置的确定目标模板的规则以及用户的输入有关,一些场景中,一个图像中也可能只含有背景模板。
s224:在原始图像中确定目标区域和背景区域。
将分割模板上采样到拍摄到的图像的原始尺寸,分割模板中的目标模板和背景模板也被上采样,上采样后的目标模板在原始图像中对应的所有像素组成的区域即目标区域,上采样后的背景模板在原始图像中对应的所有像素组成的区域即背景区域。
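（以下为s224的一段示意性Python代码，为基于numpy的假设性实现，采用最近邻方式上采样，命名均为举例，且假设标识0为背景。）

    import numpy as np

    def mask_to_regions(mask_small, h0, w0):
        # 将 m*n 的分割 Mask 以最近邻方式上采样到原图尺寸 h0*w0，
        # 上采样后非背景标识对应的像素组成目标区域，其余为背景区域。
        ys = np.arange(h0) * mask_small.shape[0] // h0
        xs = np.arange(w0) * mask_small.shape[1] // w0
        mask_full = mask_small[ys[:, None], xs[None, :]]
        target_region = mask_full != 0
        return target_region, ~target_region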
步骤23:对图像中的目标区域和背景区域采用不同的颜色处理方式进行处理,得到目标图像;其中,采用不同的颜色处理方式进行处理使得目标区域的色度大于背景区域的色度或者目标区域的亮度大于背景区域的亮度。即在目标图像中目标区域的色度大于背景区域的色度,或者目标区域的亮度大于背景区域的亮度。
具体地,对图像中的目标区域和背景区域分别采用第一颜色处理方式和第二颜色处理方式。包括但不限于以下方式:
方式1:第一颜色处理方式为保留色彩,第二颜色处理方式为滤镜,如对背景区域的颜色转换为黑白;典型的滤镜还包括黑白、变暗、复古、胶片、模糊、虚化等等中的任何一个。
例如,黑白滤镜是将每一个像素值映射为灰度值,实现黑白滤镜的效果;再如,变暗滤镜是将每一个像素值的亮度变低,实现变暗的特殊效果。
方式2:第一颜色处理方式为第一种滤镜方式,第二颜色处理方式为第二种滤镜方式,第一种滤镜方式和第二种滤镜方式不同。其中,对于同一个图像来说,第一种滤镜方式比第二种滤镜方式得到的图像色度更高。
方式3:第一颜色处理方式为第三种滤镜方式,第二颜色处理方式为第四种滤镜方式,第三种滤镜方式和第四种滤镜方式不同。其中,对于同一个图像来说,第三种滤镜方式比第四种滤镜方式得到的图像亮度更高。
应理解,颜色/色彩是由亮度和色度共同表示的,色度是不包括亮度在内的颜色的性质,它反映的是颜色的色调和饱和度,亮度是指色彩的明亮程度。因此颜色处理包含对亮度和/或色度的处理。
具体来说,滤镜可以包括调节色度、亮度、色相等,还可以包括叠加纹理等,通过调节色度和色相,可以有针对性的调节某一个色系,使之变浓、变淡或者改变色调,而其他色系不变。滤镜也可以理解为一种像素对像素的映射,通过预设的映射表,将输入图像的像素值映射为目标像素的像素值,从而实现特效效果。应理解,滤镜可以是预先设定的参数模板,这些与色彩有关的参数可以是业界公知的滤镜模板中的参数, 也可以是由用户自主设计的参数。
作为补充的,在步骤23之后,方法还包括步骤24:保存经步骤23处理的图片。
通过本发明,在拍照的过程中,终端可以根据图片内容确定出目标个体和背景,并对目标个体和背景应用不同的颜色处理,使用户拍摄的图片能更突出主体,使得拍出来的图片具有大片儿感,犹如影片。
示例2
具体地,本发明中录像与拍照的图像处理方法类似,不同之处在于拍照处理的对象是一张图像,而录像处理的对象是连续的视频帧,即连续的多张的图像,既可以是一个完整的视频,也可以是一个完整视频中的一个片段,或者用户自定义的某个时段区间的视频片段。对于视频或者视频片段中的每一帧图像的处理流程都可参考上述示例1中的处理方法。
具体地,拍摄视频中的图像处理方法可以包括以下步骤:
步骤31,获取拍摄的N帧图像,N为正整数,对每一帧图像执行步骤32-33操作,其中,N帧图像可以是相邻的视频帧,N帧图像的总和可以理解为一段视频;N帧图像也可以是非相邻关系。
步骤32,一种可选的实现方式可以同步骤22。
步骤33,一种可选的实现方式可以同步骤23。
作为补充,由于视频是连续的图像组成,因此个体的确定方式也会跟时序有关,因此步骤33除了步骤23之外还可以有更丰富的实现方式。可选的,s223中任意一种确认主体的方式可以具有延时性,例如在第L1帧中确定出人物和背景,通过像素标记和模板比对,可以在第L1+1帧到第L1+L0帧依旧确定这些图像中的该人物为主体,其对应的区域为目标区域0。可以不必每一帧都要判定主体和背景。每次确认主体的时刻可以由用户自主定义,还可以周期性确认主体,例如但不限于每2s或者每10s确定一次等等,每次确定主体的方式包括但不限于s223中的6种方式。
步骤34,保存经过颜色处理的N帧图像组成的视频。
通过本发明,用户可以在录像的过程中,终端根据视频中的内容确定出目标个体和背景,并对目标个体和背景应用不同的颜色处理,使用户拍摄的视频能更突出主体。使得拍出来的视频具有大片儿感,犹如影片,带来视频炫酷感,提升用户体验。
示例3
本发明中录像与拍照的图像处理方法类似,不同之处在于拍照处理的对象是一张图像,而录像处理的对象是连续的视频帧,即连续的多张的图像。因此每一帧图像的处理流程都可参考上述示例1中的处理方法。在一些复杂的拍摄视频的场景中,图像中一些区域可能会被误检,如果同一个区域在相邻帧中分别被标记为目标或背景,则按照上述示例处理颜色的方法,对同一区域处理为不同的颜色,这种相邻帧中同一区 域的颜色的变化会造成感官上的闪烁,因此需要在处理过程中对闪烁进行判断,并消除闪烁。闪烁可以理解为对物体类别的误判。
一种判断视频发生闪烁的方法可以基于光流对前一帧分割模板进行处理,得出一个基于光流的分割模板,对比光流分割模板与当前帧的分割模板的不同,当重合度或相似度超过一定比例时,判断为不闪烁,否则判断为闪烁。另外,应理解,判断闪烁是一个持续的过程。可选的,一种具体的判断是否存在闪烁的方法如下:
1)首先计算相邻帧的光流,光流表明了前后帧(t-1帧和t帧)中像素的位移关系。
2)获得t-1帧的分割模板,并根据t-1帧的分割模板,以及t-1帧和t帧的光流信息,计算出t帧的光流分割模板F,该光流分割模板是根据光流计算得到的。
3)获得t帧的分割模板S;
4)统计光流分割模板F中主体的像素集合SF,统计分割模板S的主体的像素集合SS,计算出SF和SS并集和交集的像素个数分别为Nu和Ni,当(Nu-Ni)/Nu大于一定阈值时,则认为相邻帧t-1帧和t帧的分割模板相差较大,判断为t-1帧和t帧之间会发生闪烁,也可以理解为第t帧发生闪烁。其中,相差较大表明同一个物体可能被误判为不同的类别。例如,t-1帧和t帧中的同一个个体被分别判断成为了人和猴。
可选地，若当前图像的前N0（大于2的正整数）个图像中，存在同一物体被判断为不同类别的相邻图像组数大于预设阈值，则可以判断需要对当前帧进行闪烁异常处理；若存在同一物体被判断为不同类别的相邻图像组数不大于预设阈值，则可以判断不需要对当前帧进行闪烁异常处理。
可选地,例如,对于预定数量个历史相邻帧或预设数目个历史帧中,若有超过半数的帧都被判断为闪烁(如当前视频帧的前相邻5帧中,有3个视频帧被判定为闪烁),则可以判断需要对当前帧进行闪烁异常处理;若有不超过半数的帧被判断为闪烁(如当前视频帧的前相邻5帧中,有1个视频帧被判定为闪烁),则可以判断不需要对当前帧进行闪烁异常处理。
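（以下为闪烁判断的一段示意性Python代码，为基于numpy的假设性实现；其中阈值0.2与"超过半数"仅为示例取值，并非对本发明的限定。）

    import numpy as np

    def is_flicker(mask_flow, mask_seg, ratio=0.2):
        # mask_flow：由 t-1 帧分割模板结合 t-1 帧与 t 帧的光流推算出的 t 帧主体掩码；
        # mask_seg：t 帧实际分割得到的主体掩码；两者均为布尔数组。
        nu = np.logical_or(mask_flow, mask_seg).sum()   # 并集像素数 Nu
        ni = np.logical_and(mask_flow, mask_seg).sum()  # 交集像素数 Ni
        return nu > 0 and (nu - ni) / nu > ratio        # (Nu-Ni)/Nu 大于阈值判为闪烁

    def need_flicker_repair(flicker_flags, n0):
        # 统计当前帧之前 n0 帧中被判定为闪烁的帧数，超过半数则需对当前帧
        # 进行闪烁异常处理（对全部图像区域采用同一种颜色处理方法）。
        recent = flicker_flags[-n0:]
        return sum(recent) > len(recent) / 2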
应理解,当前视频图像可以理解为某一时刻正在录制的图像,这里的某一时刻,在一些场景中可以理解为是一个泛指的时刻;在某一些场景中也可以理解为一些特定时刻,如最新的时刻,或者用户感兴趣的时刻。
具体地,本示例中拍摄视频中的图像处理方法可以包括以下步骤:
步骤41:获取拍摄的N帧图像,N为正整数,对每一帧图像执行步骤32-33操作,其中,N帧图像可以是相邻的视频帧,N帧图像的总和可以理解为一段视频;N帧图像也可以是非相邻关系。
步骤42:判断当前帧(当前图像)的前N0帧中发生闪烁的相邻图像组数是否大于预设阈值。这里的N0以及阈值可以是由用户设定的,如N0为选取的历史视频帧样本数量,阈值可以为N0的1/2或2/3等,仅作举例不做限定。
若判断结果为不大于预设阈值，对当前拍摄或采集到的图像执行步骤43-44的操作。
步骤43,一种可选的实现方式可以同步骤32。
步骤44,一种可选的实现方式可以同步骤33。
若判断结果为大于预设阈值，对当前拍摄或采集的图像执行步骤45操作。
步骤45:对当前帧的全部图像区域采用同一种颜色处理方法进行处理,得到目标图像。该同一种颜色处理方法可以与上一帧中的背景区域的颜色处理方法相同,或者可以与上一帧中的目标区域的颜色处理方法相同,或者可以与上一帧整个图像的颜色处理方法相同。例如可以对整幅图像采用与在步骤33(23)中对背景区域相同的颜色处理方法;也可以对整幅图像采用与在步骤33(23)中对目标区域相同的颜色处理方法。例如,图像全部保持彩色、或者全部变成黑白,或者图像全部采用第一或第二种颜色处理方式(包括但不限于上述示例1中提到的颜色处理方式)。
该情形下对于当前帧,既可以存在也可以省略如同步骤22的模板分割流程,本示例中不予以限定。
步骤45之后,执行步骤46:保存经过颜色处理的N帧图像组成的视频。N为正整数。
通过本发明,用户可以在录像的过程中,终端根据视频中的内容确定出目标个体和背景,并对目标个体和背景应用不同的颜色处理,使用户拍摄的视频能更突出主体。使得拍出来的视频具有大片儿感,犹如影片,带来视频炫酷感,提升用户体验。
示例4
在一些应用场景中,用户拍摄的画面内容常常是会发生变化的,因此常常出现画面主体的变换,用户也希望在不同的画面中自由选择主体的颜色处理方式,以实现自主掌控视频风格。
拍摄视频过程中的图像处理方法可以包括如下步骤:
步骤51:获取视频帧;
步骤52:对于视频中获取的任意视频帧;确定该视频帧中的主体区域和背景区域;
步骤53:对于主体区域可以随时采用任何色彩处理方式,对于背景区域也可以随时采用任何色彩处理方式。但需要保证,对于任意一个图像,色彩处理后的主体区域的亮度或色度要高于色彩处理后的背景区域;或者,对于任意一个图像来说,主体区域应用的色彩处理方式要比背景区域应用的色彩处理方式得到的图像的色度或亮度更高。
示例5
在一些应用场景中,用户拍摄的画面内容常常是会发生变化的,因此常常出现画面主体的变换,用户也希望在不同的画面中自由选择主体的颜色处理方式,以实现自主掌控视频风格。尤其是分时段进行变换色彩。
拍摄视频过程中的图像处理方法可以包括如下步骤:
步骤61:在第一时段内,采集N1个图像;在第二时段内,采集N2个图像;其中,第一时段与第二时段为相邻时段,N1和N2均为正整数;第一时段和第二时段可以是用户能够用肉眼区别出来图像变化的时长,N1、N2由录像时的帧率和时段的时长决定,本发明中不予以限定。
步骤62:对于N1个图像中的每个图像,在图像中确定出第一目标区域和第一背景区域;第一背景区域为图像中除第一目标区域以外的部分;其中,N1个图像中每个图像中的第一目标区域都对应第一物体(可以包含至少一个物体);对于N2个图像中的每个图像,在图像中确定出第二目标区域和第二背景区域;第二背景区域为图像中除第二目标区域以外的部分;其中,N2个图像中每个图像中的第二目标区域都对应第二物体(可以包含至少一个物体);
步骤63:对第一目标区域采用第一颜色处理方式进行处理,对第一背景区域采用第二颜色处理方式进行处理,对第二目标区域采用第三颜色处理方式进行处理,对第二背景区域采用第四颜色处理方式进行处理,得到目标视频;其中,在目标视频中,第一目标区域的色度大于第一背景区域的色度,或第一目标区域的亮度大于第一背景区域的亮度;并且,第二目标区域的色度大于第二背景区域的色度,或第二目标区域的亮度大于第二背景区域的亮度。
示例6
在一些应用场景中,用户拍摄的画面内容常常是会发生变化的,因此常常出现画面主体的变换,用户也希望在不同的画面中自由选择想要突出的目标主体。例如第一时段内确定第一物体对应的图像区域为目标区域,第二时段内确定第二物体对应的图像区域为目标区域,第一物体和第二物体为不同的物体或个体或者类别。
该场景下,拍摄视频过程中的图像处理方法可以包括如下步骤:
步骤71:一种可选的实现方式可以同步骤61。
步骤72:根据图像内容在N1个图像中的任意一个图像中确定第一目标区域和第一背景区域;根据图像内容在N2个图像中的任意一个图像中确定第二目标区域和第二背景区域;第二目标区域对应的物体与第一目标区域对应的物体不同或者类别不同,可以让系统和用户自主选择目标主体和图像的目标区域。一个图像,是由主体和背景组成的,对应地,是由目标区域和背景区域组成的。
例如,第一物体为人,第二物体为动物;例如第一物体为人物甲,第二物体为人物乙;例如第一物体为两个人物,第二物体为一只狗和两只猫……其余没有别识别出来的区域均被标记为背景。
该方法可以通过如上述s221和s222的方法确定出图像分割模板,但是后续的方法并不局限于每帧图像都要在分割模板中确定出目标物体。
可选地,在图像分割模板中,用户可以自由输入,第一物体和第二物体是由用户输入选择指令来决定的,例如用户点选个体,则系统会识别出用户输入指令对应的像素,并进一步识别出用户选择的模板是哪一个(/些)个体(/可以是至少一个个体)或者哪一个(/些)类别(/可以是至少一个类别),进而将那一个(/些)个体或者那一个(/些)类别下的所有个体确定为第一物体,将第一物体或者对应的图像区域确定为第一目标区域,并可以维持一段时间,即接下来的若干帧中,第一物体相应的分割模板对应的区域均为第一目标区域,直到用户在下一个时刻点选其他个体,将新个体所对应的区域按照上述类似的方法为确定为第二目标区域。在一个图像中,第一目标区域或第二目标区域以外的图像区域为背景区域。即第一个时段内第一物体相应的分割 模板对应的区域为第一目标区域;即第二个时段内变为第二物体相应的分割模板对应的区域为第二目标区域。
可选地，系统可以根据预设时间间隔（例如但不限于1s、2s等）或者预设帧数（例如但不限于50帧或100等）在图像分割模板中确定出一个时间段内的图像的目标模板。例如第101帧确定出第一目标模板，对于接下来的102帧-200帧中的每一帧，均采用与该第101帧中的第一目标模板类别相同或个体相同的分割模板作为第一目标模板；直到第201帧确定出第二目标模板，对于接下来的202帧-300帧中的每一帧，均采用与该第201帧中的第二目标模板类别相同或个体相同的分割模板作为第二目标模板，应理解，上述举例的数字可以根据用户或系统进行事先定义。即在某一刻确定出目标模板，并用该类型的模板或该个体的模板持续应用一段时间。
确定首个第一目标模板和首个第二目标模板的方法可以参照但不限于步骤s223中的6种方式中的任意一种。因此,第一目标模板和第二目标模板即可能是相同的类别或相同的个体,也有可能是不同的类别或不同的个体。与网络的识别能力、场景画面的变换、或者用户的输入命令有关。
此外,并进一步根据如s224的方法确定出第一目标区域和第一背景区域、第二目标区域和第二背景区域。本示例中不再赘述。
步骤73:一种可选的实现方式可以同步骤63。
此外,由于本示例分时段可能会有所不同,因此颜色处理的方法的组合就会很多。
如:第一颜色处理方式与第三颜色处理方式相同,且第二颜色处理方式与第四颜色处理方式相同。这种颜色处理方式前后一致性很好。
如:第一颜色处理方式与第三颜色处理方式相同,且第二颜色处理方式与第四颜色处理方式不相同。这种颜色处理方式使得目标主体颜色保持一致,背景颜色会发生变化,使得整体视觉更为炫酷。
如:第一颜色处理方式与第三颜色处理方式不相同,且第二颜色处理方式与第四颜色处理方式相同。这种颜色处理方式使得背景颜色保持一致,目标主体颜色会发生变化,使得目标主体更为突出。
如:第一颜色处理方式与第三颜色处理方式不相同,且第二颜色处理方式与第四颜色处理方式不相同。这种颜色处理方式能够提供更多的色彩变换方式,可以在不同场景的需求下,提供更多的色彩配合。
第一颜色处理方式或第三颜色处理方式包括:保留色彩,或色彩增强等滤镜;第二颜色处理方式或第四颜色处理方式包括:黑白、变暗、复古、胶片、模糊、虚化等滤镜。
具体地,对同一个图像的目标区域和背景区域的颜色处理方法可参考步骤23。(其中,对于N2个图像来说,第三颜色处理方式与第四颜色处理方式分别类似于第一颜色处理方式和第二颜色处理方式。)
通过上述方案,一些场景中用户可以实现在不同的画面中自由选择背景的颜色处理方式,以实现不同的背景衬托。一些场景中用户可以实现在不同的画面中自由选择主体的颜色处理方式,以实现不同程度或形式的主体衬托。
应理解,在本发明不同的示例中,相同的标号所指的信号可以有不同的来源或可以通过不同的方式得到,并不构成限定。另外,在不同示例的步骤引用中,“同步骤xx”更侧重于指两者的信号处理逻辑类似,并不限定于两者的输入和输出都要完全相同,也并不限定两个方法流程完全等同,本领域技术人员能够引发合理的引用和变形都应属于本发明保护范围内。
本发明提供了一种图像处理方法,通过对图像进行模板分割,确定出图像中的目标区域和背景区域;通过对目标区域和背景区域施加不同的色彩处理方式,使得目标区域的亮度或色度高于背景区域的亮度和色度,使得目标区域对应的主题更加显著地突出显示,实现电影特效。
基于上述实施例提供的图像处理方法,本发明实施例提供一种图像处理装置900;所述装置可以应用于多种终端设备,可以如终端100的任意一种实现形式,如包含摄像功能的终端,请参阅图9,该装置包括:
拍摄模块901,用于获取图像,可以是拍摄照片或者拍摄视频。该模块具体用于执行上述示例中步骤21、步骤31、步骤51、步骤61、或步骤71中所提到的方法以及可以等同替换的方法;该模块可以由处理器调用存储器中相应的程序指令控制摄像头采集图像。
确定模块902,用于根据图像内容在图像中确定出目标区域和背景区域。该模块具体用于执行上述示例中步骤22、步骤32、步骤52、步骤62、或步骤72中所提到的方法以及可以等同替换的方法;该模块可以由处理器调用存储器中相应的程序指令,实现相应的算法来实现。
颜色处理模块903,用于对图像中的目标区域和背景区域采用不同的颜色处理方式,得到目标图像或目标视频;使得目标区域的色度大于背景区域的色度,或者使得目标区域的亮度大于背景区域的亮度。该模块具体用于执行上述示例中步骤23、步骤33、步骤53、步骤63、或步骤73中所提到的方法以及可以等同替换的方法;该模块可以由处理器调用存储器中相应的程序指令通过一定的算法来实现。
此外,该装置还可以包括保存模块904,用于存储被颜色处理过的图像或者视频。
其中,上述具体的方法示例以及实施例中技术特征的解释、表述、以及多种实现形式的扩展也适用于装置中的方法执行,装置实施例中不予以赘述。
本发明提供了一种图像处理装置,通过对图像进行模板分割,根据图像内容确定出图像中的目标区域和背景区域;通过对目标区域和背景区域施加不同的色彩处理方式,使得目标区域的亮度或色度高于背景区域的亮度和色度,使得目标区域对应的主题更加显著地突出显示,实现电影特效。
基于上述实施例提供的图像处理方法,本发明实施例还提供一种图像处理装置1000;所述装置可以应用于多种终端设备,可以如终端100的任意一种实现形式,如包含摄像功能的终端,请参阅图10,该装置包括:
拍摄模块1001,用于获取图像,可以是拍摄照片或者拍摄视频。该模块具体用于执行上述示例中步骤21、步骤31、步骤51、步骤61、或步骤71中所提到的方法以及可以等同替换的方法;该模块可以由处理器调用存储器中相应的程序指令控制摄像头 采集图像。
判断模块1002,用于判断当前帧的前N0帧中发生闪烁的帧数是否大于预设阈值。若判断结果为不大于预设阈值,则判断模块1002继续触发确定模块1003和颜色处理模块1004执行相关功能;若判断结果为大于预设阈值,则判断模块1002继续触发闪烁修复模块1005执行相关功能。该模块1002具体用于执行上述示例中步骤42中所提到的方法以及可以等同替换的方法;该模块可以由处理器调用存储器中相应的程序指令,实现相应的算法来实现。
确定模块1003,用于判断模块1002判断当前帧的前N0帧中发生闪烁的帧数不大于预设阈值时;根据图像内容在图像中确定出目标区域和背景区域。该模块具体用于执行上述示例中步骤22、步骤32、步骤43、步骤52、步骤62、或步骤72中所提到的方法以及可以等同替换的方法;该模块可以由处理器调用存储器中相应的程序指令,实现相应的算法来实现。
颜色处理模块1004,用于对图像中的目标区域和背景区域采用不同的颜色处理方式;使得目标区域的色度大于背景区域的色度,或者使得目标区域的亮度大于背景区域的亮度。该模块具体用于执行上述示例中步骤23、步骤33、步骤44、步骤53、步骤63、或步骤73中所提到的方法以及可以等同替换的方法;该模块可以由处理器调用存储器中相应的程序指令通过一定的算法来实现。
闪烁修复模块1005,用于判断模块1002判断当前帧的前N0帧中发生闪烁的帧数大于预设阈值时;对当前帧的全部图像区域采用同一种颜色处理方法。该同一种颜色处理方法可以与上一帧中的背景区域的颜色处理方法相同,或者可以与上一帧中的目标区域的颜色处理方法相同。该模块具体用于执行上述示例中步骤45中所提到的方法以及可以等同替换的方法;该模块可以由处理器调用存储器中相应的程序指令通过一定的算法来实现。
此外,该装置1000还可以包括保存模块1006,用于存储被颜色处理过的图像或者视频。
其中,上述具体的方法示例以及实施例中技术特征的解释、表述、以及多种实现形式的扩展也适用于装置中的方法执行,装置实施例中不予以赘述。
本发明提供了一种图像处理装置,通过对图像进行模板分割,根据图像内容确定出图像中的目标区域和背景区域;通过对目标区域和背景区域施加不同的色彩处理方式,使得目标区域的亮度或色度高于背景区域的亮度和色度,使得目标区域对应的主题更加显著地突出显示,实现电影特效。
应理解以上装置中的各个模块的划分仅仅是一种逻辑功能的划分,实际实现时可以全部或部分集成到一个物理实体上,也可以物理上分开。例如,以上各个模块可以为单独设立的处理元件,也可以集成在终端的某一个芯片中实现,此外,也可以以程序代码的形式存储于控制器的存储元件中,由处理器的某一个处理元件调用并执行以上各个模块的功能。此外各个模块可以集成在一起,也可以独立实现。这里所述的处理元件可以是一种集成电路芯片,具有信号的处理能力。在实现过程中,上述方法的各步骤或以上各个模块可以通过处理器元件中的硬件的集成逻辑电路或者软件形式的 指令完成。该处理元件可以是通用处理器,例如中央处理器(英文:central processing unit,简称:CPU),还可以是被配置成实施以上方法的一个或多个集成电路,例如:一个或多个特定集成电路(英文:application-specific integrated circuit,简称:ASIC),或,一个或多个微处理器(英文:digital signal processor,简称:DSP),或,一个或者多个现场可编程门阵列(英文:field-programmable gate array,简称:FPGA)等。
应理解本发明的说明书和权利要求书及上述附图中的术语“第一”、“第二”等是用于区别类似的对象,而不必用于描述特定的顺序或先后次序。应该理解这样使用的数据在适当情况下可以互换,以便这里描述的实施例能够以除了在这里图示或描述的内容以外的顺序实施。此外,术语“包括”和“具有”以及他们的任何变形,意图在于覆盖不排他的包含,例如,包含了一系列步骤或模块的过程、方法、系统、产品或设备不必限于清楚地列出的那些步骤或模块,而是可包括没有清楚地列出的或对于这些过程、方法、产品或设备固有的其它步骤或模块。
本领域内的技术人员应明白,本发明的实施例可提供为方法、系统、或计算机程序产品。因此,本发明可采用完全硬件实施例、完全软件实施例、或结合软件和硬件方面的实施例的形式。而且,本发明可采用在一个或多个其中包含有计算机可用程序代码的计算机可用存储介质(包括但不限于磁盘存储器、CD-ROM、光学存储器等)上实施的计算机程序产品的形式。
本发明是参照根据本发明实施例的方法、设备(系统)、和计算机程序产品的流程图和/或方框图来描述的。应理解可由计算机程序指令实现流程图和/或方框图中的每一流程和/或方框、以及流程图和/或方框图中的流程和/或方框的结合。可提供这些计算机程序指令到通用计算机、专用计算机、嵌入式处理机或其他可编程数据处理设备的处理器以产生一个机器,使得通过计算机或其他可编程数据处理设备的处理器执行的指令产生用于实现在流程图一个流程或多个流程和/或方框图一个方框或多个方框中指定的功能的装置。
这些计算机程序指令也可存储在能引导计算机或其他可编程数据处理设备以特定方式工作的计算机可读存储器中,使得存储在该计算机可读存储器中的指令产生包括指令装置的制造品,该指令装置实现在流程图一个流程或多个流程和/或方框图一个方框或多个方框中指定的功能。
这些计算机程序指令也可装载到计算机或其他可编程数据处理设备上,使得在计算机或其他可编程设备上执行一系列操作步骤以产生计算机实现的处理,从而在计算机或其他可编程设备上执行的指令提供用于实现在流程图一个流程或多个流程和/或方框图一个方框或多个方框中指定的功能的步骤。
尽管已描述了本发明的部分实施例,但本领域内的技术人员一旦得知了基本创造性概念,则可对这些实施例作出另外的变更和修改。所以,所附权利要求意欲解释为包括已列举实施例以及落入本发明范围的所有变更和修改。显然,本领域的技术人员可以对本发明实施例进行各种改动和变型而不脱离本发明实施例的精神和范围。倘若本发明实施例的这些修改和变型属于本发明权利要求及其等同技术的范围之内,则本发明也包含这些改动和变型在内。

Claims (25)

  1. 一种图像处理方法,其特征在于,所述方法包括:
    在第一时段内采集N1个图像;
    在第二时段内采集N2个图像;其中,所述第一时段与所述第二时段为相邻时段,N1和N2均为正整数,所述N1个图像和所述N2个图像组成一段视频;
    对于N1个图像中的每个图像,
    在图像中确定出第一目标区域和第一背景区域;第一背景区域为图像中除第一目标区域以外的部分;其中,所述N1个图像中每个图像中的第一目标区域都对应第一物体;对于N2个图像中的每个图像,
    在图像中确定出第二目标区域和第二背景区域;第二背景区域为图像中除第二目标区域以外的部分;其中,所述N2个图像中每个图像中的第二目标区域都对应第二物体;对第一目标区域采用第一颜色处理方式进行处理,对第一背景区域采用第二颜色处理方式进行处理,对第二目标区域采用第三颜色处理方式进行处理,对第二背景区域采用第四颜色处理方式进行处理,得到目标视频;其中,所述第一颜色处理方式或所述第三颜色处理方式包括:保留色彩,或色彩增强;所述第二颜色处理方式或所述第四颜色处理方式包括:黑白、变暗、模糊、或复古。
  2. 如权利要求1所述方法,其特征在于,所述第一物体和所述第二物体对应的是相同的物体。
  3. 如权利要求1所述方法,其特征在于,所述第一物体和所述第二物体对应的是不同的物体。
  4. 如权利要求1-3中任一项所述方法,其特征在于,所述第一颜色处理方式与所述第三颜色处理方式相同,且所述第二颜色处理方式与所述第四颜色处理方式相同。
  5. 如权利要求1-3中任一项所述方法,其特征在于,所述第一颜色处理方式与所述第三颜色处理方式相同,且所述第二颜色处理方式与所述第四颜色处理方式不相同。
  6. 如权利要求1-3中任一项所述方法,其特征在于,所述第一颜色处理方式与所述第三颜色处理方式不相同,且所述第二颜色处理方式与所述第四颜色处理方式相同。
  7. 如权利要求1-3中任一项所述方法,其特征在于,所述第一颜色处理方式与所述第三颜色处理方式不相同,且所述第二颜色处理方式与所述第四颜色处理方式不相同。
  8. 如权利要求1-7中任一项所述方法,其特征在于,所述第一物体或所述第二物体包括人物、动物或植物中的至少一个个体。
  9. 如权利要求1-7中任一项所述方法,其特征在于,所述第一物体和所述第二物体的确定是由用户的选择指令决定的。
  10. 如权利要求1-7中任一项所述方法,其特征在于,所述第一物体和所述第二物体是由终端根据预设时间间隔的两个图像的内容分别确定出来的。
  11. 一种图像处理装置,其特征在于,所述装置包括:
    拍摄模块,用于在第一时段内采集N1个图像,在第二时段内采集N2个图像;其中,所述第一时段与所述第二时段为相邻时段,N1和N2均为正整数,所述N1个图像和所述N2个图像组成一段视频;
    确定模块,用于对于N1个图像中的每个图像,在图像中确定出第一目标区域和第一 背景区域;第一背景区域为图像中除第一目标区域以外的部分;其中,所述N1个图像中每个图像中的第一目标区域都对应第一物体;对于N2个图像中的每个图像,在图像中确定出第二目标区域和第二背景区域;第二背景区域为图像中除第二目标区域以外的部分;其中,所述N2个图像中每个图像中的第二目标区域都对应第二物体;颜色处理模块,用于对第一目标区域采用第一颜色处理方式进行处理,对第一背景区域采用第二颜色处理方式进行处理,对第二目标区域采用第三颜色处理方式进行处理,对第二背景区域采用第四颜色处理方式进行处理,得到目标视频;其中,所述第一颜色处理方式或所述第三颜色处理方式包括:保留色彩,或色彩增强;所述第二颜色处理方式或所述第四颜色处理方式包括:黑白、变暗、模糊、或复古。
  12. 如权利要求11所述装置,其特征在于,所述第一物体和所述第二物体对应的是相同的物体或者是不同的物体。
  13. 如权利要求11或12所述装置,其特征在于,
    所述第一颜色处理方式与所述第三颜色处理方式相同,且所述第二颜色处理方式与所述第四颜色处理方式相同;或者,
    所述第一颜色处理方式与所述第三颜色处理方式相同,且所述第二颜色处理方式与所述第四颜色处理方式不相同;或者,
    所述第一颜色处理方式与所述第三颜色处理方式不相同,且所述第二颜色处理方式与所述第四颜色处理方式相同;或者,
    所述第一颜色处理方式与所述第三颜色处理方式不相同,且所述第二颜色处理方式与所述第四颜色处理方式不相同。
  14. 如权利要求11-13中任一项所述装置,其特征在于,所述第一物体或所述第二物体包括人物、动物或植物中的至少一个个体。
  15. 如权利要求11-13中任一项所述装置,其特征在于,所述第一物体和所述第二物体的确定是由用户的选择指令决定的。
  16. 如权利要求11-15中任一项所述装置,其特征在于,所述第一物体和所述第二物体是由终端根据预设时间间隔的两个图像的内容分别确定出来的。
  17. 一种终端设备,其特征在于,所述终端设备包含摄像头、存储器、处理器、总线;
    所述摄像头、所述存储器、以及所述处理器通过所述总线相连;
    所述摄像头用于采集图像;
    所述存储器用于存储计算机程序和指令;
    所述处理器用于调用所述存储器中存储的所述计算机程序、指令和采集的图像,用于执行如权利要求1-10中任一项所述方法。
  18. 如权利要求17所述的终端设备,所述终端设备还包括天线系统、所述天线系统在处理器的控制下,收发无线通信信号实现与移动通信网络的无线通信;所述移动通信网络包括以下的一种或多种:GSM网络、CDMA网络、3G网络、4G网络、5G网络、FDMA、TDMA、PDC、TACS、AMPS、WCDMA、TDSCDMA、WIFI以及LTE网络。
  19. 一种图像处理方法,其特征在于,所述方法包括:
    拍摄视频时,
    在视频画面中确定出主体;
    对视频画面中的目标区域和背景区域进行不同的颜色处理,得到目标视频;其中,目标区域对应于所述主体,背景区域为视频画面中除目标区域以外的部分。
  20. 如权利要求19所述方法,其特征在于,所述对视频画面中的目标区域和背景区域进行不同的颜色处理包括:
    对视频画面中的目标区域保留色彩,对视频画面中的背景区域进行灰度处理。
  21. 如权利要求19或20所述方法,其特征在于,所述对视频画面中的目标区域和背景区域进行不同的颜色处理包括:
    对视频画面中的目标区域保留色彩,对视频画面中的背景区域进行虚化处理。
  22. 一种图像处理装置,其特征在于,所述装置包括:
    拍摄模块,用于拍摄视频;
    确定模块,用于在视频画面中确定出主体;
    颜色处理模块,用于对视频画面中的目标区域和背景区域进行不同的颜色处理,得到目标视频;其中,目标区域对应于所述主体,背景区域为视频画面中除目标区域以外的部分。
  23. 如权利要求22所述装置,其特征在于,所述颜色处理模块具体用于:
    对视频画面中的目标区域保留色彩,对视频画面中的背景区域进行灰度处理。
  24. 如权利要求22或23所述装置,其特征在于,所述颜色处理模块具体用于:
    对视频画面中的目标区域保留色彩,对视频画面中的背景区域进行虚化处理。
  25. 一种终端设备,其特征在于,所述终端设备包含摄像头、存储器、处理器、总线;
    所述摄像头、所述存储器、以及所述处理器通过所述总线相连;
    所述摄像头用于采集图像;
    所述存储器用于存储计算机程序和指令;
    所述处理器用于调用所述存储器中存储的所述计算机程序、指令和采集的图像,用于执行如权利要求19-21中任一项所述方法。
PCT/CN2019/091717 2018-10-15 2019-06-18 一种图像处理方法、装置与设备 WO2020078027A1 (zh)

Priority Applications (8)

Application Number Priority Date Filing Date Title
JP2021521025A JP7226851B2 (ja) 2018-10-15 2019-06-18 画像処理の方法および装置並びにデバイス
KR1020217014480A KR20210073568A (ko) 2018-10-15 2019-06-18 이미지 처리 방법 및 장치, 및 디바이스
BR112021007094-0A BR112021007094A2 (pt) 2018-10-15 2019-06-18 método de processamento de imagem, aparelho, e dispositivo
EP19873919.5A EP3859670A4 (en) 2018-10-15 2019-06-18 IMAGE PROCESSING METHOD, APPARATUS AND DEVICE
CN201980068271.1A CN112840376B (zh) 2018-10-15 2019-06-18 一种图像处理方法、装置与设备
MX2021004295A MX2021004295A (es) 2018-10-15 2019-06-18 Método y aparato de procesamiento de imágenes y dispositivo.
AU2019362347A AU2019362347B2 (en) 2018-10-15 2019-06-18 Image processing method and apparatus, and device
US17/230,169 US20210241432A1 (en) 2018-10-15 2021-04-14 Image Processing Method and Apparatus, and Device

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN201811199234.8 2018-10-15
CN201811199234.8A CN109816663B (zh) 2018-10-15 2018-10-15 一种图像处理方法、装置与设备

Related Child Applications (1)

Application Number Title Priority Date Filing Date
US17/230,169 Continuation US20210241432A1 (en) 2018-10-15 2021-04-14 Image Processing Method and Apparatus, and Device

Publications (1)

Publication Number Publication Date
WO2020078027A1 true WO2020078027A1 (zh) 2020-04-23

Family

ID=66601864

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2019/091717 WO2020078027A1 (zh) 2018-10-15 2019-06-18 一种图像处理方法、装置与设备

Country Status (9)

Country Link
US (1) US20210241432A1 (zh)
EP (1) EP3859670A4 (zh)
JP (1) JP7226851B2 (zh)
KR (1) KR20210073568A (zh)
CN (4) CN113129312B (zh)
AU (1) AU2019362347B2 (zh)
BR (1) BR112021007094A2 (zh)
MX (1) MX2021004295A (zh)
WO (1) WO2020078027A1 (zh)

Families Citing this family (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113129312B (zh) * 2018-10-15 2022-10-28 华为技术有限公司 一种图像处理方法、装置与设备
CN111369598B (zh) * 2020-03-02 2021-03-30 推想医疗科技股份有限公司 深度学习模型的训练方法及装置、应用方法及装置
CN113395441A (zh) * 2020-03-13 2021-09-14 华为技术有限公司 图像留色方法及设备
CN111726476B (zh) * 2020-07-06 2022-05-31 北京字节跳动网络技术有限公司 图像处理方法、装置、设备和计算机可读介质
CN112188260A (zh) * 2020-10-26 2021-01-05 咪咕文化科技有限公司 视频的分享方法、电子设备及可读存储介质
US11335048B1 (en) * 2020-11-19 2022-05-17 Sony Group Corporation Neural network-based image colorization on image/video editing applications
CN113569713A (zh) * 2021-07-23 2021-10-29 浙江大华技术股份有限公司 视频图像的条纹检测方法及装置、计算机可读存储介质
CN114363659A (zh) * 2021-12-15 2022-04-15 深圳万兴软件有限公司 降低视频闪烁的方法、装置、设备及存储介质
CN114422682B (zh) * 2022-01-28 2024-02-02 安谋科技(中国)有限公司 拍摄方法、电子设备和可读存储介质
CN115118948B (zh) * 2022-06-20 2024-04-05 北京华录新媒信息技术有限公司 一种全景视频中无规则遮挡的修复方法及装置

Family Cites Families (36)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US6952286B2 (en) 2000-12-07 2005-10-04 Eastman Kodak Company Doubleprint photofinishing service with the second print having subject content-based modifications
CN100464569C (zh) * 2007-04-17 2009-02-25 北京中星微电子有限公司 对图像添加特效的方法和系统
CN101072289B (zh) * 2007-06-11 2010-06-02 北京中星微电子有限公司 一种图像特效的自动生成方法及装置
CN101790020B (zh) * 2009-09-28 2013-03-20 苏州佳世达电通有限公司 胶片扫描方法
CN102567727B (zh) * 2010-12-13 2014-01-01 中兴通讯股份有限公司 一种背景目标替换方法和装置
CN102542593A (zh) * 2011-09-30 2012-07-04 中山大学 一种基于视频解译的交互式视频风格化渲染方法
CN102880873B (zh) * 2012-08-31 2015-06-03 公安部第三研究所 基于图像分割和语义提取实现人员行为识别的系统及方法
TWI542201B (zh) * 2013-12-26 2016-07-11 智原科技股份有限公司 降低視訊畫面抖動的方法與裝置
CN104156947B (zh) * 2014-07-23 2018-03-16 小米科技有限责任公司 图像分割方法、装置及设备
EP3195201A4 (en) * 2014-08-28 2018-01-10 Qualcomm Incorporated Temporal saliency map
WO2016197303A1 (en) * 2015-06-08 2016-12-15 Microsoft Technology Licensing, Llc. Image semantic segmentation
CN105049695A (zh) * 2015-07-07 2015-11-11 广东欧珀移动通信有限公司 一种视频录制方法及装置
CN105005980B (zh) * 2015-07-21 2019-02-01 深圳Tcl数字技术有限公司 图像处理方法及装置
CN105513081A (zh) * 2015-12-21 2016-04-20 中国兵器工业计算机应用技术研究所 一种多目标的跟踪识别方法
US10580140B2 (en) * 2016-05-23 2020-03-03 Intel Corporation Method and system of real-time image segmentation for image processing
JP6828333B2 (ja) 2016-09-13 2021-02-10 富士ゼロックス株式会社 画像処理装置及び画像処理プログラム
JP2018085008A (ja) * 2016-11-25 2018-05-31 株式会社ジャパンディスプレイ 画像処理装置および画像処理装置の画像処理方法
CN106846321B (zh) * 2016-12-08 2020-08-18 广东顺德中山大学卡内基梅隆大学国际联合研究院 一种基于贝叶斯概率与神经网络的图像分割方法
CN108230252B (zh) * 2017-01-24 2022-02-01 深圳市商汤科技有限公司 图像处理方法、装置以及电子设备
EP3577894B1 (en) * 2017-02-06 2022-01-26 Intuitive Surgical Operations, Inc. System and method for extracting multiple feeds from a rolling-shutter sensor
US10049308B1 (en) * 2017-02-21 2018-08-14 A9.Com, Inc. Synthesizing training data
CN106851124B (zh) * 2017-03-09 2021-03-02 Oppo广东移动通信有限公司 基于景深的图像处理方法、处理装置和电子装置
CN106997595A (zh) * 2017-03-09 2017-08-01 广东欧珀移动通信有限公司 基于景深的图像颜色处理方法、处理装置及电子装置
US9965865B1 (en) * 2017-03-29 2018-05-08 Amazon Technologies, Inc. Image data segmentation using depth data
CN107509045A (zh) * 2017-09-11 2017-12-22 广东欧珀移动通信有限公司 图像处理方法和装置、电子装置和计算机可读存储介质
CN107566723B (zh) * 2017-09-13 2019-11-19 维沃移动通信有限公司 一种拍摄方法、移动终端及计算机可读存储介质
CN107798653B (zh) * 2017-09-20 2019-12-24 北京三快在线科技有限公司 一种图像处理的方法和一种装置
CN107665482B (zh) * 2017-09-22 2021-07-23 北京奇虎科技有限公司 实现双重曝光的视频数据实时处理方法及装置、计算设备
US20190130191A1 (en) * 2017-10-30 2019-05-02 Qualcomm Incorporated Bounding box smoothing for object tracking in a video analytics system
CN107948519B (zh) * 2017-11-30 2020-03-27 Oppo广东移动通信有限公司 图像处理方法、装置及设备
CN107977940B (zh) * 2017-11-30 2020-03-17 Oppo广东移动通信有限公司 背景虚化处理方法、装置及设备
US10528820B2 (en) * 2017-12-07 2020-01-07 Canon Kabushiki Kaisha Colour look-up table for background segmentation of sport video
CN108108697B (zh) * 2017-12-25 2020-05-19 中国电子科技集团公司第五十四研究所 一种实时无人机视频目标检测与跟踪方法
CN108305223B (zh) * 2018-01-09 2020-11-03 珠海格力电器股份有限公司 图像背景虚化处理方法及装置
CN108491889A (zh) * 2018-04-02 2018-09-04 深圳市易成自动驾驶技术有限公司 图像语义分割方法、装置及计算机可读存储介质
CN108648284A (zh) * 2018-04-10 2018-10-12 光锐恒宇(北京)科技有限公司 一种视频处理的方法和装置

Patent Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20120007939A1 (en) * 2010-07-06 2012-01-12 Tessera Technologies Ireland Limited Scene Background Blurring Including Face Modeling
CN105872509A (zh) * 2015-11-23 2016-08-17 乐视致新电子科技(天津)有限公司 一种图像对比度调节方法及装置
CN108010037A (zh) * 2017-11-29 2018-05-08 腾讯科技(深圳)有限公司 图像处理方法、装置及存储介质
CN108133695A (zh) * 2018-01-02 2018-06-08 京东方科技集团股份有限公司 一种图像显示方法、装置、设备和介质
CN108234882A (zh) * 2018-02-11 2018-06-29 维沃移动通信有限公司 一种图像虚化方法及移动终端
CN108648139A (zh) * 2018-04-10 2018-10-12 光锐恒宇(北京)科技有限公司 一种图像处理方法和装置
CN109816663A (zh) * 2018-10-15 2019-05-28 华为技术有限公司 一种图像处理方法、装置与设备

Cited By (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111598902A (zh) * 2020-05-20 2020-08-28 北京字节跳动网络技术有限公司 图像分割方法、装置、电子设备及计算机可读介质
CN111598902B (zh) * 2020-05-20 2023-05-30 抖音视界有限公司 图像分割方法、装置、电子设备及计算机可读介质
CN111815505A (zh) * 2020-07-14 2020-10-23 北京字节跳动网络技术有限公司 用于处理图像的方法、装置、设备和计算机可读介质
JP2022050319A (ja) * 2020-08-28 2022-03-30 ティーエムアールダブリュー ファウンデーション アイピー エスエーアールエル 空間ビデオベースの仮想プレゼンスを可能にするシステム及び方法
CN113225477A (zh) * 2021-04-09 2021-08-06 天津畅索软件科技有限公司 一种拍摄方法、装置和相机应用
CN115422986A (zh) * 2022-11-07 2022-12-02 深圳传音控股股份有限公司 处理方法、处理设备及存储介质
CN115422986B (zh) * 2022-11-07 2023-08-22 深圳传音控股股份有限公司 处理方法、处理设备及存储介质

Also Published As

Publication number Publication date
CN112840376B (zh) 2022-08-09
EP3859670A1 (en) 2021-08-04
AU2019362347B2 (en) 2023-07-06
JP7226851B2 (ja) 2023-02-21
US20210241432A1 (en) 2021-08-05
CN113129312A (zh) 2021-07-16
CN112840376A (zh) 2021-05-25
CN113112505B (zh) 2022-04-29
EP3859670A4 (en) 2021-12-22
BR112021007094A2 (pt) 2021-07-27
CN113129312B (zh) 2022-10-28
CN109816663B (zh) 2021-04-20
JP2022505115A (ja) 2022-01-14
KR20210073568A (ko) 2021-06-18
CN113112505A (zh) 2021-07-13
AU2019362347A1 (en) 2021-05-27
CN109816663A (zh) 2019-05-28
MX2021004295A (es) 2021-08-05

Similar Documents

Publication Publication Date Title
WO2020078027A1 (zh) 一种图像处理方法、装置与设备
WO2020078026A1 (zh) 一种图像处理方法、装置与设备
US10372991B1 (en) Systems and methods that leverage deep learning to selectively store audiovisual content
CN110100251B (zh) 用于处理文档的设备、方法和计算机可读存储介质
WO2020192692A1 (zh) 图像处理方法以及相关设备
US20220245823A1 (en) Image Processing Method and Apparatus, and Device
WO2021036715A1 (zh) 一种图文融合方法、装置及电子设备
WO2021078001A1 (zh) 一种图像增强方法及装置
US20220094846A1 (en) Method for selecting image based on burst shooting and electronic device
US20220408020A1 (en) Image Processing Method, Electronic Device, and Cloud Server
WO2022252649A1 (zh) 一种视频的处理方法及电子设备
WO2021180046A1 (zh) 图像留色方法及设备
RU2791810C2 (ru) Способ, аппаратура и устройство для обработки и изображения
RU2794062C2 (ru) Устройство и способ обработки изображения и оборудование
CN114257775A (zh) 视频特效添加方法、装置及终端设备
WO2024055333A1 (zh) 图像处理方法、智能设备及存储介质
CN116627560A (zh) 一种生成主题壁纸的方法及电子设备

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 19873919

Country of ref document: EP

Kind code of ref document: A1

ENP Entry into the national phase

Ref document number: 2021521025

Country of ref document: JP

Kind code of ref document: A

NENP Non-entry into the national phase

Ref country code: DE

REG Reference to national code

Ref country code: BR

Ref legal event code: B01A

Ref document number: 112021007094

Country of ref document: BR

ENP Entry into the national phase

Ref document number: 2019873919

Country of ref document: EP

Effective date: 20210429

ENP Entry into the national phase

Ref document number: 20217014480

Country of ref document: KR

Kind code of ref document: A

ENP Entry into the national phase

Ref document number: 2019362347

Country of ref document: AU

Date of ref document: 20190618

Kind code of ref document: A

ENP Entry into the national phase

Ref document number: 112021007094

Country of ref document: BR

Kind code of ref document: A2

Effective date: 20210414