WO2021180046A1 - Image color retention method and device - Google Patents

Image color retention method and device (图像留色方法及设备)

Info

Publication number
WO2021180046A1
WO2021180046A1 (PCT/CN2021/079603, CN2021079603W)
Authority
WO
WIPO (PCT)
Prior art keywords
image
area
color
target
electronic device
Prior art date
Application number
PCT/CN2021/079603
Other languages
English (en)
French (fr)
Inventor
陈晓萌
郭鑫
Original Assignee
华为技术有限公司
Priority date
Filing date
Publication date
Application filed by 华为技术有限公司 (Huawei Technologies Co., Ltd.)
Priority to US17/911,279 (US20230188830A1)
Priority to EP21767582.6A (EP4109879A4)
Publication of WO2021180046A1

Classifications

    • H: ELECTRICITY
        • H04: ELECTRIC COMMUNICATION TECHNIQUE
            • H04N: PICTORIAL COMMUNICATION, e.g. TELEVISION
                • H04N 23/00: Cameras or camera modules comprising electronic image sensors; Control thereof
                    • H04N 23/10: for generating image signals from different wavelengths
                    • H04N 23/60: Control of cameras or camera modules
                        • H04N 23/61: Control based on recognised objects
                        • H04N 23/63: Control by using electronic viewfinders
                            • H04N 23/631: Graphical user interfaces [GUI] specially adapted for controlling image capture or setting capture parameters
                                • H04N 23/632: GUIs for displaying or modifying preview images prior to image capturing, e.g. variety of image resolutions or capturing parameters
                            • H04N 23/633: Control for displaying additional information relating to control or operation of the camera
                        • H04N 23/667: Camera operation mode switching, e.g. between still and video, sport and normal or high- and low-resolution modes
                    • H04N 23/80: Camera processing pipelines; Components thereof
                        • H04N 23/84: Camera processing pipelines for processing colour signals
                • H04N 5/00: Details of television systems
                    • H04N 5/222: Studio circuitry; Studio devices; Studio equipment
                        • H04N 5/262: Studio circuits, e.g. for mixing, switching-over, change of character of image, other special effects; Cameras specially adapted for the electronic generation of special effects
                            • H04N 5/2621: Cameras specially adapted for the electronic generation of special effects during image pickup, e.g. digital cameras, camcorders, video cameras having integrated special effects capability
                            • H04N 5/272: Means for inserting a foreground image in a background image, i.e. inlay, outlay

Definitions

  • This application relates to the field of electronic technology, and in particular, to an image color retention method and device.
  • the electronic device can use the camera function to shoot photos or videos with different effects in multiple modes such as night scene mode, large aperture mode, or multi-channel video mode.
  • the electronic device can also perform processing such as filtering or beautifying the image in the photograph or video obtained by shooting, so as to obtain a better image effect.
  • the embodiments of the present application provide an image color retention method and device, which can retain the color of one or more individual objects in the image, and improve the flexibility of color retention and the user experience.
  • an embodiment of the present application provides an image color retention method, which is applied to an electronic device.
  • the electronic device includes a color camera.
  • the method includes: the electronic device starts a camera application and displays a preview interface.
  • the electronic device determines that the first individual object is the target object, and determines the target processing mode.
  • the electronic device generates a first preview image according to the image obtained by the color camera.
  • the first preview image includes a first individual object and a second individual object, and the second individual object is different from the first individual object.
  • the electronic device displays the first preview image in the preview interface, the image of the first area in the first preview image is displayed in color, and the image of the second area in the first preview image is an image processed according to the target processing mode.
  • the first area is the image area occupied by the first individual object in the first preview image
  • the second area is the area other than the first area in the first preview image
  • in response to the user's first operation, the electronic device determines that the second individual object is the target object.
  • the electronic device displays the second preview image in the preview interface, the image in the third area in the second preview image is displayed in color, and the image in the fourth area in the second preview image is an image processed according to the target processing mode.
  • the third area is an image area occupied by the second individual object in the second preview image
  • the fourth area is an area in the second preview image excluding the third area.
  • the first individual object and the second individual object may include one individual object, or may include multiple individual objects.
  • the electronic device can determine the target object and the target processing mode. The color of the image in the area where one or more individuals of the target object are located is preserved. That is, the electronic device can be set to retain, with a single individual as the unit, the color of one individual, multiple individuals of different types, or multiple individuals of the same type, so as to improve the flexibility and precision of color retention, highlight the target object, and improve the user's shooting experience. In addition, areas other than the area where the target object is located can be processed according to the target processing mode to obtain personalized image processing effects.
  • the method further includes: the electronic device switches the target processing mode in response to the second operation of the user.
  • the electronic device updates the image of the fourth area in the preview image displayed on the preview interface according to the switched target processing mode.
  • the electronic device can also switch the target processing mode according to the user's instruction, thereby switching the processing effect of other areas than the area where the target object is located.
  • before the electronic device determines that the first individual object is the target object, the method further includes: the electronic device displays a third preview image in the preview interface, and the third preview image is a grayscale image converted from an image obtained by the color camera.
  • the preview image may be a pure grayscale image to distinguish it from the color image in the non-color retention mode.
  • the method further includes: the electronic device displays a shooting interface in response to a user's video recording operation, the shooting interface includes a recorded image, and the recorded image includes the third area and the fourth area. After the electronic device responds to the user's stop recording operation, it stops recording and generates a video.
  • the color of the image in the first area where one or more individuals of the target object are located is preserved. That is to say, the electronic device can retain, with a single individual as the unit, the color of one individual, multiple individuals of different types, or multiple individuals of the same type, improving the flexibility and precision of color retention, highlighting the target object, and improving the user's shooting experience.
  • areas other than the area where the target object is located can be post-processed according to the target processing mode to obtain a personalized image processing effect.
  • an embodiment of the present application provides an image color retention method.
  • the method is applied to an electronic device.
  • the electronic device includes a color camera.
  • the method includes: the electronic device starts a camera application and displays a preview interface.
  • the electronic device displays a shooting interface in response to the user's video recording operation.
  • the electronic device determines that the first individual object is the target object, and determines the target processing mode.
  • the electronic device generates a first recorded image according to the image obtained by the color camera.
  • the first recorded image includes a first individual object and a second individual object, and the second individual object is different from the first individual object.
  • the electronic device displays the first recorded image in the shooting interface, the image of the first area in the first recorded image is displayed in color, and the image of the second area in the first recorded image is an image processed according to the target processing mode.
  • the first area is an image area occupied by the first individual object in the first recorded image
  • the second area is an area other than the first area in the first recorded image.
  • the electronic device determines that the second individual object is the target object.
  • the electronic device displays the second recorded image in the shooting interface, the image of the third area in the second recorded image is displayed in color, and the image of the fourth area in the second recorded image is an image processed according to the target processing mode.
  • the third area is the image area occupied by the second individual object in the second recorded image
  • the fourth area is the area in the second recorded image excluding the third area.
  • the electronic device stops recording and generates a video.
  • the first individual object and the second individual object may be one individual object or multiple individual objects.
  • the electronic device can determine the target object and the target processing mode. During video recording, the electronic device can perform color retention and post-processing on the collected images so as to obtain a video. On the video images, the color of the image in the first area where one or more individuals of the target object are located is preserved. That is to say, the electronic device can retain, with a single individual as the unit, the color of one individual, multiple individuals of different types, or multiple individuals of the same type, improving the flexibility and precision of color retention, highlighting the target object, and improving the user's shooting experience. In addition, areas other than the area where the target object is located can be processed according to the target processing mode to obtain personalized video images.
  • before the electronic device determines that the first individual object is the target object, the method further includes: the electronic device displays a third recorded image in the shooting interface, and the third recorded image is a grayscale image converted from an image obtained by the color camera.
  • after the electronic device has just entered the color retention mode and before it determines the target object, it can display a pure grayscale image to distinguish it from the color image in the non-color retention mode.
  • the preview image displayed on the shooting interface includes a third individual object, and the third individual object is different from the second individual object.
  • the method further includes: the electronic device determines that the third individual object is the target object in response to a third operation of the user.
  • the fourth recorded image is displayed in the shooting interface, the image of the fifth area in the fourth recorded image is displayed in color, and the image of the sixth area in the fourth recorded image is an image processed according to the target processing mode.
  • the fifth area is the image area occupied by the third individual object in the fourth recorded image
  • the sixth area is the area other than the fifth area in the fourth recorded image.
  • the electronic device can change the individual objects included in the target object according to the user's instruction, so as to shoot and obtain a video of the dynamic change of the target object.
  • before the electronic device stops recording and generates the video, the method further includes: the electronic device switches the target processing mode in response to the user's fourth operation.
  • the electronic device updates the image of the fourth area in the recorded image displayed on the shooting interface according to the switched target processing mode.
  • the electronic device can change the target processing mode according to the user's instruction, thereby shooting a video with dynamically changing post-processing effects, and obtaining personalized and diversified video images.
  • the electronic device determining the first individual object as the target object includes: the electronic device determines that the first individual object is a person on the image obtained by the color camera, and that the first individual object is the target object; or, the electronic device determines that the first individual object is the target object in response to the user's operation on the first individual object.
  • the default target object can be adopted, or the target object can be determined according to the user's instruction operation.
  • the target processing mode is the first mode, and the image in the second area is a grayscale image processed according to the first mode; or, the target processing mode is the second mode, and the image in the second area is a blurred image processed according to the second mode; or, the target processing mode is the third mode, and the image in the second area is an image that has been replaced with another image according to the third mode.
  • the electronic device determining the target processing mode includes: the electronic device determining the target processing mode as the default first mode.
  • the target processing mode defaults to the graying processing mode.
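  • as an illustration of the three target processing modes described above, the following sketch applies graying, blurring, or background replacement to a whole frame before the target object's area is composited back in color; the function name, the OpenCV/NumPy usage, and the kernel size are assumptions made only for illustration and are not part of the application.

```python
# Illustrative sketch (assumptions noted above): the three target processing
# modes applied to a full BGR frame. The processed frame is later combined
# with the color image of the target object's area.
import cv2
import numpy as np

def process_by_target_mode(frame_bgr, mode, replacement_bgr=None):
    if mode == 1:  # first mode: graying
        gray = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2GRAY)
        return cv2.cvtColor(gray, cv2.COLOR_GRAY2BGR)
    if mode == 2:  # second mode: blurring
        return cv2.GaussianBlur(frame_bgr, (31, 31), 0)
    if mode == 3:  # third mode: background replacement
        h, w = frame_bgr.shape[:2]
        return cv2.resize(replacement_bgr, (w, h))
    raise ValueError("unknown target processing mode")
```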
  • an embodiment of the present application provides a color retention processing method, which is applied to an electronic device, the electronic device includes a color camera, and the method includes: the electronic device starts a camera application and displays a preview interface.
  • the electronic device determines that the first individual object is the target object, and determines the target processing mode.
  • the electronic device generates a first preview image according to the image obtained by the color camera.
  • the first preview image includes a first individual object and a second individual object, and the second individual object is different from the first individual object.
  • the electronic device displays the first preview image in the preview interface, the image of the first area in the first preview image is displayed in color, and the image of the second area in the first preview image is an image processed according to the target processing mode.
  • the first area is an image area occupied by the first individual object in the first preview image
  • the second area is an area other than the first area in the first preview image.
  • the electronic device determines that the second individual object is the target object.
  • the electronic device displays the second preview image in the preview interface, the image in the third area in the second preview image is displayed in color, and the image in the fourth area in the second preview image is an image processed according to the target processing mode.
  • the third area is an image area occupied by the second individual object in the second preview image
  • the fourth area is an area in the second preview image excluding the third area.
  • the electronic device generates a photo in response to the user's photographing operation, and the photo includes the third area and the fourth area.
  • the electronic device can perform color retention and post-processing on the collected images, so as to take photos.
  • the color of the first area on the photo, where one or more individuals of the target object are located, is preserved. That is to say, the electronic device can retain, with a single individual as the unit, the color of one individual, multiple individuals of different types, or multiple individuals of the same type, improving the flexibility and precision of color retention, highlighting the target object, and improving the user's shooting experience.
  • areas other than the area where the target object is located can be post-processed according to the target processing mode to obtain personalized and diversified photos.
  • the method further includes: the electronic device switches the target processing mode in response to a second operation of the user.
  • the electronic device updates the image of the fourth area according to the switched target processing mode.
  • before the electronic device determines that the first individual object is the target object, the method further includes: a third preview image is displayed in the preview interface, and the third preview image is a grayscale image converted from an image acquired by the color camera.
  • an embodiment of the present application provides an image color retention method, including: an electronic device detects a user's fifth operation on a target image, and the target image is a color image.
  • the electronic device enters the target editing mode and displays the first interface, and the target image on the first interface is a grayscale image.
  • after detecting the user's operation on a first position, the electronic device restores to color the pixels on the target image whose pixel value difference from the pixel value of the first position is less than a preset threshold.
  • the electronic device can edit the target image that has been obtained, so as to retain the specific color on the target image and obtain a personalized image processing effect.
  • the electronic device restoring to color the pixels on the target image whose pixel value difference from the pixel value of the first position is smaller than the preset threshold includes: the electronic device restores to color, within the part to which the first position belongs on the target image, the pixels whose pixel value difference from the pixel value of the first position is less than the preset threshold; or, the electronic device restores to color, within the individual to which the first position belongs on the target image, the pixels whose pixel value difference from the pixel value of the first position is less than the preset threshold.
  • the electronic device can retain the color of a partial area on the target image in units of individuals or parts according to the color specified by the user.
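  • a minimal sketch of the color restoration described above, assuming the target image is held as a NumPy array together with its grayscale copy; the threshold metric (sum of absolute channel differences), the function name, and the optional individual/part mask are illustrative assumptions rather than details given by the application.

```python
# Sketch (assumptions noted above): restore color to pixels whose value is
# close to the pixel at the touched first position, optionally limited to
# the mask of the individual or part that contains the first position.
import numpy as np

def restore_similar_colors(color_img, gray_img, first_pos, threshold, region_mask=None):
    x, y = first_pos                                  # touched position (column, row)
    ref = color_img[y, x].astype(np.int32)            # pixel value at the first position
    diff = np.abs(color_img.astype(np.int32) - ref).sum(axis=2)
    restore = diff < threshold                        # pixels close enough to the reference
    if region_mask is not None:                       # limit to the individual / part
        restore &= region_mask.astype(bool)
    out = np.stack([gray_img] * 3, axis=2) if gray_img.ndim == 2 else gray_img.copy()
    out[restore] = color_img[restore]                 # bring the original color back
    return out
```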
  • the first interface further includes a first control
  • the method further includes: if the first control is selected and the electronic device detects the user's sixth operation, performed with the first control, on a seventh area of the color image, the electronic device changes the color image in the seventh area into a grayscale image.
  • the user can use the first control to change the area that has become a color image into a grayscale image.
  • the first interface further includes a second control
  • the method further includes: after the electronic device detects the user's operation on the second control, the electronic device adjusts the size of the active area of the first control.
  • the electronic device can adjust the size of the active area of the first control.
  • an embodiment of the present application provides an image color retention method, including: an electronic device detects a user's fifth operation on a target image, and the target image is a color image.
  • the electronic device enters the target editing mode and displays the first interface, and the target image on the first interface is a grayscale image. After detecting the user's operation on the first position, the electronic device restores to color the component to which the first position belongs on the target image.
  • the electronic device can edit the target image that has been obtained, so as to retain the color of a specific component on the target image, obtain a personalized image processing effect, and provide the flexibility and accuracy of color retention settings.
  • an embodiment of the present application provides an image color retention method, including: an electronic device detects a user's fifth operation on a target image, and the target image is a color image.
  • the electronic device enters the target editing mode and displays the first interface, and the target image on the first interface is a grayscale image. After detecting the user's operation on the first position, the electronic device restores to color the individual to which the first position belongs on the target image.
  • the electronic device can edit the target image that has been obtained, so as to retain the color of a specific individual on the target image, obtain a personalized image processing effect, and provide the flexibility and accuracy of color retention settings.
  • an embodiment of the present application provides an image color retention method, including: an electronic device detects a user's seventh operation on a target image.
  • the electronic device enters the target editing mode and displays a second interface.
  • the second interface includes an eighth area, a ninth area, and a third control, and the image in the ninth area is a blurred image.
  • after detecting the user's operation on the third control, the electronic device adjusts the blur degree of the image in the ninth area.
  • the electronic device can edit the obtained target image, so as to preserve the clear image in some areas and turn other areas into blurred images.
  • the second interface further includes a fourth control, and the fourth control is used to switch the shape of the eighth area. After detecting the user's operation on the fourth control, the electronic device adjusts the eighth area according to the switched shape.
  • the user can indicate the shape of the area where the clear image is located, for example, the shape can be a circle or a square.
  • the method further includes: adjusting the size of the eighth area after the electronic device detects the eighth operation of the user on the eighth area.
  • the electronic device can adjust the size of the area where the clear image is located.
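  • the partial-blur editing described above can be sketched as follows; the circle/square shapes, the parameter names, and the use of a Gaussian blur with an adjustable kernel size are assumptions made only for illustration, not the application's own implementation.

```python
# Sketch: keep the eighth area sharp and blur the ninth area with an
# adjustable blur degree; the fourth control would switch shape between
# "circle" and "square", and the eighth operation would change `size`.
import cv2
import numpy as np

def blur_outside_region(image_bgr, center, size, shape="circle", blur_degree=15):
    h, w = image_bgr.shape[:2]
    mask = np.zeros((h, w), dtype=np.uint8)
    if shape == "circle":
        cv2.circle(mask, center, size, 255, thickness=-1)
    else:  # square-shaped clear area
        x, y = center
        cv2.rectangle(mask, (x - size, y - size), (x + size, y + size), 255, thickness=-1)
    k = blur_degree | 1                               # Gaussian kernel size must be odd
    blurred = cv2.GaussianBlur(image_bgr, (k, k), 0)
    out = blurred.copy()
    keep = mask.astype(bool)
    out[keep] = image_bgr[keep]                       # eighth area stays sharp
    return out
```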
  • an embodiment of the present application provides an image processing device, which is included in an electronic device.
  • the device has the function of implementing the behavior of the electronic device in any one of the above aspects and possible designs, so that the electronic device executes the image color retention method performed by the electronic device in any one of the above possible designs.
  • This function can be realized by hardware, or by hardware executing corresponding software.
  • the hardware or software includes at least one module or unit corresponding to the above-mentioned functions.
  • the device may include a display unit, a determination unit, a detection unit, an update unit, and so on.
  • an embodiment of the present application provides an electronic device, including: a color camera for collecting color images; a screen for displaying an interface; one or more processors; and a memory in which code is stored.
  • when the code is executed by the electronic device, the electronic device is caused to execute the image color retention method executed by the electronic device in any one of the possible designs in the foregoing aspects.
  • an embodiment of the present application provides an electronic device, including: one or more processors; and a memory, in which code is stored.
  • when the code is executed, the electronic device is caused to execute the image color retention method executed by the electronic device in any one of the possible designs in the foregoing aspects.
  • an embodiment of the present application provides a computer-readable storage medium, including computer instructions, which, when run on an electronic device, cause the electronic device to execute the image color retention method in any one of the possible designs of the above aspects.
  • the embodiments of the present application provide a computer program product, which, when run on a computer, enables the computer to execute the image color retention method executed by the electronic device in any one of the possible designs in the foregoing aspects.
  • an embodiment of the present application provides a chip system, which is applied to an electronic device.
  • the chip system includes one or more interface circuits and one or more processors; the interface circuit and the processor are interconnected by wires; the interface circuit is used to receive signals from the memory of the electronic device and send signals to the processor.
  • the signals include computer instructions stored in the memory; when the processor executes the computer instructions, the electronic device is caused to execute the image color retention method in any one of the possible designs of the above aspects.
  • FIG. 1A is a schematic diagram of the hardware structure of an electronic device according to an embodiment of the application.
  • FIG. 1B is a flowchart of an image color retention method according to an embodiment of the application.
  • FIG. 2 is a schematic diagram of the software architecture of an electronic device provided by an embodiment of the application.
  • FIG. 3 is a flowchart of another image color retention method provided by an embodiment of the application.
  • FIG. 4 is a schematic diagram of a set of interfaces provided by an embodiment of the application.
  • FIG. 5 is a schematic diagram of another set of interfaces provided by an embodiment of the application.
  • FIG. 6 is a schematic diagram of another set of interfaces provided by an embodiment of the application.
  • FIG. 7 is a schematic diagram of the effect of a set of instance segmentation and semantic segmentation provided by an embodiment of the application.
  • FIG. 8 is a schematic diagram of another set of interfaces provided by an embodiment of the application.
  • FIG. 9 is a schematic diagram of another set of interfaces provided by an embodiment of the application.
  • FIG. 10 is a schematic diagram of another set of interfaces provided by an embodiment of the application.
  • FIG. 11 is a schematic diagram of another set of interfaces provided by an embodiment of the application.
  • FIG. 12 is a schematic diagram of another set of interfaces provided by an embodiment of the application.
  • FIG. 13A is a schematic diagram of another set of interfaces provided by an embodiment of the application.
  • FIG. 13B is a schematic diagram of a set of images and interfaces provided by an embodiment of the application.
  • FIG. 13C is a schematic diagram of another set of images and interfaces provided by an embodiment of the application.
  • FIG. 13D is a schematic diagram of another set of images and interfaces provided by an embodiment of the application.
  • FIG. 13E is a schematic diagram of an interface provided by an embodiment of the application.
  • FIG. 14 is a schematic diagram of another set of interfaces provided by an embodiment of the application.
  • FIG. 15 is a schematic diagram of another set of interfaces provided by an embodiment of the application.
  • FIG. 16 is a schematic diagram of another set of interfaces provided by an embodiment of the application.
  • FIG. 17 is a schematic diagram of another set of interfaces provided by an embodiment of the application.
  • FIG. 19 is a schematic diagram of another set of interfaces provided by an embodiment of the application.
  • FIG. 21 is a schematic diagram of a shooting interface and photos obtained by shooting according to an embodiment of the application.
  • FIG. 22 is a schematic diagram of another set of interfaces provided by an embodiment of the application.
  • FIG. 23 is a schematic diagram of another interface provided by an embodiment of the application.
  • FIG. 25 is a schematic diagram of another set of interfaces provided by an embodiment of the application.
  • FIG. 26 is a schematic diagram of another set of interfaces provided by an embodiment of the application.
  • FIG. 27 is a schematic diagram of another set of interfaces provided by an embodiment of the application.
  • the terms "first" and "second" are only used for descriptive purposes, and cannot be understood as indicating or implying relative importance or implicitly indicating the number of indicated technical features. Thus, a feature defined with "first" or "second" may explicitly or implicitly include one or more of such features. In the description of the embodiments of the present application, unless otherwise specified, "plurality" means two or more.
  • the embodiment of the present application provides an image color retention method, which can retain the color of one or more individual objects during a photo or video recording process, and can also perform color retention processing on the obtained image to retain the color of one or more individual objects.
  • the electronic device can perform image color retention in units of individual objects, achieve personalized image processing effects, and improve color retention flexibility and user experience.
  • the image color retention method provided by the embodiments of this application can be applied to electronic devices such as mobile phones, tablet computers, wearable devices, vehicle-mounted devices, augmented reality (AR)/virtual reality (VR) devices, notebook computers, ultra-mobile personal computers (UMPC), netbooks, and personal digital assistants (PDA).
  • FIG. 1A shows a schematic structural diagram of an electronic device 100.
  • the electronic device 100 may include a processor 110, an external memory interface 120, an internal memory 121, a universal serial bus (USB) interface 130, a charging management module 140, a power management module 141, a battery 142, an antenna 1, an antenna 2, a mobile communication module 150, a wireless communication module 160, an audio module 170, a speaker 170A, a receiver 170B, a microphone 170C, an earphone jack 170D, a sensor module 180, buttons 190, a motor 191, an indicator 192, a camera 193, a display screen 194, a subscriber identification module (SIM) card interface 195, and the like.
  • the sensor module 180 may include a pressure sensor 180A, a gyroscope sensor 180B, an air pressure sensor 180C, a magnetic sensor 180D, an acceleration sensor 180E, a distance sensor 180F, a proximity light sensor 180G, a fingerprint sensor 180H, a temperature sensor 180J, a touch sensor 180K, and ambient light Sensor 180L, bone conduction sensor 180M, etc.
  • the processor 110 may include one or more processing units.
  • the processor 110 may include an application processor (AP), a modem processor, a graphics processing unit (GPU), an image signal processor (ISP), a controller, a memory, a video codec, a digital signal processor (DSP), a baseband processor, and/or a neural-network processing unit (NPU), etc.
  • the different processing units may be independent devices or integrated in one or more processors.
  • the controller may be the nerve center and command center of the electronic device 100.
  • the controller can generate operation control signals according to the instruction operation code and timing signals to complete the control of fetching instructions and executing instructions.
  • a memory may also be provided in the processor 110 to store instructions and data.
  • the memory in the processor 110 is a cache memory.
  • the memory can store instructions or data that have just been used or recycled by the processor 110. If the processor 110 needs to use the instruction or data again, it can be directly called from the memory. Repeated accesses are avoided, the waiting time of the processor 110 is reduced, and the efficiency of the system is improved.
  • the wireless communication function of the electronic device 100 can be implemented by the antenna 1, the antenna 2, the mobile communication module 150, the wireless communication module 160, the modem processor, and the baseband processor.
  • the antenna 1 and the antenna 2 are used to transmit and receive electromagnetic wave signals.
  • Each antenna in the electronic device 100 can be used to cover a single or multiple communication frequency bands. Different antennas can also be reused to improve antenna utilization.
  • Antenna 1 can be multiplexed as a diversity antenna of a wireless local area network.
  • the antenna can be used in combination with a tuning switch.
  • the mobile communication module 150 can provide a wireless communication solution including 2G/3G/4G/5G and the like applied to the electronic device 100.
  • the mobile communication module 150 may include at least one filter, a switch, a power amplifier, a low noise amplifier (LNA), and the like.
  • the mobile communication module 150 can receive electromagnetic waves by the antenna 1, and perform processing such as filtering, amplifying and transmitting the received electromagnetic waves to the modem processor for demodulation.
  • the mobile communication module 150 can also amplify the signal modulated by the modem processor, and convert it into electromagnetic waves for radiation via the antenna 1.
  • at least part of the functional modules of the mobile communication module 150 may be provided in the processor 110.
  • at least part of the functional modules of the mobile communication module 150 and at least part of the modules of the processor 110 may be provided in the same device.
  • the wireless communication module 160 can provide wireless communication solutions applied to the electronic device 100, including wireless local area network (WLAN) (such as a wireless fidelity (Wi-Fi) network), Bluetooth (BT), global navigation satellite system (GNSS), frequency modulation (FM), near field communication (NFC), infrared (IR) technology, and the like.
  • the wireless communication module 160 may be one or more devices integrating at least one communication processing module.
  • the wireless communication module 160 receives electromagnetic waves via the antenna 2, frequency modulates and filters the electromagnetic wave signals, and sends the processed signals to the processor 110.
  • the wireless communication module 160 may also receive a signal to be sent from the processor 110, perform frequency modulation, amplify, and convert it into electromagnetic waves to radiate through the antenna 2.
  • the electronic device 100 implements a display function through a GPU, a display screen 194, an application processor, and the like.
  • the GPU is an image processing microprocessor, which is connected to the display screen 194 and the application processor.
  • the GPU is used to perform mathematical and geometric calculations and is used for graphics rendering.
  • the processor 110 may include one or more GPUs that execute program instructions to generate or change display information.
  • the display screen 194 is used to display images, videos, and the like.
  • the display screen 194 includes a display panel.
  • the display panel may use a liquid crystal display (LCD), an organic light-emitting diode (OLED), an active-matrix organic light-emitting diode (AMOLED), a flexible light-emitting diode (FLED), a Miniled, a MicroLed, a Micro-oLed, a quantum dot light-emitting diode (QLED), or the like.
  • the electronic device 100 may include one or N display screens 194, and N is a positive integer greater than one.
  • the display screen 194 may be used to display a shooting preview interface, a shooting interface, and an image editing interface during shooting.
  • the electronic device 100 can implement a shooting function through an ISP, a camera 193, a video codec, a GPU, a display screen 194, and an application processor.
  • the ISP is used to process the data fed back from the camera 193. For example, when taking a picture, the shutter is opened, the light is transmitted to the photosensitive element of the camera through the lens, the light signal is converted into an electrical signal, and the photosensitive element of the camera transmits the electrical signal to the ISP for processing and is converted into an image visible to the naked eye.
  • ISP can also optimize the image noise, brightness, and skin color. ISP can also optimize the exposure, color temperature and other parameters of the shooting scene.
  • the ISP may be provided in the camera 193.
  • the camera 193 is used to capture still images or videos.
  • the object generates an optical image through the lens and is projected to the photosensitive element.
  • the photosensitive element may be a charge coupled device (CCD) or a complementary metal-oxide-semiconductor (CMOS) phototransistor.
  • the photosensitive element converts the optical signal into an electrical signal, and then transfers the electrical signal to the ISP to convert it into a digital image signal.
  • ISP outputs digital image signals to DSP for processing.
  • DSP converts digital image signals into standard RGB, YUV and other formats of image signals.
  • the electronic device 100 may include one or N cameras 193, and N is a positive integer greater than one.
  • the camera 193 is a color camera. Unlike a black and white camera, which collects grayscale images (also called black and white images), the electronic device 100 uses the color camera to collect color images, so as to record the color of the object being photographed.
  • each pixel value in a color image may include three primary colors of R (red), G (green), and B (blue).
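  • for reference, converting such an R, G, B color image into the grayscale images mentioned throughout this application is commonly done with a weighted sum of the three primary colors; the BT.601 luminance weights and the NumPy usage in the sketch below are a common choice rather than something mandated by the application.

```python
# Sketch: weighted RGB-to-grayscale conversion (BT.601 luminance weights).
import numpy as np

def rgb_to_gray(rgb):
    # rgb: (H, W, 3) uint8 array; returns an (H, W) uint8 grayscale image
    weights = np.array([0.299, 0.587, 0.114], dtype=np.float32)
    return (rgb.astype(np.float32) @ weights).astype(np.uint8)
```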
  • the camera 193 may include one or more of the following cameras: a telephoto camera, a wide-angle camera, an ultra-wide-angle camera, or a depth camera.
  • the depth camera can be used to measure the distance of the subject.
  • the telephoto camera has a small shooting range and is suitable for shooting distant scenery; the wide-angle camera has a larger shooting range; the ultra-wide-angle camera has a larger shooting range than the wide-angle camera, and is suitable for shooting panoramas and other larger scenes.
  • Digital signal processors are used to process digital signals. In addition to digital image signals, they can also process other digital signals. For example, when the electronic device 100 selects the frequency point, the digital signal processor is used to perform Fourier transform on the energy of the frequency point.
  • Video codecs are used to compress or decompress digital video.
  • the electronic device 100 may support one or more video codecs. In this way, the electronic device 100 can play or record videos in multiple encoding formats, such as: moving picture experts group (MPEG) 1, MPEG2, MPEG3, MPEG4, and so on.
  • NPU is a neural-network (NN) computing processor.
  • the NPU can perform instance segmentation on the image, so as to distinguish the regions where different individuals in the image are located.
  • the NPU may also perform component segmentation on the image, so as to distinguish the regions where different components in the same individual are located.
  • the external memory interface 120 may be used to connect an external memory card, such as a Micro SD card, to expand the storage capacity of the electronic device 100.
  • the external memory card communicates with the processor 110 through the external memory interface 120 to realize the data storage function. For example, save music, video and other files in an external memory card.
  • the internal memory 121 may be used to store computer executable program code, where the executable program code includes instructions.
  • the processor 110 executes various functional applications and data processing of the electronic device 100 by running instructions stored in the internal memory 121.
  • the internal memory 121 may include a storage program area and a storage data area.
  • the storage program area can store an operating system, an application program (such as a sound playback function, an image playback function, etc.) required by at least one function, and the like.
  • the data storage area can store data (such as audio data, phone book, etc.) created during the use of the electronic device 100.
  • the internal memory 121 may include a high-speed random access memory, and may also include a non-volatile memory, such as at least one magnetic disk storage device, a flash memory device, a universal flash storage (UFS), and the like.
  • the processor 110 can retain the color of one or more individual objects in the image by running the instructions stored in the internal memory 121, and perform post-processing such as graying, blurring, or background replacement on the background other than the one or more individual objects.
  • the electronic device 100 can implement audio functions through the audio module 170, the speaker 170A, the receiver 170B, the microphone 170C, the earphone interface 170D, and the application processor. For example, music playback, recording, etc.
  • the electronic device 100 can measure the distance by infrared or laser. In some embodiments, when shooting a scene, the electronic device 100 may use the distance sensor 180F to measure the distance to achieve fast focusing.
  • the ambient light sensor 180L is used to sense the brightness of the ambient light.
  • the electronic device 100 can adaptively adjust the brightness of the display screen 194 according to the perceived brightness of the ambient light.
  • the ambient light sensor 180L can also be used to automatically adjust the white balance when taking pictures.
  • the ambient light sensor 180L can also cooperate with the proximity light sensor 180G to detect whether the electronic device 100 is in the pocket to prevent accidental touch.
  • the touch sensor 180K is also called a "touch panel".
  • the touch sensor 180K may be disposed on the display screen 194, and the touch screen is composed of the touch sensor 180K and the display screen 194, which is also called a “touch screen”.
  • the touch sensor 180K is used to detect touch operations acting on or near it.
  • the touch sensor can pass the detected touch operation to the application processor to determine the type of touch event.
  • the visual output related to the touch operation can be provided through the display screen 194.
  • the touch sensor 180K may also be disposed on the surface of the electronic device 100, which is different from the position of the display screen 194.
  • the structure illustrated in the embodiment of the present application does not constitute a specific limitation on the electronic device 100.
  • the electronic device 100 may include more or fewer components than those shown in the figure, or combine certain components, or split certain components, or arrange different components.
  • the illustrated components can be implemented in hardware, software, or a combination of software and hardware.
  • the NPU in the processor 110 may perform instance segmentation on the image to determine the area where different individual objects on the image are located.
  • the camera 193 and the ISP can collect and obtain a color image
  • the NPU in the processor 110 can perform instance segmentation on the image processed by the ISP to determine the mask area where different individuals are located on the image.
  • the processor 110 may traverse each pixel in the color image; if the pixel is in the mask area where one or more individuals included in the target object (for example, specified by the user) are located, the pixel value of the pixel is retained; if the pixel is not in the area where the target object is located, post-processing such as graying, blurring, or background replacement is performed on the pixel. Therefore, the processor 110 can retain, in units of individual objects, the color of the region where one or more specific individual objects are located, and perform post-processing such as graying, blurring, or background replacement on other regions, thereby improving the flexibility of color retention and the user experience.
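  • the traversal described above can be vectorized as in the sketch below, where the per-individual mask comes from instance segmentation and the processed frame comes from the selected target processing mode; the function name and array layout are assumptions for illustration only.

```python
# Sketch: pixels inside the target object's mask keep their color, all
# other pixels take the post-processed (grayed, blurred or replaced) value.
import numpy as np

def compose_color_retention(color_frame, processed_frame, target_mask):
    # color_frame, processed_frame: (H, W, 3); target_mask: (H, W) boolean
    out = processed_frame.copy()
    out[target_mask] = color_frame[target_mask]
    return out
```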
  • the video codec can also encode post-processed image data to generate video files in a specific format.
  • the software system of the electronic device 100 may adopt a layered architecture, an event-driven architecture, a microkernel architecture, a microservice architecture, or a cloud architecture.
  • the embodiment of the present application takes an Android system with a layered architecture as an example to illustrate the software structure of the electronic device 100 by way of example.
  • FIG. 2 is a block diagram of the software structure of the electronic device 100 according to an embodiment of the present application.
  • the layered architecture divides the software into several layers, and each layer has a clear role and division of labor. Communication between layers through software interface.
  • the Android system is divided into four layers, from top to bottom, the application layer, the application framework layer, the Android runtime and system library, and the kernel layer.
  • the application layer can include a series of application packages.
  • the application package can include applications such as camera, gallery, calendar, call, map, navigation, WLAN, Bluetooth, music, video, short message, etc.
  • the application framework layer provides an application programming interface (application programming interface, API) and a programming framework for applications in the application layer.
  • the application framework layer includes some predefined functions.
  • the application framework layer can include a window manager, a content provider, a view system, a phone manager, a resource manager, and a notification manager.
  • the window manager is used to manage window programs.
  • the window manager can obtain the size of the display screen, determine whether there is a status bar, lock the screen, take a screenshot, etc.
  • the content provider is used to store and retrieve data and make these data accessible to applications.
  • the data may include videos, images, audios, phone calls made and received, browsing history and bookmarks, phone book, etc.
  • the view system includes visual controls, such as controls that display text, controls that display pictures, and so on.
  • the view system can be used to build applications.
  • the display interface can be composed of one or more views.
  • a display interface that includes a short message notification icon may include a view that displays text and a view that displays pictures.
  • the phone manager is used to provide the communication function of the electronic device 100. For example, the management of the call status (including connecting, hanging up, etc.).
  • the resource manager provides various resources for the application, such as localized strings, icons, pictures, layout files, video files, and so on.
  • the notification manager enables the application to display notification information in the status bar, which can be used to convey notification-type messages, and it can automatically disappear after a short stay without user interaction.
  • the notification manager is used to notify download completion, message reminders, and so on.
  • the notification manager can also be a notification that appears in the status bar at the top of the system in the form of a chart or a scroll bar text, such as a notification of an application running in the background, or a notification that appears on the screen in the form of a dialog window. For example, text messages are prompted in the status bar, prompt sounds, electronic devices vibrate, and indicator lights flash.
  • Android Runtime includes core libraries and virtual machines. Android runtime is responsible for the scheduling and management of the Android system.
  • the core library consists of two parts: one part is the functions that the Java language needs to call, and the other part is the core library of Android.
  • the application layer and application framework layer run in a virtual machine.
  • the virtual machine executes the java files of the application layer and the application framework layer as binary files.
  • the virtual machine is used to perform functions such as object life cycle management, stack management, thread management, security and exception management, and garbage collection.
  • the system library can include multiple functional modules. For example: surface manager (surface manager), media library (Media Libraries), three-dimensional graphics processing library (for example: OpenGL ES), 2D graphics engine (for example: SGL), etc.
  • the surface manager is used to manage the display subsystem and provides a combination of 2D and 3D layers for multiple applications.
  • the media library supports playback and recording of a variety of commonly used audio and video formats, as well as still image files.
  • the media library can support multiple audio and video encoding formats, such as: MPEG4, H.264, MP3, AAC, AMR, JPG, PNG, etc.
  • the 3D graphics processing library is used to implement 3D graphics drawing, image rendering, synthesis, and layer processing.
  • the 2D graphics engine is a drawing engine for 2D drawing.
  • the kernel layer is the layer between hardware and software.
  • the kernel layer contains at least display driver, camera driver, audio driver, and sensor driver.
  • the system library may also include an image processing library.
  • the image processing library can obtain, through instance segmentation, the regions where different individual objects in the image are located, and retain, in units of individual objects, the pixel values of the pixels in the region where one or more specific individual objects are located, so as to retain the color of the one or more individual objects, and perform processing such as graying, blurring, or background replacement on areas other than the region where the one or more individual objects are located.
  • the embodiment of the present application provides an image color retention method, which can be applied to a video recording scene.
  • the method includes:
  • After the mobile phone detects the user's operation to open the camera application, it displays a shooting preview interface, where the shooting preview interface includes a preview image.
  • After the mobile phone detects the user's operation to open the camera application, it can start the camera application (hereinafter also referred to as the camera) and display a shooting preview interface, which includes a preview image.
  • the preview image is the original image obtained by the camera and the ISP, and the original image is a color image.
  • the operation of the user to open the camera may be an operation of clicking the camera icon 401 shown in (a) of FIG. 4.
  • After the mobile phone detects this operation, it can start the camera application and display the shooting preview interface shown in (b) of FIG. 4, which includes a preview image that is a color image collected by the camera.
  • the user's operation of opening the camera may be a voice instruction operation of opening the camera. After the mobile phone detects this operation, the camera application can be launched and the shooting preview interface shown in (b) of FIG. 4 is displayed.
  • the mobile phone enters a target shooting mode, and the target shooting mode is a video recording mode.
  • After the mobile phone starts the camera application, it enters a non-recording mode such as the photo mode by default. After the mobile phone detects the user's instruction to enter the video recording mode, it enters the video recording mode. Exemplarily, after the mobile phone starts the camera, it enters the photographing mode by default and displays the photographing preview interface in the photographing mode as shown in (b) of FIG. 4. After the mobile phone detects that the user has clicked the control 402 shown in (b) of FIG. 4 to enter the video recording mode, it displays the shooting preview interface in the video recording mode as shown in (c) of FIG. 4.
• after the mobile phone starts the camera application, it enters the video recording mode by default (for example, the video recording mode was used the last time the camera application was opened), and displays the recording preview interface in the video recording mode as shown in (c) of FIG. 4.
  • the mobile phone can also enter the video recording mode in other ways, which is not limited in the embodiment of this application.
• after the mobile phone detects the user's preset operation 1, it enters the color retention mode.
• in the color retention mode, the mobile phone can perform color retention processing on the color image obtained by the camera, so that the color of the area where one or more individual objects are located on the image is retained, and other areas are subjected to post-processing such as graying, blurring, or background replacement.
  • the preset operation 1 is used to instruct the mobile phone to enter the color retention mode. After the mobile phone enters the color retention mode, a shooting preview interface is displayed, and the shooting preview interface includes a preview image.
  • the shooting preview interface includes a control 1 for indicating the color retention mode.
  • the control 1 may be the control 501 shown in (a) in FIG. 5 or the control 502 shown in (b) in FIG. 5.
  • a filter control 601 is included on the shooting preview interface. After the mobile phone detects that the user clicks on the control 601, referring to (b) in FIG. 6, the mobile phone displays a control 602 for indicating the color retention mode. After the mobile phone detects that the user clicks on the control 602, it enters the color retention mode.
• when the mobile phone displays the shooting preview interface, if it detects the user's voice instruction to enter the color retention mode or to use the color retention function, it enters the color retention mode.
  • the mobile phone performs instance segmentation on the image obtained by the camera, and determines the area where different individual objects on the image are located.
  • instance segmentation refers to distinguishing different individuals in objects of the same type on the basis of semantic segmentation.
  • Semantic segmentation refers to pixel-level segmentation of objects in an image to determine the type of object to which the pixel belongs.
  • the types of objects may include people, vehicles, buildings, trees, dogs or cats, and so on.
  • Objects of the same type on the same image can include one or more individual objects.
  • the same image may include multiple people or multiple vehicles.
• instance segmentation refers to segmenting out the area where each person or each vehicle is located on the image.
• the mobile phone can down-sample the original image and convert it into a lower-resolution image on which the complex CNN calculations are performed, so as to reduce the amount of calculation.
• the mobile phone processes the original image of size M x N (that is, with resolution M x N) into an image of size m x n, where m is smaller than M and n is smaller than N.
• the mobile phone extracts the semantic features of the image layer by layer through convolution and downsampling operations (including but not limited to strided convolution, pooling, etc.), and obtains multi-scale feature maps with sizes m1 x n1, m2 x n2 and m3 x n3, where m1, m2 and m3 are in a multiple relationship and smaller than m, and n1, n2 and n3 are in a multiple relationship and smaller than n. Then, the mobile phone obtains the position of the target to be segmented (for example, a person, a vehicle, or a building) in the image through calculation, regresses the area where the target is located, frames the bounding box of the area where the target is located, and obtains the coordinates of the target in the image.
• after target detection, the mobile phone performs image instance segmentation in each bounding box to obtain the area (or mask area) where each individual object (hereinafter referred to as an individual) is located, thereby completing the instance segmentation operation.
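• the embodiment does not name a specific network; as one possible, hedged realization of the pipeline described above (optional downsampling, multi-scale features, bounding boxes, then a mask per individual inside each box), the sketch below uses an off-the-shelf Mask R-CNN from torchvision. The helper name segment_instances and the score threshold are assumptions.

```python
import torch
import torchvision
from torchvision.transforms.functional import to_tensor

# Mask R-CNN already combines multi-scale (FPN) features, box regression and
# per-box mask prediction, which mirrors the steps described above.
model = torchvision.models.detection.maskrcnn_resnet50_fpn(weights="DEFAULT")
model.eval()

def segment_instances(image_rgb, score_thresh=0.5):
    """Return one binary mask per detected individual, plus its class label.

    image_rgb: H x W x 3 uint8 RGB image (it may be downsampled first, mirroring
    the M x N -> m x n step, to reduce computation).
    """
    with torch.no_grad():
        pred = model([to_tensor(image_rgb)])[0]    # dict: boxes, labels, scores, masks
    keep = pred["scores"] > score_thresh
    masks = (pred["masks"][keep, 0] > 0.5).numpy() # N x H x W binary mask areas
    return masks, pred["labels"][keep].numpy()
```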
  • the result of the instance segmentation of an image collected by the camera by the mobile phone can be seen in (a) in Figure 7.
  • the mobile phone recognizes the regions where different individuals are located. Among them, the area where different individuals are located corresponds to different gray values, and the area where the same individual is located corresponds to the same gray value.
  • the result of the semantic segmentation of the image by the mobile phone can be seen in (b) in Figure 7.
• the areas where different types of objects are located correspond to different gray values, and the areas where objects of the same type are located correspond to the same gray value.
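• the mask images described above (one gray value per individual for instance segmentation, one gray value per object type for semantic segmentation) could be rendered roughly as in the following sketch; the helper name render_masks and the gray-value spacing are assumptions.

```python
import numpy as np

def render_masks(masks: np.ndarray, labels: np.ndarray, by_type: bool) -> np.ndarray:
    """Render an H x W gray image where each region gets one gray value.

    masks:  N x H x W binary masks, one per individual object.
    labels: N class ids (e.g. person, vehicle), one per individual object.
    by_type=False -> instance-segmentation style: one gray value per individual.
    by_type=True  -> semantic-segmentation style: one gray value per object type.
    """
    n, h, w = masks.shape
    out = np.zeros((h, w), dtype=np.uint8)
    keys = labels if by_type else np.arange(n)
    unique_keys = np.unique(keys)
    # Spread gray values over 40..255 so different regions are distinguishable.
    values = np.linspace(40, 255, num=len(unique_keys)).astype(np.uint8)
    for key, val in zip(unique_keys, values):
        for m in masks[keys == key]:
            out[m.astype(bool)] = val
    return out
```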
  • the mobile phone determines the target object and the target processing mode, and retains the color of the area where the target object is located on the preview image, and processes the background area according to the target processing mode.
• the target object includes one or more individuals, and the multiple individuals belong to the same type or to different types.
  • the mobile phone may display text information, controls, or marks corresponding to the color retention mode to remind the user that the color retention mode is currently in.
  • the control 800 is selected to indicate that the mobile phone is currently in the color retention mode.
• the user may be notified of the function and effect of the color retention mode by means of displayed information or sound prompts. For example, referring to (a) in FIG. 8, the mobile phone can prompt the user by displaying a text message: "In the color retention mode, you can retain the colors of the areas where one or more individuals are located."
• after entering the color retention mode, the mobile phone can determine the target processing mode, and perform post-processing, according to the target processing mode, on areas other than the area where the target object whose color is to be retained is located, so as to obtain personalized and diversified image processing effects.
  • the target object may include one or more individual objects.
  • the area where the target object is located may be referred to as the target area
  • the area other than the area where the target object is located may be referred to as the background area.
  • the target processing mode may include a graying mode, a blurring mode, or a background replacement mode.
• in the graying mode, the mobile phone can convert the pixel values of the pixels in the background area into gray values while preserving the image color in the target area, so that the color image in the background area is converted into a grayscale image (also called a black-and-white image) to highlight the target object.
  • the pixel value is used to represent the color of the pixel point, for example, the pixel value may be an RGB value.
• the mobile phone can convert the pixel values of the pixels in the background area into a pixel value of a specific color while preserving the color of the image in the target area, thereby converting the image in the background area into a specific color. For example, the mobile phone can convert the background area of the image into blue, red, black, or white.
  • the mobile phone can perform blur processing on the background area while preserving the image color in the target area and clearly displaying the image in the area where the target object is located, so as to highlight the target object.
  • the mobile phone can also adjust the blur degree of the background area according to the user's instruction operation.
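• a minimal sketch of the blur mode, assuming OpenCV; the adjustable blur degree is modeled here as the Gaussian kernel size, which is an assumption about how the user-facing control might map to the processing.

```python
import cv2
import numpy as np

def blur_background(image_bgr, keep_mask, blur_strength=31):
    """Keep the target area sharp and in color; blur everything else.

    blur_strength: Gaussian kernel size; larger values give a stronger blur,
    which is one way the adjustable blur degree could be exposed to the user.
    """
    k = blur_strength if blur_strength % 2 == 1 else blur_strength + 1  # kernel must be odd
    blurred = cv2.GaussianBlur(image_bgr, (k, k), 0)
    keep = keep_mask.astype(bool)[..., None]
    return np.where(keep, image_bgr, blurred)
```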
• the mobile phone can replace the image in the background area with the image at the same position on a background picture (that is, another image) while preserving the image color in the target area. In this way, the background of the target object can be replaced arbitrarily, and a personalized image is obtained.
  • the mobile phone can also prompt the user to select a background picture to be replaced, so that the background area is subsequently replaced with an area at a corresponding position on the background picture. If the user does not select a background picture, the phone uses the default background picture for background replacement. In some embodiments, the position or size of the target object on the background picture can also be adjusted according to the user's instruction operation.
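• a minimal sketch of the background replacement mode, assuming OpenCV; resizing the background picture to the frame size is one assumed way of aligning "the area at the corresponding position".

```python
import cv2
import numpy as np

def replace_background(image_bgr, keep_mask, background_bgr):
    """Keep the target area from the live frame; take every other pixel from the
    area at the same position on the chosen background picture."""
    h, w = image_bgr.shape[:2]
    bg = cv2.resize(background_bgr, (w, h))        # align corresponding positions
    keep = keep_mask.astype(bool)[..., None]
    return np.where(keep, image_bgr, bg)
```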
• after entering the color retention mode, the mobile phone can determine the target processing mode according to the user's instruction operation.
  • the target processing mode is the preset processing mode or the last used processing mode.
  • the mobile phone can also switch the target processing mode according to the user's instruction.
  • the mobile phone can determine the target object, thereby preserving the color of the area where the target object is located.
• the mobile phone can, according to the user's instruction operation and with the individual as the selection unit, set the one or more individuals included in the target object, so that the colors of one individual, multiple individuals of different types, or multiple individuals of the same type can be retained in subsequent images. Taking the individual as the selection unit and setting color retention from the dimension of a single individual makes it possible to select the color-retained protagonist more accurately and improves the flexibility of the color retention setting.
  • the video scene is a group dance performance
  • the target object can be the lead dancer
  • the mobile phone can retain the color of the area where the lead dancer is
  • the recording scene is a band performance scene
  • the target object can be the lead singer of the band
• the mobile phone can retain the color of the area where the lead singer is located.
  • the mobile phone can retain the color of the area where the target object is located and the color of the area where the object overlaps the target object. For example, if the target object is a singer, and the singer holds a microphone or musical instrument in his hand, the mobile phone can retain the color of the area where the singer is located and the color of the microphone or musical instrument held by the singer.
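• one hedged way to realize retaining the color of an object that overlaps the target object (for example, the microphone or instrument held by the singer) is to union the target's mask with the mask of any other detected individual that overlaps it; the helper name and the overlap threshold below are assumptions.

```python
import numpy as np

def expand_with_overlapping(target_mask, other_masks, min_overlap=1):
    """Union the target's mask with any other instance mask that overlaps it,
    e.g. a microphone or instrument held by the singer."""
    target = target_mask.astype(bool)
    combined = target.copy()
    for m in other_masks:
        m = m.astype(bool)
        if np.logical_and(m, target).sum() >= min_overlap:
            combined |= m
    return combined
```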
  • the target processing mode defaults to the graying mode as an example.
• when the image captured by the camera includes people, the mobile phone defaults, on the shooting preview interface, to taking all the people on the preview image as the target objects; the color of the image in the target area where the target objects are located is retained, and the image in the background area is grayed by default.
  • the mobile phone can delete or add one or more individuals from the target object according to the user's instructions.
• the part filled with left diagonal lines represents the color-retained area.
  • the mobile phone can prompt the user to specify the target object by sound or by displaying prompt information.
  • the shooting preview interface displayed by the mobile phone may refer to (a) in FIG. 8.
  • the mobile phone can prompt the user: Please click on an individual on the picture to delete or add objects with colors to be retained.
• after the mobile phone detects that the user clicks on Person 1, it deletes Person 1 from the target object.
  • the target object includes Person 2. See Fig. 8 (b).
• the area where Person 1 is located becomes a grayscale image, and the mobile phone retains the color of the area where Person 2 is located.
  • the mobile phone adds the puppy to the target object.
  • the target object includes Person 2 and the puppy, as shown in Figure 8 (c).
  • the image in the area where the puppy is located becomes a color image
  • the images in other areas are grayscale images.
  • the text "gray image” displayed in the background area indicates that the image in the background area is a gray image.
• that the image in a certain area of an image is a color/grayscale image may also be simply described as the image in that area being a color/grayscale image, or simply as that area being a color/grayscale image.
• after entering the color retention mode, when the image captured by the camera includes multiple people, the mobile phone defaults to taking, as the target object in the preview image, the person or the multiple people closest to the middle area.
  • the mobile phone can also add or delete one or more individuals in the target object according to the user's instruction operation, and perform color retention according to the modified target object.
• after entering the color retention mode, when the image obtained by the camera includes a person, the mobile phone defaults to taking that person as the target object in the preview image.
• the preview image includes Person 1, and Person 1 is the target object; when Person 1 moves out of the screen range of the mobile phone, the target object is not included in the preview image, and the entire preview image is a grayscale image; later, when Person 2 appears on the image captured by the camera, the target object on the preview image is automatically set to Person 2.
  • character 2 and character 1 may be the same or different.
  • the appearance of the person 2 on the image refers to that part or all of the person 2 appears on the image.
  • the mobile phone can also add or delete one or more individuals in the target object according to the user's instruction operation, and perform color retention according to the modified target object.
• the target object is Person 1. Referring to (b) in FIG. 9, after Person 1 moves out of the screen range of the mobile phone and the mobile phone detects that the image obtained by the camera includes Person 2, the target object is Person 2, and the mobile phone retains the color of the area where Person 2 is located on the preview image.
  • the entire preview image is a grayscale image.
  • the target object on the preview image is the person 1, see (d) in FIG. 9, and the mobile phone retains the color of the area where the person 1 is located on the preview image.
• after entering the color retention mode, the mobile phone defaults to taking, as the target object on the preview image, the person who appears first on the image obtained by the camera.
• if the preview image does not include the target object, the preview image is a grayscale image; subsequently, the mobile phone can also add or delete one or more individuals in the target object according to the user's instruction operation, and perform color retention according to the modified target object.
  • the target object is the character 1.
  • the character 1 moves out of the screen range of the mobile phone, see (b) in Figure 10, and the entire preview image is a grayscale image.
  • the target object is the puppy. See (c) in FIG. 10, and the mobile phone retains the color of the area where the puppy is located on the preview image.
• after entering the color retention mode, the mobile phone defaults to taking, as the target object on the preview image, the individual in the middle of the image obtained by the camera or the individual located at the golden-section point.
  • the middle individual is a puppy, or a building, etc.
  • the mobile phone can also add or delete individuals in the target object according to the user's instruction operation, and perform color retention according to the modified target object.
• after entering the color retention mode, the mobile phone defaults to taking, as the target object in the preview image, the individual occupying the largest area on the image obtained by the camera.
  • the mobile phone can also add or delete one or more individuals in the target object according to the user's instruction operation, and perform color retention according to the modified target object.
  • the mobile phone determines the target object according to the preset type order by default.
• the type order is people, animals, buildings, and so on. If the image captured by the camera includes a person, the target object on the preview image is the person; if the image captured by the camera does not include a person but includes an animal, the target object on the preview image is the animal; if the image captured by the camera includes neither people nor animals but includes a building, the target object on the preview image is the building.
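• a minimal sketch of choosing the default target object by the preset type order; the string labels and the order tuple are illustrative assumptions (a real detector would typically return integer class ids).

```python
def pick_default_target(labels, type_order=("person", "animal", "building")):
    """Pick default target objects by the preset type order: if any person is
    present, all persons are the default target; otherwise fall back to animals,
    then buildings. Returns the indices of the chosen individuals."""
    for wanted in type_order:
        idx = [i for i, lab in enumerate(labels) if lab == wanted]
        if idx:
            return idx
    return []  # no preferred type present: no default target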
  • the mobile phone can also add or delete one or more individuals in the target object according to the user's instruction operation, and perform color retention according to the modified target object.
  • the target object is an object preset by the user through the system setting interface of the mobile phone.
  • the mobile phone can also add or delete one or more individuals in the target object according to the user's instructions, and perform color retention according to the modified target object.
• after entering the color retention mode, the mobile phone can first automatically determine the target object, and then add or delete one or more individuals in the target object according to the user's instruction operation, and perform color retention according to the modified target object.
  • the mobile phone displays a pure color image or a pure gray image.
  • the mobile phone determines the user's selection of one or more individuals as the target object, and reserves the color of the target area on the preview image (that is, the color of the image in the target area), and the image in the background area is processed into a grayscale image.
  • the mobile phone can prompt the user to specify the target object by sound or by displaying prompt information.
  • the preview image is a grayscale image
  • the mobile phone can prompt the user through a text message: Please click the individual on the picture to specify the object to be retained in color.
  • the mobile phone detects that the user clicks on the person 2 and determines that the person 2 is the target object. See (b) in FIG. 11, and the mobile phone retains the color of the area where the person 2 is located on the shooting preview interface. After the mobile phone detects that the user clicks on the puppy again, see (c) in Figure 11, and the mobile phone retains the colors of the area where the person 2 and the puppy are located on the shooting preview interface.
• after entering the color retention mode, the mobile phone prompts the user: please box-select the object whose color is to be retained. After the mobile phone detects the user's operation of box-selecting an area, the target object includes the individuals in that area, and the mobile phone retains the color of the area where the target object is located.
  • the shooting preview interface includes a gray mode control 1201, a blur mode control 1202, and a background replacement mode control 1203.
• when the target processing mode is the preset graying mode, the graying mode control 1201 is selected, the target object is Person 2, the area where Person 2 is located is a color image, and the background area is a grayscale image.
• after the mobile phone detects the user's operation of selecting the blur mode control 1202, it switches the target processing mode to the blur mode.
  • the blur mode control 1202 is selected, the target object is Person 2, the area where Person 2 is located is a clear color image, and the background area is a blur image, that is, the image after the blur process.
  • the background replacement mode control is selected, the target processing mode is the background replacement mode, and the target object is Person 2.
  • At least one background picture is included on the shooting preview interface.
  • the mobile phone detects the user's operation of clicking the background picture 1301, referring to (b) in FIG. 13A, the mobile phone replaces the background area on the preview image with the area at the corresponding position on the background picture 1301. It can also be understood that the mobile phone superimposes the image of person 2 on the preview image onto the background image.
  • the position of the target object is different, and the replacement area on the target image is also different.
  • the target processing mode is a preset graying mode
• the preview image is a pure grayscale image or a pure color image. If the mobile phone switches the target processing mode to the blur mode according to the user's instruction operation, in some embodiments the preview image is a clear pure color image; in other embodiments, the middle area of the preview image is a clear color image and other areas are blurred images.
• after the mobile phone detects the operation of the user instructing the target object, it keeps the area where the target object is located as a clear color image and sets the background area as a blurred image.
  • the target processing mode is a preset graying mode
  • the preview image is a pure grayscale image or a pure color image. If the mobile phone switches the target processing mode from the graying mode to the background replacement mode according to the user's instruction operation, the preview image is a pure color image. After the mobile phone detects the operation of the user instructing the target object, it retains the color image in the area where the target object is located, and replaces the background area with the background picture.
• after determining the target object and the target processing mode, the mobile phone processes the original image obtained by the camera and the ISP according to the target object and the target processing mode. Therefore, on the preview image displayed on the shooting preview interface, the color of the target area is preserved (that is, the color of the image in the target area is preserved), and the image in the background area is an image processed according to the target processing mode.
• after entering the color retention mode, the mobile phone displays the preview image on the shooting preview interface, and entering the color retention mode triggers the mobile phone to perform instance segmentation on each image subsequently acquired by the camera.
  • the mobile phone performs color retention processing according to the results of the instance segmentation, and displays the preview image after color retention processing on the shooting preview interface.
• after the mobile phone detects the user's operation on the target object, it performs color retention processing on the current frame of image, so that the user can see, on the preview image and as soon as possible, the color retention effect obtained in response to the user's operation, giving the user an immediate response.
  • the mobile phone performs color retention processing on the image obtained by the camera.
  • the target processing mode is the graying mode
  • the image 1 acquired by the camera can be seen in (a) in FIG. 13B, and the image 1 is a color image.
  • the mobile phone performs instance segmentation on image 1 and determines the mask area where each individual object on image 1 is located.
  • the mobile phone displays a preview image 1 on the shooting preview interface, and the preview image 1 is a pure grayscale image processed by the image 1.
  • the mobile phone determines that the target object is Person 2, and determines the mask area corresponding to Person 2.
  • the mobile phone retains the colors in the mask area of the person 2 on the image 1, and processes other areas into a grayscale image, thereby generating and displaying the color retention preview image 2 shown in (c) in FIG. 13B.
  • the camera acquires the image 2, and the mobile phone performs instance segmentation on the image 2, and determines the mask area where the target person 2 is located.
  • the mobile phone retains the colors in the mask area of the person 2 on the image 2, and processes other areas into grayscale images, thereby generating and displaying the preview image 3 shown in (e) in FIG. 13B.
• if the mobile phone then detects that the user clicks on Person 2, Person 2 is removed from the target object. At this time, the target object does not include any individual object, and the mobile phone processes image 2 into a pure grayscale image, thereby generating and displaying the preview image 4 shown in (f) in FIG. 13B.
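• a hedged sketch of the per-frame preview loop described above, reusing the retain_color and segment_instances helpers from the earlier sketches; capturing frames with cv2.VideoCapture and tracking the tapped individuals in target_ids are assumptions about the surrounding platform code.

```python
import cv2
import numpy as np

target_ids = set()  # indices of selected individuals; updated when the user taps the preview

cap = cv2.VideoCapture(0)
while True:
    ok, frame = cap.read()                                  # BGR color frame
    if not ok:
        break
    rgb = cv2.cvtColor(frame, cv2.COLOR_BGR2RGB)
    masks, labels = segment_instances(rgb)                  # one mask per individual
    keep = np.zeros(frame.shape[:2], dtype=bool)
    for i in target_ids:
        if i < len(masks):
            keep |= masks[i]
    # No target selected -> empty mask -> pure grayscale preview.
    preview = retain_color(frame, keep)
    cv2.imshow("preview", preview)
    if cv2.waitKey(1) == 27:                                # Esc to quit
        break
cap.release()
```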
  • the mobile phone performs color retention processing on the preview image.
  • the target processing mode is the graying mode
  • the camera acquires image 1
  • image 1 is a color image.
  • the mobile phone performs instance segmentation on image 1 and determines the mask area where each individual object on image 1 is located.
  • the mobile phone displays preview image 1 on the shooting preview interface, which is a pure grayscale image processed by image 1. If the mobile phone detects that the user clicks on Person 2, it determines that the target object is Person 2, and determines the mask area corresponding to Person 2.
  • the mobile phone restores the mask area of the person 2 on the preview image 1 to a color image, and the other areas are still grayscale images, thereby generating and displaying the preview image 2 after the color retention process.
• after the mobile phone detects the user's operation on the target object, it performs color retention processing on the next frame of image.
  • the target processing mode is the graying mode
  • the image 1 obtained by the camera may refer to (a) in FIG. 13C
  • the image 1 is a color image.
  • the mobile phone performs instance segmentation on image 1 and determines the mask area where each individual object on image 1 is located.
• the mobile phone displays a preview image 1 on the shooting preview interface, and the preview image 1 is a pure grayscale image processed from the image 1. If the mobile phone detects that the user clicks on Person 2, it determines that the target object is Person 2.
• then, referring to (c) in FIG. 13C, the camera obtains the image 2, and the mobile phone performs instance segmentation on the image 2 and determines the mask area where the target Person 2 is located.
  • the mobile phone retains the colors in the mask area of the person 2 on the image 2, and processes other areas into grayscale images, thereby generating and displaying the preview image 2 shown in (d) in Fig. 13C.
• if the mobile phone then detects that the user clicks on Person 2, Person 2 is removed from the target object.
  • the target object does not include any individual objects.
• the camera acquires image 3, the mobile phone performs instance segmentation on image 3 and processes image 3 into a pure grayscale image, thereby generating and displaying the preview image 3 shown in (f) in FIG. 13C.
  • the mobile phone displays the preview image on the shooting preview interface after entering the color retention mode.
  • the mobile phone entering the color retention mode will not trigger the mobile phone to perform instance segmentation.
• after the mobile phone enters the color retention mode and detects the operation of the user instructing the target object, it triggers instance segmentation of the images subsequently acquired by the camera.
  • the mobile phone can perform color retention processing on the current frame image or process the next frame image according to the instance segmentation result, and display the color retention processing preview image on the shooting preview interface.
  • the mobile phone can perform color retention processing on the image acquired by the camera or color retention processing on the preview image according to the result of the instance segmentation.
  • the target processing mode is the graying mode
  • the image 1 obtained by the camera may refer to (a) in FIG. 13D
  • the image 1 is a color image.
  • the mobile phone displays a preview image 1 on the shooting preview interface, and the preview image 1 is a pure grayscale image processed by the image 1. If the mobile phone detects the user's click operation on the preview image, it will perform instance segmentation on image 1, determine that the area where the click position is located is the mask area of person 2, and determine that the target object includes person 2.
  • the mobile phone retains the colors in the mask area of the person 2 on the image 1, and processes other areas into grayscale images, thereby generating and displaying the color retention preview image 2 shown in (c) in FIG. 13D. Then, referring to (d) in FIG. 13D, the camera acquires the image 2, and the mobile phone performs instance segmentation on the image 2 and determines the mask area where the target person 2 is located. The mobile phone retains the colors in the mask area of the person 2 on the image 2, and processes other areas into grayscale images, thereby generating and displaying the preview image 3 shown in (e) in FIG. 13D.
  • the mobile phone does not need to enter the color retention mode before entering the gray mode, blur mode, or background replacement mode; instead, it can directly enter the gray mode, blur mode, or background replacement mode.
  • the mobile phone may not perform step 303.
  • the method may further include the above step 304 and step 300:
• after the mobile phone detects the user's preset operation 2 and enters the target processing mode, it determines the target object, retains the color of the area where the target object is located on the preview image, and processes the background area according to the target processing mode, where the target processing mode includes the graying mode, the blur mode, and the background replacement mode.
  • the shooting preview interface displayed by the mobile phone includes a gray mode control 1302, a blur mode control 1303, and a background replacement mode control 1304.
• after the mobile phone detects that the user clicks on the gray mode control 1302, it enters the graying mode.
  • the mobile phone can determine the target object, retain the color of the area where the target object is located, and set the background area as a grayscale image.
  • the mobile phone can also modify the target object according to the user's instruction.
  • the target object includes one or more individuals, and the multiple individuals belong to the same type or different types.
  • the preview image is a pure grayscale image or a pure color image.
• after the mobile phone detects the user's operation of indicating the target object on the preview image, it retains the color of the area where the target object is located and sets the background area as a grayscale image.
• after the mobile phone enters the graying mode, if the image captured by the camera includes people, all the people on the preview image are the target objects by default.
• the areas where all the people are located are color images, and the other areas are grayscale images.
• after the mobile phone enters the graying mode, it can also switch to the blur mode or the background replacement mode according to the user's instruction operation.
• in the graying mode, if the target object is included in the preview image, then after switching to the blur mode, the area where the target object is located is a clear color image and the other areas are blurred; or, after switching to the background replacement mode, on the preview image displayed by the mobile phone, the color image of the area where the target object is located is retained and the other areas are replaced with the images in the areas at the same positions on the background picture.
• in the graying mode, if the preview image does not include the target object and the preview image is a pure grayscale image or a pure color image, then after switching to the blur mode or the background replacement mode, the preview image displayed by the mobile phone is a pure color image.
  • the mobile phone can determine the target object, retain a clear color image of the area where the target object is located, and set the background area as a blur image. For example, after the mobile phone detects that the user clicks on the blur mode control 1303, it enters the blur mode.
  • the mobile phone can also modify the target object according to the user's instruction. For the method of determining and modifying the target object by the mobile phone, please refer to the related description in step 305 above.
  • the preview image is a pure grayscale image, or a pure color image, or the middle area is a clear color image and other areas are blurred images.
• after the mobile phone detects the user's operation of indicating the target object on the preview image, it retains the color of the area where the target object is located and processes the background area into a blurred image.
• after the mobile phone enters the blur mode, if the image captured by the camera includes a person, the mobile phone defaults to taking the person closest to the middle of the preview image as the target object; the area where the target object is located is a clear color image, and the other areas are blurred images.
• after the mobile phone enters the blur mode, it can also switch to the graying mode or the background replacement mode according to the user's instruction operation.
• in the blur mode, if the target object is included in the preview image, then after switching to the graying mode, the area where the target object is located on the preview image displayed by the mobile phone is a color image and the other areas are grayscale images; or, after switching to the background replacement mode, the color image of the area where the target object is located is retained and the other areas are replaced with the images in the areas at the same positions on the background picture.
• in the blur mode, if the preview image does not include the target object and is a pure grayscale image, or a pure color image, or an image whose middle area is a clear color image while other areas are blurred images, then after switching to the graying mode, the preview image displayed by the mobile phone is a pure grayscale image or a pure color image; or, after switching to the background replacement mode, the preview image displayed by the mobile phone is a pure color image.
• in the background replacement mode, the mobile phone can determine the target object, retain the color of the area where the target object is located, and replace the background area with the image in the area at the same position on the background picture. For example, after the mobile phone detects that the user has clicked the background replacement mode control 1304, it enters the background replacement mode. Moreover, the mobile phone can also switch the target object according to the user's instruction operation. For the description of determining and switching the target object by the mobile phone, refer to the relevant description in step 305 above.
  • the preview image is a pure color image.
• after the mobile phone detects the user's operation of indicating the target object on the preview image, it retains the color image of the area where the target object is located and replaces the background area with the image in the area at the same position on the background picture.
• after the mobile phone enters the background replacement mode, if the image captured by the camera includes a person, the mobile phone defaults to taking the person closest to the middle of the preview image as the target object; the color image of the area where the target object is located is retained, and the other areas are replaced with the images in the areas at the same positions on the background picture.
• after the mobile phone enters the background replacement mode, it can also switch to the graying mode or the blur mode according to the user's instruction operation.
• in the background replacement mode, if the target object is included in the preview image, then after switching to the graying mode, the area where the target object is located is a color image and the other areas are grayscale images; or, after switching to the blur mode, the area where the target object is located is a clear color image and the other areas are blurred images.
• in the background replacement mode, if the preview image does not include the target object, then after switching to the graying mode, the preview image displayed by the mobile phone is a pure grayscale image or a pure color image; or, after switching to the blur mode, the preview image displayed by the mobile phone is a pure grayscale image, or a pure color image, or an image whose middle area is a clear color image while other areas are blurred images.
  • the mobile phone displays a shooting interface after detecting the user's shooting operation.
  • the shooting interface includes a recorded image, the color of the target area on the recorded image is preserved, and the image in the background area is an image processed according to the target processing mode.
  • the shooting operation is a video recording operation.
• the user's shooting operation may be an operation of the user clicking the shooting control 1400 shown in (a) of FIG. 14.
• after the mobile phone detects the user's shooting operation, it starts recording, retains the color of the target area on the images collected during the recording process, that is, retains the pixel values of the pixels in the target area, and post-processes the background area according to the target processing mode.
  • the mobile phone can perform graying processing on the background area, thereby converting the pixel values in the background area into grayscale values, and the image in the background area into grayscale images.
  • the target object shown in (a) of FIG. 14 is Person 2 and the target processing mode is the graying mode
  • the recorded image on the shooting interface can be seen in FIG. 14 (b).
  • the mobile phone can perform blur processing on the background area.
  • the target object shown in (a) of FIG. 14 is the person 2
  • the recorded image on the shooting interface can be seen in (c) of FIG. 14.
• as the target object, the color of the area where Person 2 is located is preserved and can be clearly displayed, while the background area is blurred and hazy, so that the target object is highlighted, the target area and the background area form an intuitive contrast, and the target object appears clear and vivid. This achieves the purpose of highlighting the subject and letting the protagonist shine, gives the user a visual impact, and improves the user's shooting experience.
  • the text "blurred image” displayed in the background area indicates that the image in the background area is a blurred image.
  • the phone can replace the image in the background area with the image in the same position in the background image to achieve the effect of replacing the background of the target object .
  • the mobile phone can also smooth or feather the stitching position of the target area and the background picture in the image obtained after the background replacement, so that the stitching edge transition is smooth and natural, and a better fusion effect is obtained.
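• one common, hedged way to smooth or feather the stitching position is to blur the binary target mask into a soft alpha matte and alpha-blend the target area onto the background picture; the function name and feather size below are assumptions.

```python
import cv2
import numpy as np

def feathered_composite(foreground_bgr, background_bgr, keep_mask, feather=15):
    """Blend the retained target area onto the background picture with a soft edge.

    The binary mask is blurred into an alpha matte so the stitching edge between
    the target area and the background transitions smoothly and naturally.
    """
    k = feather if feather % 2 == 1 else feather + 1       # kernel must be odd
    alpha = cv2.GaussianBlur(keep_mask.astype(np.float32), (k, k), 0)[..., None]
    h, w = foreground_bgr.shape[:2]
    bg = cv2.resize(background_bgr, (w, h)).astype(np.float32)
    out = alpha * foreground_bgr.astype(np.float32) + (1.0 - alpha) * bg
    return out.astype(np.uint8)
```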
  • the recorded image on the shooting interface can be seen in (d) of FIG. 14. It can be seen from (d) in FIG. 14 that as the target object, the color of the area where the person 2 is located is preserved, and the background area is replaced with another image, which can make the image more personalized and creative, and improve the user's shooting experience.
  • the mobile phone can also adjust the target object and the target processing mode according to the user's instruction operation.
  • the color retention method provided by the embodiment of the present application may further include step 307:
• after the mobile phone detects the user's operation of adjusting the target object, it performs color retention on the target area according to the adjusted target object.
  • the mobile phone can adjust the target object on an individual basis according to the user's instructions, so that the object to be preserved in color can be flexibly switched and accurately set, and the user's shooting experience can be improved.
• after the mobile phone detects the user's preset operation 3, it enters the target object modification mode. Then, the mobile phone can adjust the target object according to the user's instruction operation.
  • the shooting interface may include an object adjustment control
  • the preset operation 3 may be an operation for the user to click the object adjustment control.
• after the mobile phone detects the user's operation of adjusting the target processing mode, it performs post-processing on the background area according to the adjusted target processing mode.
  • the mobile phone adjusts the target processing mode according to the user's instruction operation, thereby flexibly adjusting the processing mode, and improving the diversity of video picture effects.
  • the shooting interface includes processing mode controls.
  • the target processing mode is the gray mode
• after the mobile phone detects that the user has clicked the background replacement mode control among the processing mode controls, the target processing mode is switched to the background replacement mode.
  • the shooting interface displayed by the mobile phone can be seen in Figure 16 (b).
  • the shooting interface does not include the target object.
  • the target processing mode is the graying mode
  • the mobile phone displays a pure grayscale image on the shooting interface.
  • the target processing mode is the blur mode
  • the mobile phone displays a pure blur image on the shooting interface.
• the target processing mode is the background replacement mode
  • the mobile phone displays the background picture on the shooting interface. Later, in some technical solutions, after the mobile phone detects the appearance of the target object again, it continues to retain the color of the target area where the target object is located. Or, in other technical solutions, the mobile phone retains the color of the target area where the target object is located after detecting the operation of the user to re-designate the target object.
  • the mobile phone can also exit and enter the color retention mode multiple times.
  • the recorded image on the shooting interface is an image after color retention processing; after exiting the color retention mode, the recorded image on the shooting interface can be a color image.
  • the mobile phone can record the color-retained image and the video experience of the dynamic change of the color image, which can give the user a visual impact experience, so that the user can obtain a personalized and diversified video.
• see the shooting interface shown in (a) in FIG. 17; the mobile phone is currently in the color retention mode.
• after the mobile phone detects that the user has clicked the filter control 1701, as shown in (b) in FIG. 17, the mobile phone displays the refreshing mode control 1702 and other controls.
• after the mobile phone detects that the user has clicked the refreshing mode control 1702, it enters the refreshing mode; as shown in (c) in FIG. 17, the recorded image on the shooting interface is a color image.
• after the mobile phone detects that the user clicks the color retention mode control 1703 shown in (d) in FIG. 17, the mobile phone enters the color retention mode again and retains the color of the target area.
• after the mobile phone detects the user's operation to stop shooting, it stops recording and generates a video.
• after the mobile phone detects the user's operation to stop shooting, it can stop recording and perform video encoding on the image data generated during the recording process, so as to generate a video file obtained by shooting in the color retention mode.
  • the video image in the video file has been color-retained and post-processed.
• before starting video recording, the mobile phone can determine the target object and the target processing mode; during the video recording process, the mobile phone can perform color retention and post-processing on the collected images, so as to obtain the video by shooting.
  • the color of the image in the target area where one or more individuals of the target object are located is preserved.
• the mobile phone can take a single individual as the unit and retain the color of one individual, multiple individuals of different types, or multiple individuals of the same type, which improves the flexibility and precise pertinence of color retention, highlights the target object, and improves the user's shooting experience.
  • the background area can be subjected to post-processing such as graying, blurring, or background replacement, which can improve the flexibility of image processing, so that users can obtain personalized and diversified video images.
  • the mobile phone enters the color retention mode before detecting the user's shooting operation.
  • the mobile phone may enter the color retention mode after detecting the user's shooting operation.
  • the mobile phone can perform instance segmentation on the image obtained by the camera during the recording process to determine the area where different individuals are located on the preview image.
  • the mobile phone can also determine the target processing mode and target object, so as to retain the color of the target area where the target object is located in the captured image, and perform post-processing on the background area according to the target processing mode.
  • the method may include:
• after the mobile phone detects the user's operation to open the camera application, it displays a shooting preview interface, where the shooting preview interface includes a preview image.
• for step 1801, refer to the related description of step 301 above.
  • the mobile phone enters the target shooting mode, and the target shooting mode is the video recording mode.
• for step 1802, refer to the related description of step 302 above.
  • the mobile phone displays a shooting interface after detecting the user's shooting operation, and the shooting interface includes a color image.
  • the recorded image on the photographing interface displayed in step 1803 is a color image that has not been subjected to the above-mentioned color retention and post-processing.
  • the shooting interface may refer to (a) in FIG. 19.
• there may be multiple kinds of the preset operation 4. Exemplarily, after the mobile phone detects that the user has clicked the filter control 1901 on the shooting interface shown in (a) of FIG. 19, as shown in (b) of FIG. 19, the mobile phone displays the color retention mode control 1902. After the mobile phone detects that the user clicks on the color retention mode control 1902, it enters the color retention mode; as shown in (c) in FIG. 19, the colors of some areas on the shooting interface are retained, and other areas are grayscale images.
• the preset operation 4 may include the operation of the user clicking the filter control 1901 and clicking the color retention mode control 1902.
  • the mobile phone performs instance segmentation on the image obtained by the camera, and determines the area where different individuals on the image are located.
• for step 1805, refer to the related description of step 304 above.
  • the mobile phone determines the target object and the target processing mode, and retains the color of the area where the target object is located on the recorded image, and processes the background area according to the target processing mode.
• the target object includes one or more individuals, and the multiple individuals belong to the same type or to different types.
• for the manner in which the mobile phone determines the target processing mode in step 1806, refer to the relevant description in step 305 above. The difference is that in step 305 the mobile phone determines the target processing mode during the video preview, while in step 1806 the mobile phone determines the target processing mode during the video recording; details are not repeated here.
• for the manner in which the mobile phone determines the target object in step 1806, reference may be made to the relevant description in step 305 above. The difference is that in step 305 the mobile phone determines the target object during the video preview, while in step 1806 the mobile phone determines the target object during the video recording; details are not repeated here.
• the mobile phone may not perform step 1804 and may instead perform the following after step 1803: after the mobile phone detects the user's preset operation 5, it enters the target processing mode, determines the target object, retains the color of the area where the target object is located on the recorded image, and processes the background area according to the target processing mode, where the target processing mode includes the graying mode, the blur mode, and the background replacement mode.
  • the mobile phone performs related processing during the video recording process; and in step 300, the mobile phone performs related processing during the video preview.
• after determining the target object and the target processing mode, the mobile phone processes the collected video images according to the target object and the target processing mode. On the recorded image displayed on the shooting interface, the color of the target area is preserved, and the image in the background area is the image processed according to the target processing mode.
  • the method may also include:
• after the mobile phone detects the user's operation of adjusting the target object, it performs color retention on the target area according to the adjusted target object.
• for step 1807, refer to the related description of step 307 above.
• after the mobile phone detects the user's operation of adjusting the target processing mode, it performs post-processing on the background area according to the adjusted target processing mode.
• for step 1808, refer to the related description of step 308 above.
• after the mobile phone detects the user's operation to stop shooting, it stops recording and generates a video.
• for step 1809, refer to the related description of step 309 above.
• after starting video recording, the mobile phone can determine the target object and the target processing mode; during the video recording process, the mobile phone can perform color retention and post-processing on the collected images, so as to obtain the video by shooting.
  • the color of the image in the target area where one or more individuals of the target object are located is preserved.
• the mobile phone can take a single individual as the unit and retain the color of one individual, multiple individuals of different types, or multiple individuals of the same type, which improves the flexibility and precise pertinence of color retention, highlights the target object, and improves the user's shooting experience.
  • the background area can be subjected to post-processing such as graying, blurring, or background replacement, which can improve the flexibility of image processing, so that users can obtain personalized and diversified video images.
  • each video screen can be recorded using the color retention and post-processing methods in the above video mode.
• the mobile phone can perform color retention and post-processing on the video pictures of certain segments according to the user's selection operation, so as to obtain more personalized and diversified video pictures.
• the embodiment of the present application provides an image color retention method, which can be applied to a photographing scene.
  • the method includes steps 2001-2005.
  • the steps 2001-2005 may be the above-mentioned steps 301-305.
  • the target shooting mode in step 2002 is the camera mode; and in step 2003, the mobile phone enters the color retention mode based on the shooting preview interface in the camera mode.
  • the photographing preview interface in the photographing mode may refer to (b) in FIG. 4.
  • the target processing mode is the graying mode and the target object is the person 2
  • see (a) in FIG. 21 for the shooting preview interface and the mobile phone retains the color of the area where the person 2 is located.
  • the method may further include:
• the shooting operation is a photographing operation.
• after the mobile phone detects that the user clicks the shooting control 2100 shown in (a) of FIG. 21, the color of the target area where Person 2 is located on the photo obtained by shooting is retained, and the image in the background area is the image processed according to the target processing mode, namely the graying mode.
  • the mobile phone can perform color retention and post-processing on the collected images to obtain photos.
  • the color of the target area where one or more individuals of the target object are located in the photo is preserved.
• the mobile phone can take a single individual as the unit and retain the color of one individual, multiple individuals of different types, or multiple individuals of the same type, which improves the flexibility and precise pertinence of color retention, highlights the target object, and improves the user's shooting experience.
  • the background area can be subjected to post-processing such as graying, blurring, or background replacement, which can improve the flexibility of image processing, so that users can obtain personalized and diversified photos.
• the mobile phone may not perform step 2003 and may instead perform the following after step 2002: after the mobile phone detects the user's preset operation 6, it enters the target processing mode, determines the target object, retains the color of the area where the target object is located on the preview image, and processes the background area according to the target processing mode.
  • the target processing mode includes a graying mode, a blurring mode, and a background replacement mode.
  • the above description is based on the example of obtaining a photo taken by the mobile phone.
  • the mobile phone can also take multiple photos at one time in the continuous shooting mode. Due to the short shooting time in the continuous shooting mode, the individual objects to be shot in the continuous shooting are basically unchanged. Therefore, the image retention method in the continuous shooting mode is similar to the image retention method when taking a photo.
• the mobile phone can determine the target object and the target processing mode before the user's shooting operation is detected, so as to obtain multiple photos; details are not repeated here.
• on the basis of segmenting individuals through instance segmentation, the mobile phone can also perform component segmentation on the individuals on the image.
• different component segmentation strategies can be used to segment out different components.
• in one component segmentation strategy, a person can include parts such as the head, neck, arms, hands, clothes, legs, and feet.
• in another component segmentation strategy, a person can include components such as the hair, forehead, ears, nose, face, mouth, neck, and arms.
  • the embodiment of the present application may be based on component segmentation, retain the color on the image in units of components, and perform post-processing such as graying, blurring, or background replacement on the background area.
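• a minimal sketch of component-level color retention, assuming a part-parsing model that outputs an H x W label map in which each component (head, clothes, legs, and so on) has an integer id; the ids and the helper name are assumptions, and the resulting mask can be fed to the retain_color sketch shown earlier.

```python
import numpy as np

def component_keep_mask(part_label_map: np.ndarray, keep_part_ids) -> np.ndarray:
    """Build a keep-mask from a component (part) label map.

    part_label_map: H x W integer map from a part-segmentation model, where
    e.g. 1 = head, 2 = clothes, 3 = legs (these ids are assumptions).
    keep_part_ids:  ids of the components whose color should be retained,
    possibly belonging to different individuals.
    """
    return np.isin(part_label_map, list(keep_part_ids))

# Hypothetical usage: retain only the head and the clothes of the selected person.
# mask = component_keep_mask(parts_of_person2, keep_part_ids={1, 2})
# out = retain_color(frame, mask)
```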
  • The difference from the foregoing color retention method based on instance segmentation is that here the target object may include one or more components, and the background area includes the regions other than those components. The components in the target object may belong to the same individual or to different individuals, which is not limited in the embodiments of the present application. The target object may be one or more default components or components indicated by the user, and it can also be switched according to the user's instruction operation.
  • For example, the target object may be a component indicated by the user. After the mobile phone detects that the user taps a component color retention mode control, it enters the component color retention mode. If the user then taps the head of person 2, the head of person 2 is determined as the target object, so the color of the head of person 2 is preserved, and the mobile phone performs post-processing such as graying, blurring, or background replacement on the areas other than the head of person 2.
  • For another example, after the mobile phone detects the user's preset operation 7 (for example, a double-tap) on the individual person 2, person 2 can be segmented into components, and the area where person 2 is located becomes a grayscale image. The mobile phone can also display how the individual is divided into components. Then, after the mobile phone detects that the user taps the skirt of person 2, it determines that the skirt of person 2 is the target object; as shown in (c) of FIG. 22, the mobile phone retains the color of the area where the skirt of person 2 is located and performs post-processing such as graying, blurring, or background replacement on the other areas. After the mobile phone further detects that the user taps the head of person 2, as shown in (d) of FIG. 22, the mobile phone retains the color of the areas where both the skirt and the head of person 2 are located. A component-level retention sketch is given after this example.
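Assuming a component segmentation step has already produced one mask per (individual, part) pair, component-level retention can be sketched as follows. The mask dictionary, the key format, and the function name are illustrative assumptions, not part of the patent.

```python
import numpy as np

def retain_components(color_img, gray_img, component_masks, selected):
    """Keep color only inside the components the user selected.

    color_img:       H x W x 3 original color image
    gray_img:        H x W x 3 grayed version of the same image
    component_masks: dict (individual_id, part_name) -> H x W boolean mask,
                     e.g. (2, 'skirt'), (2, 'head'), from component segmentation
    selected:        iterable of (individual_id, part_name) keys chosen by the user
    """
    keep = np.zeros(color_img.shape[:2], dtype=bool)
    for key in selected:              # union of all selected components
        keep |= component_masks[key]
    out = gray_img.copy()
    out[keep] = color_img[keep]       # only the selected parts stay in color
    return out

# e.g. after tapping the skirt and then the head of person 2:
# img = retain_components(color_img, gray_img, masks, [(2, 'skirt'), (2, 'head')])
```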
  • In some other embodiments, the target object may also include components on the image whose colors are close to the color of the component indicated by the user, where "close" means that the difference between the pixel values of the corresponding pixels is smaller than a preset threshold. For example, if the component indicated by the user is the neck, the target object may include the neck together with the face, hands, and other components whose colors are close to the skin color of the neck. In other words, the mobile phone can retain components of a specific color on the image: after the mobile phone detects that the user taps a certain location, the target object includes the components whose pixel values are close to the pixel value at that location. In the embodiments of the present application, pixel values being close means that the difference between them is less than a preset threshold. The sketch below illustrates one possible form of this threshold test.
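A possible form of the "pixel values are close" test is sketched below. The Euclidean RGB distance and the threshold value 30 are assumptions; the patent only states that the difference must be smaller than a preset threshold.

```python
import numpy as np

def close_to_tap(color_img, tap_xy, threshold=30):
    """Boolean mask of pixels whose value is 'close' to the tapped pixel.

    'Close' is interpreted here as a Euclidean distance in RGB space below a
    preset threshold; both the metric and the value 30 are assumptions.
    """
    x, y = tap_xy
    ref = color_img[y, x].astype(np.float32)
    dist = np.linalg.norm(color_img.astype(np.float32) - ref, axis=2)
    return dist < threshold

# Components (or whole individuals / regions) whose masks overlap this
# closeness mask can then be added to the target object.
```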
  • In this solution, the mobile phone can use a single component as the unit of color retention and retain the color of one component, multiple components of different individuals, or different components of the same individual. This reduces the granularity of the color retention setting, improves the flexibility and precision of color retention, highlights the target object, makes the captured image more creative, and improves the user's shooting experience. In addition, the background area can be post-processed by graying, blurring, or background replacement, which increases the flexibility of image processing so that users can obtain personalized and diversified images.
  • In some other embodiments, the mobile phone may retain the color of an individual that contains a specific color on the image: after the mobile phone detects that the user taps a certain location, the target object includes the individual to which the pixels close to the pixel value at that location belong. The mobile phone can also retain the color of the region where a specific color is located: after the mobile phone detects that the user taps a certain location, the target object includes the area whose pixel values are close to the pixel value at that location. In other words, working in the color dimension, the mobile phone can retain the colors of the components, individuals, or regions corresponding to the same color, which improves the flexibility and precision of color retention, highlights the colors the user wants to keep, makes the captured image more creative, and improves the user's shooting experience.
  • After shooting is completed, the mobile phone can save the photos and videos that have undergone color retention and post-processing. In some embodiments, the mobile phone can also save the original images without color retention and post-processing. The thumbnails of photos and videos that have undergone color retention and post-processing can be displayed differently from those that have not. For example, the gallery stores a thumbnail 2301 of a photo after color retention and post-processing and a thumbnail 2302 of a photo without such processing, as well as a thumbnail 2303 of a video after color retention and post-processing and a thumbnail 2304 of a video without such processing; the thumbnail 2301 retains the color of only a partial area, whereas the thumbnail 2302 is a color image as a whole. As another example, a color retention mark 2300 may be displayed on the thumbnails of photos and videos that have undergone color retention and post-processing. A video that has undergone color retention processing may include multiple video image frames, some of which have undergone color retention processing while others have not; such a video may use one of the frames that has undergone color retention processing as its thumbnail, to distinguish it from videos that have not undergone color retention processing.
  • In some other embodiments, for an image that has already been obtained (referred to as the target image), the mobile phone can also perform editing processing to determine the target object and the target processing mode, retain the color of the area where the target object is located accordingly, and perform graying, blurring, or background replacement processing on the background area. Accordingly, an embodiment of the present application further provides an image color retention method that can be applied to an image editing scenario. The method includes the following steps, where the target image is an image obtained by the mobile phone, for example a photo obtained by shooting, a downloaded image, or an image copied from another device.
  • After the mobile phone detects the user's preset operation 8, it displays the editing interface of the target image. For example, after the mobile phone detects that the user taps the thumbnail of the target image obtained by a previous shot on the shooting preview interface, or taps the thumbnail 2302 of the target image in the gallery shown in FIG. 23, the mobile phone displays the target image enlarged. After the mobile phone detects the user's tap on the target image, it displays an interface as shown in (b) of FIG. 25, which includes an edit control 2501. The preset operation 8 may be the operation of tapping the edit control. The image on the editing interface is a color image.
  • The mobile phone then enters a target editing mode and displays the corresponding target editing mode interface. The target editing mode includes a blur mode, a color retention mode, or a background replacement mode, and the edit mode controls include a blur mode control 2503, a color retention mode control 2504, a background replacement mode control 2505, and the like. For example, after the mobile phone detects that the user taps the blur mode control 2503, it enters the blur mode and displays the blur mode interface shown in (a) of FIG. 26. After the mobile phone detects that the user taps the color retention mode control 2504, it enters the color retention mode and displays the color retention mode interface shown in (a) of FIG. 27; this interface includes the grayscale image into which the target image has been converted. After the mobile phone detects that the user taps the background replacement mode control 2505, it enters the background replacement mode.
  • The mobile phone then performs the corresponding processing (color retention, blurring, or background replacement) on the target image according to the target editing mode.
  • When the target editing mode is the blur mode, the target image includes an area 1, which may be called the clear area, and the area outside area 1, called area 2 or the blurred area. The image in the clear area keeps the pixel values of the original target image, while the image in the blurred area is the result of blur processing. The mobile phone can adjust the shape and size of the clear area according to the user's instruction operations; for example, the clear area may be a circle, an ellipse, or a square (also referred to as a linear shape). For instance, the blur mode interface includes a circular control 2601 and a linear control 2602: when the circular control 2601 is selected, the clear area 2603 on the blur mode interface is a circle, and when the linear control 2602 is selected, the clear area is a square. The mobile phone can resize the clear area according to the user's pinch or drag gestures. The mobile phone may invoke an image blur algorithm that determines the foreground and background according to the segmentation results and simulates depth estimation so that the foreground subject stays clear while the background is blurred, where the foreground is the image in area 1 and the background is the image in area 2.
  • The mobile phone can also adjust the degree of blurring of the blurred area according to the user's instruction operations. For example, the blur mode interface includes a blur level adjustment control 2604. When the mobile phone detects that the user taps different positions on, or drags along, the blur level adjustment control 2604, as shown in (b) of Figure 26, the mobile phone adjusts the blur level of the blurred area accordingly; the higher the blur level, the greater the degree of blurring of the image in the blurred area. The blur mode interface may also include a blur (smudge) control 2605, with which the user can smear a position on the blur mode interface to make it blurred. For example, because the area the user wants to display clearly is a regular shape such as a circle or a square, the user can use the blur control to smear and blur the parts inside the circular clear area that lie outside the edge of the individual the user actually wants to show. For another example, the user can make a certain position in the blurred area even more blurred by smearing it with the blur control. The blur mode interface may further include a return control 2606 for going back to the previous operation, as well as controls for completing the blur processing and for exiting the blur processing. A sketch of compositing a circular clear area with an adjustable blur level is given below.
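The geometric part of the blur mode, a circular clear area kept sharp over a blurred copy whose strength follows a blur level, can be sketched as follows. The mapping from blur level to Gaussian kernel size is an assumption; the patent's blur algorithm additionally uses the segmentation results and simulated depth estimation, which this sketch omits.

```python
import numpy as np
import cv2  # assumed available; kernel sizes and the mask shape are illustrative

def circular_blur(img, center, radius, blur_level=3):
    """Keep a circular 'clear area' sharp and blur everything outside it.

    blur_level maps to the Gaussian kernel size, so a higher level means a
    stronger blur, mirroring the blur level adjustment control 2604.
    """
    h, w = img.shape[:2]
    yy, xx = np.mgrid[0:h, 0:w]
    clear = (xx - center[0]) ** 2 + (yy - center[1]) ** 2 <= radius ** 2

    ksize = 2 * (2 * blur_level + 1) + 1          # odd kernel size, grows with the level
    blurred = cv2.GaussianBlur(img, (ksize, ksize), 0)

    out = blurred.copy()
    out[clear] = img[clear]                        # area 1 keeps the original pixels
    return out
```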
  • The color retention mode interface includes the grayscale image into which the target image has been converted. In the color retention mode, the mobile phone can retain a color specified by the user on the target image, that is, retain color by color. For example, the mobile phone can prompt the user to tap the picture to select the color that needs to be retained, and the color retention mode interface includes a color selection control 2701. In one example, when the color selection control 2701 is selected and the mobile phone detects that the user taps the face of person 2 on the grayscale image, then, as shown in (b) of Figure 27, the mobile phone retains the color of the area whose pixel values are close to the pixel value at the tapped position on the face; after the mobile phone subsequently detects the user's operation at the neck position, it also retains the color of the area on the target image whose pixel values are close to the pixel value at that position on the neck. In some other embodiments, after the mobile phone detects that the user taps a certain position, the color of every component whose pixel values are close to the pixel value at that position is retained. For example, when the color selection control 2701 is selected and the mobile phone detects that the user taps the face of person 2 on the grayscale image, then, as shown in (c) of Figure 27, the mobile phone retains the color of the components on the target image, such as the face, neck, and hands, whose pixel values are close to one another. That is, the areas where parts with similar skin tones such as the face, neck, and hands are located are shown in color, and the other areas remain grayscale.
  • The color retention mode interface further includes an eraser control 2702. When the eraser control 2702 is selected and the mobile phone detects that the user drags (or smears) within a color-retained area, it restores the smeared area to a grayscale image. For example, after the mobile phone detects that the user smears the left hand of person 2 with the eraser, then, as shown in (d) of Figure 27, the mobile phone restores the area where the left hand is located to a grayscale image. The color retention mode interface may also include an eraser size adjustment control for adjusting the size of the eraser (that is, the area over which the eraser acts). With the eraser, the user can erase areas whose color should not be retained, for example to fine-tune the color-retained area or to adjust the boundary between the color-retained area and the grayscale area. A sketch of such an eraser stroke follows.
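An eraser stroke that returns smeared pixels to grayscale can be modelled as removing those pixels from the color-retention mask. The circular footprint and the parameter names are assumptions used only for illustration.

```python
import numpy as np

def erase_color(keep_mask, stroke_points, eraser_radius):
    """Remove eraser strokes from the color-retention mask (illustrative only).

    keep_mask:     H x W boolean mask of the pixels currently kept in color
    stroke_points: list of (x, y) positions the user smeared over
    eraser_radius: radius of the eraser footprint, i.e. its adjustable size
    """
    h, w = keep_mask.shape
    yy, xx = np.mgrid[0:h, 0:w]
    for x, y in stroke_points:
        hit = (xx - x) ** 2 + (yy - y) ** 2 <= eraser_radius ** 2
        keep_mask = keep_mask & ~hit      # smeared pixels fall back to grayscale
    return keep_mask
```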
  • The color retention mode interface further includes a return control for going back to the previous operation, and may also include controls for completing the color retention processing and for exiting the color retention processing.
  • In some other embodiments, after the mobile phone detects that the user taps a certain position, it retains the color of all the pixels, on the individual to which that position belongs, whose pixel values are close to the pixel value at that position. For example, after the mobile phone detects that the user taps the face of person 2, it retains the color of the face of person 2 on the target image as well as the colors of person 2's forehead, neck, and hands, whose pixel values are close to those of the face. In some other embodiments, after the mobile phone detects that the user taps a certain location, it retains the colors of all the areas on the image whose pixel values are close to the pixel value at that location. In some other embodiments, in the color retention mode the mobile phone can retain the color of a component specified by the user, that is, retain color by component: the mobile phone can prompt the user to tap the picture to select the parts whose color should be retained, and after the mobile phone detects that the user taps a part, the color of that part is retained while the other areas remain grayscale. For example, after the mobile phone detects that the user taps the position of the head of person 2 shown in (a) of FIG. 27, the color of the head is retained and the other areas are grayscale. In some other embodiments, in the color retention mode the mobile phone can retain the color of an individual specified by the user, that is, retain color by individual: the mobile phone can prompt the user to tap the picture to select the individual whose color should be retained, and after the mobile phone detects that the user taps an individual, the color of that individual is retained while the other areas remain grayscale. In some other embodiments, the color retention mode interface includes a retain-by-color control, a retain-by-individual control, and a retain-by-component control, and the mobile phone applies the corresponding color retention strategy to the target image according to the user's selection. The sketch after this paragraph dispatches over these three strategies.
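Assuming instance and component masks are available, choosing which pixels keep their color under the retain-by-color, retain-by-individual, and retain-by-component strategies might look as follows. It reuses the close_to_tap() helper from the earlier sketch, and all names are illustrative.

```python
import numpy as np

def build_keep_mask(color_img, tap_xy, strategy, instance_masks, component_masks,
                    threshold=30):
    """Choose which pixels stay in color, per the retention strategy the user picked.

    strategy:        'by_color' | 'by_individual' | 'by_component'
    instance_masks:  dict individual_id -> H x W boolean mask (instance segmentation)
    component_masks: dict (individual_id, part) -> H x W boolean mask (component segmentation)
    """
    x, y = tap_xy
    if strategy == 'by_color':
        return close_to_tap(color_img, tap_xy, threshold)      # all similarly colored pixels
    if strategy == 'by_individual':
        for mask in instance_masks.values():
            if mask[y, x]:
                return mask                                     # the whole tapped individual
    if strategy == 'by_component':
        for mask in component_masks.values():
            if mask[y, x]:
                return mask                                     # only the tapped part
    return np.zeros(color_img.shape[:2], dtype=bool)            # the tap hit no segment
```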
  • In this solution, the mobile phone can retain the color of selected areas on the target image by color, by component, or by individual. This reduces the granularity of the color retention setting, improves the flexibility and precision of color retention, highlights the target object, makes the resulting image more creative, and improves the user's experience. In addition, the background area can be post-processed by graying, blurring, or background replacement, which increases the flexibility of image processing so that users can obtain personalized and diversified images.
  • When the target editing mode is the background replacement mode, the mobile phone retains the target area on the target image where the target object is located and replaces the image in the background area with the image at the same positions in a background picture. Equivalently, the mobile phone superimposes the image in the target area onto the background picture, the target-area image being the foreground and the background picture being the background. The target object may be a default object of the mobile phone or an object indicated by the user, and the background picture may be a default picture of the mobile phone or a picture selected by the user. The mobile phone can also replace the background area with a different background picture, or with a combination of multiple background pictures, according to the user's instructions, and it can scale the target object up or down, move its position, or add and remove individuals in the target object according to the user's instruction operations. A compositing sketch, including optional scaling and repositioning of the foreground, is shown below.
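A compositing sketch for the background replacement mode, including the optional scaling and repositioning of the target object, is shown below. The clipping logic and the use of OpenCV resizing are assumptions; the patent only requires that the background area be replaced by the same-position content of the background picture.

```python
import numpy as np
import cv2  # assumed available for resizing the foreground cut-out

def replace_background(color_img, target_mask, background_img, scale=1.0, offset=(0, 0)):
    """Superimpose the target area (foreground) onto a background picture.

    scale and offset model the optional user adjustments: enlarging or shrinking
    the target object and moving it on the background picture.
    """
    h, w = background_img.shape[:2]
    fg = np.where(target_mask[..., None], color_img, 0).astype(color_img.dtype)
    mask = target_mask.astype(np.uint8)

    if scale != 1.0:
        fg = cv2.resize(fg, None, fx=scale, fy=scale, interpolation=cv2.INTER_LINEAR)
        mask = cv2.resize(mask, None, fx=scale, fy=scale, interpolation=cv2.INTER_NEAREST)

    out = background_img.copy()
    dx, dy = offset
    fh, fw = mask.shape[:2]
    # Clip the pasted region to the bounds of the background picture.
    x0, y0 = max(dx, 0), max(dy, 0)
    x1, y1 = min(dx + fw, w), min(dy + fh, h)
    if x0 < x1 and y0 < y1:
        sub_mask = mask[y0 - dy:y1 - dy, x0 - dx:x1 - dx].astype(bool)
        region = out[y0:y1, x0:x1]
        region[sub_mask] = fg[y0 - dy:y1 - dy, x0 - dx:x1 - dx][sub_mask]
    return out
```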
  • For multiple images, the mobile phone can perform color retention processing, as well as editing such as graying, blurring, or background replacement, on each image separately in the manner described above. Alternatively, the mobile phone can edit one of the images (for example, the first image) according to the user's instructions, and the other images then automatically undergo color retention processing and graying, blurring, or background replacement in the same way. For example, if the mobile phone retains the color of person 1 in the first image according to the user's instruction operation, it also automatically retains the color of person 1 in the other images; if the mobile phone blurs the areas other than person 1 in the first image, it automatically blurs the areas other than person 1 in the other images; and if the mobile phone replaces the area other than person 1 in the first image with a certain background picture, it automatically replaces the area other than person 1 in the other images with the same background picture. The sketch below shows how such an edit can be propagated across an image set.
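Propagating an edit made on the first image to the rest of an image set can be sketched as re-applying the same target and mode to each image's own segmentation masks. This reuses apply_target_mode() from the earlier sketch and is illustrative only.

```python
import numpy as np

def propagate_edit(images, instance_masks_per_image, target_id, mode, background_img=None):
    """Apply the edit chosen on the first image to every image in the set.

    images:                   list of H x W x 3 color images
    instance_masks_per_image: list of dicts individual_id -> boolean mask, one dict per image
    target_id:                the individual kept in color (e.g. person 1); an image that
                              does not contain it is processed entirely as background
    mode:                     'gray' | 'blur' | 'replace', handled by apply_target_mode()
    """
    results = []
    for img, masks in zip(images, instance_masks_per_image):
        mask = masks.get(target_id, np.zeros(img.shape[:2], dtype=bool))
        results.append(apply_target_mode(img, mask, mode, background_img))
    return results
```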
  • In the description above, the target image is an image that has not previously undergone color retention processing. The target image may also be an image that has already undergone color retention processing, in which case the mobile phone can perform color retention processing again according to the user's instructions; for example, the new edit can adjust the target area whose color is to be retained, adjust the clear area, or replace the background with a new one. In this solution, the mobile phone can edit an image it has already obtained so that color on the image is retained on a per-individual or per-component basis, the protagonist can be selected more precisely, the flexibility and precision of the color retention setting are improved, and the edited image is more creative. In addition, the background area can be post-processed by graying, blurring, or background replacement, which increases the flexibility of image processing so that users can obtain personalized and diversified images.
  • In some other embodiments, for a video that has already been obtained (referred to as the target video), the mobile phone can also perform editing processing to determine the target object and the target editing mode, retain the color of the area where the target object is located accordingly, and perform graying, blurring, or background replacement processing on the background area. The mobile phone displays the editing interface of the target video and, according to the user's instructions, can perform color retention processing as well as graying, blurring, or background replacement on one or more images in the target video, while the unprocessed images in the target video remain the original color images.
  • In some other embodiments, the mobile phone can perform color retention processing, or graying, blurring, or background replacement, on a certain image in the target video (for example, the first image), and the images after that image are then processed in the same way. For example, according to the user's instruction operations the mobile phone retains the color of the area where the target person 1 is located in the first image of the target video and retains the color of the target vehicle 1 in the tenth image. The second to ninth images then also automatically retain the color of the area where person 1 is located; if person 1 does not appear in one of the second to ninth images, that image is displayed as a pure color image or a pure grayscale image. The images after the tenth image automatically retain the color of vehicle 1; if an image after the tenth does not include vehicle 1, it is displayed as a pure color image or a pure grayscale image. Similarly, if the mobile phone grays the area other than person 1 in the first image and the area other than vehicle 1 in the tenth image, the areas other than person 1 in the second to ninth images, and the areas other than vehicle 1 in the images after the tenth, are also grayed automatically, and a frame that does not contain its target object is displayed as a pure grayscale image. Likewise, if the mobile phone blurs the area other than person 1 in the first image and the area other than vehicle 1 in the tenth image, the corresponding areas in the following images are blurred automatically, and a frame that does not contain its target object is displayed as a fully blurred image or as the original color image. The sketch below propagates such a per-frame edit through a video.
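The per-frame propagation described above, where the most recent user choice applies to all following frames and a frame without the target falls back to a fully processed or original frame, can be sketched as follows. The keyframe dictionary and the reuse of apply_target_mode() are assumptions for illustration.

```python
import numpy as np

def edit_video(frames, masks_per_frame, keyframe_targets, mode, background_img=None,
               missing='gray'):
    """Propagate color retention edits through a video, keyframe by keyframe.

    frames:           list of H x W x 3 color frames
    masks_per_frame:  list of dicts individual_id -> boolean mask (per-frame segmentation)
    keyframe_targets: dict frame_index -> target_id, e.g. {0: 'person1', 9: 'vehicle1'}
                      meaning person 1 from frame 0 and vehicle 1 from frame 9 onward
    missing:          what to show when the target is absent: 'gray' or 'color'
    """
    out, target = [], None
    for i, (frame, masks) in enumerate(zip(frames, masks_per_frame)):
        target = keyframe_targets.get(i, target)          # the latest user choice applies onward
        mask = masks.get(target)
        if mask is None:                                   # target not present in this frame
            if missing == 'color':
                out.append(frame.copy())
                continue
            mask = np.zeros(frame.shape[:2], dtype=bool)   # falls back to a fully processed frame
        out.append(apply_target_mode(frame, mask, mode, background_img))
    return out
```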
  • In this solution, the mobile phone can edit a video it has already obtained so that color in the video images is retained on a per-individual or per-component basis, the protagonist can be selected more precisely, the flexibility and precision of the color retention setting are improved, and the edited video is more creative. In addition, the background area can be post-processed by graying, blurring, or background replacement, which increases the flexibility of image processing so that users can obtain personalized and diversified video images.
  • As described above, the mobile phone can segment each individual object. Based on this, in other embodiments of the present application, the mobile phone can also composite individual objects from a video or an image collection with other videos, achieving personalized, vivid, or special video processing effects. For example, for a video 1, the mobile phone can generate a short video based on a target subject, where there may be one or more target subjects. The images of the short video are the images of the target subject included in video 1, with only the image of the area where the target subject is located retained; the short video generated in this way is similar to an animated sticker of the target subject. For another example, based on an image set 1, the mobile phone can generate an image set 2 whose images are the images of the target subject included in image set 1, each retaining only the image of the area where the target subject is located.
  • For another example, the mobile phone can composite the video containing the target subject with another video. Suppose the target subject is a person, video 1 is a video of the target subject dancing on the grass, and video 2 is a time-lapse video of the starry sky. The mobile phone can superimpose the image of the target subject in video 1 onto the images of video 2, combining video 1 with the starry-sky time-lapse video 2 into a new video of the target subject dancing under the starry sky. A frame-by-frame compositing sketch follows.
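A frame-by-frame compositing sketch for this example follows. It assumes the two videos have the same resolution and frame count, which a real implementation would not require.

```python
import numpy as np

def composite_videos(subject_frames, subject_masks, background_frames):
    """Paste the target subject cut out of one video onto the frames of another.

    subject_frames:    frames of video 1 (e.g. the dancer on the grass)
    subject_masks:     per-frame boolean masks of the target subject in video 1
    background_frames: frames of video 2 (e.g. the starry-sky time-lapse), same resolution
    A real implementation would also handle resizing, frame-rate differences,
    and edge blending, which are omitted here.
    """
    out = []
    for fg, mask, bg in zip(subject_frames, subject_masks, background_frames):
        frame = bg.copy()
        frame[mask] = fg[mask]        # the subject's pixels replace the background video
        out.append(frame)
    return out
```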
  • For another example, suppose video 1 is a slow-motion video of the target subject. The mobile phone can segment the target subject in video 1 and merge it with the images of another video (for example, a time-lapse video) to generate a new video. The mobile phone can also extract the image of the area where the target subject is located from a video shot after the subject has grown up and composite it with a video of the subject as a child, giving the user the feeling of seeing their grown-up self alongside their childhood self. Similarly, the mobile phone can extract the image of the area where the target subject is located from a video of the subject in old age and composite it with a video of the subject when young, giving the user a feeling of traveling through time. The mobile phone can also extract target subjects from different videos and composite them with another new video; for example, a couple living in different places each record a video, and the mobile phone cuts the two people out of the two videos as target subjects and places them in a new background video, generating a new video of the couple together in the same place.
  • For another example, the mobile phone can extract the image of the area where the target subject is located from a slow-motion video and from a fast-motion video and combine them into a new video, creating a clear sense of contrast between fast and slow. For another example, suppose video 1 is a video of a coach and video 2 is a video of a student; the mobile phone can extract the image of the area where the coach is located in video 1 and combine it with video 2 into a new video, making it easy to compare whether the student's movements are standard.
  • For another example, the mobile phone can extract the target subject from a video and composite it with multiple other videos. Suppose the target subject is a person, video 1 is a video of the target subject practicing martial arts, and videos 2 to 5 are background videos of spring, summer, autumn, and winter. The mobile phone can extract the image of the area where the target subject is located in video 1 and composite it with videos 2 to 5 over different time periods, giving the user the feeling of practicing martial arts through the four seasons.
  • The mobile phone can also extract the same target subject from multiple videos and edit the extracts together to generate a new video. For example, for multiple videos of a child growing up, the image of the child in each video is extracted as a cut-out for the user to edit, or for intelligent editing, and the extracts are then assembled and matched with one or more new background videos to generate a new video. For another example, the target subject may have recorded a video of frequently falling when first learning to roller skate, a more proficient video after practicing for a while, and a fluent video after practicing for a long time, that is, videos of the different stages of learning to skate. The mobile phone can extract the images of the areas where the target subject is located in the videos of the different learning stages and in the fluent video recorded after the subject has learned to skate, assemble them together, and match them with one or more new background videos to generate a new video. The mobile phone can also perform the color retention processing described above on the target subject in the composited video images, and details are not repeated here.
  • The above embodiments are described using the example in which the electronic device is a mobile phone. When the electronic device is another type of device, the image color retention method provided in the above embodiments can also be used, and details are not repeated here. To implement the above functions, the electronic device includes corresponding hardware and/or software modules for performing each function. The present application can be implemented in the form of hardware or a combination of hardware and computer software. Whether a function is performed by hardware or by computer software driving hardware depends on the particular application and the design constraints of the technical solution. A person skilled in the art may use different methods to implement the described functions for each particular application in combination with the embodiments, but such implementations should not be considered beyond the scope of the present application. The embodiments of the present application may divide the electronic device into functional modules according to the foregoing method examples; for example, each functional module may correspond to one function, or two or more functions may be integrated into one processing module. The integrated module may be implemented in the form of hardware. It should be noted that the division of modules in the embodiments is illustrative and is merely a logical function division; there may be other division manners in actual implementation.
  • An embodiment of the present application further provides an electronic device, including a camera for capturing images, a screen for displaying an interface, one or more processors, and one or more memories, where the one or more memories are coupled to the one or more processors and are configured to store computer program code including computer instructions. An embodiment of the present application further provides an electronic device including one or more processors and one or more memories, where the one or more memories are coupled to the one or more processors and are configured to store computer program code including computer instructions. In either case, when the computer instructions are run, the electronic device performs the related method steps above to implement the image color retention method in the above embodiments.
  • An embodiment of the present application further provides a computer-readable storage medium that stores computer instructions. When the computer instructions are run on an electronic device, the electronic device performs the related method steps above to implement the image color retention method in the above embodiments. An embodiment of the present application further provides a computer program product; when the computer program product is run on a computer, the computer is caused to perform the related steps above to implement the image color retention method performed by the electronic device in the above embodiments.
  • An embodiment of the present application further provides an apparatus. The apparatus may specifically be a chip, a component, or a module, and may include a processor and a memory that are connected to each other, where the memory is configured to store computer-executable instructions. When the apparatus runs, the processor can execute the computer-executable instructions stored in the memory, so that the chip performs the image color retention method performed by the electronic device in the foregoing method embodiments.
  • The electronic device, computer-readable storage medium, computer program product, and chip provided in the embodiments are all configured to perform the corresponding methods provided above. Therefore, for the beneficial effects they can achieve, reference may be made to the beneficial effects of the corresponding methods provided above, and details are not repeated here.
  • From the foregoing description of the implementations, it should be understood that the disclosed apparatus and method may be implemented in other manners. The apparatus embodiments described above are merely illustrative. For example, the division into modules or units is only a logical function division, and there may be other divisions in actual implementation; multiple units or components may be combined or integrated into another apparatus, or some features may be omitted or not performed. In addition, the mutual couplings, direct couplings, or communication connections shown or discussed may be implemented through some interfaces, and the indirect couplings or communication connections between apparatuses or units may be in electrical, mechanical, or other forms. The units described as separate parts may or may not be physically separate, and the parts shown as units may be one or more physical units, that is, they may be located in one place or distributed across multiple places. Some or all of the units may be selected according to actual needs to achieve the objectives of the solutions of the embodiments. The functional units in the embodiments of the present application may be integrated into one processing unit, each unit may exist alone physically, or two or more units may be integrated into one unit. The integrated unit may be implemented in the form of hardware or in the form of a software functional unit. If the integrated unit is implemented in the form of a software functional unit and sold or used as an independent product, it may be stored in a readable storage medium. Based on such an understanding, the technical solutions of the embodiments of the present application essentially, or the part contributing to the prior art, or all or part of the technical solutions, may be embodied in the form of a software product. The software product is stored in a storage medium and includes several instructions for causing a device (which may be a single-chip microcomputer, a chip, or the like) or a processor to perform all or some of the steps of the methods described in the embodiments of the present application. The aforementioned storage medium includes any medium that can store program code, such as a USB flash drive, a removable hard disk, a read-only memory (ROM), a random access memory (RAM), a magnetic disk, or an optical disc.

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Human Computer Interaction (AREA)
  • Studio Devices (AREA)
  • Image Processing (AREA)
  • User Interface Of Digital Computer (AREA)

Abstract

Embodiments of this application provide an image color retention method and device, relating to the field of electronic technology, capable of retaining the colors of one or more individual objects in an image and improving the flexibility of color retention and the user experience. The specific solution is as follows: an electronic device starts a camera application, determines a first individual object as the target object, and determines a target processing mode; generates a first preview image from the image captured by the color camera; displays the first preview image in the preview interface, where the image in a first area of the first preview image is displayed in color and the image in a second area of the first preview image is an image processed according to the target processing mode; determines a second individual object as the target object in response to a first operation of the user; and displays a second preview image in the preview interface, where the image in a third area of the second preview image is displayed in color and the image in a fourth area of the second preview image is an image processed according to the target processing mode. The embodiments of this application are used for image processing.

Description

图像留色方法及设备
本申请要求于2020年3月13日提交国家知识产权局、申请号为202010177496.5、申请名称为“图像留色方法及设备”的中国专利申请的优先权,以及要求于2020年3月25日提交国家知识产权局、申请号为202010220045.5、申请名称为“图像留色方法及设备”的中国专利申请的优先权,其全部内容通过引用结合在本申请中。
技术领域
本申请涉及电子技术领域,尤其涉及一种图像留色方法及设备。
背景技术
随着电子技术的发展,手机或平板电脑等电子设备的相机功能和图像处理能力越来越强大。例如,电子设备可以使用相机功能,在夜景模式、大光圈模式或多路录像模式等多种模式下,拍摄获得具有不同效果的照片或视频。再例如,电子设备还可以对拍摄获得的照片或视频中的图像进行滤镜或美肤等处理,从而获得更好的图像效果。
发明内容
本申请实施例提供一种图像留色方法及设备,能够保留图像中一个或多个个体对象的色彩,提高色彩保留的灵活性和用户使用体验。
为达到上述目的,本申请实施例采用如下技术方案:
一方面,本申请实施例提供了一种图像留色方法,应用于电子设备,电子设备包括彩色摄像头,该方法包括:电子设备启动相机应用,显示预览界面。电子设备确定第一个体对象为目标对象,并确定目标处理模式。电子设备根据彩色摄像头获取的图像,生成第一预览图像,第一预览图像中包括第一个体对象和第二个体对象,第二个体对象与第一个体对象不同。电子设备在预览界面中显示第一预览图像,第一预览图像中第一区域的图像显示为彩色,第一预览图像中第二区域的图像为根据目标处理模式处理后的图像。其中,第一区域为第一个体对象在第一预览图像中占据的图像区域,第二区域为第一预览图像中除第一区域以外的区域;电子设备响应于用户的第一操作,确定第二个体对象为目标对象。电子设备在预览界面中显示第二预览图像,第二预览图像中第三区域的图像显示为彩色,第二预览图像中第四区域的图像为根据目标处理模式处理后的图像。其中,第三区域为第二个体对象在第二预览图像中占据的图像区域,第四区域为第二预览图像中除第三区域以外的区域。
在该方案中,第一个体对象和第二个体对象可以包括一个个体对象,也可以包括多个个体对象。在开始录像之前,电子设备可以确定目标对象和目标处理模式。其中,目标对象中的一个或多个个体所在区域内的图像色彩被保留。即,电子设备可以以单个个体为单位,设置保留一个个体、不同类型的多个个体、或同一类型的多个个体的色彩,提高色彩保留的灵活性和精确针对性,突出目标对象,提高用户拍摄体验。并且,目标对象所在区域以外的其他区域可以根据目标处理模式进行处理,获得个性化的图像处理效果。
在一种可能的设计中,该方法还包括:电子设备响应于用户的第二操作,切换目 标处理模式。电子设备根据切换后的目标处理模式,更新预览界面显示的预览图像中第四区域的图像。
也就是说,在预览过程中,电子设备还可以根据用户的指示切换目标处理模式,从而切换目标对象所在区域以外的其他区域的处理效果。
在另一种可能的设计中,在电子设备确定第一个体对象为目标对象之前,该方法还包括:电子设备在预览界面中显示第三预览图像,第三预览图像为彩色摄像头获取的图像转化成的灰度图像。
也就是说,在电子设备刚进入留色模式之后,确定目标对象之前,预览图像可以为纯灰度图像,以区别于非留色模式下的彩色图像。
在另一种可能的设计中,该方法还包括:电子设备响应于用户的录像操作显示拍摄界面,拍摄界面包括录拍图像,该录拍图像包括该第三区域和该第四区域。电子设备响应于用户的停止录像操作后,停止录像并生成视频。
这样,电子设备录制获得的视频图像上,目标对象中的一个或多个个体所在第一区域内的图像色彩被保留。也就是说,电子设备可以以单个个体为单位,保留一个个体、不同类型的多个个体、或同一类型的多个个体的色彩,提高色彩保留的灵活性和精确针对性,突出目标对象,提高用户拍摄体验。并且,目标对象所在区域以外的其他区域可以根据目标模式进行后处理,获得效果个性化的图像处理效果。
另一方面,本申请实施例提供了一种图像留色方法,该方法应用于电子设备,电子设备包括彩色摄像头,该方法包括:包括:电子设备启动相机应用,显示预览界面。电子设备响应于用户的录像操作,显示拍摄界面。电子设备确定第一个体对象为目标对象,确定目标处理模式。电子设备根据彩色摄像头获取的图像,生成第一录拍图像,第一录拍图像中包括第一个体对象和第二个体对象,第二个体对象与第一个体对象不同。电子设备在拍摄界面中显示第一录拍图像,第一录拍图像中第一区域的图像显示为彩色,第一录拍图像中第二区域的图像为根据目标处理模式处理后的图像。其中,第一区域为第一个体对象在第一录拍图像中占据的图像区域,第二区域为第一录拍图像中除第一区域以外的区域。电子设备响应于用户的第一操作,确定第二个体对象为目标对象。电子设备在拍摄界面中显示第二录拍图像,第二录拍图像中第三区域的图像显示为彩色,第二录拍图像中第四区域的图像为根据目标处理模式处理后的图像。其中,第三区域为第二个体对象在第二录拍图像中占据的图像区域,第四区域为第二录拍图像中除第三区域以外的区域。电子设备响应于用户的停止录像操作,停止录像并生成视频。
在该方案中,第一个体对象和第二个体对象可以为一个个体对象,也可以为多个个体对象。在开始录像之后,电子设备可以确定目标对象和目标处理模式。在录像过程中,电子设备可以对采集到的图像进行留色处理和后处理,从而拍摄获得视频。在视频图像上,目标对象中的一个或多个个体所在第一区域内的图像色彩被保留。也就是说,电子设备可以以单个个体为单位,保留一个个体、不同类型的多个个体、或同一类型的多个个体的色彩,提高色彩保留的灵活性和精确针对性,突出目标对象,提高用户拍摄体验。并且,目标对象所在区域以外的其他区域可以根据目标处理模式进行处理,获得个性化的视频图像。
在一种可能的设计中,在电子设备确定第一个体对象为目标对象之前,该方法还包括:电子设备在拍摄界面中显示第三录拍图像,第三录拍图像为彩色摄像头获取的图像转换成的灰度图像。
也就是说,电子设备在刚进入留色模式后,确定目标对象之前,可以先显示纯灰度图像,以区别于非留色模式下的彩色图像。
在另一种可能的设计中,拍摄界面显示的预览图像上包括第三个体对象,该第三个体对象不同于第二个体对象。在电子设备停止录像并生成视频之前,该方法还包括:电子设备响应于用户的第三操作,确定第三个体对象为目标对象。在拍摄界面中显示第四录拍图像,第四录拍图像中第五区域的图像显示为彩色,第四录拍图像中第六区域的图像为根据目标处理模式处理后的图像。其中,第五区域为第三个体对象在第四录拍图像中占据的图像区域,第六区域为第四录拍图像中除第五区域以外的区域。
也就是说,在拍摄过程中,电子设备可以根据用户的指示,更改目标对象包括的个体对象,从而拍摄获得目标对象动态变化的视频。
在另一种可能的设计中,在电子设备停止录像并生成视频之前,该方法还包括:电子设备响应于用户的第四操作,切换目标处理模式。电子设备根据切换后的目标处理模式,更新拍摄界面显示的录拍图像中第四区域的图像。
也就是说,在录像过程中,电子设备可以根据用户的指示,更改目标处理模式,从而拍摄获得后处理效果动态变化的视频,获得个性化、多样化的视频图像。
在另一种可能的设计中,电子设备确定第一个体对象为目标对象,包括:电子设备确定第一个体对象为彩色摄像头获取的图像上的人物,第一个体对象为目标对象。或者,电子设备响应于用户针对第一个体对象的操作,确定第一个体对象为目标对象。
也就是说,电子设备刚进入留色模式后,可以采用默认的目标对象,或者根据用户的指示操作确定目标对象。
在另一种可能的设计中,目标处理模式为第一模式,第二区域内的图像为根据第一模式处理后的灰度图像;或者,目标处理模式为第二模式,第二区域内的图像为根据第二模式处理后的虚化图像;或者,目标处理模式为第三模式,第二区域内的图像为根据第三模式处理后的替换为另一图像的图像。
在另一种可能的设计中,电子设备确定目标处理模式,包括:电子设备确定目标处理模式为默认的第一模式。
也就是说,进入留色模式后,目标处理模式默认为灰化处理模式。
另一方面,本申请实施例提供了一种留色处理方法,该方法应用于电子设备,电子设备包括彩色摄像头,该方法包括:电子设备启动相机应用,显示预览界面。电子设备确定第一个体对象为目标对象,确定目标处理模式。电子设备根据彩色摄像头获取的图像,生成第一预览图像,第一预览图像中包括第一个体对象和第二个体对象,第二个体对象与第一个体对象不同。电子设备在预览界面中显示第一预览图像,第一预览图像中第一区域的图像显示为彩色,第一预览图像中第二区域的图像为根据目标处理模式处理后的图像。其中,第一区域为第一个体对象在第一预览图像中占据的图像区域,第二区域为第一预览图像中除第一区域以外的区域。电子设备响应于用户的第一操作,确定第二个体对象为目标对象。电子设备在预览界面中显示第二预览图像, 第二预览图像中第三区域的图像显示为彩色,第二预览图像中第四区域的图像为根据目标处理模式处理后的图像。其中,第三区域为第二个体对象在第二预览图像中占据的图像区域,第四区域为第二预览图像中除第三区域以外的区域。电子设备响应于用户的拍照操作生成照片,该照片包括该第三区域和该第四区域。
在该方案中,电子设备可以对采集到的图像进行留色处理和后处理,从而拍摄获得照片。该照片上目标对象中的一个或多个个体所在第一区域的色彩被保留。也就是说,电子设备可以以单个个体为单位,保留一个个体、不同类型的多个个体、或同一类型的多个个体的色彩,提高色彩保留的灵活性和精确针对性,突出目标对象,提高用户拍摄体验。并且,目标对象所在区域以外的其他区域可以根据目标模式对图像进行后处理,获得个性化、多样化的照片。
在一种可能的设计中,该方法还包括:电子设备响应于用户的第二操作,切换目标处理模式。电子设备根据切换后的目标处理模式,更新第四区域的图像。
在另一种可能的设计中,在电子设备确定第一个体对象为目标对象之前,该方法还包括:
在预览界面中显示第三预览图像,第三预览图像为彩色摄像头获取的图像转化成的灰度图像。
另一方面,本申请实施例提供了一种图像留色方法,包括:电子设备检测到用户的针对目标图像的第五操作,该目标图像为彩色图像。电子设备进入目标编辑模式,并显示第一界面,第一界面上的目标图像为灰度图像。电子设备检测到用户针对第一位置的操作后,将目标图像上与第一位置的像素值的差值小于预设阈值的像素点恢复为彩色。
在该方案中,电子设备可以对已经获得的目标图像进行编辑,从而保留目标图像上特定的颜色,获得个性化的图像处理效果。
在一种可能的设计中,电子设备将目标图像上与第一位置的像素值的差值小于预设阈值的像素点恢复为彩色,包括:电子设备将目标图像上第一位置所属的部件中,与第一位置的像素值的差值小于预设阈值的像素点恢复为彩色;或者,电子设备将目标图像上第一位置所属的个体中,与第一位置的像素值的差值小于预设阈值的像素点恢复为彩色。
也就是说,电子设备可以根据用户指定的颜色以个体或部件为单位,保留目标图像上部分区域的色彩。
在另一种可能的设计中,第一界面还包括第一控件,该方法还包括:若第一控件被选中,且电子设备检测到用户使用第一控件,针对彩色图像上第七区域的第六操作,则电子设备将第七区域内的彩色图像变为灰度图像。
这样,用户可以使用第一控件将变为彩色图像的区域再变为灰度图像。
在另一种可能的设计中,第一界面还包括第二控件,该方法还包括:电子设备检测到用户针对第二控件的操作后,调整第一控件的作用域的面积。
也就是说,电子设备可以调整第一控件作用区域的大小。
另一方面,本申请实施例提供了一种图像留色方法,包括:电子设备检测到用户的针对目标图像的第五操作,该目标图像为彩色图像。电子设备进入目标编辑模式, 并显示第一界面,第一界面上的目标图像为灰度图像。电子设备检测到用户针对第一位置的操作后,将目标图像上第一位置所属的部件恢复为彩色。
在该方案中,电子设备可以对已经获得的目标图像进行编辑,从而保留目标图像上特定部件的色彩,获得个性化的图像处理效果,提供色彩保留设置的灵活性和精确性。
另一方面,本申请实施例提供了一种图像留色方法,包括:电子设备检测到用户的针对目标图像的第五操作,该目标图像为彩色图像。电子设备进入目标编辑模式,并显示第一界面,第一界面上的目标图像为灰度图像。电子设备检测到用户针对第一位置的操作后,将目标图像上第一位置所属的个体恢复为彩色。
在该方案中,电子设备可以对已经获得的目标图像进行编辑,从而保留目标图像上特定个体的色彩,获得个性化的图像处理效果,提供色彩保留设置的灵活性和精确性。
另一方面,本申请实施例提供了一种图像留色方法,包括:电子设备检测到用户的针对目标图像的第七操作。电子设备进入目标编辑模式,并显示第二界面,第二界面包括第八区域、第九区域和第三控件,第九区域内的图像为虚化图像。电子设备检测到用户针对第三控件的操作后,调整第九区域内图像的虚化程度。
在该方案中,电子设备可以对已获得的目标图像进行编辑处理,从而保留部分区域内的清晰图像,将其他区域变为虚化图像。
在一种可能的设计中,第二界面还包括第四控件,第四控件用于切换第八区域的形状。电子设备检测到用户针对第四控件的操作后,根据切换后的形状调整第八区域。
也就是说,用户可以指示清晰图像所在区域的形状,例如该形状可以是圆形或方形等。
在另一种可能的设计中,该方法还包括:电子设备检测到用户针对第八区域的第八操作后,调整第八区域的大小。
也就是说,电子设备可以调整清晰图像所在区域的大小。
另一方面,本申请实施例提供了一种图像处理装置,该装置包含在电子设备中。该装置具有实现上述方面及可能的设计中任一方法中电子设备行为的功能,使得电子设备执行上述方面任一项可能的设计中电子设备执行的图像留色方法。该功能可以通过硬件实现,也可以通过硬件执行相应的软件实现。硬件或软件包括至少一个与上述功能相对应的模块或单元。例如,该装置可以包括显示单元、确定单元、检测单元和更新单元等。
又一方面,本申请实施例提供了一种电子设备,包括:彩色摄像头,用于采集彩色图像;屏幕,用于显示界面,一个或多个处理器;以及存储器,存储器中存储有代码。当代码被电子设备执行时,使得电子设备执行上述方面任一项可能的设计中电子设备执行的图像留色方法。
又一方面,本申请实施例提供了一种电子设备,包括:一个或多个处理器;以及存储器,存储器中存储有代码。当代码被电子设备执行时,使得电子设备执行上述方面任一项可能的设计中电子设备执行的图像留色方法。
另一方面,本申请实施例提供了一种计算机可读存储介质,包括计算机指令,当 计算机指令在电子设备上运行时,使得电子设备执行上述方面任一项可能的设计中的图像留色方法。
又一方面,本申请实施例提供了一种计算机程序产品,当计算机程序产品在计算机上运行时,使得计算机执行上述方面任一项可能的设计中电子设备执行的图像留色方法。
另一方面,本申请实施例提供了一种芯片系统,该芯片系统应用于电子设备。该芯片系统包括一个或多个接口电路和一个或多个处理器;接口电路和处理器通过线路互联;接口电路用于从电子设备的存储器接收信号,并向处理器发送信号,信号包括存储器中存储的计算机指令;当处理器执行计算机指令时,使得电子设备执行上述方面任一项可能的设计中的图像留色方法。
上述其他方面对应的有益效果,可以参见关于方法方面的有益效果的描述,此处不予赘述。
附图说明
图1A为本申请实施例提供的一种电子设备的硬件结构示意图;
图1B为本申请实施例提供的一种图像留色方法流程图;
图2为本申请实施例提供的一种电子设备的软件架构示意图;
图3为本申请实施例提供的另一种图像留色方法流程图;
图4为本申请实施例提供的一组界面示意图;
图5为本申请实施例提供的另一组界面示意图;
图6为本申请实施例提供的另一组界面示意图;
图7为本申请实施例提供的一组实例分割和语义分割的效果示意图;
图8为本申请实施例提供的另一组界面示意图;
图9为本申请实施例提供的另一组界面示意图;
图10为本申请实施例提供的另一组界面示意图;
图11为本申请实施例提供的另一组界面示意图;
图12为本申请实施例提供的另一组界面示意图;
图13A为本申请实施例提供的另一组界面示意图;
图13B为本申请实施例提供的一组图像与界面示意图;
图13C为本申请实施例提供的另一组图像与界面示意图;
图13D为本申请实施例提供的另一组图像与界面示意图;
图13E为本申请实施例提供的一种界面示意图;
图14为本申请实施例提供的另一组界面示意图;
图15为本申请实施例提供的另一组界面示意图;
图16为本申请实施例提供的另一组界面示意图;
图17为本申请实施例提供的另一组界面示意图;
图18为本申请实施例提供的另一种图像留色方法流程图;
图19为本申请实施例提供的另一组界面示意图;
图20为本申请实施例提供的另一种图像留色方法流程图;
图21为本申请实施例提供的一种拍摄界面和拍摄获得的照片的示意图;
图22为本申请实施例提供的另一组界面示意图;
图23为本申请实施例提供的另一种界面示意图;
图24为本申请实施例提供的另一种图像留色方法流程图;
图25为本申请实施例提供的另一组界面示意图;
图26为本申请实施例提供的另一组界面示意图;
图27为本申请实施例提供的另一组界面示意图。
具体实施方式
下面将结合本申请实施例中的附图,对本申请实施例中的技术方案进行描述。其中,在本申请实施例的描述中,除非另有说明,“/”表示或的意思,例如,A/B可以表示A或B;本文中的“和/或”仅仅是一种描述关联对象的关联关系,表示可以存在三种关系,例如,A和/或B,可以表示:单独存在A,同时存在A和B,单独存在B这三种情况。另外,在本申请实施例的描述中,“多个”是指两个或多于两个。
以下,术语“第一”、“第二”仅用于描述目的,而不能理解为指示或暗示相对重要性或者隐含指明所指示的技术特征的数量。由此,限定有“第一”、“第二”的特征可以明示或者隐含地包括一个或者更多个该特征。在本实施例的描述中,除非另有说明,“多个”的含义是两个或两个以上。
本申请实施例提供一种图像留色方法,能够在拍照或录像过程中保留一个或多个个体对象的色彩,还能够对获得的图像进行留色处理从而保留一个或多个个体对象的色彩。基于该方案,电子设备可以以个体对象为单位进行图像色彩保留,实现个性化的图像处理效果,提高色彩保留的灵活性和用户使用体验。
本申请实施例提供的图像留色方法可以应用于手机、平板电脑、可穿戴设备、车载设备、增强现实(augmented reality,AR)/虚拟现实(virtual reality,VR)设备、笔记本电脑、超级移动个人计算机(ultra-mobile personal computer,UMPC)、上网本、个人数字助理(personal digital assistant,PDA)等电子设备上,本申请实施例对电子设备的具体类型不作任何限制。
示例性的,图1A示出了电子设备100的结构示意图。电子设备100可以包括处理器110,外部存储器接口120,内部存储器121,通用串行总线(universal serial bus,USB)接口130,充电管理模块140,电源管理模块141,电池142,天线1,天线2,移动通信模块150,无线通信模块160,音频模块170,扬声器170A,受话器170B,麦克风170C,耳机接口170D,传感器模块180,按键190,马达191,指示器192,摄像头193,显示屏194,以及用户标识模块(subscriber identification module,SIM)卡接口195等。其中传感器模块180可以包括压力传感器180A,陀螺仪传感器180B,气压传感器180C,磁传感器180D,加速度传感器180E,距离传感器180F,接近光传感器180G,指纹传感器180H,温度传感器180J,触摸传感器180K,环境光传感器180L,骨传导传感器180M等。
处理器110可以包括一个或多个处理单元,例如:处理器110可以包括应用处理器(application processor,AP),调制解调处理器,图形处理器(graphics processing unit,GPU),图像信号处理器(image signal processor,ISP),控制器,存储器,视频编解码器,数字信号处理器(digital signal processor,DSP),基带处理器,和/或神经网络处理 器(neural-network processing unit,NPU)等。其中,不同的处理单元可以是独立的器件,也可以集成在一个或多个处理器中。
其中,控制器可以是电子设备100的神经中枢和指挥中心。控制器可以根据指令操作码和时序信号,产生操作控制信号,完成取指令和执行指令的控制。
处理器110中还可以设置存储器,用于存储指令和数据。在一些实施例中,处理器110中的存储器为高速缓冲存储器。该存储器可以保存处理器110刚用过或循环使用的指令或数据。如果处理器110需要再次使用该指令或数据,可从所述存储器中直接调用。避免了重复存取,减少了处理器110的等待时间,因而提高了系统的效率。
电子设备100的无线通信功能可以通过天线1,天线2,移动通信模块150,无线通信模块160,调制解调处理器以及基带处理器等实现。
天线1和天线2用于发射和接收电磁波信号。电子设备100中的每个天线可用于覆盖单个或多个通信频带。不同的天线还可以复用,以提高天线的利用率。例如:可以将天线1复用为无线局域网的分集天线。在另外一些实施例中,天线可以和调谐开关结合使用。
移动通信模块150可以提供应用在电子设备100上的包括2G/3G/4G/5G等无线通信的解决方案。移动通信模块150可以包括至少一个滤波器,开关,功率放大器,低噪声放大器(low noise amplifier,LNA)等。移动通信模块150可以由天线1接收电磁波,并对接收的电磁波进行滤波,放大等处理,传送至调制解调处理器进行解调。移动通信模块150还可以对经调制解调处理器调制后的信号放大,经天线1转为电磁波辐射出去。在一些实施例中,移动通信模块150的至少部分功能模块可以被设置于处理器110中。在一些实施例中,移动通信模块150的至少部分功能模块可以与处理器110的至少部分模块被设置在同一个器件中。
无线通信模块160可以提供应用在电子设备100上的包括无线局域网(wireless local area networks,WLAN)(如无线保真(wireless fidelity,Wi-Fi)网络),蓝牙(bluetooth,BT),全球导航卫星系统(global navigation satellite system,GNSS),调频(frequency modulation,FM),近距离无线通信技术(near field communication,NFC),红外技术(infrared,IR)等无线通信的解决方案。无线通信模块160可以是集成至少一个通信处理模块的一个或多个器件。无线通信模块160经由天线2接收电磁波,将电磁波信号调频以及滤波处理,将处理后的信号发送到处理器110。无线通信模块160还可以从处理器110接收待发送的信号,对其进行调频,放大,经天线2转为电磁波辐射出去。
电子设备100通过GPU,显示屏194,以及应用处理器等实现显示功能。GPU为图像处理的微处理器,连接显示屏194和应用处理器。GPU用于执行数学和几何计算,用于图形渲染。处理器110可包括一个或多个GPU,其执行程序指令以生成或改变显示信息。
显示屏194用于显示图像,视频等。显示屏194包括显示面板。显示面板可以采用液晶显示屏(liquid crystal display,LCD),有机发光二极管(organic light-emitting diode,OLED),有源矩阵有机发光二极体或主动矩阵有机发光二极体(active-matrix organic light emitting diode的,AMOLED),柔性发光二极管(flex light-emitting diode,FLED),Miniled,MicroLed,Micro-oLed,量子点发光二极管(quantum dot light emitting diodes, QLED)等。在一些实施例中,电子设备100可以包括1个或N个显示屏194,N为大于1的正整数。在本申请实施例中,显示屏194可以用于显示拍摄过程中的拍摄预览界面和拍摄界面,以及图像编辑界面等。
电子设备100可以通过ISP,摄像头193,视频编解码器,GPU,显示屏194以及应用处理器等实现拍摄功能。
ISP用于处理摄像头193反馈的数据。例如,拍照时,打开快门,光线通过镜头被传递到摄像头感光元件上,光信号转换为电信号,摄像头感光元件将所述电信号传递给ISP处理,转化为肉眼可见的图像。ISP还可以对图像的噪点,亮度,肤色进行算法优化。ISP还可以对拍摄场景的曝光,色温等参数优化。在一些实施例中,ISP可以设置在摄像头193中。
摄像头193用于捕获静态图像或视频。物体通过镜头生成光学图像投射到感光元件。感光元件可以是电荷耦合器件(charge coupled device,CCD)或互补金属氧化物半导体(complementary metal-oxide-semiconductor,CMOS)光电晶体管。感光元件把光信号转换成电信号,之后将电信号传递给ISP转换成数字图像信号。ISP将数字图像信号输出到DSP加工处理。DSP将数字图像信号转换成标准的RGB,YUV等格式的图像信号。在一些实施例中,电子设备100可以包括1个或N个摄像头193,N为大于1的正整数。
在本申请的实施例中,摄像头193是彩色摄像头。与黑白摄像头可以采集获得灰度图像(也称黑白图像)不同,电子设备110使用彩色摄像头采集获得彩色图像,从而记录被拍摄对象的色彩。例如,彩色图像中的每个像素值都可以包括R(红)G(绿)B(蓝)三种基色。
并且,摄像头193可以包括以下一种或多种摄像头:长焦摄像头、广角摄像头、超广角摄像头或深度摄像头等。其中,深度摄像头可以用于测量被拍摄对象的距离。长焦摄像头的拍摄范围小,适用于拍摄远处的景物;广角摄像头的拍摄范围较大;超广角摄像头的拍摄范围大于广角摄像头,适用于拍摄全景等较大画面的景物。
数字信号处理器用于处理数字信号,除了可以处理数字图像信号,还可以处理其他数字信号。例如,当电子设备100在频点选择时,数字信号处理器用于对频点能量进行傅里叶变换等。
视频编解码器用于对数字视频压缩或解压缩。电子设备100可以支持一种或多种视频编解码器。这样,电子设备100可以播放或录制多种编码格式的视频,例如:动态图像专家组(moving picture experts group,MPEG)1,MPEG2,MPEG3,MPEG4等。
NPU为神经网络(neural-network,NN)计算处理器,通过借鉴生物神经网络结构,例如借鉴人脑神经元之间传递模式,对输入信息快速处理,还可以不断的自学习。通过NPU可以实现电子设备100的智能认知等应用,例如:图像识别,人脸识别,语音识别,文本理解等。在本申请的实施例中,NPU可以对图像进行实例分割,从而区别出图像中的不同个体所在的区域。在一些实施例中,NPU还可以对图像进行部件分割,从而区别出同一个体中的不同部件所在的区域。
外部存储器接口120可以用于连接外部存储卡,例如Micro SD卡,实现扩展电子设备100的存储能力。外部存储卡通过外部存储器接口120与处理器110通信,实现 数据存储功能。例如将音乐,视频等文件保存在外部存储卡中。
内部存储器121可以用于存储计算机可执行程序代码,所述可执行程序代码包括指令。处理器110通过运行存储在内部存储器121的指令,从而执行电子设备100的各种功能应用以及数据处理。内部存储器121可以包括存储程序区和存储数据区。其中,存储程序区可存储操作系统,至少一个功能所需的应用程序(比如声音播放功能,图像播放功能等)等。存储数据区可存储电子设备100使用过程中所创建的数据(比如音频数据,电话本等)等。此外,内部存储器121可以包括高速随机存取存储器,还可以包括非易失性存储器,例如至少一个磁盘存储器件,闪存器件,通用闪存存储器(universal flash storage,UFS)等。
在本申请的实施例中,处理器110通过运行存储在内部存储器121的指令,可以保留图像中一个或多个个体对象的色彩,并对该一个或多个个体对象以外的背景进行灰化、虚化或背景替换等后处理。
电子设备100可以通过音频模块170,扬声器170A,受话器170B,麦克风170C,耳机接口170D,以及应用处理器等实现音频功能。例如音乐播放,录音等。
距离传感器180F,用于测量距离。电子设备100可以通过红外或激光测量距离。在一些实施例中,拍摄场景,电子设备100可以利用距离传感器180F测距以实现快速对焦。
环境光传感器180L用于感知环境光亮度。电子设备100可以根据感知的环境光亮度自适应调节显示屏194亮度。环境光传感器180L也可用于拍照时自动调节白平衡。环境光传感器180L还可以与接近光传感器180G配合,检测电子设备100是否在口袋里,以防误触。
触摸传感器180K,也称“触控面板”。触摸传感器180K可以设置于显示屏194,由触摸传感器180K与显示屏194组成触摸屏,也称“触控屏”。触摸传感器180K用于检测作用于其上或附近的触摸操作。触摸传感器可以将检测到的触摸操作传递给应用处理器,以确定触摸事件类型。可以通过显示屏194提供与触摸操作相关的视觉输出。在另一些实施例中,触摸传感器180K也可以设置于电子设备100的表面,与显示屏194所处的位置不同。
可以理解的是,本申请实施例示意的结构并不构成对电子设备100的具体限定。在本申请另一些实施例中,电子设备100可以包括比图示更多或更少的部件,或者组合某些部件,或者拆分某些部件,或者不同的部件布置。图示的部件可以以硬件,软件或软件和硬件的组合实现。
在本申请实施例中,处理器110中的NPU可以对图像进行实例分割,确定图像上不同的个体对象所在的区域。在拍摄场景下,参见图1B,摄像头193和ISP可以采集获得彩色图像,处理器110中的NPU可以对ISP处理后的图像进行实例分割,确定图像上不同个体所在的掩膜(mask)区域。处理器110可以遍历彩色图像中的每个像素点,若该像素点在目标对象(例如用户指定的)包括的一个或多个个体所在的mask区域内,则对该像素点进行灰化、虚化或背景替换等后处理;若该像素点不在目标对象所在的区域内,则保留该像素点的像素值。从而,处理器110可以以个体对象为单位,保留特定的一个或多个个体对象所在区域的色彩,并将其他区域进行灰化、虚化 或背景替换等后处理,从而提高色彩保留的灵活性和用户使用体验。在录像场景下,视频编解码器还可以对后处理后的图像数据进行编码,以便生成特定格式的视频文件。
电子设备100的软件系统可以采用分层架构,事件驱动架构,微核架构,微服务架构,或云架构。本申请实施例以分层架构的Android系统为例,示例性说明电子设备100的软件结构。
图2是本申请实施例的电子设备100的软件结构框图。分层架构将软件分成若干个层,每一层都有清晰的角色和分工。层与层之间通过软件接口通信。在一些实施例中,将Android系统分为四层,从上至下分别为应用程序层,应用程序框架层,安卓运行时(Android runtime)和系统库,以及内核层。应用程序层可以包括一系列应用程序包。
如图2所示,应用程序包可以包括相机,图库,日历,通话,地图,导航,WLAN,蓝牙,音乐,视频,短信息等应用程序。
应用程序框架层为应用程序层的应用程序提供应用编程接口(application programming interface,API)和编程框架。应用程序框架层包括一些预先定义的函数。
如图2所示,应用程序框架层可以包括窗口管理器,内容提供器,视图系统,电话管理器,资源管理器,通知管理器等。
窗口管理器用于管理窗口程序。窗口管理器可以获取显示屏大小,判断是否有状态栏,锁定屏幕,截取屏幕等。
内容提供器用来存放和获取数据,并使这些数据可以被应用程序访问。所述数据可以包括视频,图像,音频,拨打和接听的电话,浏览历史和书签,电话簿等。
视图系统包括可视控件,例如显示文字的控件,显示图片的控件等。视图系统可用于构建应用程序。显示界面可以由一个或多个视图组成的。例如,包括短信通知图标的显示界面,可以包括显示文字的视图以及显示图片的视图。
电话管理器用于提供电子设备100的通信功能。例如通话状态的管理(包括接通,挂断等)。
资源管理器为应用程序提供各种资源,比如本地化字符串,图标,图片,布局文件,视频文件等等。
通知管理器使应用程序可以在状态栏中显示通知信息,可以用于传达告知类型的消息,可以短暂停留后自动消失,无需用户交互。比如通知管理器被用于告知下载完成,消息提醒等。通知管理器还可以是以图表或者滚动条文本形式出现在系统顶部状态栏的通知,例如后台运行的应用程序的通知,还可以是以对话窗口形式出现在屏幕上的通知。例如在状态栏提示文本信息,发出提示音,电子设备振动,指示灯闪烁等。
Android Runtime包括核心库和虚拟机。Android runtime负责安卓系统的调度和管理。
核心库包含两部分:一部分是java语言需要调用的功能函数,另一部分是安卓的核心库。
应用程序层和应用程序框架层运行在虚拟机中。虚拟机将应用程序层和应用程序框架层的java文件执行为二进制文件。虚拟机用于执行对象生命周期的管理,堆栈管理,线程管理,安全和异常的管理,以及垃圾回收等功能。
系统库可以包括多个功能模块。例如:表面管理器(surface manager),媒体库(Media Libraries),三维图形处理库(例如:OpenGL ES),2D图形引擎(例如:SGL)等。
表面管理器用于对显示子系统进行管理,并且为多个应用程序提供了2D和3D图层的融合。
媒体库支持多种常用的音频,视频格式回放和录制,以及静态图像文件等。媒体库可以支持多种音视频编码格式,例如:MPEG4,H.264,MP3,AAC,AMR,JPG,PNG等。
三维图形处理库用于实现三维图形绘图,图像渲染,合成,和图层处理等。
2D图形引擎是2D绘图的绘图引擎。
内核层是硬件和软件之间的层。内核层至少包含显示驱动,摄像头驱动,音频驱动,传感器驱动。
在本申请实施例中,参见图2,系统库中还可以包括图像处理库。图像处理库可以通过实例分割获得图像上不同个体对象分别所在的区域,以个体对象为单位保留特定的一个或多个个体对象所在区域内像素点的像素值,从而保留一个或多个个体对象的色彩,并将该一个或多个个体对象所在区域以外的其他区域进行灰化、虚化或背景替换等处理。
为便于理解,本申请以下实施例将以电子设备为具有图1A和图2所示结构的手机为例,结合附图对本申请实施例提供的图像留色方法进行阐述。
本申请实施例提供了一种图像留色方法,可以应用于录像场景,参见图3,该方法包括:
301、手机检测到用户打开相机应用的操作后,显示拍摄预览界面,该拍摄预览界面包括预览图像。
手机检测到用户打开相机应用的操作后,可以启动相机应用(以下也可简称相机),并显示拍摄预览界面,该拍摄预览界面包括预览图像。此时,该预览图像为摄像头和ISP获得的原始图像,该原始图像为彩色图像。
其中,用户打开相机的操作可以有多种。示例性的,用户打开相机的操作可以为点击图4中的(a)所示的相机图标401的操作。手机检测到该操作后可以启动相机应用,并显示图4中的(b)所示的拍摄预览界面,该拍摄预览界面包括预览图像,该预览图像为摄像头采集获得的彩色图像。
再示例性的,用户打开相机的操作可以为打开相机的语音指示操作,手机检测到该操作后可以启动相机应用,并显示图4中的(b)所示的拍摄预览界面。
302、手机进入目标拍摄模式,该目标拍摄模式为录像模式。
在一些实现方式中,手机启动相机应用后,默认进入拍照模式等非录像模式。手机检测到用户进入录像模式的指示操作后,进入录像模式。示例性的,手机启动相机后默认进入拍照模式,并显示如图4中的(b)所示的拍照模式下的拍摄预览界面。手机检测到用户点击图4中的(b)所示的控件402的操作后进入录像模式,并显示如图4中的(c)所示的录像模式下的拍摄预览界面。
在另一些实现方式中,手机启动相机应用后,默认进入录像模式(例如上一次打开相机应用后使用的是录像模式),并显示如图4中的(c)所示的录像模式下的拍摄 预览界面。
手机还可以通过其他方式进入录像模式,本申请实施例不予限定。
303、手机检测到用户的预设操作1后,进入留色模式。
在留色模式下,手机可以对摄像头获得的彩色图像进行色彩保留处理,使得图像上的一个或多个个体对象所在的区域保留为彩色,其他区域进行灰化、虚化或背景替换等后处理。
其中,该预设操作1用于指示手机进入留色模式。手机进入留色模式后,显示拍摄预览界面,该拍摄预览界面包括预览图像。
用户指示手机进入留色模式的方式可以有多种。例如,在一些实施例中,拍摄预览界面上包括用于指示留色模式的控件1,当手机检测到用户点击控件1的操作后,手机进入留色模式。示例性的,该控件1可以为图5中的(a)所示的控件501,或者图5中的(b)所示的控件502。
在另一些实施例中,参见图6中的(a),拍摄预览界面上包括滤镜控件601。手机检测到用户点击控件601后,参见图6中的(b),手机显示用于指示留色模式的控件602。手机检测到用户点击控件602后,进入留色模式。
在另一些实施例中,手机在显示拍摄预览界面后,若检测到用户进入留色模式或使用色彩保留功能的语音指示操作,则进入留色模式。
304、手机对摄像头获取的图像进行实例分割,确定图像上不同个体对象所在的区域。
其中,实例分割是指在语义分割的基础上,区分同一类型的物体中的不同个体。语义分割是指对图像中的物体进行像素级的分割,确定像素所属的物体类型。例如,物体类型可以包括人物、车辆、建筑、树、狗或猫等。同一图像上同一类型的物体可以包括一个或多个个体对象。例如,同一图像上可以包括多个人或多辆车等。
实例分割的方法可以有多种,例如基于卷积神经网络(convolutional neural network,CNN)的分割方法、基于阈值的分割方法、基于区域的分割方法、基于边缘的分割方法以及基于特定理论的分割方法等。实例分割是指分割出每个人像或者每个物体在图像上所在的区域。
以采用基于CNN的深度学习算法进行实例分割为例进行说明。基于该算法,在摄像头获取到原始图像后,手机可以对原始图像进行下采样,转换为分辨率较低的图像进行CNN的复杂计算,以降低运算量。手机将原始图像的M x N尺寸(即分辨率为M x N)处理为m x n尺寸,m小于M,且n小于N。手机通过卷积与下采样操作(包括但不限于stride卷积、池化pooling等),逐层提取图像的语义特征,得到尺寸分别为m1 x n1,m2 x n2,m3 x n3的多尺度特征图,其中m1,m2,m3成倍数关系且小于m;n1,n2,n3成倍数关系且小于n。而后,手机经过运算得到待分割目标(例如,人,车,或建筑等)在图像中所处的位置,回归出目标所在的区域,并框出目标所在区域的边界框(bounding box),给出目标在图像中的坐标。至此,手机实现了目标检测。在目标检测之后,手机在每个边界框内进行图像实例分割,得到每个个体对象(以下简称个体)所在的区域(或称所在的mask区域),从而完成实例分割操作。
示例性的,手机针对摄像头采集到的某张图像进行实例分割的结果可以参见图7 中的(a),手机识别出来不同个体分别所在的区域。其中,不同个体所在区域对应不同的灰度值,同一个体所在区域对应同一灰度值。手机针对该图像进行语义分割的结果可以参见图7中的(b),手机识别除了不同类型的物体所在的区域,不同类型的物体所在区域对应不同的灰度值,同一类型的物体所在区域对应同一灰度值。
305、手机确定目标对象和目标处理模式,并保留预览图像上目标对象所在区域的色彩,根据目标处理模式对背景区域进行处理,该目标对象包括一个或多个个体,该多个个体属于同一类型或不同类型。
在一些实施例中,在进入留色模式后,手机可以显示留色模式对应的文字信息、控件或标记等,以提示用户当前处于留色模式。示例性的,参见图8中的(a),控件800被选中以表示手机当前处于留色模式。
在另一些实施例中,手机在首次或每次进入留色模式后,可以通过显示信息或声音提示等方式告知用户留色模式的功能和作用。例如,参见图8中的(a),手机可以通过显示文字信息来提示用户“在留色模式下,您可以保留一个或多个个体所在区域的色彩”。
在进入留色模式后,手机可以确定目标处理模式,并根据目标处理模式对待保留色彩的目标对象所在区域以外的其他区域进行后处理,从而获得个性化、多样化的图像处理效果。其中,目标对象可以包括一个或多个个体对象。在本申请实施例中,目标对象所在的区域可以称为目标区域,目标对象所在区域以外的其他区域可以称为背景区域。
例如,该目标处理模式可以包括灰化模式、虚化模式或背景替换模式等。
当目标处理模式为灰化模式时,在一些实施例中,手机可以在保留目标区域内的图像色彩的情况下,将背景区域内的像素点的像素值转换为灰度值,将背景区域内的彩色图像转换为灰度图像(也称黑白图像),从而突出目标对象。其中,像素值用于表示像素点的颜色,例如像素值可以为RGB值。在灰化模式下,手机可以将背景区域内像素点的RGB值处理为R值=G值=B值。
在另一些实施例中,手机可以在保留目标区域内的图像色彩的情况下,将背景区域内的像素点的像素值转换为特定数值的像素值,从而将背景区域内的图像转换为特定的颜色。例如,手机可以将背景区域内的像素点的图像转换为蓝色、红色、黑色或白色等。
当目标处理模式为虚化模式时,手机可以在保留目标区域内的图像色彩,并清晰显示目标对象所在区域内图像的情况下,对背景区域进行虚化处理,从而突出目标对象。在一些实施例中,若目标处理模式为虚化模式,则手机还可以根据用户的指示操作调整背景区域的虚化程度。
当目标处理模式为背景替换模式时,手机可以在保留目标区域内的图像色彩的情况下,将背景区域内的图像替换为背景图片(即另一张图像)上相同位置的区域内的图像,实现目标对象背景的任意替换,获得个性化的图像。
若目标处理模式为背景替换模式,则手机还可以提示用户选择一张待替换的背景图片,以便后续将背景区域替换为该背景图片上的相应位置的区域。若用户未选择背景图片,则手机采用默认的背景图片进行背景替换。在一些实施例中,目标对象在背 景图片上的位置或大小等,还可以根据用户的指示操作进行调整。
在一些实施例中,在进入留色模式后,手机可以根据用户的指示操作确定目标处理模式。
在另一些实施例中,在进入留色模式后,该目标处理模式为预设的处理模式,或者上一次使用的处理模式。手机还可以根据用户的指示操作切换目标处理模式。
在步骤305中,手机可以确定目标对象,从而保留目标对象所在区域的色彩。例如,手机可以根据用户的指示操作,以个体为选择单位,设置目标对象包括的一个或多个个体,以便后续可以保留图像中的一个个体、不同类型的多个个体或同一类型的多个个体的色彩。其中,以个体为选择单位,从单个个体的维度来进行色彩保留设置,可以更为精确地选择待保留色彩的主角,提高色彩保留设置的灵活性。
举例来说,录像场景为群舞表演,目标对象可以为领舞,手机可以保留领舞所在区域的色彩;再比如,录像场景为乐队演奏场景,目标对象可以为乐队主唱,手机可以保留乐队主唱所在区域的色彩;再比如,录像场景为演唱会现场,目标对象为歌手,手机可以保留歌手所在区域的色彩。
在其他一些技术方案中,手机可以保留目标对象所在区域的色彩以及与目标对象重叠的物体所在区域的色彩。举例来说,目标对象为歌手,歌手手上握有话筒或乐器等物体,手机可以保留歌手所在区域的色彩,并保留歌手握持的话筒或乐器等物体的色彩。
以进入留色模式后,目标处理模式默认为灰化模式为例进行说明。在一些实施例中,在进入留色模式后,当摄像头获取的图像上包括人物时,手机默认拍摄预览界面上,预览图像上的目标对象为所有人物,目标对象所在目标区域内的图像色彩被保留,背景区域内的图像默认为灰度图像。手机可以根据用户的指示操作从目标对象中删除或添加一个或多个个体。
需要说明的是,为便于区分图像上保留色彩的区域(也称留色区域)和非留色区域,在附图表示的手机所显示的图像上,左斜线填充的部分表示保留色彩的区域。
例如,手机可以通过声音或通过显示提示信息等方式,提示用户指定目标对象。示例性的,在进入留色模式后,手机显示的拍摄预览界面可以参见图8中的(a)。如图8中的(a)所示,手机可以提示用户:请点击图片上的个体,以删除或添加待保留色彩的对象。参见图8中的(a),手机检测到用户点击人物1的操作后,手机从目标对象中删除人物1,目标对象包括人物2,参见图8中的(b),人物1所在区域变为灰度图像,手机保留人物2所在区域的色彩。而后,参见图8中的(b)手机检测到用户点击小狗的操作后,手机在目标对象中添加小狗,目标对象包括人物2和小狗,如图8中的(c)所示,小狗所在区域内的图像变为彩色图像,人物2所在区域内的图像仍为彩色图像,其他区域内的图像为灰度图像。
需要注意的是,在图8的(a)等附图表示的手机所显示的图像上,背景区域内显示的文字“灰度图像”表示背景区域内的图像为灰度图像。
需要说明的是,在本申请实施例中,图像上某个区域内的图像为彩色/灰度图像,也可以简述为某个区域为彩色/灰度图像,或者简述为某个区域内为彩色/灰度图像。
在另一些实施例中,在进入留色模式后,当摄像头获取的图像上包括多个人时, 手机默认预览图像上的目标对象为最靠近中间区域的一个人或多个人。手机还可以根据用户的指示操作在目标对象中添加或删除一个或多个个体,并根据修改后的目标对象进行色彩保留。
在另一些实施例中,在进入留色模式后,当摄像头获取的图像上包括一个人物时,手机默认预览图像上的目标对象为该人物。例如,预览图像包括人物1,人物1为目标对象;当人物1移出手机的画面范围后,预览图像上未包括目标对象,预览图像全部为灰度图像;后续,当摄像头获取的图像上出现人物2后,预览图像上的目标对象自动设定为人物2。其中,人物2与人物1可以相同或不同。其中,图像上出现人物2是指,图像上出现人物2的一部分或全部。手机还可以根据用户的指示操作在目标对象中添加或删除一个或多个个体,并根据修改后的目标对象进行色彩保留。
示例性的,参见图9中的(a),进入留色模式后,目标对象为人物1。参见图9中的(b),在人物1移出手机的画面范围,且手机检测到摄像头获得的图像上包括人物2后,目标对象为人物2,手机在预览图像上保留人物2所在区域的色彩。
或者,参见图9中的(c),在人物1移出手机的画面范围后,整个预览图像为灰度图像。手机再次检测到摄像头获取的图像上的人物1后,预览图像上的目标对象为人物1,参见图9中的(d),手机在预览图像上保留人物1所在区域的色彩。
在另一些实施例中,在进入留色模式后,手机默认预览图像上的目标对象为,摄像头获取的图像上首先出现的人物。当目标对象移出手机的画面范围后,预览图像上未包括目标对象,预览图像为灰度图像;后续,手机还可以根据用户的指示操作在目标对象中添加或删除一个或多个个体,并根据修改后的目标对象进行色彩保留。
示例性的,参见图10中的(a),进入留色模式后,目标对象为人物1。人物1移出手机的画面范围后,参见图10中的(b),整个预览图像为灰度图像。手机检测到用户点击小狗的操作后,目标对象为小狗,参见图10中的(c),手机在预览图像上保留小狗所在区域的色彩。
在另一些实施例中,在进入留色模式后,手机默认预览图像上的目标对象为摄像头获取到的图像上最中间的个体或者位于黄金分割点位置的个体。例如,最中间的个体是一只小狗,或者是一座建筑等。手机还可以根据用户的指示操作在目标对象中添加或删除个体,并根据修改后的目标对象进行色彩保留。
在另一些实施例中,在进入留色模式后,手机默认预览图像上目标对象为摄像头获取到的图像上占用面积最大的一个个体。手机还可以根据用户的指示操作在目标对象中添加或删除一个或多个个体,并根据修改后的目标对象进行色彩保留。
在另一些实施例中,在进入留色模式后,手机默认按照预设的类型顺序确定目标对象。例如,该类型顺序为人物,动物,建筑等。若摄像头获取的图像上包括人物,则预览图像上的目标对象为人物;若摄像头获取的图像上不包括人物而包括动物,则预览图像上的目标对象为动物;若摄像头获取的图像上不包括人物和动物而包括建筑,则预览图像上的目标对象为建筑。手机还可以根据用户的指示操作在目标对象中添加或删除一个或多个个体,并根据修改后的目标对象进行色彩保留。
在另一些实施例中,该目标对象为用户通过手机的系统设置界面预先设置的对象。手机还可以根据用户的指示操作在目标对象中添加或删除一个或多个个体,并根据修 改后的目标对象进行色彩保留。
也就是说,进入留色模式后,手机可以先自动确定目标对象,而后还可以根据用户的指示操作在目标对象中添加或删除一个或多个个体,并根据修改后的目标对象进行色彩保留。
在另一些实施例中,手机进入留色模式后不存在默认的目标对象,手机显示纯彩色图像或纯灰度图像。手机确定用户的选择一个或多个个体为目标对象,并在预览图像上保留目标区域的色彩(即保留目标区域内图像的色彩),背景区域内的图像处理为灰度图像。例如,手机可以通过声音或通过显示提示信息等方式,提示用户指定目标对象。示例性的,在进入留色模式后,参见图11中的(a),预览图像为灰度图像,手机可以通过文字信息提示用户:请点击图片上的个体,以指定待保留色彩的对象。手机检测到用户点击人物2的操作后确定人物2为目标对象,参见图11中的(b),手机在拍摄预览界面上保留人物2所在区域的色彩。手机检测到用户又点击小狗的操作后,参见图11中的(c),手机在拍摄预览界面上保留人物2和小狗所在区域的色彩。
再示例性的,在进入留色模式后,手机提示用户:请框选待保留色彩的对象。手机检测到用户框选一个区域的操作后,目标对象包括该区域内的个体,手机包括目标对象所在区域的色彩。
在目标对象确定后,若目标处理模式发生变化,则目标区域和背景区域内的图像效果也会相应变化。示例性的,参见图12中的(a),进入留色模式后,拍摄预览界面包括灰化模式控件1201、虚化模式控件1202和背景替换模式控件1203。目标处理模式预设的灰化模式,灰化模式控件1201被选中,目标对象为人物2,人物2所在区域为彩色,背景区域为灰度图像。参见图12中的(a),手机检测到用户选择虚化模式控件1202的操作后,将目标处理模式切换为虚化模式。如图12中的(b)所示,虚化模式控件1202被选中,目标对象为人物2,人物2所在区域为清晰的彩色图像,背景区域为虚化图像,即虚化处理后的图像。
再示例性的,参见图13A中的(a),背景替换模式控件被选中,目标处理模式为背景替换模式,目标对象为人物2。拍摄预览界面上包括至少一张背景图片。手机检测到用户点击背景图片1301的操作后,参见图13A中的(b),手机将预览图像上的背景区域替换为背景图片1301上相应位置的区域。也可以理解为,手机将预览图像上人物2的图像叠加到背景图片上。并且,对比图13A中的(b)-(c)可知,目标对象的位置不同,目标图像上的替换区域也不同。
再示例性的,进入留色模式后,目标处理模式为预设的灰化模式,预览图像为纯灰度图像或纯彩色图像。若手机根据用户的指示操作将目标处理模式切换为虚化模式,则在一些实施例中,预览图像为清晰的纯彩色图像;在另一些实施例中,预览图像的中间区域为清晰的彩色图像,其他区域为虚化图像。手机检测到用户指示目标对象的操作后,将目标对象所在区域保留为清晰的彩色图像,将背景区域设置为虚化图像。
再示例性的,进入留色模式后,目标处理模式为预设的灰化模式,预览图像为纯灰度图像或纯彩色图像。若手机根据用户的指示操作将目标处理模式由灰化模式切换为背景替换模式,则预览图像为纯彩色图像。手机检测到用户指示目标对象的操作后, 保留目标对象所在区域内的彩色图像,将背景区域替换为背景图片。
手机在确定目标对象和目标处理模式后,根据目标对象和目标处理模式对摄像头和ISP获取的原始图像进行处理。从而,在拍摄预览界面显示的预览图像上,目标区域的色彩被保留(即目标区域内图像的色彩被保留),背景区域内的图像为根据目标处理模式处理后的图像。
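为便于理解上述处理流程，下面给出一段示意性的代码草图（采用Python与OpenCV/NumPy编写，函数名apply_target_mode、高斯核大小等参数均为本文之外的示例性假设，并非本申请方案的限定实现），示意“保留目标mask区域的色彩、按目标处理模式统一处理背景区域”的基本形式：

    import cv2
    import numpy as np

    def apply_target_mode(frame_bgr, target_mask, mode, bg_picture=None):
        # frame_bgr: 彩色原始图像(H x W x 3); target_mask: 目标区域的布尔mask(H x W)
        # mode: 'gray'(灰化模式) / 'blur'(虚化模式) / 'replace'(背景替换模式)
        mask3 = np.repeat(target_mask[:, :, None], 3, axis=2)
        if mode == 'gray':
            gray = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2GRAY)
            background = cv2.cvtColor(gray, cv2.COLOR_GRAY2BGR)    # 背景区域处理为灰度图像
        elif mode == 'blur':
            background = cv2.GaussianBlur(frame_bgr, (31, 31), 0)  # 背景区域处理为虚化图像
        elif mode == 'replace':
            background = bg_picture                                # 背景区域替换为背景图片相同位置的内容
        else:
            raise ValueError(mode)
        # 目标区域保留原彩色像素值, 其余区域取处理后的背景
        return np.where(mask3, frame_bgr, background)

在该草图中，切换目标处理模式只改变mode对应的分支，目标mask本身不变，这与上文目标对象确定后切换处理模式的描述相对应。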
在以上实施例描述的方案中,手机进入留色模式后在拍摄预览界面上显示预览图像,并且手机进入留色模式触发手机对摄像头后续获取的每张图像进行实例分割。手机根据实例分割结果进行留色处理,并在拍摄预览界面上显示留色处理后的预览图像。
在一些技术方案中,手机检测到用户针对目标对象的操作后,针对当前帧图像进行留色处理,从而在预览图像上尽快让用户看到响应于用户的操作获得的留色效果,给用户以即时响应。
示例性的,手机针对摄像头获取到的图像进行留色处理。举例来说,目标处理模式为灰化模式,摄像头获取的图像1可以参见图13B中的(a),图像1为彩色图像。手机对图像1进行实例分割,确定图像1上每个个体对象所在的mask区域。参见图13B中的(b),手机在拍摄预览界面上显示预览图像1,该预览图像1为图像1处理成的纯灰度图像。参见图13B中的(b),若手机检测到用户点击人物2的操作,则手机确定目标对象为人物2,并确定人物2对应的mask区域。手机保留图像1上人物2的mask区域内的色彩,将其他区域处理为灰度图像,从而生成并显示图13B中的(c)所示的留色处理后的预览图像2。而后,参见图13B中的(d),摄像头获取到图像2,手机对图像2进行实例分割,并确定目标对象人物2所在的mask区域。手机保留图像2上人物2的mask区域内的色彩,将其他区域处理为灰度图像,从而生成并显示图13B中的(e)所示的预览图像3。而后,参见图13B中的(e),若手机检测到用户点击人物2的操作,则从目标对象中移除人物2,此时目标对象不包括任何个体对象,手机将图像2处理为纯灰度图像,从而生成并显示图13B中的(f)所示的预览图像4。
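作为上述灰化模式下逐帧留色处理的一个示意性草图（仅为基于本文描述给出的假设性示例，采用Python与OpenCV/NumPy，函数与变量命名均为假设），可以按实例分割得到的mask进行如下处理：

    import cv2
    import numpy as np

    def color_retain_gray(frame_bgr, masks, target_ids):
        # masks: 实例分割得到的各个体mask列表(布尔数组); target_ids: 目标对象对应的mask下标集合
        gray = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2GRAY)
        out = cv2.cvtColor(gray, cv2.COLOR_GRAY2BGR)      # 先将整帧处理为纯灰度图像
        if not target_ids:                                # 目标对象不包括任何个体时, 显示纯灰度图像
            return out
        target = np.zeros(frame_bgr.shape[:2], dtype=bool)
        for i in target_ids:                              # 目标对象可以包括一个或多个个体
            target |= masks[i]
        out[target] = frame_bgr[target]                   # 目标对象的mask区域内保留原彩色像素值
        return out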
再示例性的，手机针对预览图像进行留色处理。举例来说，目标处理模式为灰化模式，摄像头获取到图像1，图像1为彩色图像。手机对图像1进行实例分割，确定图像1上每个个体对象所在的mask区域。手机在拍摄预览界面上显示预览图像1，该预览图像1为图像1处理成的纯灰度图像。若手机检测到用户点击人物2的操作，则确定目标对象为人物2，并确定人物2对应的mask区域。手机将预览图像1上人物2的mask区域恢复成彩色图像，其他区域仍保持为灰度图像，从而生成并显示留色处理后的预览图像2。
在另一些技术方案中，手机检测到用户针对目标对象的操作后，针对下一帧图像进行留色处理。示例性的，目标处理模式为灰化模式，摄像头获取的图像1可以参见图13C中的(a)，图像1为彩色图像。手机对图像1进行实例分割，确定图像1上每个个体对象所在的mask区域。参见图13C中的(b)，手机在拍摄预览界面上显示预览图像1，该预览图像1为图像1处理成的纯灰度图像。若手机检测到用户点击人物2的操作，则确定目标对象为人物2。而后，参见图13C中的(c)，摄像头获取到图像2，手机对图像2进行实例分割，并确定目标对象人物2所在的mask区域。手机保留图像2上人物2的mask区域内的色彩，将其他区域处理为灰度图像，从而生成并显示图13C中的(d)所示的预览图像2。而后，若手机检测到用户点击人物2的操作，则从目标对象中移除人物2，此时目标对象不包括任何个体对象。而后，参见图13C中的(e)，摄像头获取到图像3，手机对图像3进行实例分割，将图像3处理为纯灰度图像，从而生成并显示图13C中的(f)所示的预览图像3。
在其他一些实施例中，手机进入留色模式后在拍摄预览界面上显示预览图像。手机进入留色模式不会触发手机进行实例分割，手机进入留色模式并检测到用户指示目标对象的操作后，才触发对摄像头后续获取的图像进行实例分割。与上述实施例类似，手机根据实例分割结果，可以对当前帧图像进行留色处理或者对下一帧图像进行处理，并在拍摄预览界面上显示留色处理后的预览图像。与上述实施例类似，手机根据实例分割结果，可以对摄像头获取的图像进行留色处理，或者对预览图像进行留色处理。
以手机根据实例分割结果对当前帧图像进行留色处理,且对摄像头获取的图像进行留色处理为例进行说明。示例性的,目标处理模式为灰化模式,摄像头获取的图像1可以参见图13D中的(a),图像1为彩色图像。参见图13D中的(b),手机在拍摄预览界面上显示预览图像1,该预览图像1为图像1处理成的纯灰度图像。若手机检测到用户在预览图像上的点击操作,则对图像1进行实例分割,确定点击位置所在区域为人物2的mask区域,确定目标对象包括人物2。手机保留图像1上人物2的mask区域内的色彩,将其他区域处理为灰度图像,从而生成并显示图13D中的(c)所示的留色处理后的预览图像2。而后,参见图13D中的(d),摄像头获取到图像2,手机对图像2进行实例分割,并确定目标对象人物2所在的mask区域。手机保留图像2上人物2的mask区域内的色彩,将其他区域处理为灰度图像,从而生成并显示图13D中的(e)所示的预览图像3。
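作为“根据点击位置确定所点中个体”这一步的一个示意（以下仅为假设性的示例代码，mask的组织方式与变量命名均为假设）：

    def mask_index_at(masks, x, y):
        # masks: 实例分割得到的各个体mask列表(布尔数组); 返回点击坐标(x, y)命中的个体mask下标
        for i, m in enumerate(masks):
            if m[y, x]:                # mask按[行, 列]即[y, x]索引
                return i
        return None                    # 点击位置不属于任何个体

    # 用法示意: idx = mask_index_at(masks, tap_x, tap_y);
    # 若idx不为None, 则在目标对象集合target_ids中添加或删除该个体, 例如 target_ids ^= {idx}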
在其他一些实施例中,手机不需要先进入留色模式后,再进入灰化模式、虚化模式或背景替换模式;而可以直接进入灰化模式、虚化模式或背景替换模式。在该实施例中,在上述步骤302之后,手机可以不执行步骤303,附图虽未示出,但该方法还可以包括上述步骤304以及步骤300:
300、手机检测到用户的预设操作2后进入目标处理模式,并确定目标对象,保留预览图像上目标对象所在区域的色彩,根据目标处理模式对背景区域进行处理,该目标处理模式包括灰化模式、虚化模式和背景替换模式。
示例性的,参见图13E,在步骤302中进入录像模式后,手机显示的拍摄预览界面包括灰化模式控件1302、虚化模式控件1303和背景替换模式控件1304。手机检测到用户点击灰化模式控件1302的操作后,进入灰化模式。
在步骤302之后,若手机在步骤300中进入灰化模式,则手机可以确定目标对象,并保留目标对象所在区域的色彩,将背景区域设置为灰度图像。并且,手机还可以根据用户的指示操作修改目标对象。该目标对象包括一个或多个个体,该多个个体属于同一类型或不同类型。其中,手机确定和修改目标对象的方式可以参见上述步骤305中的相关描述。
举例来说,手机进入灰化模式后,预览图像为纯灰度图像或纯彩色图像。手机检测到用户在预览图像上指示目标对象的操作后,保留目标对象所在区域的色彩,并将背景区域设置为灰度图像。
在另一举例中,手机进入灰化模式后,若摄像头获取的图像上包括人物,则手机默认预览图像上的所有人物为目标对象,该所有人物所在区域为彩色图像,其他区域为灰度图像。
手机进入灰化模式后，还可以根据用户的指示操作切换到虚化模式或背景替换模式。在灰化模式下，若预览图像上包括目标对象，则切换到虚化模式后，目标对象所在区域为清晰的彩色图像，其他区域为虚化图像；或者，切换到背景替换模式后，手机显示的预览图像上目标对象所在区域的彩色图像被保留，其他区域替换为背景图片上相同位置的区域内的图像。在灰化模式下，若预览图像上未包括目标对象，预览图像为纯灰度图像或纯彩色图像，则切换到虚化模式或背景替换模式后，手机显示的预览图像为纯彩色图像。
在步骤302之后,若手机在步骤300中进入虚化模式,则手机可以确定目标对象,并保留目标对象所在区域的清晰的彩色图像,将背景区域设置为虚化图像。例如,手机检测到用户点击虚化模式控件1303的操作后,进入虚化模式。并且,手机还可以根据用户的指示操作修改目标对象。其中,手机确定和修改目标对象的方式可以参见上述步骤305中的相关描述。
举例来说,手机进入虚化模式后,预览图像为纯灰度图像,或者为纯彩色图像,或者中间区域为清晰的彩色图像且其他区域为虚化图像。手机检测到用户在预览图像上指示目标对象的操作后,保留目标对象所在区域的色彩,并将背景区域处理为虚化图像。
在另一举例中,手机进入虚化模式后,若摄像头获取的图像上包括人物,则手机默认预览图像上最靠近中间的人物为目标对象,目标对象所在区域为清晰的彩色图像,其他区域为虚化图像。
手机进入虚化模式后,还可以根据用户的指示操作切换到灰化模式或背景替换模式。在虚化模式下,若预览图像上包括目标对象,则切换到灰化模式后,手机显示的预览图像上目标对象所在区域为彩色图像,其他区域为灰度图像;或者,切换到背景替换模式后,目标对象所在区域的彩色图像被保留,其他区域替换为背景图片上相同位置的区域内的图像。在虚化模式下,若预览图像上未包括目标对象,为纯灰度图像,或者为纯彩色图像,或者为中间区域为清晰的彩色图像且其他区域为虚化图像,则切换到灰化模式后,手机显示的预览图像为纯灰度图像或纯彩色图像;或者,切换到背景替换模式后,手机显示的预览图像为纯彩色图像。
在步骤302之后，若手机在步骤300中进入背景替换模式，则手机可以确定目标对象，并保留目标对象所在区域的色彩，将背景区域替换为背景图片上相同位置的区域内的图像。例如，手机检测到用户点击背景替换模式控件1304的操作后，进入背景替换模式。并且，手机还可以根据用户的指示操作切换目标对象。其中，手机确定和切换目标对象的描述可以参见上述步骤305中的相关说明。
举例来说,手机进入背景替换模式后,预览图像为纯彩色图像。手机检测到用户在预览图像上指示目标对象的操作后,保留目标对象所在区域的彩色图像,并将背景区域替换为背景图片上相同位置的区域内的图像。
在另一举例中，手机进入背景替换模式后，若摄像头获取的图像上包括人物，则手机默认预览图像上最靠近中间的人物为目标对象，目标对象所在区域的彩色图像被保留，其他区域替换为背景图片上相同位置的区域内的图像。
手机进入背景替换模式后,还可以根据用户的指示操作切换到灰化模式或虚化模式。在背景替换模式下,若预览图像上包括目标对象,则切换到灰化模式后,目标对象所在区域为彩色图像,其他区域为灰度图像;或者,切换到虚化模式后,目标对象所在区域为清晰的彩色图像,其他区域替换为虚化图像。在背景替换模式下,若预览图像上未包括目标对象,为纯彩色图像,则切换到灰化模式后,手机显示的预览图像为纯灰度图像或纯彩色图像;或者,切换到虚化模式后,手机显示的预览图像为纯灰度图像,或者为纯彩色图像,或者中间区域为清晰的彩色图像且其他区域为虚化图像。
306、手机检测到用户的拍摄操作后显示拍摄界面,该拍摄界面包括录拍图像,录拍图像上目标区域的色彩被保留,背景区域内的图像为根据目标处理模式处理后的图像。
其中,该拍摄操作为录像操作。例如,用户的拍摄操作可以为用户点击图14中的(a)所示的拍摄控件1400的操作,或者用户的语音指示操作或手势操作等,本申请实施例不予限定。
手机检测到用户的拍摄操作后开始录像，保留录像过程中采集到的图像上目标区域的色彩，即保留目标区域内像素点的像素值；并根据目标处理模式对背景区域进行后处理。
例如,若目标处理模式为灰化模式,则手机可以对背景区域进行灰化处理,从而将背景区域内的像素值转换为灰度值,将背景区域内的图像转化为灰度图像。示例性的,在图14中的(a)所示的目标对象为人物2,目标处理模式为灰化模式的情况下,在开始拍摄后,拍摄界面上的录拍图像可以参见图14中的(b)。
由图14中的(b)可知,作为目标对象,人物2所在区域的色彩被保留,背景区域为灰度图像,从而可以突出目标对象,使得目标区域与背景区域形成鲜明对比,目标对象色彩亮丽,可以达到突出主体和闪耀主角的目的,给用户以视觉冲击,提高用户拍摄体验。
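示例性地（这里仅给出一种常见的灰度换算方式作为参考，并非对本申请中灰化处理实现方式的限定），灰度值可按亮度加权公式 Y = 0.299R + 0.587G + 0.114B 由像素的R、G、B分量计算得到，再将背景区域内每个像素的三个分量均置为其灰度值Y，即可将背景区域内的图像转换为灰度图像。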
若目标处理模式为虚化模式,则手机可以对背景区域进行虚化处理。例如,在图14中的(a)所示的目标对象为人物2的情况下,在开始拍摄后,拍摄界面上的录拍图像可以参见图14中的(c)。其中,作为目标对象,人物2所在区域的色彩被保留且可以清晰显示,背景区域被模糊和虚化处理,从而可以突出目标对象,使得目标区域与背景区域形成直观的对比,目标对象清楚绚丽,可以达到突出主体和闪耀主角的目的,给用户以视觉冲击,提高用户拍摄体验。
需要注意的是,在图14的(c)等附图表示的手机所显示的图像上,背景区域内显示的文字“虚化图像”表示背景区域内的图像为虚化图像。
若目标处理模式为背景替换模式,且用户在预览模式下指定了背景图片,则手机可以将背景区域内的图像替换为背景图片中相同位置的区域内的图像,实现替换目标对象所在背景的效果。并且,手机还可以对背景替换后获得的图像中,目标区域和背景图片的拼接位置进行平滑或羽化等处理,使得拼接边缘过渡平滑自然,获得较好的融合效果。
示例性的,在图14中的(a)所示的目标对象为人物2的情况下,在开始拍摄后,拍摄界面上的录拍图像可以参见图14中的(d)。由图14中的(d)可知,作为目标对象,人物2所在区域的色彩被保留,背景区域被替换为其他图像,可以使得图像更具个性化和创意,能够提高用户拍摄体验。
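作为背景替换及拼接边缘羽化的一个示意性草图（仅为假设性示例，采用Python与OpenCV/NumPy，函数名与羽化核大小等取值均为示例，并非本申请方案的限定实现）：

    import cv2
    import numpy as np

    def replace_background_feathered(frame_bgr, target_mask, bg_picture, feather=15):
        # 将背景区域替换为背景图片相同位置的内容, 并对目标区域与背景图片的拼接边缘做羽化
        # feather为羽化核大小(取奇数), 数值仅为示例
        alpha = cv2.GaussianBlur(target_mask.astype(np.float32), (feather, feather), 0)
        alpha = alpha[:, :, None]                          # 扩展为3通道融合权重
        out = alpha * frame_bgr.astype(np.float32) + (1.0 - alpha) * bg_picture.astype(np.float32)
        return out.astype(np.uint8)                        # 拼接边缘按羽化权重平滑过渡, 融合更自然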
在一些实施例中,在拍摄过程中,手机还可以根据用户的指示操作调整目标对象和目标处理模式。参见图3,本申请实施例提供的留色方法还可以包括步骤307:
307、在拍摄过程中,手机检测到用户调整目标对象的操作后,根据调整后的目标对象对目标区域进行色彩保留。
在拍摄过程中,手机可以根据用户的指示操作,以个体为单位调整目标对象,从而使得待保留色彩的对象可以灵活切换和精确设置,提高用户拍摄体验。
例如,在图15中的(a)所示的情况下,手机检测到用户点击人物1的操作后,将人物1添加至目标对象,参见图15中的(b),保留人物1和人物2所在区域的色彩。手机检测到用户点击人物2的操作后,将人物2从目标对象中删除,参见图15中的(c),手机保留人物1所在区域的色彩。
再例如,手机检测到用户的预设操作3后,进入目标对象修改模式。而后,手机可以根据用户的指示操作调整目标对象。示例性的,拍摄界面上可以包括对象调节控件,预设操作3可以为用户点击对象调节控件的操作。
308、在拍摄过程中,手机检测到用户调整目标处理模式的操作后,根据调整后的目标处理模式对背景区域进行后处理。
在拍摄过程中,手机根据用户的指示操作调整目标处理模式,从而灵活调整处理模式,提高视频画面效果的多样性。
例如,拍摄界面包括处理模式控件。参见图16中的(a)所示的拍摄界面,在目标处理模式为灰化模式的情况下,若手机检测到用户点击处理模式控件中的背景替换模式控件的操作,则目标处理模式切换为背景替换模式,手机显示的拍摄界面可以参见图16中的(b)。
在本申请的一些实施例中，在录像过程中，当目标对象移出手机的画面范围后，拍摄界面上不包括目标对象。该种情况下，若目标处理模式为灰化模式，则手机在拍摄界面上显示纯灰度图像。若目标处理模式为虚化模式，则手机在拍摄界面上显示纯虚化图像。若目标处理模式为背景替换模式，则手机在拍摄界面上显示背景图片。后续，在一些技术方案中，手机再次检测到目标对象出现后，继续保留目标对象所在目标区域的色彩。或者，在另一些技术方案中，手机检测到用户重新指定目标对象的操作后，再保留目标对象所在目标区域的色彩。
在本申请其他一些实施例中，在录像过程中，手机还可以多次退出和进入留色模式。在留色模式下，拍摄界面上的录拍图像为留色处理后的图像；在退出留色模式后，拍摄界面上的录拍图像可以为彩色图像。这样，手机可以录制出留色处理后的图像与彩色图像动态变化的视频，给用户以视觉冲击的体验，使得用户获得个性化和多样化的视频。
示例性的，参见图17中的(a)所示的拍摄界面，当前为留色模式。手机检测到用户点击滤镜控件1701的操作后，如图17中的(b)所示，手机显示清新模式控件1702等控件。手机检测到用户点击清新模式控件1702的操作后，进入清新模式，如图17中的(c)所示，拍摄界面上的录拍图像为彩色图像。手机检测到用户点击留色模式控件1703的操作后，如图17中的(d)所示，手机再次进入留色模式，并保留目标区域的色彩。
309、手机检测到用户停止拍摄的操作后,停止录像并生成视频。
手机检测到用户停止拍摄的操作后,可以停止录像,并对录像过程中的图像数据进行视频编码,从而生成留色模式下拍摄获得的视频文件。该视频文件中的视频图像进行了留色和后处理。
在步骤301-309描述的方案中,在开始录像之前,手机可以确定目标对象和目标处理模式;在录像过程中,手机可以对采集到的图像进行留色处理和后处理,从而拍摄获得视频。该视频的视频图像上,目标对象中的一个或多个个体所在目标区域内的图像色彩被保留。也就是说,手机可以以单个个体为单位,保留一个个体、不同类型的多个个体、或同一类型的多个个体的色彩,提高色彩保留的灵活性和精确针对性,突出目标对象,提高用户拍摄体验。并且,背景区域可以进行灰化、虚化或背景替换等后处理,可以提高图像处理的灵活性,使得用户获得个性化、多样化的视频图像。
以上是以手机在检测到用户的拍摄操作之前进入留色模式为例进行说明的,在其他一些实施例中,手机可以在检测到用户的拍摄操作后,再进入留色模式。在进入留色模式后,手机可以对录像过程中摄像头获取到的图像进行实例分割,确定预览图像上不同个体所在的区域。手机还可以确定目标处理模式和目标对象,从而保留采集到的图像上目标对象所在目标区域的色彩,并根据目标处理模式对背景区域进行后处理。
参见图18,该方法可以包括:
1801、手机检测到用户打开相机应用的操作后,显示拍摄预览界面,该拍摄预览界面包括预览图像。
步骤1801可以参见上述步骤301的相关描述。
1802、手机进入目标拍摄模式,该目标拍摄模式为录像模式。
步骤1802可以参见上述步骤302的相关描述。
1803、手机检测到用户的拍摄操作后显示拍摄界面,该拍摄界面包括彩色图像。
与上述步骤306中的拍摄界面不同,步骤1803中显示的拍摄界面上的录拍图像为未经上述留色处理和后处理的彩色图像。示例性的,拍摄界面可以参见图19中的(a)。
1804、手机检测到用户的预设操作4后,进入留色模式。
其中，预设操作4可以有多种操作。示例性的，手机检测到用户在图19中的(a)所示的拍摄界面上点击滤镜控件1901的操作后，如图19中的(b)所示，手机显示留色模式控件1902。手机检测到用户点击留色模式控件1902的操作后，进入留色模式；如图19中的(c)所示，拍摄界面上部分区域的色彩被保留，其他区域为灰度图像。该预设操作4可以包括用户点击滤镜控件1901并点击留色模式控件1902的操作。
1805、手机对摄像头获取的图像进行实例分割,确定图像上不同个体所在的区域。
步骤1805可以参见上述步骤304的相关描述。
1806、手机确定目标对象和目标处理模式，并保留录拍图像上目标对象所在区域的色彩，根据目标处理模式对背景区域进行处理，该目标对象包括一个或多个个体，该多个个体属于同一类型或不同类型。
手机在步骤1806中确定目标处理模式的方式可以参见上述步骤305中的相关描述。不同之处在于,手机在步骤305中在录像预览时确定目标处理模式;手机在步骤1806中在录像过程中确定目标处理模式,此处不再赘述。
手机在步骤1806中确定目标对象的方式可以参见上述步骤305中的相关描述。不同之处在于,手机在步骤305中在录像预览时确定目标对象;手机在步骤1806中在录像过程中确定目标对象,此处不再赘述。
在其他一些实施例中,手机可以不执行步骤1804,并在步骤1803后执行:手机检测到用户的预设操作5后进入目标处理模式,并确定目标对象,保留录拍图像上目标对象所在区域的色彩,根据目标处理模式对背景区域进行处理,该目标处理模式包括灰化模式、虚化模式和背景替换模式。关于该步骤的实现方式可以参见上述步骤300的相关说明。不同之处在于,此处,手机在录像过程中进行相关处理;而在步骤300中,手机在录像预览时进行相关处理。
手机在确定目标对象和目标处理模式后,根据目标对象和目标处理模式对采集到的视频图像进行处理。在拍摄界面显示的录拍图像上,目标区域的色彩被保留,背景区域内的图像为根据目标处理模式处理后的图像。
进一步地,该方法还可以包括:
1807、在拍摄过程中,手机检测到用户调整目标对象的操作后,根据调整后的目标对象对目标区域进行色彩保留。
步骤1807可以参见上述步骤307的相关描述。
1808、在拍摄过程中,手机检测到用户调整目标处理模式的操作后,根据调整后的目标处理模式对背景区域进行后处理。
步骤1808可以参见上述步骤308的相关描述。
1809、手机检测到用户停止拍摄的操作后,停止录像并生成视频。
步骤1809可以参见上述步骤309的相关描述。
在步骤1801-1809描述的方案中,在开始录像之后,手机可以确定目标对象和目标处理模式;在录像过程中,手机可以对采集到的图像进行留色处理和后处理,从而拍摄获得视频。该视频的视频图像上,目标对象中的一个或多个个体所在目标区域内的图像色彩被保留。也就是说,手机可以以单个个体为单位,保留一个个体、不同类型的多个个体、或同一类型的多个个体的色彩,提高色彩保留的灵活性和精确针对性,突出目标对象,提高用户拍摄体验。并且,背景区域可以进行灰化、虚化或背景替换等后处理,可以提高图像处理的灵活性,使得用户获得个性化、多样化的视频图像。
需要注意的是，以上是以录像模式为单路录像模式为例进行说明的。在多路录像模式下，每路视频画面均可以采用上述录像模式下的留色处理和后处理方法进行录制。或者，在多路录像模式下，手机可以根据用户的选择操作，针对其中某几路视频画面进行留色处理和后处理，从而获得更为个性化和多样化的视频图像。
本申请实施例提供了一种图像留色方法，可以应用于拍照场景，参见图20，该方法包括步骤2001-2005。该步骤2001-2005可以为上述步骤301-305。不同之处在于，步骤2002中的目标拍摄模式为拍照模式；并且，手机在步骤2003中基于拍照模式下的拍摄预览界面进入留色模式。示例性的，进入留色模式后，拍照模式下的拍摄预览界面可以参见图4中的(b)。示例性的，在目标处理模式为灰化模式，目标对象为人物2的情况下，拍摄预览界面可以参见图21中的(a)，手机保留了人物2所在区域的色彩。
参见图20,在步骤2005之后,该方法还可以包括:
2006、手机检测到用户的拍摄操作后拍摄获得照片,该照片上目标区域的色彩被保留,背景区域内的图像为根据目标处理模式处理后的图像。
其中，该拍摄操作为拍照操作。示例性的，手机检测到用户点击图21中的(a)所示的拍摄控件2100后，拍摄获得如图21中的(b)所示的照片，该照片上人物2所在的目标区域的色彩被保留，背景区域内的图像为根据目标处理模式（即灰化模式）处理后的图像。
在步骤2001-2006描述的方案中，手机可以对采集到的图像进行留色处理和后处理，从而拍摄获得照片。该照片上目标对象中的一个或多个个体所在目标区域的色彩被保留。也就是说，手机可以以单个个体为单位，保留一个个体、不同类型的多个个体、或同一类型的多个个体的色彩，提高色彩保留的灵活性和精确针对性，突出目标对象，提高用户拍摄体验。并且，背景区域可以进行灰化、虚化或背景替换等后处理，可以提高图像处理的灵活性，使得用户获得个性化、多样化的照片。
在其他一些实施例中，手机可以不执行步骤2003，并在步骤2002后执行：手机检测到用户的预设操作6后进入目标处理模式，并确定目标对象，保留预览图像上目标对象所在区域的色彩，根据目标处理模式对背景区域进行处理，该目标处理模式包括灰化模式、虚化模式和背景替换模式。关于该步骤的实现方式可以参见上述步骤300的相关说明。
以上是以手机拍摄获得一张照片为例进行说明的，手机还可以在连拍模式下，一次性拍摄获得多张照片。由于连拍模式下拍摄时长较短，连续拍摄的多张照片中待拍摄的个体对象基本不发生变化，因而连拍模式下的图像留色方法与拍摄一张照片时的图像留色方法类似，手机可以在检测到用户的拍摄操作之前确定目标对象和目标处理模式，从而拍摄获得多张照片，这里不再赘述。
在其他一些实施例中,在手机根据实例分割区分个体的基础上,手机还可以对图像上的个体进行部件分割。对于同一个体来说,不同的部件分割策略可以分割得到不同的部件。例如,在一种部件分割策略中,人可以包括头部,脖子、胳膊、手、衣服、腿和脚等部件。在另一种部件分割策略中,人可以包括头发、额头、耳朵、鼻子、脸、嘴、脖子和胳膊等部件。
本申请实施例可以基于部件分割,以部件为单位保留图像上的色彩,并对背景区域进行灰化、虚化或背景替换等后处理。在该种情况下,与上述基于实例分割的图像留色方法的不同之处在于,目标对象可以包括一个或多个部件,背景区域包括该一个或多个部件以外的其他区域。
可以理解的是,目标对象包括的多个部件可以属于同一个体,也可以属于不同的个体,本申请实施例不予限定。其中,该目标对象可以是默认的或用户指示的一个或多个部件。并且,该目标对象还可以根据用户的指示操作进行切换。
在一些技术方案中,目标对象可以为用户指示的部件。例如,手机检测到用户点击部件留色模式控件的操作后,进入部件留色模式。手机检测到用户点击上述人物2的头部的操作后,确定人物2的头部为目标对象,从而保留人物2的头部的色彩。手机还可以对人物2的头部以外的其他区域进行灰化、虚化或背景替换等后处理。
再例如,在图22中的(a)所示的拍摄预览界面上,手机检测到用户针对人物2个体的预设操作7(例如双击操作)后,可以对人物2进行部件分割。在一些实施例中,参见图22中的(b),人物2所在区域变为灰度图像。在一些实施例中,手机还可以显示该个体的部件分割情况。而后,手机检测到用户点击人物2裙子的操作后,确定人物2的裙子为目标对象,参见图22中的(c),手机保留人物2的裙子所在区域的色彩。手机对人物2的裙子以外的其他区域进行灰化、虚化或背景替换等后处理。手机检测到用户点击人物2头部的操作后,参见图22中的(d),手机保留人物2的裙子和头部所在区域的色彩。
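作为按部件确定目标区域的一个示意（以下为假设性的示例代码，部件名称与数据组织方式均为示例性假设）：

    import numpy as np

    def build_target_from_parts(part_masks, selected_parts):
        # part_masks: 部件分割得到的 {部件名: 布尔mask}; selected_parts: 用户点选的部件名集合, 如{'裙子', '头部'}
        h, w = next(iter(part_masks.values())).shape
        target = np.zeros((h, w), dtype=bool)
        for name in selected_parts:          # 目标对象可以包括一个或多个部件
            target |= part_masks[name]
        return target                        # 该mask内保留色彩, 其余区域再做灰化/虚化/背景替换等后处理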
在其他一些技术方案中,目标对象可以包括图像上与用户指示的部件颜色接近的各部件。在本申请的实施例中,颜色接近是指对应的像素点的像素值的差值小于预设阈值。例如,用户指示的部件为脖子,目标对象可以包括脖子,与脖子的肤色接近的脸和手等。
在其他一些实施例中,手机可以保留图像上特定颜色的部件的色彩。手机检测到用户点击某个位置后,目标对象包括与该位置的像素值接近的部件。在本申请的实施例中,像素值接近是指像素值的差值小于预设阈值。
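作为“与点击位置像素值接近的部件”这一判断的一个示意性草图（仅为假设性示例，这里用部件区域的平均像素值做近似判断，阈值取值也仅为示例，实际实现也可以采用逐像素比较等其他策略）：

    import numpy as np

    def parts_close_in_color(image_bgr, part_masks, x, y, threshold=30):
        # 返回与点击位置(x, y)处像素值接近(各分量差值均小于预设阈值)的部件名列表
        picked = image_bgr[y, x].astype(np.int32)
        close = []
        for name, mask in part_masks.items():
            if not mask.any():
                continue
            mean_color = image_bgr[mask].mean(axis=0)      # 以该部件区域的平均像素值作近似
            if np.all(np.abs(mean_color - picked) < threshold):
                close.append(name)
        return close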
这样，手机可以以单个部件为单位，保留一个部件、不同个体的多个部件、或同一个体的不同部件的色彩，减小色彩保留的设置粒度，提高色彩保留的灵活性和精确针对性，突出目标对象，使得拍摄获得的图像更具创意，提高用户拍摄体验。并且，背景区域可以进行灰化、虚化或背景替换等后处理，可以提高图像处理的灵活性，使得用户获得个性化、多样化的视频图像。
在其他一些实施例中,手机可以保留图像上包括特定颜色的个体的色彩。手机检测到用户点击某个位置的操作后,目标对象包括与该位置的像素值接近的像素点所属的个体。
在其他一些实施例中,手机还可以保留图像上特定颜色所在区域的色彩。手机检测到用户点击某个位置后,目标对象包括与该位置的像素点的像素值接近的区域。
这样,手机可以从颜色的维度,保留同一颜色对应的部件、个体或区域的色彩,提高色彩保留的灵活性和精确针对性,突出想要保留的颜色,使得拍摄获得的图像更具创意,提高用户拍摄体验。
在本申请的实施例中,在拍摄完成后,手机可以保存留色处理及后处理后的照片和视频。在一些实施例中,手机还可以保存未进行留色处理及后处理的原始图像。留色处理及后处理后的照片和视频的缩略图,与未经留色处理及后处理的照片和视频的缩略图可以区别显示。
示例性的,参见图23,图库中保存有留色处理及后处理后的照片的缩略图2301,以及未经留色处理及后处理的照片的缩略图2302。图库中保存有留色处理及后处理后的视频的缩略图2303,以及未经留色处理及后处理的视频的缩略图2304。
对比缩略图2301和缩略图2302可知，经过留色处理及后处理后的照片的缩略图2301上保留了部分区域的色彩，而缩略图2302整体为彩色图像。在一些实施例中，留色处理及后处理后的照片和视频的缩略图上显示有留色标记2300。
需要说明的是,经过留色处理的视频可以包括多个视频图像帧,其中一些视频图像帧可能经过了留色处理,另一些视频图像帧未经过留色处理。在一些实施例中,经过留色处理的视频可以采用其中一个经过留色处理的视频图像帧作为缩略图图像,以区别于未经过留色处理的视频。
以上是以拍摄过程中进行图像留色处理为例进行说明的。在其他一些实施例中,对于手机获取到的目标图像,手机也可以进行编辑处理,确定目标对象和目标处理模式,从而根据目标对象和目标处理模式保留目标对象所在区域的色彩,并对背景区域进行灰化处理、虚化处理或背景替换处理。例如,该目标图像可以为手机拍摄获得的照片,下载的图像,或从其他设备拷贝的图像等。
本申请实施例还提供了一种图像留色方法,可以应用于图像编辑场景,参见图24,该方法包括:
2401、手机检测到用户的预设操作8后,显示目标图像的编辑界面。
其中,目标图像是手机获取到的一张图像,例如可以是拍摄获得的照片,下载的图像,或从其他设备拷贝的图像等。
手机检测到用户的预设操作8后,显示目标图像的编辑界面。示例性的,手机检测到用户点击拍摄预览界面上之前拍摄获得的目标图像的缩略图,或者点击如图23所示的图库中目标图像的缩略图2302的操作后,如图25中的(a)所示,手机放大显示目标图像。而后,手机检测到用户针对目标图像的点击操作后,显示如图25中的(b)所示的界面,该界面包括编辑控件2501。手机检测到用户点击编辑控件2501的操作后,显示如图25中的(c)所示的编辑界面。其中,该预设操作8可以为用户点击编辑控件的操作。该编辑界面上的图像为彩色图像。
2402、手机进入目标编辑模式,显示目标编辑模式界面,该目标编辑模式包括虚化模式、保留色彩模式或背景替换模式。
例如,手机检测到用户点击如图25中的(c)所示的编辑界面上的控件2502的操作后,参见图25中的(d),手机显示编辑模式控件。该编辑模式控件包括虚化模式控件2503、保留色彩模式控件2504和背景替换模式控件2505等。
手机检测到用户点击虚化模式控件2503的操作后进入虚化模式,显示如图26中的(a)所示的虚化模式界面。
手机检测到用户点击保留色彩模式控件2504的操作后进入保留色彩模式,显示如图27中的(a)所示的保留色彩模式界面。保留色彩模式界面包括目标图像转换成的灰度图像。
手机检测到用户点击背景替换模式控件2505的操作后进入背景替换模式。
2403、手机根据目标编辑模式对目标图像进行留色处理。
其中，当目标编辑模式为虚化模式时，目标图像包括区域1，区域1可以称为清晰区域；目标图像上区域1以外的区域称为区域2，区域2也可以称为虚化区域。其中，清晰区域内的图像保留原目标图像的像素值，虚化区域内的图像为进行虚化处理后的模糊图像。手机可以根据用户的指示操作，调节清晰区域的形状和大小。例如，清晰区域可以为圆形、椭圆形或方形（也称线性形状）等。
示例性的,参见图26中的(a),虚化模式界面包括圆形控件2601和线性控件2602。圆形控件2601被选中后,虚化模式界面上的清晰区域2603为圆形。线性控件2602被选中后,虚化模式界面上的清晰区域为方形。手机可以根据用户手指的捏合或拖动等方式调整清晰区域的大小。
在虚化模式下,手机可以调用图像虚化的算法,根据分割结果确定前后景,虚化算法模拟图像深度估计,做出前景主体清晰、背景虚化的效果。其中,前景为区域1内的图像,后景为区域2内的图像。
手机还可以根据用户的指示操作调节虚化区域的虚化程度。例如,如图26中的(a)所示,虚化模式界面包括虚化级别调节控件2604。手机检测到用户点击虚化级别调节控件2604上不同位置的操作,或者在虚化级别调节控件2604上进行拖动的操作后,如图26中的(b)所示,手机可以调节虚化区域的虚化级别,而虚化级别越高,虚化区域内图像的虚化程度就越大。
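作为虚化级别与虚化程度之间映射关系的一个示意（以下仅为假设性示例，级别到高斯核大小的映射方式为示例性设定）：

    import cv2

    def blur_with_level(image_bgr, level):
        # level: 虚化级别(例如1~5), 级别越高虚化程度越大
        k = 8 * level + 1                      # 级别1对应核大小9, 级别5对应核大小41, 保证为奇数
        return cv2.GaussianBlur(image_bgr, (k, k), 0)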
在一些实施例中，虚化模式界面上还可以包括模糊控件2605。手机检测到用户选中模糊控件2605，并点击图像上的某个位置的操作后，使虚化模式界面上该位置的图像变得虚化和模糊。比如，若用户想要清晰显示的个体不是圆形或方形等规则形状，用户可以通过模糊控件将圆形清晰区域内、该个体边缘以外的区域变得虚化和模糊。再比如，用户可以通过模糊控件让虚化区域内的某个位置变得更加虚化和模糊。
在一些实施例中,虚化模式界面上还可以包括返回控件2606,用于返回上一步操作。虚化模式界面上还可以包括完成虚化处理的控件以及退出虚化处理的控件等。
当目标编辑模式为保留色彩模式时,保留色彩模式界面包括目标图像转换成的灰度图像。在一些实施例中,在保留色彩模式下,手机可以保留目标图像上用户指定的颜色,即手机可以按颜色保留色彩。例如,手机可以提示用户:点按图片选择需要保留的颜色。
在一些技术方案中，手机检测到用户点击某个位置的操作后，保留该位置所属部件上与该位置的像素值接近的区域的色彩。其中，像素值接近是指像素值之间的差值小于预设阈值。示例性的，参见图27中的(a)，保留色彩模式界面包括选色控件2701，在选色控件2701选中的情况下，手机检测到用户点击灰度图像上人物2的脸部的操作后，参见图27中的(b)，手机保留脸部上与该位置的像素值接近的区域的色彩。手机又检测到用户点击脖子位置的操作后，保留目标图像上脖子部位与该位置的像素值接近的区域的色彩。
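作为该技术方案的一个示意性草图（仅为假设性示例，采用Python与OpenCV/NumPy，函数名与阈值取值均为示例）：

    import cv2
    import numpy as np

    def retain_color_near_click(image_bgr, part_mask, x, y, threshold=25):
        # 在点击位置所属部件的mask内, 保留与点击处像素值接近的区域的色彩, 其余区域保持灰度
        out = cv2.cvtColor(cv2.cvtColor(image_bgr, cv2.COLOR_BGR2GRAY), cv2.COLOR_GRAY2BGR)
        picked = image_bgr[y, x].astype(np.int32)
        diff = np.abs(image_bgr.astype(np.int32) - picked).max(axis=2)   # 各分量差值取最大值
        keep = part_mask & (diff < threshold)                            # 限定在该部件内且像素值接近
        out[keep] = image_bgr[keep]
        return out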
在另一些技术方案中,手机检测到用户点击某个位置的操作后,保留与该位置的像素值接近的各部件的色彩。示例性的,在选色控件2701选中的情况下,手机检测到用户点击灰度图像上人物2的脸部的操作后,参见图27中的(c),手机保留目标图像上脸部、脖子和手部等像素值接近的部件的色彩。即,脸部、脖子和手部等肤色接近的部件所在区域为彩色,其他区域为灰度图像。
在一些实施例中，保留色彩模式界面还包括橡皮擦控件2702。在橡皮擦控件2702选中的情况下，手机检测到用户在色彩保留区域拖动（或称涂抹）的操作后，可以将涂抹区域恢复为灰度图像。示例性的，在图27中的(c)所示的情况下，手机检测到用户使用橡皮擦在人物2的左手部位进行涂抹的操作后，参见图27中的(d)，手机将左手所在区域恢复为灰度图像。
此外,保留色彩模式界面还可以包括橡皮擦尺寸调节控件,用于调节橡皮擦的大小(即橡皮擦的作用域的面积)。用户通过橡皮擦可以擦掉不想保留色彩的区域,例如用户可以通过橡皮擦对需要保留色彩的区域进行微调,或调整色彩保留区域与灰度图像所在区域的边界。
在另一些实施例中，保留色彩模式界面还包括返回控件，可以用于返回上一步操作。保留色彩模式界面还可以包括完成色彩保留处理的控件以及退出色彩保留处理的控件等。
在另一些技术方案中,手机检测到用户点击某个位置的操作后,保留该位置所属个体上与该位置的像素值接近的所有像素点的色彩。例如,手机检测到用户点击人物2的脸部后,保留目标图像上人物2的脸部的色彩,以及人物2与脸部的像素值接近的额头、脖子和手的色彩。
在另一些技术方案中,手机检测到用户点击某个位置的操作后,保留与该位置的像素值接近的所有区域的色彩。
在另一些实施例中,在保留色彩模式下,手机可以保留用户指定的部件的色彩,即按部件留色。例如,手机可以提示用户点按图片选择需要保留色彩的部件。手机检测到用户点击某个部件后,保留该部件的色彩,其他区域仍为灰度图像。示例性的,手机检测到用户点击图27中的(a)所示的人物2的头部位置的操作后,保留头部的色彩,其他区域为灰度图像。
在另一些实施例中,在保留色彩模式下,手机可以保留用户指定的个体的色彩,即按个体留色。例如,手机可以提示用户点按图片选择需要保留色彩的个体。手机检测到用户点击某个个体后,保留该个体的色彩,其他区域仍为灰度图像。
在一些实施例中,保留色彩模式界面上包括按颜色留色控件,按个体留色控件,以及按部件留色控件。手机可以根据用户的选择操作,采用相应的留色策略对目标图像进行留色处理。
这样,手机可以根据颜色,以部件或个体为单位,保留目标图像上部分区域的色彩,减小色彩保留的设置粒度,提高色彩保留的灵活性和精确针对性,突出目标对象,使得拍摄获得的图像更具创意,提高用户拍摄体验。并且,背景区域可以进行灰化、虚化或背景替换等后处理,可以提高图像处理的灵活性,使得用户获得个性化、多样化的视频图像。
当目标编辑模式为背景替换模式时,手机可以保留目标图像上目标对象所在的目标区域,并将背景区域内的图像替换为背景图片中相同位置的区域内的图像。也可以理解为,手机将目标区域内的图像叠加到了背景图片上,目标区域内的图像为前景图像,背景图片为后景图像。其中,目标对象可以是手机默认的或用户指示的对象。背景图片可以是手机默认的或用户选择的图像。
在背景替换模式下,手机还可以根据用户的指示将背景区域替换为一张背景图片或多张背景图片的组合。手机还可以根据用户的指示操作,放大或缩小目标对象,移动目标对象的位置,在目标对象中增加或删除个体。
以上是以单张图像的编辑处理过程为例进行说明的。对于连拍等操作获得的多张图像,在一些实施例中,手机可以采用上述方式对每张图像分别进行留色处理,以及灰化、虚化或背景替换等编辑处理。在另一些实施例中,由于连拍每张图像上的对象基本相同,因而手机可以根据用户的指示操作,对其中一张图像(例如第一张图像)进行编辑处理,其他图像自动采用相同的方式进行留色处理,以及灰化、虚化或背景替换等编辑处理。例如,手机根据用户的指示操作对第一张图像中的人物1进行留色;那么手机对其他图像的人物1也自动进行留色。手机对第一张图像中人物1以外的其他区域进行虚化处理,那么手机也自动对其他图像中人物1以外的其他区域进行虚化处理。手机将第一张图像中人物1以外的其他区域替换成了某个背景图片,那么手机也自动对其他图像中人物1以外的其他区域替换成同一背景图片。
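作为对连拍得到的多张图像自动套用同一留色设置的一个示意（以下仅为假设性示例，个体标签的组织方式为示例性设定）：

    import cv2

    def propagate_gray_retain(images, masks_per_image, target_label):
        # 对连拍的一组图像套用同一设置: 保留目标(如'人物1')所在区域的色彩, 其余区域灰化
        # masks_per_image: 每张图像对应的 {个体标签: 布尔mask}
        results = []
        for img, masks in zip(images, masks_per_image):
            out = cv2.cvtColor(cv2.cvtColor(img, cv2.COLOR_BGR2GRAY), cv2.COLOR_GRAY2BGR)
            target = masks.get(target_label)
            if target is None:              # 某张图像中不包括目标对象时, 可保持原彩色图像
                results.append(img.copy())
                continue
            out[target] = img[target]       # 目标区域内保留原彩色像素值
            results.append(out)
        return results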
以上是以目标图像为未进行图像留色处理的图像为例进行说明的，该目标图像也可以是之前已经进行过图像留色处理的图像。在本次编辑过程中，手机可以根据用户的指示操作，再次进行图像留色处理。例如，本次编辑可以调整需要保留色彩的目标区域，调整清晰区域，或替换为新的背景等。
这样,手机可以对已获得的图像进行编辑处理,从而以个体或部件为单位保留图像上的色彩,可以更为精确地选择主角,提高色彩保留设置的灵活性和精确针对性,使得编辑后的图像更具创意。并且,背景区域可以进行灰化、虚化或背景替换等后处理,可以提高图像处理的灵活性,使得用户获得个性化、多样化的视频图像。
以上是以拍摄过程中进行图像留色处理为例进行说明的。在其他一些实施例中，对于手机获取到的目标视频，例如手机拍摄获得的视频，下载的视频，或从其他设备拷贝的视频等，手机也可以进行编辑处理，确定目标对象和目标编辑模式，从而根据目标对象和目标编辑模式保留目标对象所在区域的色彩，并对背景区域进行灰化处理、虚化处理或背景替换处理。
例如,手机显示目标视频的编辑界面。在一些实施例中,手机可以根据用户的指示操作,对目标视频中的一张或多张图像进行留色处理,以及灰化、虚化或背景替换等处理。目标视频中未进行处理的图像保持为原彩色图像。
在另一些实施例中,手机可以对目标视频中的某张图像(例如第一张图像)进行留色处理,以及灰化、虚化或背景替换等处理。该张图像之后的图像采用相同的方式进行留色处理,以及灰化、虚化或背景替换等处理。
比如,手机根据用户的指示操作,对目标视频中的第一张图像保留了目标对象人物1所在区域的色彩,对第10张图像保留了目标对象车辆1的色彩。那么,第2张到第9张图像中也自动保留人物1所在区域的色彩。第2张到第9张图像中如果不包括该人物1,则显示纯彩色图像或纯灰度图像。第10张图像之后的图像也自动保留车辆1的色彩。第10张图像之后的图像如果不包括车辆1,则显示纯彩色图像或纯灰度图像。
再比如，手机根据用户的指示操作，对目标视频中的第一张图像上人物1以外的区域进行了灰化处理，对第10张图像上车辆1以外的区域进行了灰化处理。那么，第2张到第9张图像中人物1以外的区域也自动进行灰化处理。第2张到第9张图像中如果不包括该人物1，则显示纯灰度图像。第10张图像之后的图像上车辆1以外的区域也自动进行灰化处理。第10张图像之后的图像如果不包括车辆1，则显示纯灰度图像。
再比如,手机根据用户的指示操作,对目标视频中的第一张图像上人物1以外的区域进行了虚化处理,对第10张图像上车辆1以外的区域进行了虚化处理。那么,第2张到第9张图像中人物1以外的区域也自动进行虚化处理。第2张到第9张图像中如果不包括该人物1,则显示纯虚化图像或原彩色图像。第10张图像之后的图像上车辆1以外的区域也自动进行虚化处理。第10张图像之后的图像如果不包括车辆1,则显示纯虚化图像或原彩色图像。
这样,手机可以对已获得的视频进行编辑处理,从而以个体或部件为单位保留视频图像上的色彩,可以更为精确地选择主角,提高色彩保留设置的灵活性和精确针对性,使得编辑后的视频更具创意。并且,背景区域可以进行灰化、虚化或背景替换等后处理,可以提高图像处理的灵活性,使得用户获得个性化、多样化的视频图像。
通过实例分割,手机可以分割出每个个体对象。基于此,在本申请的其他实施例中,手机还可以将视频或图像集合中的个体对象与其他视频进行合成,从而达到个性化的、生动的或特殊的视频处理效果。
在一些实施例中,针对已生成的视频1,手机可以根据目标主体生成一个小视频。该目标主体可以是一个或多个目标主体。该小视频的图像为视频1中包括的目标主体的图像,且仅保留了目标主体所在区域的图像。这样,手机生成的该小视频类似为目标主体的一个动图表情包。
或者,针对图像集合1,手机可以根据目标主体生成一个图像集合2,该图像集合2中的图像为图像集合1中包括的目标主体的图像,且该图像仅保留了目标主体所在区域的图像。
在其他一些实施例中,手机可以将目标主体所在的视频和另一个视频进行合成。比如,目标主体为人物,视频1为目标主体在草地跳舞的视频,视频2为星空延时摄影视频。手机可以将视频1中目标主体所在区域的图像叠加到视频2的图像上,从而使得视频1与星空延时摄影的视频2合成一个新的视频,生成目标主体在星空下舞动的新视频。
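作为将目标主体逐帧叠加到另一视频上的一个示意性草图（仅为假设性示例，采用Python与OpenCV，其中segment_subject为假设的分割接口，编码参数等取值也仅为示例）：

    import cv2
    import numpy as np

    def compose_subject_onto_video(video1_path, video2_path, out_path, segment_subject):
        # 将视频1中目标主体所在区域的图像逐帧叠加到视频2的图像上, 合成新视频
        # segment_subject(frame) -> 布尔mask, 例如由实例分割模型给出目标主体所在区域
        cap1, cap2 = cv2.VideoCapture(video1_path), cv2.VideoCapture(video2_path)
        fps = cap1.get(cv2.CAP_PROP_FPS) or 30.0
        w = int(cap1.get(cv2.CAP_PROP_FRAME_WIDTH))
        h = int(cap1.get(cv2.CAP_PROP_FRAME_HEIGHT))
        writer = cv2.VideoWriter(out_path, cv2.VideoWriter_fourcc(*'mp4v'), fps, (w, h))
        while True:
            ok1, f1 = cap1.read()
            ok2, f2 = cap2.read()
            if not (ok1 and ok2):
                break
            f2 = cv2.resize(f2, (w, h))                  # 背景视频帧对齐到前景视频尺寸
            mask = segment_subject(f1)                   # 目标主体所在区域的mask
            writer.write(np.where(mask[:, :, None], f1, f2))   # 主体区域取视频1, 其余取视频2
        cap1.release(); cap2.release(); writer.release()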
再比如,视频1为关于目标主体的慢动作视频。手机可以将视频1中的目标主体分割出来,并与另一视频(例如延时摄影视频)中的图像进行合并,从而生成一个新的视频。
再比如,目标主体为人物,手机可以将目标主体长大后的视频中,目标主体所在区域的图像提取出来,与目标主体小时候的视频合成,给用户以长大后的自己遇见小时候的自己的感觉。或者,手机也可以将目标主体老了后的视频中,目标主体所在区域的图像提取出来,与目标主体年轻时的视频合成,给用户以穿越时空的感觉。
在其他一些实施例中,手机还可以将不同视频中的目标主体提取出来,并与另一个新的视频合成。比如,一对异地的情侣,分别拍一段视频,然后手机将两个视频中的两个人物作为目标主体抠出来,放到一个新的背景视频里,从而生成情侣在同一个地方的一个新的视频。
再比如，手机可以将慢动作视频中目标主体所在区域的图像提取出来，将快动作视频中的目标主体所在区域的图像提取出来，并将目标主体所在区域的图像合成到一个新的视频中，从而形成明显的快、慢差异感。
再比如,视频1为教练的视频,视频2为学生的视频,手机可以将视频1中教练所在区域的图像提取出来;将视频2中学生所在区域的图像提取出来,并将教练的视频和学生的视频合成到一个新的视频中,以方便对比学生的动作是否规范。
在其他一些实施例中,手机可以将视频中的目标主体提取出来,与多个其他视频合成。比如,目标主体为人物,视频1为目标主体练习武术的视频。视频2-视频5分别为春夏秋冬不同季节的背景视频。手机可以将视频1中目标主体所在区域的图像提取出来,并按照不同的时间段分别与视频2-视频5合成,给用户以练习武术历经四季的感觉。
在其他一些实施例中,手机还可以把多个视频中的同一个目标主体提取出来并剪辑到一起,从而生成一个新的视频。比如,针对小孩子从小到大成长过程中的多个视频,用抠图的方式把每个视频中的孩子的图像都提取出来,供用户编辑,或智能编辑,最后自动集合到一起,再配上一个或多个新的背景视频,从而生成一个新的视频。
比如，目标主体为人，手机可以将目标主体学习轮滑的不同阶段的视频（例如刚学轮滑时经常摔倒的视频、学了一段时间后比较熟练的视频，以及学了很久之后滑得很流畅的视频）中目标主体所在区域的图像提取出来，集合到一起，再配上一个或多个新的背景视频，从而生成一个新的视频。
并且,手机还可以对合成的视频图像中的目标主体进行上述留色处理,此处不予赘述。
以上是以电子设备为手机为例进行说明的,当电子设备为其他设备时,也可以采用以上实施例提供的图像留色方式,此处不予赘述。
可以理解的是,为了实现上述功能,电子设备包含了执行各个功能相应的硬件和/或软件模块。结合本文中所公开的实施例描述的各示例的算法步骤,本申请能够以硬件或硬件和计算机软件的结合形式来实现。某个功能究竟以硬件还是计算机软件驱动硬件的方式来执行,取决于技术方案的特定应用和设计约束条件。本领域技术人员可以结合实施例对每个特定的应用来使用不同方法来实现所描述的功能,但是这种实现不应认为超出本申请的范围。
本申请实施例可以根据上述方法示例对电子设备进行功能模块的划分,例如,可以对应各个功能划分各个功能模块,也可以将两个或两个以上的功能集成在一个处理模块中。上述集成的模块可以采用硬件的形式实现。需要说明的是,本实施例中对模块的划分是示意性的,仅仅为一种逻辑功能划分,实际实现时可以有另外的划分方式。
本申请实施例还提供一种电子设备,包括:摄像头,用于采集图像;屏幕,用于显示界面;一个或多个处理器以及一个或多个存储器。该一个或多个存储器与一个或多个处理器耦合,一个或多个存储器用于存储计算机程序代码,计算机程序代码包括计算机指令,当一个或多个处理器执行计算机指令时,使得电子设备执行上述相关方法步骤实现上述实施例中的图像留色方法。
本申请实施例还提供一种电子设备，包括一个或多个处理器以及一个或多个存储器。该一个或多个存储器与一个或多个处理器耦合，一个或多个存储器用于存储计算机程序代码，计算机程序代码包括计算机指令，当一个或多个处理器执行计算机指令时，使得电子设备执行上述相关方法步骤实现上述实施例中的图像留色方法。
本申请的实施例还提供一种计算机可读存储介质,该计算机可读存储介质中存储有计算机指令,当该计算机指令在电子设备上运行时,使得电子设备执行上述相关方法步骤实现上述实施例中的图像留色方法。
本申请的实施例还提供了一种计算机程序产品,当该计算机程序产品在计算机上运行时,使得计算机执行上述相关步骤,以实现上述实施例中电子设备执行的图像留色方法。
另外,本申请的实施例还提供一种装置,这个装置具体可以是芯片,组件或模块,该装置可包括相连的处理器和存储器;其中,存储器用于存储计算机执行指令,当装置运行时,处理器可执行存储器存储的计算机执行指令,以使芯片执行上述各方法实施例中电子设备执行的图像留色方法。
其中,本实施例提供的电子设备、计算机可读存储介质、计算机程序产品或芯片均用于执行上文所提供的对应的方法,因此,其所能达到的有益效果可参考上文所提供的对应的方法中的有益效果,此处不再赘述。
通过以上实施方式的描述,所属领域的技术人员可以了解到,为描述的方便和简洁,仅以上述各功能模块的划分进行举例说明,实际应用中,可以根据需要而将上述功能分配由不同的功能模块完成,即将装置的内部结构划分成不同的功能模块,以完成以上描述的全部或者部分功能。
在本申请所提供的几个实施例中,应该理解到,所揭露的装置和方法,可以通过其它的方式实现。例如,以上所描述的装置实施例仅仅是示意性的,例如,所述模块或单元的划分,仅仅为一种逻辑功能划分,实际实现时可以有另外的划分方式,例如多个单元或组件可以结合或者可以集成到另一个装置,或一些特征可以忽略,或不执行。另一点,所显示或讨论的相互之间的耦合或直接耦合或通信连接可以是通过一些接口,装置或单元的间接耦合或通信连接,可以是电性,机械或其它的形式。
所述作为分离部件说明的单元可以是或者也可以不是物理上分开的,作为单元显示的部件可以是一个物理单元或多个物理单元,即可以位于一个地方,或者也可以分布到多个不同地方。可以根据实际的需要选择其中的部分或者全部单元来实现本实施例方案的目的。
另外,在本申请各个实施例中的各功能单元可以集成在一个处理单元中,也可以是各个单元单独物理存在,也可以两个或两个以上单元集成在一个单元中。上述集成的单元既可以采用硬件的形式实现,也可以采用软件功能单元的形式实现。
所述集成的单元如果以软件功能单元的形式实现并作为独立的产品销售或使用时,可以存储在一个可读取存储介质中。基于这样的理解,本申请实施例的技术方案本质上或者说对现有技术做出贡献的部分或者该技术方案的全部或部分可以以软件产品的形式体现出来,该软件产品存储在一个存储介质中,包括若干指令用以使得一个设备(可以是单片机,芯片等)或处理器(processor)执行本申请各个实施例所述方法的全部或部分步骤。而前述的存储介质包括:U盘、移动硬盘、只读存储器(read only  memory,ROM)、随机存取存储器(random access memory,RAM)、磁碟或者光盘等各种可以存储程序代码的介质。
以上内容,仅为本申请的具体实施方式,但本申请的保护范围并不局限于此,任何熟悉本技术领域的技术人员在本申请揭露的技术范围内,可轻易想到变化或替换,都应涵盖在本申请的保护范围之内。因此,本申请的保护范围应以所述权利要求的保护范围为准。

Claims (22)

  1. 一种图像留色方法,其特征在于,所述方法应用于电子设备,所述电子设备包括彩色摄像头,所述方法包括:
    启动相机应用,显示预览界面;
    确定第一个体对象为目标对象,确定目标处理模式;
    根据所述彩色摄像头获取的图像,生成第一预览图像,所述第一预览图像中包括所述第一个体对象和第二个体对象,所述第二个体对象与所述第一个体对象不同;
    在所述预览界面中显示所述第一预览图像,所述第一预览图像中第一区域的图像显示为彩色,所述第一预览图像中第二区域的图像为根据所述目标处理模式处理后的图像;其中,所述第一区域为所述第一个体对象在所述第一预览图像中占据的图像区域,所述第二区域为所述第一预览图像中除所述第一区域以外的区域;
    响应于用户的第一操作,确定第二个体对象为所述目标对象;
    在所述预览界面中显示第二预览图像,所述第二预览图像中第三区域的图像显示为彩色,所述第二预览图像中第四区域的图像为根据所述目标处理模式处理后的图像;其中,所述第三区域为所述第二个体对象在所述第二预览图像中占据的图像区域,所述第四区域为所述第二预览图像中除所述第三区域以外的区域。
  2. 根据权利要求1所述的方法,其特征在于,所述方法还包括:
    响应于所述用户的第二操作,切换所述目标处理模式;
    根据切换后的所述目标处理模式,更新所述第四区域的图像。
  3. 根据权利要求1或2所述的方法,其特征在于,在所述确定第一个体对象为目标对象之前,所述方法还包括:
    在所述预览界面中显示第三预览图像,所述第三预览图像为所述彩色摄像头获取的图像转化成的灰度图像。
  4. 根据权利要求1-3任一项所述的方法,其特征在于,所述方法还包括:
    响应于所述用户的录像操作显示拍摄界面,所述拍摄界面包括录拍图像,所述录拍图像包括所述第三区域和所述第四区域;
    响应于所述用户的停止录像操作，停止录像并生成视频。
  5. 一种图像留色方法,其特征在于,所述方法应用于电子设备,所述电子设备包括彩色摄像头,所述方法包括:
    启动相机应用,显示预览界面;
    响应于用户的录像操作,显示拍摄界面;
    确定第一个体对象为目标对象,确定目标处理模式;
    根据所述彩色摄像头获取的图像,生成第一录拍图像,所述第一录拍图像中包括所述第一个体对象和第二个体对象,所述第二个体对象与所述第一个体对象不同;
    在所述拍摄界面中显示所述第一录拍图像,所述第一录拍图像中第一区域的图像显示为彩色,所述第一录拍图像中第二区域的图像为根据所述目标处理模式处理后的图像;其中,所述第一区域为所述第一个体对象在所述第一录拍图像中占据的图像区域,所述第二区域为所述第一录拍图像中除所述第一区域以外的区域;
    响应于用户的第一操作,确定第二个体对象为所述目标对象;
    在所述拍摄界面中显示第二录拍图像,所述第二录拍图像中第三区域的图像显示为彩色,所述第二录拍图像中第四区域的图像为根据所述目标处理模式处理后的图像;其中,所述第三区域为所述第二个体对象在所述第二录拍图像中占据的图像区域,所述第四区域为所述第二录拍图像中除所述第三区域以外的区域;
    响应于所述用户的停止录像操作,停止录像并生成视频。
  6. 根据权利要求5所述的方法,其特征在于,在所述确定第一个体对象为目标对象之前,所述方法还包括:
    在所述拍摄界面中显示第三录拍图像,所述第三录拍图像为所述彩色摄像头获取的图像转换成的灰度图像。
  7. 根据权利要求4-6任一项所述的方法,其特征在于,在所述停止录像并生成视频之前,所述方法还包括:
    响应于所述用户的第三操作,确定第三个体对象为所述目标对象;
    在所述拍摄界面中显示第四录拍图像,所述第四录拍图像中第五区域的图像显示为彩色,所述第四录拍图像中第六区域的图像为根据所述目标处理模式处理后的图像;其中,所述第五区域为所述第三个体对象在所述第四录拍图像中占据的图像区域,所述第六区域为所述第四录拍图像中除所述第五区域以外的区域。
  8. 根据权利要求4-6任一项所述的方法,其特征在于,在所述停止录像并生成视频之前,所述方法还包括:
    响应于所述用户的第四操作,切换所述目标处理模式;
    根据切换后的所述目标处理模式,更新所述拍摄界面中所述第四区域的图像。
  9. 根据权利要求1-3任一项所述的方法,其特征在于,所述方法还包括:
    响应于所述用户的拍照操作生成照片,所述照片包括所述第三区域和所述第四区域。
  10. 根据权利要求1-9任一项所述的方法,其特征在于,所述确定第一个体对象为目标对象,包括:
    确定所述第一个体对象为所述彩色摄像头获取的图像上的人物,所述第一个体对象为所述目标对象;或者,
    响应于所述用户针对所述第一个体对象的操作,确定所述第一个体对象为所述目标对象。
  11. 根据权利要求1-10任一项所述的方法,其特征在于,所述目标处理模式为第一模式,所述第二区域的图像为根据所述第一模式处理后的灰度图像;或者,
    所述目标处理模式为第二模式,所述第二区域的图像为根据所述第二模式处理后的虚化图像;或者,
    所述目标处理模式为第三模式,所述第二区域的图像为根据所述第三模式处理后的替换为另一图像的图像。
  12. 根据权利要求11所述的方法,其特征在于,所述确定目标处理模式,包括:
    确定所述目标处理模式为默认的所述第一模式。
  13. 一种电子设备,其特征在于,包括:
    彩色摄像头,用于采集彩色图像;
    屏幕,用于显示界面;
    一个或多个处理器;
    存储器;
    以及一个或多个计算机程序,其中所述一个或多个计算机程序被存储在所述存储器中,所述一个或多个计算机程序包括指令;当所述指令被所述电子设备执行时,使得所述电子设备执行以下步骤:
    启动相机应用,显示预览界面;
    确定第一个体对象为目标对象,确定目标处理模式;
    根据所述彩色摄像头获取的图像,生成第一预览图像,所述第一预览图像中包括所述第一个体对象和第二个体对象,所述第二个体对象与所述第一个体对象不同;
    在所述预览界面中显示所述第一预览图像,所述第一预览图像中第一区域的图像显示为彩色,所述第一预览图像中第二区域的图像为根据所述目标处理模式处理后的图像;其中,所述第一区域为所述第一个体对象在所述第一预览图像中占据的图像区域,所述第二区域为所述第一预览图像中除所述第一区域以外的区域;
    响应于用户的第一操作,确定第二个体对象为所述目标对象;
    在所述预览界面中显示第二预览图像,所述第二预览图像中第三区域的图像显示为彩色,所述第二预览图像中第四区域的图像为根据所述目标处理模式处理后的图像;其中,所述第三区域为所述第二个体对象在所述第二预览图像中占据的图像区域,所述第四区域为所述第二预览图像中除所述第三区域以外的区域。
  14. 根据权利要求13所述的电子设备,其特征在于,当所述指令被所述电子设备执行时,还使得所述电子设备执行以下步骤:
    响应于所述用户的第二操作,切换所述目标处理模式;
    根据切换后的所述目标处理模式,更新所述第四区域的图像。
  15. 根据权利要求13或14所述的电子设备,其特征在于,当所述指令被所述电子设备执行时,还使得所述电子设备执行以下步骤:
    响应于所述用户的录像操作显示拍摄界面,所述拍摄界面包括录拍图像,所述录拍图像包括所述第三区域和所述第四区域;
    响应于所述用户的停止录像操作，停止录像并生成视频。
  16. 一种电子设备,其特征在于,包括:
    彩色摄像头,用于采集彩色图像;
    屏幕,用于显示界面;
    一个或多个处理器;
    存储器;
    以及一个或多个计算机程序,其中所述一个或多个计算机程序被存储在所述存储器中,所述一个或多个计算机程序包括指令;当所述指令被所述电子设备执行时,使得所述电子设备执行以下步骤:
    启动相机应用,显示预览界面;
    响应于用户的录像操作,显示拍摄界面;
    确定第一个体对象为目标对象,确定目标处理模式;
    根据所述彩色摄像头获取的图像,生成第一录拍图像,所述第一录拍图像中包括所述第一个体对象和第二个体对象,所述第二个体对象与所述第一个体对象不同;
    在所述拍摄界面中显示所述第一录拍图像,所述第一录拍图像中第一区域的图像显示为彩色,所述第一录拍图像中第二区域的图像为根据所述目标处理模式处理后的图像;其中,所述第一区域为所述第一个体对象在所述第一录拍图像中占据的图像区域,所述第二区域为所述第一录拍图像中除所述第一区域以外的区域;
    响应于用户的第一操作,确定第二个体对象为所述目标对象;
    在所述拍摄界面中显示第二录拍图像,所述第二录拍图像中第三区域的图像显示为彩色,所述第二录拍图像中第四区域的图像为根据所述目标处理模式处理后的图像;其中,所述第三区域为所述第二个体对象在所述第二录拍图像中占据的图像区域,所述第四区域为所述第二录拍图像中除所述第三区域以外的区域;
    响应于所述用户的停止录像操作,停止录像并生成视频。
  17. 根据权利要求15或16所述的电子设备,其特征在于,当所述指令被所述电子设备执行时,还使得所述电子设备执行以下步骤:
    在所述停止录像并生成视频之前,响应于所述用户的第三操作,确定第三个体对象为所述目标对象;
    在所述拍摄界面中显示第四录拍图像,所述第四录拍图像中第五区域的图像显示为彩色,所述第四录拍图像中第六区域的图像为根据所述目标处理模式处理后的图像;其中,所述第五区域为所述第三个体对象在所述第四录拍图像中占据的图像区域,所述第六区域为所述第四录拍图像中除所述第五区域以外的区域。
  18. 根据权利要求15或16所述的电子设备,其特征在于,当所述指令被所述电子设备执行时,还使得所述电子设备执行以下步骤:
    在所述停止录像并生成视频之前,响应于所述用户的第四操作,切换所述目标处理模式;
    根据切换后的所述目标处理模式,更新所述拍摄界面中所述第四区域的图像。
  19. 根据权利要求13或14所述的电子设备,其特征在于,当所述指令被所述电子设备执行时,还使得所述电子设备执行以下步骤:
    响应于所述用户的拍照操作生成照片,所述照片包括所述第三区域和所述第四区域。
  20. 一种电子设备,其特征在于,包括:
    一个或多个处理器;
    存储器;
    以及一个或多个计算机程序,其中所述一个或多个计算机程序被存储在所述存储器中,所述一个或多个计算机程序包括指令,当所述指令被所述电子设备执行时,使得所述电子设备执行如权利要求1-12任一项所述的图像留色方法。
  21. 一种计算机可读存储介质,其特征在于,包括计算机指令,当所述计算机指令在计算机上运行时,使得所述计算机执行如权利要求1-12任一项所述的图像留色方法。
  22. 一种计算机程序产品,其特征在于,当所述计算机程序产品在计算机上运行时,使得所述计算机执行如权利要求1-12任一项所述的图像留色方法。
PCT/CN2021/079603 2020-03-13 2021-03-08 图像留色方法及设备 WO2021180046A1 (zh)

Priority Applications (2)

Application Number Priority Date Filing Date Title
US17/911,279 US20230188830A1 (en) 2020-03-13 2021-03-08 Image Color Retention Method and Device
EP21767582.6A EP4109879A4 (en) 2020-03-13 2021-03-08 METHOD AND DEVICE FOR IMAGE COLOR RETENTION

Applications Claiming Priority (4)

Application Number Priority Date Filing Date Title
CN202010177496 2020-03-13
CN202010177496.5 2020-03-13
CN202010220045.5A CN113395441A (zh) 2020-03-13 2020-03-25 图像留色方法及设备
CN202010220045.5 2020-03-25

Publications (1)

Publication Number Publication Date
WO2021180046A1 true WO2021180046A1 (zh) 2021-09-16

Family

ID=77616270

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2021/079603 WO2021180046A1 (zh) 2020-03-13 2021-03-08 图像留色方法及设备

Country Status (4)

Country Link
US (1) US20230188830A1 (zh)
EP (1) EP4109879A4 (zh)
CN (1) CN113395441A (zh)
WO (1) WO2021180046A1 (zh)

Families Citing this family (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2023160830A1 (en) * 2022-02-28 2023-08-31 Mind Switch AG Electronic treatment device
CN116055867B (zh) * 2022-05-30 2023-11-24 荣耀终端有限公司 一种拍摄方法和电子设备
CN118075600A (zh) * 2022-11-22 2024-05-24 荣耀终端有限公司 拍摄模式切换方法及相关装置

Family Cites Families (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US9336575B2 (en) * 2012-06-25 2016-05-10 Konica Minolta, Inc. Image processing apparatus, image processing method, and image processing program
KR20140137738A (ko) * 2013-05-23 2014-12-03 삼성전자주식회사 이미지 디스플레이 방법, 이미지 디스플레이 장치 및 기록 매체
JPWO2015049899A1 (ja) * 2013-10-01 2017-03-09 オリンパス株式会社 画像表示装置および画像表示方法
US9367939B2 (en) * 2013-10-22 2016-06-14 Nokia Technologies Oy Relevance based visual media item modification
CN104660905B (zh) * 2015-03-04 2018-03-16 广东欧珀移动通信有限公司 拍照处理方法及装置
KR102443214B1 (ko) * 2016-01-08 2022-09-15 삼성전자 주식회사 영상처리장치 및 그 제어방법
CN107707823A (zh) * 2017-10-18 2018-02-16 维沃移动通信有限公司 一种拍摄方法及移动终端
CN108012091A (zh) * 2017-11-29 2018-05-08 北京奇虎科技有限公司 图像处理方法、装置、设备及其存储介质
JP2019128827A (ja) * 2018-01-25 2019-08-01 ソニーセミコンダクタソリューションズ株式会社 画像処理装置、および画像処理方法、並びにプログラム
CN113163133A (zh) * 2018-10-15 2021-07-23 华为技术有限公司 一种图像处理方法、装置与设备

Patent Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20130148853A1 (en) * 2011-12-12 2013-06-13 Samsung Electronics Co., Ltd. Image processing apparatus and image processing method
CN109496423A (zh) * 2018-10-15 2019-03-19 华为技术有限公司 一种拍摄场景下的图像显示方法及电子设备
CN109816663A (zh) * 2018-10-15 2019-05-28 华为技术有限公司 一种图像处理方法、装置与设备
CN110033003A (zh) * 2019-03-01 2019-07-19 华为技术有限公司 图像分割方法和图像处理装置
CN112150499A (zh) * 2019-06-28 2020-12-29 华为技术有限公司 图像处理方法及相关装置
CN110602424A (zh) * 2019-08-28 2019-12-20 维沃移动通信有限公司 视频处理方法及电子设备
CN111160350A (zh) * 2019-12-23 2020-05-15 Oppo广东移动通信有限公司 人像分割方法、模型训练方法、装置、介质及电子设备

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
See also references of EP4109879A4

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114025237A (zh) * 2021-12-02 2022-02-08 维沃移动通信有限公司 视频生成方法、装置和电子设备

Also Published As

Publication number Publication date
CN113395441A (zh) 2021-09-14
US20230188830A1 (en) 2023-06-15
EP4109879A4 (en) 2023-10-04
EP4109879A1 (en) 2022-12-28

Similar Documents

Publication Publication Date Title
CN112532869B (zh) 一种拍摄场景下的图像显示方法及电子设备
WO2021180046A1 (zh) 图像留色方法及设备
WO2021078001A1 (zh) 一种图像增强方法及装置
KR20210073568A (ko) 이미지 처리 방법 및 장치, 및 디바이스
CN112262563A (zh) 图像处理方法及电子设备
CN113170037B (zh) 一种拍摄长曝光图像的方法和电子设备
CN115242983B (zh) 拍摄方法、电子设备及可读存储介质
WO2022156473A1 (zh) 一种播放视频的方法及电子设备
CN113099146A (zh) 一种视频生成方法、装置及相关设备
WO2024021742A1 (zh) 一种注视点估计方法及相关设备
CN113538227B (zh) 一种基于语义分割的图像处理方法及相关设备
CN117201930B (zh) 一种拍照方法和电子设备
EP4325877A1 (en) Photographing method and related device
CN113536834A (zh) 眼袋检测方法以及装置
WO2023280021A1 (zh) 一种生成主题壁纸的方法及电子设备
WO2022228010A1 (zh) 一种生成封面的方法及电子设备
WO2023231696A1 (zh) 一种拍摄方法及相关设备
CN114615421B (zh) 图像处理方法及电子设备
CN116723418B (zh) 拍照方法和相关装置
CN114697525B (zh) 一种确定跟踪目标的方法及电子设备
WO2022127609A1 (zh) 图像处理方法及电子设备
CN115861042A (zh) 一种图像处理方法、电子设备及介质
CN117729421A (zh) 图像处理方法、电子设备和计算机可读存储介质
CN113452895A (zh) 一种拍摄方法及设备
CN114757955A (zh) 一种目标跟踪方法及电子设备

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 21767582

Country of ref document: EP

Kind code of ref document: A1

ENP Entry into the national phase

Ref document number: 2021767582

Country of ref document: EP

Effective date: 20220921

NENP Non-entry into the national phase

Ref country code: DE