WO2022143382A1 - Image processing method and apparatus - Google Patents

Image processing method and apparatus (Download PDF)

Info

Publication number
WO2022143382A1
WO2022143382A1 (PCT/CN2021/140738)
Authority
WO
WIPO (PCT)
Prior art keywords
image
processed
input
person
response
Prior art date
Application number
PCT/CN2021/140738
Other languages
English (en)
French (fr)
Inventor
董丽君
Original Assignee
维沃移动通信有限公司
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 维沃移动通信有限公司 filed Critical 维沃移动通信有限公司
Publication of WO2022143382A1

Links

Images

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T5/00 Image enhancement or restoration
    • G06T5/77 Retouching; Inpainting; Scratch removal
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/30 Determination of transform parameters for the alignment of images, i.e. image registration
    • G06T7/33 Determination of transform parameters for the alignment of images, i.e. image registration using feature-based methods
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/30 Subject of image; Context of image processing
    • G06T2207/30196 Human being; Person
    • G06T2207/30201 Face

Definitions

  • the present application belongs to the technical field of communication equipment, and in particular relates to an image processing method and apparatus.
  • At present, to meet user needs, a beautification effect is usually enabled when taking a photo, and the effect is applied to every face that can be detected.
  • However, when all detectable faces in a picture are made up in this way, the result is good for people in the group photo who were not originally wearing makeup, while the makeup of those who were already made up becomes heavier. Existing photo beautification therefore produces a large gap between the results for people with and without makeup, and the user experience is poor.
  • the purpose of the embodiments of the present application is to provide an image processing method and apparatus, which can solve the problem of poor user experience in the existing image processing solutions.
  • In a first aspect, an embodiment of the present application provides an image processing method, the method comprising: receiving a first input from a user; in response to the first input, determining a target reference person image in an image to be processed; receiving a second input from the user; and in response to the second input, determining a person image to be processed in the image to be processed;
  • obtaining makeup information of the target reference person according to the target reference person image; and
  • processing the person image to be processed according to the makeup information to obtain a processed person image.
  • an embodiment of the present application provides an image processing apparatus, and the apparatus includes:
  • a first receiving module configured to receive the first input of the user
  • a first determination module configured to determine a target reference person image in the image to be processed in response to the first input
  • a second receiving module configured to receive the second input of the user
  • a second determination module configured to determine, in response to the second input, an image of a person to be processed in the image to be processed
  • a first obtaining module configured to obtain makeup information of the target reference character according to the target reference character image
  • the first processing module is configured to process the to-be-processed character image according to the makeup information to obtain a processed character image.
  • In a third aspect, embodiments of the present application provide an electronic device, the electronic device including a processor, a memory, and a program or instruction stored in the memory and executable on the processor, where the program or instruction, when executed by the processor, implements the steps of the method according to the first aspect.
  • In a fourth aspect, an embodiment of the present application provides a readable storage medium storing a program or instruction, where the program or instruction, when executed by a processor, implements the steps of the method according to the first aspect.
  • In a fifth aspect, an embodiment of the present application provides a chip, the chip including a processor and a communication interface, where the communication interface is coupled to the processor, and the processor is configured to run a program or instruction to implement the method according to the first aspect.
  • In a sixth aspect, a communication device is provided, including a processor, a memory, and a program or instruction stored in the memory and executable on the processor, where the program or instruction, when executed by the processor, implements the method according to the first aspect.
  • a computer program product is provided, the program product is stored in a non-volatile storage medium, the program product is executed by at least one processor to implement the method of the first aspect.
  • FIG. 1 is a schematic flowchart of an image processing method according to an embodiment of the present application.
  • FIG. 2 is a schematic diagram of a specific implementation flow of the image processing method according to an embodiment of the present application.
  • FIG. 3 is a first schematic diagram of image processing according to an embodiment of the present application.
  • FIG. 4 is a second schematic diagram of image processing according to an embodiment of the present application.
  • FIG. 5 is a third schematic diagram of image processing according to an embodiment of the present application.
  • FIG. 6 is a fourth schematic diagram of image processing according to an embodiment of the present application.
  • FIG. 7 is a fifth schematic diagram of image processing according to an embodiment of the present application.
  • FIG. 8 is a schematic structural diagram of an image processing apparatus according to an embodiment of the present application.
  • FIG. 9 is a first schematic structural diagram of an electronic device according to an embodiment of the application.
  • FIG. 10 is a second schematic structural diagram of an electronic device according to an embodiment of the present application.
  • The terms "first", "second" and the like in the description and claims of the present application are used to distinguish between similar objects, not to describe a specific order or sequence. It is to be understood that data used in this way are interchangeable under appropriate circumstances, so that the embodiments of the present application can be practiced in sequences other than those illustrated or described herein.
  • The objects distinguished by "first", "second" and the like are usually of one type, and the number of such objects is not limited.
  • For example, the first object may be one object or more than one.
  • In addition, "and/or" in the description and claims indicates at least one of the connected objects, and the character "/" generally indicates an "or" relationship between the associated objects.
  • the image processing method provided by the embodiment of the present application is applied to an electronic device. As shown in FIG. 1 , the method includes:
  • Step 11 Receive the first input from the user.
  • the first input may be an instruction manually input by the user or a voice instruction, which is not limited herein.
  • Step 12 In response to the first input, determine the target reference person image in the image to be processed.
  • the to-be-processed image includes any one of a photographed image, a preview image, or a video frame image.
  • the image that has been photographed refers to an image obtained by performing a photographing operation, such as a photo.
  • Step 13 Receive a second input from the user.
  • the second input may be an instruction manually input by the user or a voice instruction, which is not limited herein.
  • Step 14 In response to the second input, determine an image of a person to be processed in the images to be processed.
  • the number of person images to be processed may be at least one.
  • the to-be-processed image where the target reference person image is located and the to-be-processed image where the to-be-processed person image is located are the same image or different images.
  • Step 15 Acquire makeup information of the target reference person according to the target reference person image.
  • Existing techniques may be used to obtain the makeup information, which will not be repeated here.
  • Step 16 Process the person image to be processed according to the makeup information to obtain a processed person image.
  • the character makeup in the processed character image is similar to, or even consistent with, the character makeup in the target reference character image.
  • the method further includes: in the case of receiving the third input on the image to be processed, performing face detection on the image to be processed to obtain a candidate object image; corresponding , the determining the target reference person image in the image to be processed in response to the first input includes: in response to the first input, determining the target reference person image from the candidate person image; The determining, in response to the second input, the image of the person to be processed in the image to be processed includes: in response to the second input, determining the image of the person to be processed from the candidate person images.
  • the third input may be an input performed through a first function key, and the first input and the second input may be a preset click operation on the screen of the electronic device.
  • the method further includes: receiving a fourth input from the user; and saving the processing in response to the fourth input post image.
  • the to-be-processed image including the target reference person image and the processed person image may be saved.
  • The image processing method is further described below, taking a captured group photo as an example of the image to be processed.
  • The embodiments of the present application provide an image processing method that can be specifically implemented as a method for sharing and migrating makeup within a group photo of multiple people: face recognition is performed on the captured (group) photo; after recognition, the user marks one makeup artist (that is, the above target reference person image) and the other makeup recipients (that is, the above person images to be processed); the user is then supported in "confirming the migration", after which the makeup recipients have exactly the same makeup as the makeup artist.
  • The makeup within the overall picture is thus relatively uniform, which avoids embarrassment for those photographed without makeup and disharmony among the group as a whole. In other words, this solution makes full use of the combination of face detection and makeup migration capabilities, and supports the user in migrating and sharing makeup within the same picture.
  • FIG. 2 the solution provided by the embodiment of the present application may be shown in FIG. 2 , including:
  • Step 21 After the electronic device has finished taking the photo, it displays the captured photo upon receiving an opening instruction from the user; upon receiving an editing instruction from the user, it displays the available editing buttons, as shown in FIG. 3; and after receiving a "makeup sharing" instruction (corresponding to the above third input), makeup sharing detection is started.
  • Step 22 After enabling the makeup sharing detection, firstly detect the human face in the current picture (ie, the displayed photo), as shown in FIG. 4 .
  • Step 23 After the detection is completed, options for selecting the makeup artist and the makeup recipient are provided, as shown in FIG. 5 and FIG. 6 (the face detection results, such as how many faces were found, can also be displayed); a selection instruction from the user is received, and the makeup artist and the makeup recipient are determined (that is, in response to the first input, the target reference person image is determined from the candidate person images; and in response to the second input, the person image to be processed is determined from the candidate person images).
  • Specifically, the face selected while a given option is active is the face for that option; for example, if the makeup artist option is active and the face on the left side of the picture is selected, that face is the makeup artist's face (see FIG. 5).
  • the user is supported to select the makeup artist's face, select the makeup person's face, and the makeup of the makeup artist's face will be transferred (copied) to the makeup person's face.
  • Step 24 After the migration effect is determined, it can be directly determined to save, and the final imaging effect is the uniform makeup effect, as shown in Figure 7.
  • The solutions provided by the embodiments of the present application can help users transfer another person's makeup in a group photo onto their own face or the faces of other people without makeup, so that the makeup is uniform across the overall result; this saves the time cost of multiple people putting on makeup before taking pictures.
  • It also provides users with a good made-up photo-taking experience: apart from those who do not need makeup, everyone in the finished photo can have uniform makeup, avoiding the unflattering situation of a user appearing without makeup in a group photo with others.
  • The solutions provided by the embodiments of the present application can also be applied while pictures are being taken.
  • During shooting, the makeup of a certain makeup artist is directly migrated and shared to the other faces in the picture; or, applied to a video media stream, the makeup of a certain person in the video is migrated and shared to other people's faces for a playful effect, which is not limited here.
  • In summary, the image processing method receives a first input from the user; determines a target reference person image in the image to be processed in response to the first input; receives a second input from the user; determines a person image to be processed in the image to be processed in response to the second input; obtains makeup information of the target reference person according to the target reference person image; and processes the person image to be processed according to the makeup information to obtain a processed person image.
  • the execution subject may be an image processing apparatus, or a control module in the image processing apparatus for executing the image processing method.
  • the image processing apparatus provided by the embodiments of the present application is described by taking an image processing apparatus executing an image processing method as an example.
  • the embodiment of the present application also provides an image processing apparatus, as shown in FIG. 8 , including:
  • the first receiving module 81 is used for receiving the first input of the user
  • a first determination module 82 configured to determine the target reference person image in the image to be processed in response to the first input
  • the second receiving module 83 is configured to receive the second input of the user
  • a second determination module 84 configured to determine the image of the person to be processed in the image to be processed in response to the second input
  • the first obtaining module 85 is configured to obtain makeup information of the target reference character according to the target reference character image
  • the first processing module 86 is configured to process the to-be-processed character image according to the makeup information to obtain a processed character image.
  • the image processing apparatus further includes: a first detection module, configured to, before receiving the first input from the user, in the case of receiving the third input of the image to be processed, detect the image to be processed. Process the image to perform face detection to obtain a candidate object image; correspondingly, the first determination module includes: a first determination sub-module for, in response to the first input, from the candidate object image, determine the target reference person image; the second determination module includes: a second determination sub-module for determining the to-be-processed person image from the candidate person images in response to the second input.
  • the to-be-processed image includes any one of a photographed image, a preview image, or a video frame image.
  • the to-be-processed image where the target reference person image is located and the to-be-processed image where the to-be-processed person image is located are the same image or different images.
  • Further, the image processing device further includes: a third receiving module, configured to receive a fourth input from the user after the person image to be processed is processed according to the makeup information to obtain the processed person image; and a first saving module, configured to save the processed person image in response to the fourth input.
  • The image processing apparatus receives a first input from the user; determines a target reference person image in the image to be processed in response to the first input; receives a second input from the user; determines a person image to be processed in the image to be processed in response to the second input; obtains makeup information of the target reference person according to the target reference person image; and processes the person image to be processed according to the makeup information to obtain a processed person image.
  • It thereby realizes a method for sharing and migrating makeup within a group photo of multiple people, making image processing more intelligent and the makeup within the overall picture more uniform, and avoiding embarrassment for those photographed without makeup and disharmony among the group as a whole.
  • This improves the user experience and solves the problem of poor user experience in existing image processing solutions.
  • the image processing apparatus in this embodiment of the present application may be an apparatus, or may be a component, an integrated circuit, or a chip in a terminal.
  • the apparatus may be a mobile electronic device or a non-mobile electronic device.
  • The mobile electronic device may be a mobile phone, a tablet computer, a notebook computer, a palmtop computer, an in-vehicle electronic device, a wearable device, an ultra-mobile personal computer (UMPC), a netbook, or a personal digital assistant (PDA).
  • Non-mobile electronic devices may be a server, a network attached storage (NAS), a personal computer (PC), a television (TV), a teller machine, a self-service machine, or the like, which is not specifically limited in the embodiments of the present application.
  • the image processing apparatus in this embodiment of the present application may be an apparatus having an operating system.
  • the operating system may be an Android (Android) operating system, an iOS operating system, or other possible operating systems, which are not specifically limited in the embodiments of the present application.
  • the image processing apparatus provided in this embodiment of the present application can implement each process implemented by the method embodiments in FIG. 1 to FIG. 7 , and to avoid repetition, details are not repeated here.
  • As shown in FIG. 9, an embodiment of the present application further provides an electronic device 90, including a processor 91, a memory 92, and a program or instruction stored in the memory 92 and executable on the processor 91.
  • When the program or instruction is executed by the processor 91, each process of the above image processing method embodiments is implemented, and the same technical effect can be achieved; to avoid repetition, details are not described here.
  • the electronic devices in the embodiments of the present application include the aforementioned mobile electronic devices and non-mobile electronic devices.
  • FIG. 10 is a schematic diagram of a hardware structure of an electronic device implementing an embodiment of the present application.
  • The electronic device 100 includes, but is not limited to, components such as a radio frequency unit 101, a network module 102, an audio output unit 103, an input unit 104, a sensor 105, a display unit 106, a user input unit 107, an interface unit 108, a memory 109, and a processor 1010.
  • The electronic device 100 may also include a power supply (such as a battery) for supplying power to the various components; the power supply may be logically connected to the processor 1010 through a power management system, so that functions such as charging, discharging, and power consumption management are handled through the power management system.
  • The structure of the electronic device shown in FIG. 10 does not constitute a limitation on the electronic device; the electronic device may include more or fewer components than shown, combine certain components, or arrange the components differently, which is not repeated here.
  • the processor 1010 is configured to receive the user's first input through the user input unit 107; in response to the first input, determine the target reference person image in the image to be processed; and receive the user's second input through the user input unit 107 ; in response to the second input, determine the image of the person to be processed in the image to be processed; according to the target reference person image, obtain the makeup information of the target reference person; according to the makeup information, to the person to be processed The image is processed to obtain the processed person image.
  • A first input from a user is received; in response to the first input, a target reference person image in an image to be processed is determined; a second input from the user is received; in response to the second input, a person image to be processed in the image to be processed is determined; makeup information of the target reference person is obtained according to the target reference person image; and the person image to be processed is processed according to the makeup information to obtain a processed person image.
  • A method for sharing and migrating makeup within a group photo of multiple people is thus implemented, which makes image processing more intelligent and the makeup within the overall picture more uniform, avoids embarrassment for those photographed without makeup and disharmony among the group as a whole, improves the user experience, and solves the problem of poor user experience in existing image processing solutions.
  • Optionally, the processor 1010 is further configured to, before the first input from the user is received, and in a case where a third input on the image to be processed is received through the user input unit 107, perform face detection on the image to be processed to obtain candidate person images.
  • Correspondingly, the processor 1010 is specifically configured to: in response to the first input, determine the target reference person image from the candidate person images; and in response to the second input, determine the person image to be processed from the candidate person images.
  • the to-be-processed image includes: any one of a photographed image, a preview image, or a video frame image.
  • the to-be-processed image where the target reference person image is located and the to-be-processed image where the to-be-processed person image is located are the same image or different images.
  • the processor 1010 is further configured to, after processing the to-be-processed character image according to the makeup information to obtain the processed character image, receive a fourth input from the user through the user input unit 107; in response to The fourth input is to save the processed person image.
  • the solutions provided by the embodiments of the present application can help users to transfer other people's makeup in the group photo to their own or other people without makeup, so as to achieve uniform makeup in the overall shooting effect; this can save the time cost of multiple people taking pictures and makeup, In addition, it can provide users with a good photo-taking experience with makeup on. Except for those who do not need makeup, other people in the finished film can have uniform makeup to avoid the unfavorable image of users taking photos with others without makeup.
  • It should be understood that the input unit 104 may include a graphics processing unit (GPU) 1041 and a microphone 1042; the graphics processing unit 1041 processes image data of still pictures or videos obtained by an image capture device (such as a camera) in a video capture mode or an image capture mode.
  • the display unit 106 may include a display panel 1061, which may be configured in the form of a liquid crystal display, an organic light emitting diode, or the like.
  • the user input unit 107 includes a touch panel 1071 and other input devices 1072 .
  • the touch panel 1071 is also called a touch screen.
  • the touch panel 1071 may include two parts, a touch detection device and a touch controller.
  • Other input devices 1072 may include, but are not limited to, physical keyboards, function keys (such as volume control keys, switch keys, etc.), trackballs, mice, and joysticks, which are not described herein again.
  • Memory 109 may be used to store software programs as well as various data including, but not limited to, application programs and operating systems.
  • the processor 1010 may integrate an application processor and a modem processor, wherein the application processor mainly processes the operating system, user interface, and application programs, and the like, and the modem processor mainly processes wireless communication. It can be understood that, the above-mentioned modulation and demodulation processor may not be integrated into the processor 1010.
  • Embodiments of the present application further provide a readable storage medium storing a program or instruction; when the program or instruction is executed by a processor, each process of the above image processing method embodiments is implemented, and the same technical effect can be achieved.
  • the processor is the processor in the electronic device described in the foregoing embodiments.
  • the readable storage medium includes a computer-readable storage medium, such as a computer read-only memory (Read-Only Memory, ROM), a random access memory (Random Access Memory, RAM), a magnetic disk or an optical disk, and the like.
  • An embodiment of the present application further provides a chip, where the chip includes a processor and a communication interface, the communication interface is coupled to the processor, and the processor is configured to run a program or an instruction to implement the above image processing method embodiments.
  • It should be understood that the chip mentioned in the embodiments of the present application may also be referred to as a system-level chip, a system chip, a chip system, a system-on-chip, or the like.
  • An embodiment of the present application further provides a communication device, the communication device including a processor, a memory, and a program or instruction stored in the memory and executable on the processor, where the program or instruction, when executed by the processor, implements the steps of the above image processing method.
  • the embodiments of the present application further provide a computer program product, the computer program product is stored in a non-volatile storage medium, and the computer program product is configured to be executed by at least one processor to implement the steps of the above image processing method , in order to avoid repetition, it will not be repeated here.
  • The methods of the above embodiments can be implemented by means of software plus a necessary general-purpose hardware platform, and of course also by hardware, but in many cases the former is the better implementation.
  • the technical solution of the present application can be embodied in the form of a software product in essence or in a part that contributes to the prior art, and the computer software product is stored in a storage medium (such as ROM/RAM, magnetic disk, CD-ROM), including several instructions to make a terminal (which may be a mobile phone, a computer, a server, an air conditioner, or a network device, etc.) execute the methods described in the various embodiments of this application.

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Image Processing (AREA)

Abstract

The present application discloses an image processing method and apparatus, belonging to the technical field of communications. The method includes: receiving a first input from a user; in response to the first input, determining a target reference person image in an image to be processed; receiving a second input from the user; in response to the second input, determining a person image to be processed in the image to be processed; obtaining makeup information of the target reference person according to the target reference person image; and processing the person image to be processed according to the makeup information to obtain a processed person image.

Description

Image processing method and apparatus
Cross-Reference to Related Application
This application claims priority to Chinese Patent Application No. 202011606748.8, filed in China on December 30, 2020, the entire contents of which are incorporated herein by reference.
Technical Field
The present application belongs to the technical field of communication devices, and in particular relates to an image processing method and apparatus.
Background
At present, to meet user needs, a beautification effect is usually enabled when taking a photo, and the effect is applied to every face that can be detected. However, when all detectable faces in a picture are made up in this way, the result is good for people in the group photo who were not originally wearing makeup, while the makeup of those who were already made up becomes heavier. The existing photo beautification approach therefore produces a large gap between people with and without makeup, and the user experience is poor.
Summary
The purpose of the embodiments of the present application is to provide an image processing method and apparatus, which can solve the problem of poor user experience in existing image processing solutions.
To solve the above technical problem, the present application is implemented as follows.
In a first aspect, an embodiment of the present application provides an image processing method, the method including:
receiving a first input from a user;
in response to the first input, determining a target reference person image in an image to be processed;
receiving a second input from the user;
in response to the second input, determining a person image to be processed in the image to be processed;
obtaining makeup information of a target reference person according to the target reference person image; and
processing the person image to be processed according to the makeup information to obtain a processed person image.
In a second aspect, an embodiment of the present application provides an image processing apparatus, the apparatus including:
a first receiving module, configured to receive a first input from a user;
a first determination module, configured to determine, in response to the first input, a target reference person image in an image to be processed;
a second receiving module, configured to receive a second input from the user;
a second determination module, configured to determine, in response to the second input, a person image to be processed in the image to be processed;
a first obtaining module, configured to obtain makeup information of a target reference person according to the target reference person image; and
a first processing module, configured to process the person image to be processed according to the makeup information to obtain a processed person image.
In a third aspect, an embodiment of the present application provides an electronic device, the electronic device including a processor, a memory, and a program or instruction stored in the memory and executable on the processor, where the program or instruction, when executed by the processor, implements the steps of the method according to the first aspect.
In a fourth aspect, an embodiment of the present application provides a readable storage medium storing a program or instruction, where the program or instruction, when executed by a processor, implements the steps of the method according to the first aspect.
In a fifth aspect, an embodiment of the present application provides a chip including a processor and a communication interface, where the communication interface is coupled to the processor, and the processor is configured to run a program or instruction to implement the method according to the first aspect.
In a sixth aspect, a communication device is provided, including a processor, a memory, and a program or instruction stored in the memory and executable on the processor, where the program or instruction, when executed by the processor, implements the method according to the first aspect.
In a seventh aspect, a computer program product is provided, where the program product is stored in a non-volatile storage medium, and the program product is executed by at least one processor to implement the method according to the first aspect.
In the embodiments of the present application, a first input from a user is received; in response to the first input, a target reference person image in an image to be processed is determined; a second input from the user is received; in response to the second input, a person image to be processed in the image to be processed is determined; makeup information of the target reference person is obtained according to the target reference person image; and the person image to be processed is processed according to the makeup information to obtain a processed person image. This implements a method of sharing and migrating makeup within a group photo of multiple people, so that image processing is more intelligent and the makeup within the overall picture is more uniform, avoiding embarrassment for those photographed without makeup and disharmony among the group as a whole, improving the user experience and well solving the problem of poor user experience in existing image processing solutions.
Brief Description of the Drawings
FIG. 1 is a schematic flowchart of an image processing method according to an embodiment of the present application;
FIG. 2 is a schematic flowchart of a specific implementation of the image processing method according to an embodiment of the present application;
FIG. 3 is a first schematic diagram of image processing according to an embodiment of the present application;
FIG. 4 is a second schematic diagram of image processing according to an embodiment of the present application;
FIG. 5 is a third schematic diagram of image processing according to an embodiment of the present application;
FIG. 6 is a fourth schematic diagram of image processing according to an embodiment of the present application;
FIG. 7 is a fifth schematic diagram of image processing according to an embodiment of the present application;
FIG. 8 is a schematic structural diagram of an image processing apparatus according to an embodiment of the present application;
FIG. 9 is a first schematic structural diagram of an electronic device according to an embodiment of the present application;
FIG. 10 is a second schematic structural diagram of an electronic device according to an embodiment of the present application.
Detailed Description
The technical solutions in the embodiments of the present application will be described below clearly and completely with reference to the accompanying drawings. Obviously, the described embodiments are only some rather than all of the embodiments of the present application. All other embodiments obtained by a person of ordinary skill in the art based on the embodiments of the present application without creative effort shall fall within the protection scope of the present application.
The terms "first", "second" and the like in the description and claims of the present application are used to distinguish between similar objects, and are not used to describe a specific order or sequence. It should be understood that data used in this way are interchangeable under appropriate circumstances, so that the embodiments of the present application can be implemented in orders other than those illustrated or described herein. Objects distinguished by "first", "second" and the like are usually of one type, and the number of such objects is not limited; for example, a first object may be one object or multiple objects. In addition, "and/or" in the description and claims indicates at least one of the connected objects, and the character "/" generally indicates an "or" relationship between the associated objects.
The image processing method provided by the embodiments of the present application is described in detail below through specific embodiments and application scenarios with reference to the accompanying drawings.
The image processing method provided by the embodiments of the present application is applied to an electronic device. As shown in FIG. 1, the method includes:
Step 11: Receive a first input from a user.
The first input may be an instruction entered manually by the user or a voice instruction, which is not limited here.
Step 12: In response to the first input, determine a target reference person image in an image to be processed.
The image to be processed includes any one of a captured image, a preview image, or a video frame image. A captured image refers to an image obtained after a shooting operation has been completed, such as a photo.
Step 13: Receive a second input from the user.
The second input may be an instruction entered manually by the user or a voice instruction, which is not limited here.
Step 14: In response to the second input, determine a person image to be processed in the image to be processed.
The number of person images to be processed may be at least one. The image to be processed in which the target reference person image is located and the image to be processed in which the person image to be processed is located may be the same image or different images.
Step 15: Obtain makeup information of the target reference person according to the target reference person image.
Existing techniques may be used to obtain the makeup information, which is not repeated here.
Step 16: Process the person image to be processed according to the makeup information to obtain a processed person image.
The makeup of the person in the processed person image is similar to, or even identical to, the makeup of the person in the target reference person image.
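To make the sequence of Steps 11 to 16 concrete, the minimal sketch below strings the steps together; it is not the patent's implementation, and the helper callables for face detection, makeup extraction, and makeup application are hypothetical placeholders for whatever techniques are actually used.

```python
# Minimal sketch of Steps 11-16. All helpers passed in are hypothetical
# placeholders, not the embodiments' actual implementation.
def process_group_photo(image, reference_index, target_indices,
                        detect_faces, extract_makeup, apply_makeup):
    """Transfer the makeup of one detected face onto the selected other faces."""
    faces = detect_faces(image)                       # candidate person images

    # Steps 11-12: the first input selects the target reference person image.
    reference_face = faces[reference_index]

    # Steps 13-14: the second input selects one or more person images to process.
    targets = [faces[i] for i in target_indices]

    # Step 15: obtain the makeup information of the target reference person.
    makeup_info = extract_makeup(reference_face)

    # Step 16: process each selected person image with that makeup information.
    return [apply_makeup(face, makeup_info) for face in targets]
```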
Further, before the first input from the user is received, the method further includes: in a case where a third input on the image to be processed is received, performing face detection on the image to be processed to obtain candidate person images. Correspondingly, determining the target reference person image in the image to be processed in response to the first input includes: in response to the first input, determining the target reference person image from the candidate person images; and determining the person image to be processed in the image to be processed in response to the second input includes: in response to the second input, determining the person image to be processed from the candidate person images.
In this way, the user's personalized needs can be met in a more user-friendly manner.
The third input may be an input performed through a first function key, and the first input and the second input may be preset tap operations on the screen of the electronic device.
For example, as shown in FIG. 4, after an input on the "makeup sharing" function key (corresponding to the above third input) is received, face detection is performed; after the detection is completed, as shown in FIG. 5, in response to the first input, the "makeup artist" (corresponding to the above target reference person image) is selected from the two faces; and as shown in FIG. 6, in response to the second input, the "makeup recipient" (corresponding to the above person image to be processed) is selected from the two faces.
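The embodiments do not prescribe a particular face detector. Purely as an illustration, the candidate person images could be obtained with an off-the-shelf detector such as OpenCV's Haar cascade, roughly as sketched below; the function name is an assumption for this example.

```python
# One possible way to obtain the candidate face images; only illustrative, since
# the embodiments leave the choice of face detector open.
import cv2

def detect_candidate_faces(image_bgr):
    cascade_path = cv2.data.haarcascades + "haarcascade_frontalface_default.xml"
    detector = cv2.CascadeClassifier(cascade_path)
    gray = cv2.cvtColor(image_bgr, cv2.COLOR_BGR2GRAY)
    boxes = detector.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
    # Return the cropped face regions plus their positions, so the interface can
    # let the user tap one face as the makeup artist and others as recipients.
    return [(image_bgr[y:y + h, x:x + w], (x, y, w, h)) for (x, y, w, h) in boxes]
```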
Further, after the person image to be processed is processed according to the makeup information to obtain the processed person image, the method further includes: receiving a fourth input from the user; and in response to the fourth input, saving the processed person image.
Specifically, the image to be processed containing the target reference person image and the processed person image may be saved.
In this way, an image with relatively uniform makeup across the whole picture can be saved for the user.
The image processing method provided by the embodiments of the present application is further described below, taking a captured group photo as the image to be processed.
In view of the above technical problem, the embodiments of the present application provide an image processing method, which can be specifically implemented as a method for sharing and migrating makeup within a group photo of multiple people: face recognition is performed on the captured (group) photo; after recognition, the user marks one makeup artist (i.e., the above target reference person image) and the other makeup recipients (i.e., the above person images to be processed); the user is then supported in "confirming the migration", after which the makeup recipients have exactly the same makeup as the makeup artist, and the makeup within the overall picture is relatively uniform. This avoids embarrassment for those photographed without makeup and disharmony among the group as a whole. In other words, this solution makes full use of the combination of face detection and makeup migration capabilities, and supports the user in migrating and sharing makeup within the same picture.
Specifically, the solution provided by the embodiments of the present application may be as shown in FIG. 2 and includes:
Step 21: After the electronic device finishes taking a photo, it displays the captured photo upon receiving an opening instruction from the user; upon receiving an editing instruction from the user, it displays the available editing buttons, as shown in FIG. 3; and upon receiving a "makeup sharing" instruction (corresponding to the above third input), it starts makeup sharing detection.
That is, after taking a photo, the user opens the photo just taken and taps "makeup sharing" to start makeup sharing detection.
Step 22: After makeup sharing detection is started, the faces in the current picture (i.e., the displayed photo) are first detected, as shown in FIG. 4.
Step 23: After the detection is completed, options for selecting the makeup artist and the makeup recipient are provided, as shown in FIG. 5 and FIG. 6 (the face detection results, such as how many faces were found, may also be displayed); a selection instruction from the user is received, and the makeup artist and the makeup recipient are determined (i.e., in response to the first input, the target reference person image is determined from the candidate person images; and in response to the second input, the person image to be processed is determined from the candidate person images). Specifically, the face selected while a given option is active is the face for that option; for example, if the makeup artist option is active and the face on the left side of the picture is selected, that face is the makeup artist's face (see FIG. 5).
That is, the user is supported in selecting the makeup artist's face and the makeup recipient's face, and the makeup on the makeup artist's face is migrated (copied) onto the makeup recipient's face.
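Actual makeup migration typically relies on facial-landmark alignment and dedicated makeup-transfer models, which the embodiments leave open. Purely as a deliberately simplified stand-in that conveys the idea of "copying" a look, the sketch below only matches the colour statistics of the recipient's face crop to the makeup artist's face crop (a Reinhard-style transfer); it is not the claimed technique.

```python
# Deliberately simplified stand-in for makeup transfer: match the colour
# statistics of the recipient face crop to the reference ("makeup artist") face.
# Real makeup migration would use landmark alignment and a dedicated model.
import cv2
import numpy as np

def transfer_face_color(reference_face_bgr, target_face_bgr):
    ref = cv2.cvtColor(reference_face_bgr, cv2.COLOR_BGR2LAB).astype(np.float32)
    tgt = cv2.cvtColor(target_face_bgr, cv2.COLOR_BGR2LAB).astype(np.float32)

    ref_mean, ref_std = ref.mean(axis=(0, 1)), ref.std(axis=(0, 1)) + 1e-6
    tgt_mean, tgt_std = tgt.mean(axis=(0, 1)), tgt.std(axis=(0, 1)) + 1e-6

    # Shift and scale each L, a, b channel of the target towards the reference statistics.
    out = (tgt - tgt_mean) / tgt_std * ref_std + ref_mean
    out = np.clip(out, 0, 255).astype(np.uint8)
    return cv2.cvtColor(out, cv2.COLOR_LAB2BGR)
```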
Step 24: After the migration effect is confirmed, saving can be confirmed directly, and the final image has a uniform makeup effect, as shown in FIG. 7.
Specifically, after the user's instruction confirming the makeup artist and the makeup recipient is received, the photo obtained by processing the makeup recipient's face according to the makeup artist is displayed; after a saving instruction is received, the processed photo is saved (corresponding to saving the processed person image in response to the fourth input described above).
As can be seen from the above, the solution provided by the embodiments of the present application can help a user migrate another person's makeup in a group photo onto the user's own face or the faces of other people without makeup, so that the makeup is uniform across the overall result. This saves the time cost of multiple people putting on makeup before taking a photo, and also gives users a good made-up photo-taking experience: apart from those who do not need makeup, everyone in the finished photo can have uniform makeup, avoiding the unflattering situation of a user appearing without makeup in a group photo with others.
It should be noted that the solution provided by the embodiments of the present application can also be applied during shooting, where the makeup of a certain makeup artist is directly migrated and shared to the other faces in the photo while the photo is being taken; or it can be applied to a video media stream, where the makeup of a certain person in the video is migrated and shared to other people's faces for a playful effect, which is not limited here.
In summary, the image processing method provided by the embodiments of the present application receives a first input from a user; in response to the first input, determines a target reference person image in an image to be processed; receives a second input from the user; in response to the second input, determines a person image to be processed in the image to be processed; obtains makeup information of the target reference person according to the target reference person image; and processes the person image to be processed according to the makeup information to obtain a processed person image. This implements a method of sharing and migrating makeup within a group photo of multiple people, making image processing more intelligent and the makeup within the overall picture more uniform, avoiding embarrassment for those photographed without makeup and disharmony among the group as a whole, improving the user experience and well solving the problem of poor user experience in existing image processing solutions.
It should be noted that, for the image processing method provided by the embodiments of the present application, the execution subject may be an image processing apparatus, or a control module in the image processing apparatus for executing the image processing method. In the embodiments of the present application, the image processing apparatus provided by the embodiments is described by taking an image processing apparatus executing the image processing method as an example.
An embodiment of the present application further provides an image processing apparatus, as shown in FIG. 8, including:
a first receiving module 81, configured to receive a first input from a user;
a first determination module 82, configured to determine, in response to the first input, a target reference person image in an image to be processed;
a second receiving module 83, configured to receive a second input from the user;
a second determination module 84, configured to determine, in response to the second input, a person image to be processed in the image to be processed;
a first obtaining module 85, configured to obtain makeup information of the target reference person according to the target reference person image; and
a first processing module 86, configured to process the person image to be processed according to the makeup information to obtain a processed person image.
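For illustration only, the module decomposition of FIG. 8 could be mirrored in code roughly as follows; the attribute names and callables are hypothetical placeholders standing in for modules 81 to 86, not the apparatus itself.

```python
# Rough structural sketch of the apparatus's module decomposition (FIG. 8).
# Each attribute is a placeholder for one of the six modules 81-86.
from dataclasses import dataclass
from typing import Any, Callable

@dataclass
class ImageProcessingApparatus:
    receive_first_input: Callable[..., Any]    # first receiving module 81
    determine_reference: Callable[..., Any]    # first determination module 82
    receive_second_input: Callable[..., Any]   # second receiving module 83
    determine_targets: Callable[..., Any]      # second determination module 84
    obtain_makeup_info: Callable[..., Any]     # first obtaining module 85
    process_targets: Callable[..., Any]        # first processing module 86

    def run(self, image):
        reference = self.determine_reference(image, self.receive_first_input())
        targets = self.determine_targets(image, self.receive_second_input())
        makeup = self.obtain_makeup_info(reference)
        return self.process_targets(targets, makeup)
```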
Further, the image processing apparatus further includes: a first detection module, configured to, before the first input from the user is received and in a case where a third input on the image to be processed is received, perform face detection on the image to be processed to obtain candidate person images. Correspondingly, the first determination module includes: a first determination sub-module, configured to determine, in response to the first input, the target reference person image from the candidate person images; and the second determination module includes: a second determination sub-module, configured to determine, in response to the second input, the person image to be processed from the candidate person images.
The image to be processed includes any one of a captured image, a preview image, or a video frame image.
In the embodiments of the present application, the image to be processed in which the target reference person image is located and the image to be processed in which the person image to be processed is located may be the same image or different images.
Further, the image processing apparatus further includes: a third receiving module, configured to receive a fourth input from the user after the person image to be processed is processed according to the makeup information to obtain the processed person image; and a first saving module, configured to save the processed person image in response to the fourth input.
The image processing apparatus provided by the embodiments of the present application receives a first input from a user; in response to the first input, determines a target reference person image in an image to be processed; receives a second input from the user; in response to the second input, determines a person image to be processed in the image to be processed; obtains makeup information of the target reference person according to the target reference person image; and processes the person image to be processed according to the makeup information to obtain a processed person image. This implements a method of sharing and migrating makeup within a group photo of multiple people, making image processing more intelligent and the makeup within the overall picture more uniform, avoiding embarrassment for those photographed without makeup and disharmony among the group as a whole, improving the user experience and well solving the problem of poor user experience in existing image processing solutions.
The image processing apparatus in the embodiments of the present application may be an apparatus, or may be a component, an integrated circuit, or a chip in a terminal. The apparatus may be a mobile electronic device or a non-mobile electronic device. For example, the mobile electronic device may be a mobile phone, a tablet computer, a laptop computer, a palmtop computer, an in-vehicle electronic device, a wearable device, an ultra-mobile personal computer (UMPC), a netbook, or a personal digital assistant (PDA), and the non-mobile electronic device may be a server, a network attached storage (NAS), a personal computer (PC), a television (TV), a teller machine, or a self-service machine, which is not specifically limited in the embodiments of the present application.
The image processing apparatus in the embodiments of the present application may be an apparatus having an operating system. The operating system may be an Android operating system, an iOS operating system, or another possible operating system, which is not specifically limited in the embodiments of the present application.
The image processing apparatus provided by the embodiments of the present application can implement each process implemented by the method embodiments of FIG. 1 to FIG. 7; to avoid repetition, details are not repeated here.
Optionally, as shown in FIG. 9, an embodiment of the present application further provides an electronic device 90, including a processor 91, a memory 92, and a program or instruction stored in the memory 92 and executable on the processor 91. When the program or instruction is executed by the processor 91, each process of the above image processing method embodiments is implemented and the same technical effect can be achieved; to avoid repetition, details are not repeated here.
It should be noted that the electronic devices in the embodiments of the present application include the mobile electronic devices and non-mobile electronic devices described above.
FIG. 10 is a schematic diagram of a hardware structure of an electronic device implementing an embodiment of the present application.
The electronic device 100 includes, but is not limited to, components such as a radio frequency unit 101, a network module 102, an audio output unit 103, an input unit 104, a sensor 105, a display unit 106, a user input unit 107, an interface unit 108, a memory 109, and a processor 1010.
Those skilled in the art will understand that the electronic device 100 may further include a power supply (such as a battery) for supplying power to the components; the power supply may be logically connected to the processor 1010 through a power management system, so that functions such as charging, discharging, and power consumption management are implemented through the power management system. The structure of the electronic device shown in FIG. 10 does not constitute a limitation on the electronic device; the electronic device may include more or fewer components than shown, combine certain components, or arrange components differently, which is not repeated here.
The processor 1010 is configured to: receive a first input from a user through the user input unit 107; in response to the first input, determine a target reference person image in an image to be processed; receive a second input from the user through the user input unit 107; in response to the second input, determine a person image to be processed in the image to be processed; obtain makeup information of the target reference person according to the target reference person image; and process the person image to be processed according to the makeup information to obtain a processed person image.
In the embodiments of the present application, a first input from a user is received; in response to the first input, a target reference person image in an image to be processed is determined; a second input from the user is received; in response to the second input, a person image to be processed in the image to be processed is determined; makeup information of the target reference person is obtained according to the target reference person image; and the person image to be processed is processed according to the makeup information to obtain a processed person image. This implements a method of sharing and migrating makeup within a group photo of multiple people, making image processing more intelligent and the makeup within the overall picture more uniform, avoiding embarrassment for those photographed without makeup and disharmony among the group as a whole, improving the user experience and well solving the problem of poor user experience in existing image processing solutions.
Optionally, the processor 1010 is further configured to, before the first input from the user is received, and in a case where a third input on the image to be processed is received through the user input unit 107, perform face detection on the image to be processed to obtain candidate person images.
Correspondingly, the processor 1010 is specifically configured to: in response to the first input, determine the target reference person image from the candidate person images; and in response to the second input, determine the person image to be processed from the candidate person images.
Optionally, the image to be processed includes any one of a captured image, a preview image, or a video frame image.
Optionally, the image to be processed in which the target reference person image is located and the image to be processed in which the person image to be processed is located may be the same image or different images.
Optionally, the processor 1010 is further configured to, after the person image to be processed is processed according to the makeup information to obtain the processed person image, receive a fourth input from the user through the user input unit 107, and save the processed person image in response to the fourth input.
The solution provided by the embodiments of the present application can help a user migrate another person's makeup in a group photo onto the user's own face or the faces of other people without makeup, so that the makeup is uniform across the overall result. This saves the time cost of multiple people putting on makeup before taking a photo, and also gives users a good made-up photo-taking experience: apart from those who do not need makeup, everyone in the finished photo can have uniform makeup, avoiding the unflattering situation of a user appearing without makeup in a group photo with others.
It should be understood that, in the embodiments of the present application, the input unit 104 may include a graphics processing unit (GPU) 1041 and a microphone 1042; the graphics processing unit 1041 processes image data of still pictures or videos obtained by an image capture device (such as a camera) in a video capture mode or an image capture mode. The display unit 106 may include a display panel 1061, which may be configured in the form of a liquid crystal display, an organic light-emitting diode, or the like. The user input unit 107 includes a touch panel 1071 and other input devices 1072. The touch panel 1071 is also called a touch screen and may include two parts: a touch detection device and a touch controller. Other input devices 1072 may include, but are not limited to, a physical keyboard, function keys (such as volume control keys and switch keys), a trackball, a mouse, and a joystick, which are not repeated here. The memory 109 may be used to store software programs and various data, including but not limited to application programs and an operating system. The processor 1010 may integrate an application processor and a modem processor, where the application processor mainly handles the operating system, user interface, and application programs, and the modem processor mainly handles wireless communication. It can be understood that the modem processor may alternatively not be integrated into the processor 1010.
An embodiment of the present application further provides a readable storage medium storing a program or instruction. When the program or instruction is executed by a processor, each process of the above image processing method embodiments is implemented and the same technical effect can be achieved; to avoid repetition, details are not repeated here.
The processor is the processor in the electronic device described in the above embodiments. The readable storage medium includes a computer-readable storage medium, such as a computer read-only memory (ROM), a random access memory (RAM), a magnetic disk, or an optical disc.
An embodiment of the present application further provides a chip, including a processor and a communication interface, where the communication interface is coupled to the processor, and the processor is configured to run a program or instruction to implement each process of the above image processing method embodiments and achieve the same technical effect; to avoid repetition, details are not repeated here.
It should be understood that the chip mentioned in the embodiments of the present application may also be referred to as a system-level chip, a system chip, a chip system, a system-on-chip, or the like.
An embodiment of the present application further provides a communication device, including a processor, a memory, and a program or instruction stored in the memory and executable on the processor, where the program or instruction, when executed by the processor, implements the steps of the above image processing method; to avoid repetition, details are not repeated here.
An embodiment of the present application further provides a computer program product, where the computer program product is stored in a non-volatile storage medium, and the computer program product is configured to be executed by at least one processor to implement the steps of the above image processing method; to avoid repetition, details are not repeated here.
It should be noted that, in this document, the terms "comprising", "including", or any other variations thereof are intended to cover a non-exclusive inclusion, so that a process, method, article, or apparatus that includes a series of elements includes not only those elements but also other elements not explicitly listed, or also includes elements inherent to such a process, method, article, or apparatus. Without further limitation, an element defined by the phrase "including a ..." does not exclude the presence of additional identical elements in the process, method, article, or apparatus that includes that element. In addition, it should be pointed out that the scope of the methods and apparatuses in the embodiments of the present application is not limited to performing the functions in the order shown or discussed; the functions may also be performed substantially simultaneously or in the reverse order according to the functions involved. For example, the described methods may be performed in an order different from that described, and various steps may also be added, omitted, or combined. In addition, features described with reference to certain examples may be combined in other examples.
From the description of the above embodiments, those skilled in the art can clearly understand that the methods of the above embodiments may be implemented by means of software plus a necessary general-purpose hardware platform, or of course by hardware, but in many cases the former is the better implementation. Based on such an understanding, the technical solution of the present application, in essence or the part contributing to the prior art, may be embodied in the form of a software product. The computer software product is stored in a storage medium (such as a ROM/RAM, a magnetic disk, or an optical disc) and includes several instructions to cause a terminal (which may be a mobile phone, a computer, a server, an air conditioner, a network device, or the like) to execute the methods described in the embodiments of the present application.
The embodiments of the present application have been described above with reference to the accompanying drawings, but the present application is not limited to the above specific implementations. The above specific implementations are merely illustrative rather than restrictive. Inspired by the present application, a person of ordinary skill in the art may make many other forms without departing from the spirit of the present application and the protection scope of the claims, all of which fall within the protection of the present application.

Claims (15)

  1. An image processing method, comprising:
    receiving a first input from a user;
    in response to the first input, determining a target reference person image in an image to be processed;
    receiving a second input from the user;
    in response to the second input, determining a person image to be processed in the image to be processed;
    obtaining makeup information of a target reference person according to the target reference person image; and
    processing the person image to be processed according to the makeup information to obtain a processed person image.
  2. The image processing method according to claim 1, wherein before receiving the first input from the user, the method further comprises:
    in a case where a third input on the image to be processed is received, performing face detection on the image to be processed to obtain candidate person images;
    wherein determining the target reference person image in the image to be processed in response to the first input comprises:
    in response to the first input, determining the target reference person image from the candidate person images; and
    determining the person image to be processed in the image to be processed in response to the second input comprises:
    in response to the second input, determining the person image to be processed from the candidate person images.
  3. The image processing method according to claim 1, wherein the image to be processed comprises any one of a captured image, a preview image, or a video frame image.
  4. The image processing method according to any one of claims 1-3, wherein the image to be processed in which the target reference person image is located and the image to be processed in which the person image to be processed is located are the same image or different images.
  5. The image processing method according to claim 1, wherein after processing the person image to be processed according to the makeup information to obtain the processed person image, the method further comprises:
    receiving a fourth input from the user; and
    in response to the fourth input, saving the processed person image.
  6. An image processing apparatus, comprising:
    a first receiving module, configured to receive a first input from a user;
    a first determination module, configured to determine, in response to the first input, a target reference person image in an image to be processed;
    a second receiving module, configured to receive a second input from the user;
    a second determination module, configured to determine, in response to the second input, a person image to be processed in the image to be processed;
    a first obtaining module, configured to obtain makeup information of a target reference person according to the target reference person image; and
    a first processing module, configured to process the person image to be processed according to the makeup information to obtain a processed person image.
  7. The image processing apparatus according to claim 6, further comprising:
    a first detection module, configured to, before the first input from the user is received and in a case where a third input on the image to be processed is received, perform face detection on the image to be processed to obtain candidate person images;
    wherein the first determination module comprises:
    a first determination sub-module, configured to determine, in response to the first input, the target reference person image from the candidate person images; and
    the second determination module comprises:
    a second determination sub-module, configured to determine, in response to the second input, the person image to be processed from the candidate person images.
  8. The image processing apparatus according to claim 6, wherein the image to be processed comprises any one of a captured image, a preview image, or a video frame image.
  9. The image processing apparatus according to any one of claims 6-8, wherein the image to be processed in which the target reference person image is located and the image to be processed in which the person image to be processed is located are the same image or different images.
  10. The image processing apparatus according to claim 6, further comprising:
    a third receiving module, configured to receive a fourth input from the user after the person image to be processed is processed according to the makeup information to obtain the processed person image; and
    a first saving module, configured to save the processed person image in response to the fourth input.
  11. An electronic device, comprising a processor, a memory, and a program or instruction stored in the memory and executable on the processor, wherein the program or instruction, when executed by the processor, implements the steps of the image processing method according to any one of claims 1 to 5.
  12. A readable storage medium, wherein the readable storage medium stores a program or instruction, and the program or instruction, when executed by a processor, implements the steps of the image processing method according to any one of claims 1 to 5.
  13. A communication device, configured to perform the steps of the image processing method according to any one of claims 1 to 5.
  14. A chip, comprising a processor and a communication interface, wherein the communication interface is coupled to the processor, and the processor is configured to run a program or instruction to implement the steps of the image processing method according to any one of claims 1 to 5.
  15. A computer program product, wherein the program product is stored in a non-volatile storage medium, and the program product is executed by at least one processor to implement the steps of the image processing method according to any one of claims 1 to 5.
PCT/CN2021/140738 2020-12-30 2021-12-23 Image processing method and apparatus WO2022143382A1 (zh)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN202011606748.8 2020-12-30
CN202011606748.8A CN112734661A (zh) 2020-12-30 2020-12-30 图像处理方法及装置

Publications (1)

Publication Number Publication Date
WO2022143382A1 true WO2022143382A1 (zh) 2022-07-07

Family

ID=75610759

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2021/140738 WO2022143382A1 (zh) 2020-12-30 2021-12-23 图像处理方法及装置

Country Status (2)

Country Link
CN (1) CN112734661A (zh)
WO (1) WO2022143382A1 (zh)

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112734661A (zh) * 2020-12-30 2021-04-30 维沃移动通信有限公司 图像处理方法及装置
CN114143454B (zh) * 2021-11-19 2023-11-03 维沃移动通信有限公司 拍摄方法、装置、电子设备及可读存储介质

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108509846A (zh) * 2018-02-09 2018-09-07 腾讯科技(深圳)有限公司 Image processing method and apparatus, computer device, and storage medium
US10235730B1 (en) * 2013-05-20 2019-03-19 Visualmits, Llc Casino table games with interactive content
CN109712090A (zh) * 2018-12-18 2019-05-03 维沃移动通信有限公司 Image processing method and apparatus, and mobile terminal
CN111756995A (zh) * 2020-06-17 2020-10-09 维沃移动通信有限公司 Image processing method and apparatus
CN112734661A (zh) * 2020-12-30 2021-04-30 维沃移动通信有限公司 Image processing method and apparatus

Family Cites Families (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105657249A (zh) * 2015-12-16 2016-06-08 东莞酷派软件技术有限公司 Image processing method and user terminal
WO2018133305A1 (zh) * 2017-01-19 2018-07-26 华为技术有限公司 Image processing method and apparatus
CN107948506A (zh) * 2017-11-22 2018-04-20 珠海格力电器股份有限公司 Image processing method and apparatus, and electronic device

Also Published As

Publication number Publication date
CN112734661A (zh) 2021-04-30

Similar Documents

Publication Publication Date Title
US10565763B2 (en) Method and camera device for processing image
WO2022063023A1 (zh) 视频拍摄方法、视频拍摄装置及电子设备
WO2022143382A1 (zh) 图像处理方法及装置
WO2022012657A1 (zh) 图像编辑方法、装置和电子设备
WO2022166944A1 (zh) 拍照方法、装置、电子设备及介质
US20210343070A1 (en) Method, apparatus and electronic device for processing image
WO2022156766A1 (zh) 拍照方法、装置及电子设备
WO2020134558A1 (zh) 图像处理方法、装置、电子设备及存储介质
WO2022068806A1 (zh) 图像处理方法、装置及电子设备
WO2022089568A1 (zh) 文件分享的方法、装置和电子设备
WO2022089479A1 (zh) 拍照方法、装置及电子设备
WO2023072156A1 (zh) 一种拍摄方法、拍摄装置、电子设备和可读存储介质
WO2022111458A1 (zh) 图像拍摄方法和装置、电子设备及存储介质
WO2022143971A1 (zh) 一种视频处理方法、装置和电子设备
WO2022206582A1 (zh) 视频处理方法、装置、电子设备和存储介质
WO2023025196A1 (zh) 图像处理方法、装置及电子设备
WO2023284632A1 (zh) 图像展示方法、装置及电子设备
WO2022135290A1 (zh) 截屏方法、装置及电子设备
WO2022089272A1 (zh) 图像处理方法及装置
WO2022156703A1 (zh) 一种图像显示方法、装置及电子设备
WO2022156667A1 (zh) 一种应用的控制方法、装置及电子设备
WO2023083132A1 (zh) 拍摄方法、装置、电子设备和可读存储介质
CN106534649A (zh) 双旋转摄像头的构图方法、装置和移动终端
WO2023143531A1 (zh) 拍摄方法、装置和电子设备
WO2023083089A1 (zh) 拍摄控件显示方法, 装置, 电子设备及介质

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 21914119

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 21914119

Country of ref document: EP

Kind code of ref document: A1

32PN Ep: public notification in the ep bulletin as address of the adressee cannot be established

Free format text: NOTING OF LOSS OF RIGHTS PURSUANT TO RULE 112(1) EPC (EPO FORM1205A DATED 12.02.2024)