WO2021190351A1 - Image processing method and electronic device - Google Patents
Image processing method and electronic device
- Publication number
- WO2021190351A1 (PCT/CN2021/081022)
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- image
- input
- parameters
- target
- user
- Prior art date
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/10—Segmentation; Edge detection
- G06T7/11—Region-based segmentation
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F3/00—Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
- G06F3/01—Input arrangements or combined input and output arrangements for interaction between user and computer
- G06F3/048—Interaction techniques based on graphical user interfaces [GUI]
- G06F3/0487—Interaction techniques based on graphical user interfaces [GUI] using specific features provided by the input device, e.g. functions controlled by the rotation of a mouse with dual sensing arrangements, or of the nature of the input device, e.g. tap gestures based on pressure sensed by a digitiser
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T3/00—Geometric image transformations in the plane of the image
- G06T3/04—Context-preserving transformations, e.g. by using an importance map
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T3/00—Geometric image transformations in the plane of the image
- G06T3/40—Scaling of whole images or parts thereof, e.g. expanding or contracting
- G06T3/4038—Image mosaicing, e.g. composing plane images from plane sub-images
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T5/00—Image enhancement or restoration
- G06T5/50—Image enhancement or restoration using two or more images, e.g. averaging or subtraction
Definitions
- the present invention relates to the field of communication technology, in particular to an image processing method and electronic equipment.
- in the related art, users can only process images with the styles built into the system and cannot define custom image styles; likewise, if a user is interested in a photo template, there is no way to apply that template to their own images.
- the embodiment of the present invention provides an image processing method and electronic device to solve the problem that images can only be processed with the styles and templates provided by the system, so that image customization cannot be realized.
- the present invention is implemented as follows:
- an embodiment of the present invention provides an image processing method applied to an electronic device, including:
- the target parameter in the parameters of the first image is applied to the second image to obtain the target image.
- an embodiment of the present invention also provides an electronic device, including:
- the first acquisition module is configured to receive the first input of the user for the first image
- the first response module is configured to identify and extract the parameters of the first image in response to the first input
- the second acquisition module is used to receive the second input of the user
- the second response module is configured to apply the target parameter in the parameters of the first image to the second image in response to the second input to obtain the target image.
- FIG. 1 shows a flowchart of an image processing method according to an embodiment of the present invention;
- FIG. 2 shows the first schematic diagram of the first image according to an embodiment of the present invention;
- FIG. 3 shows a schematic diagram of a parameter of the first image according to an embodiment of the present invention;
- FIG. 4 shows the first schematic diagram of the target image according to an embodiment of the present invention;
- FIG. 5 shows the second schematic diagram of the first image according to an embodiment of the present invention;
- FIG. 6 shows the second schematic diagram of the target image according to an embodiment of the present invention;
- FIG. 7 shows the third schematic diagram of the first image according to an embodiment of the present invention;
- FIG. 8 shows a schematic diagram of the second image according to an embodiment of the present invention;
- FIG. 9 shows a schematic module diagram of an electronic device according to an embodiment of the present invention;
- FIG. 10 shows a schematic structural diagram of an electronic device according to an embodiment of the present invention.
- Image parameters: the parameters contained in each part of an image, for example the sky, sea, and beach of a landscape photo, or the background and the person of a portrait photo; the person can be further subdivided into features such as the head, upper body, and lower body.
- Image segmentation refers to dividing an image into regions with special meaning. These regions do not intersect, and each region satisfies a similarity criterion on features such as gray level, texture, and color. Image segmentation is one of the most important steps in image analysis, and the segmented regions can serve as target regions for subsequent feature extraction.
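As a rough illustration of this definition, the sketch below labels non-intersecting regions of a tiny grayscale grid using a single gray-level similarity criterion (pixels on the same side of a threshold). The threshold criterion and 4-connectivity are assumptions made for this example, not part of the claimed method.

```python
from collections import deque

def segment_regions(image, threshold):
    """Label 4-connected regions whose pixels lie on the same side of a
    grayscale threshold: a minimal stand-in for the segmentation step
    (regions do not intersect; pixels within a region satisfy a simple
    gray-level similarity criterion)."""
    h, w = len(image), len(image[0])
    labels = [[0] * w for _ in range(h)]
    next_label = 0
    for sy in range(h):
        for sx in range(w):
            if labels[sy][sx]:
                continue  # already assigned to a region
            next_label += 1
            bright = image[sy][sx] >= threshold
            labels[sy][sx] = next_label
            queue = deque([(sy, sx)])
            while queue:  # breadth-first flood fill of one region
                y, x = queue.popleft()
                for ny, nx in ((y - 1, x), (y + 1, x), (y, x - 1), (y, x + 1)):
                    if (0 <= ny < h and 0 <= nx < w and not labels[ny][nx]
                            and (image[ny][nx] >= threshold) == bright):
                        labels[ny][nx] = next_label
                        queue.append((ny, nx))
    return labels, next_label
```

On a 3x3 grid with a dark top-left block and a bright L-shaped block, this yields exactly two regions, which could then serve as target regions for feature extraction.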
- an embodiment of the present invention provides an image processing method applied to an electronic device, including:
- Step 11 Receive a user's first input for the first image.
- the first image may be any picture
- the first input may be a sliding operation, a press on the first image whose duration exceeds a preset time, a press of an artificial intelligence (AI) key, and so on; no specific limitation is imposed here.
- Step 12 In response to the first input, identify and extract the parameters of the first image.
- the parameters of the first image include but are not limited to at least one of object information, contour information, and image style information.
- image segmentation is performed on the first image, and the parameters contained in each part of the first image are identified and extracted, such as the objects (for example, people) in the first image; the objects can also be subdivided into features such as the head, upper body, and lower body.
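A minimal sketch of this identify-and-extract step, assuming a segmentation mask for one object is already available. The parameter names ("object", "contour", "style") and the choices of a bounding box for contour information and a mean gray level for style information are illustrative assumptions only.

```python
def extract_parameters(image, mask):
    """Pull out the three parameter kinds named above for one segmented
    object: object information (its pixels), contour information (here a
    (top, left, bottom, right) bounding box), and image style information
    (here the mean gray level)."""
    h, w = len(image), len(image[0])
    coords = [(y, x) for y in range(h) for x in range(w) if mask[y][x]]
    pixels = [image[y][x] for y, x in coords]
    ys = [y for y, _ in coords]
    xs = [x for _, x in coords]
    return {
        "object": pixels,
        "contour": (min(ys), min(xs), max(ys), max(xs)),
        "style": sum(pixels) / len(pixels),  # mean gray level as "style"
    }
```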
- Step 13 receiving a second input from the user.
- Step 14 in response to the second input, apply the target parameter in the parameters of the first image to the second image to obtain the target image.
- the target parameter of the first image can be applied to any second image, so as to achieve the effect of customizing the parameters of the image; the second image may be multiple images, stickers, and the like, and no specific restriction is imposed here.
- the target parameter is the contour information of the first image (that is, the six-slot photo frame in FIG. 2); the photo frame of the first image is extracted, and the extracted photo frame is shown in FIG. 3;
- the second image (comprising multiple pictures) is filled into the photo frame of the first image to form a new image, that is, the target image; the resulting target image is shown in FIG. 4.
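The frame-filling step can be sketched as below, assuming the extracted photo frame is represented as a 2-D grid and each slot as a (top, left) offset; the patent does not fix a concrete data layout, so this representation is an assumption.

```python
def fill_frame(frame, slots, pictures):
    """Paste each picture into one rectangular slot of the extracted photo
    frame, as in the multi-slot frame example above. `frame` is a 2-D grid
    and each slot a (top, left) offset."""
    out = [row[:] for row in frame]  # work on a copy; original frame intact
    for (top, left), pic in zip(slots, pictures):
        for dy, pic_row in enumerate(pic):
            for dx, value in enumerate(pic_row):
                out[top + dy][left + dx] = value
    return out
```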
- the target parameter among the parameters is processed, thereby realizing customization of the image.
- the step 13 may specifically include:
- the step 14 may specifically include:
- the feature information contained in the target parameter is applied to the feature information contained in the second object information to obtain a target image.
- through a second input on the target parameter (the first object information), the user may replace the feature information of the second object information with the feature information of the first object information, thereby obtaining the target image.
- the first image includes first object information (the solid triangle in FIG. 5) and second object information (the solid circle in FIG. 5). By pressing the first object information, the user obtains its feature information (the dashed triangle indicated by the arrow from the solid triangle in FIG. 5), which floats on the current display interface; by pressing the second object information, the user obtains its feature information (the dashed circle indicated by the arrow from the solid circle in FIG. 5), which also floats on the current display interface. The user can then drag the first object information to the position of the second object information to apply the feature information of the first object information to that of the second object information, that is, to replace the feature information of the second object information with the feature information of the first object information; the resulting target image is shown in FIG. 6.
- the characteristic information may be color information and the like.
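Taking color as the characteristic information, the replacement described above can be sketched as follows. Using the mean value over the source object's mask as that object's "color" is an assumption for this sketch; a real implementation could transfer richer statistics.

```python
def apply_color_feature(image, src_mask, dst_mask):
    """Replace the color of the second object (dst_mask) with the color of
    the first object (src_mask), mirroring the drag-and-drop replacement
    described above."""
    h, w = len(image), len(image[0])
    src = [image[y][x] for y in range(h) for x in range(w) if src_mask[y][x]]
    color = sum(src) / len(src)  # mean source value as the "color" feature
    out = [row[:] for row in image]
    for y in range(h):
        for x in range(w):
            if dst_mask[y][x]:
                out[y][x] = color
    return out
```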
- the method may further include:
- the target parameter in the parameters of the first image is processed, and the processed target parameter is saved.
- the target parameter (some or all) of the parameters of the first image may be processed, and the processed target parameter saved.
- the parameters of the first image that have been identified and extracted are displayed below the first image
- the area A in Figure 7 is the area where the first image is located
- the extracted parameters of the first image, namely the object information, the contour information, and the image style information, are respectively displayed below area A. The user can select and save one of the parameters by clicking on it.
- the user can save the first image style information and the second image style information separately, or process the first image style information and the second image style information together, mixing and matching the two styles to form new image style information, and save that new style. In this way, users can save image styles they like in real time, and can give a custom name to the saved image style information for convenient later use.
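The mix-and-match of two saved styles can be sketched as a parameter blend. Representing a style as a flat dict of numeric parameters, blending linearly, and the default 50/50 weight are all assumptions, since the patent leaves the style model open.

```python
def mix_styles(style_a, style_b, weight=0.5, name="custom"):
    """Blend two saved style parameter sets into a new, user-nameable
    style, as in the mix-and-match example above."""
    mixed = {key: weight * style_a[key] + (1 - weight) * style_b[key]
             for key in style_a}
    return {"name": name, "params": mixed}
```

For example, blending a style with contrast 1.0 against one with contrast 2.0 yields a new named style with contrast 1.5, which can then be saved under the user's chosen name.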
- the third input operation may be a user's click operation on the parameters of the first image, etc., which is not specifically limited here.
- step 13 may specifically include:
- the step 14 may specifically include:
- the saved processed target parameters are applied to the second image to obtain the target image.
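Applying a saved, processed style to the second image can be sketched as below. The gain/offset brightness model stands in for the "processed target parameters" and is purely illustrative.

```python
def apply_saved_style(image, style):
    """Apply a previously saved, processed style to the second image to
    obtain the target image. `style` is a dict of saved parameters; here a
    simple per-pixel gain and offset."""
    gain = style.get("gain", 1.0)
    offset = style.get("offset", 0.0)
    return [[gain * value + offset for value in row] for row in image]
```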
- the user can enter the image editing interface through a second input (such as a pressing operation, etc.) of the selected second image.
- the user can select a first area of the second image and then edit that first area; or the user can select the first area of the second image and then edit a second area of the second image outside the first area, forming the edited target image.
- area B is the area where the second image is located.
- Below area B, options available for editing can be displayed, such as image style, filters, and stickers; by clicking an image style (which may be a system-provided style or a style saved by the user), the user edits the second image to form the target image.
- the display position of the option for editing is not limited.
- the target image is obtained; at least one parameter of the image can be identified, and the target parameter among the parameters processed, thereby realizing customization of the image.
- an embodiment of the present invention also provides an electronic device 90, including:
- the first acquisition module 91 is configured to receive the first input of the user for the first image
- the first response module 92 is configured to identify and extract the parameters of the first image in response to the first input;
- the second obtaining module 93 is configured to receive the second input of the user
- the second response module 94 is configured to apply the target parameter in the parameters of the first image to the second image in response to the second input to obtain the target image.
- the parameters of the first image include at least one of object information, contour information, and image style information.
- the second acquiring module 93 includes:
- the first acquiring unit is configured to receive a second input of the parameter of the first image from the user;
- the second response module includes:
- the first response unit is configured to apply the characteristic information contained in the target parameter to the characteristic information contained in the second object information to obtain a target image.
- the electronic device 90 further includes:
- the third acquisition module is configured to receive a user's third input of the parameters of the first image
- the third response module is configured to process the target parameter in the parameters of the first image in response to the third input, and save the processed target parameter.
- the second obtaining module 93 includes:
- the second acquiring unit is configured to receive a second input of the user on the second image
- the second response module includes:
- the second response unit is configured to apply the saved processed target parameters to the second image to obtain the target image.
- the electronic device 90 can implement each process implemented by the electronic device in the method embodiments of FIGS. 1 to 8. To avoid repetition, details are not described herein again.
- the parameters of the first image are identified and extracted by the first response module 92, and the target parameter among the parameters is applied to the second image by the second response module 94 to obtain the target image; at least one parameter of the image can thus be identified, and the target parameter processed, realizing customization of the image.
- the electronic device 1000 includes, but is not limited to: a radio frequency unit 1001, a network module 1002, an audio output unit 1003, an input unit 1004, a sensor 1005, a display unit 1006, a user input unit 1007, an interface unit 1008, a memory 1009, a processor 1010, a power supply 1011, and other components.
- the electronic device may include more or fewer components than those shown in the figure, or combine certain components, or use a different component layout.
- electronic devices include, but are not limited to, mobile phones, tablet computers, notebook computers, palmtop computers, vehicle-mounted terminals, and the like.
- the processor 1010 is used for:
- the target parameter in the parameters of the first image is applied to the second image to obtain the target image.
- the electronic device 1000 provided in the embodiment of the present invention can implement the various processes implemented by the electronic device in the method embodiments of FIG. 1 to FIG. 8.
- the electronic device recognizes and extracts the parameters of the first image through the processor 1010 and applies the target parameter among the parameters to the second image to obtain the target image; at least one parameter of the image can be identified, and the target parameter processed, thereby realizing customization of the image.
- the radio frequency unit 1001 can be used to receive and send signals during information transmission and reception or during a call; specifically, downlink data from the base station is received and forwarded to the processor 1010 for processing, and uplink data is sent to the base station.
- the radio frequency unit 1001 includes, but is not limited to, an antenna, at least one amplifier, a transceiver, a coupler, a low noise amplifier, a duplexer, and the like.
- the radio frequency unit 1001 can also communicate with the network and other devices through a wireless communication system.
- the electronic device provides users with wireless broadband Internet access through the network module 1002, such as helping users to send and receive emails, browse web pages, and access streaming media.
- the audio output unit 1003 can convert the audio data received by the radio frequency unit 1001 or the network module 1002 or stored in the memory 1009 into audio signals and output them as sounds. Moreover, the audio output unit 1003 may also provide audio output related to a specific function performed by the electronic device 1000 (for example, call signal reception sound, message reception sound, etc.).
- the audio output unit 1003 includes a speaker, a buzzer, a receiver, and the like.
- the input unit 1004 is used to receive audio or video signals.
- the input unit 1004 may include a graphics processing unit (GPU) 10041 and a microphone 10042; the graphics processor 10041 processes image data of still pictures or videos obtained by an image capture device (such as a camera) in video capture mode or image capture mode.
- the processed image frame can be displayed on the display unit 1006.
- the image frame processed by the graphics processor 10041 may be stored in the memory 1009 (or other storage medium) or sent via the radio frequency unit 1001 or the network module 1002.
- the microphone 10042 can receive sound, and can process such sound into audio data.
- in telephone call mode, the processed audio data can be converted into a format transmittable to a mobile communication base station via the radio frequency unit 1001 and output.
- the electronic device 1000 further includes at least one sensor 1005, such as a light sensor, a motion sensor, and other sensors.
- the light sensor includes an ambient light sensor and a proximity sensor.
- the ambient light sensor can adjust the brightness of the display panel 10061 according to the brightness of the ambient light
- the proximity sensor can turn off the display panel 10061 and/or the backlight when the electronic device 1000 is moved to the ear.
- the accelerometer sensor can detect the magnitude of acceleration in all directions (usually three axes) and, when stationary, the magnitude and direction of gravity; it can be used to recognize the posture of the electronic device (such as portrait/landscape switching, related games, and magnetometer attitude calibration) and for vibration-recognition functions (such as a pedometer or tap detection). The sensor 1005 may also include a fingerprint sensor, pressure sensor, iris sensor, molecular sensor, gyroscope, barometer, hygrometer, thermometer, infrared sensor, and so on, which will not be repeated here.
- the display unit 1006 is used to display information input by the user or information provided to the user.
- the display unit 1006 may include a display panel 10061, and the display panel 10061 may be configured in the form of a liquid crystal display (LCD), an organic light-emitting diode (OLED), or the like.
- LCD liquid crystal display
- OLED organic light-emitting diode
- the user input unit 1007 can be used to receive inputted numeric or character information, and generate key signal input related to user settings and function control of the electronic device.
- the user input unit 1007 includes a touch panel 10071 and other input devices 10072.
- the touch panel 10071, also called a touch screen, can collect the user's touch operations on or near it (for example, operations performed on or near the touch panel 10071 with a finger, a stylus, or any other suitable object or accessory).
- the touch panel 10071 may include two parts, a touch detection device and a touch controller.
- the touch detection device detects the user's touch position, detects the signal produced by the touch operation, and transmits the signal to the touch controller; the touch controller receives the touch information from the touch detection device, converts it into contact coordinates, sends them to the processor 1010, and receives and executes commands sent by the processor 1010.
- the touch panel 10071 can be implemented in multiple types such as resistive, capacitive, infrared, and surface acoustic wave.
- the user input unit 1007 may also include other input devices 10072.
- other input devices 10072 may include, but are not limited to, a physical keyboard, function keys (such as volume control buttons, switch buttons, etc.), trackball, mouse, and joystick, which will not be repeated here.
- the touch panel 10071 can cover the display panel 10061.
- when the touch panel 10071 detects a touch operation on or near it, it transmits the operation to the processor 1010 to determine the type of touch event, and the processor 1010 then provides corresponding visual output on the display panel 10061 according to the type of touch event.
- although the touch panel 10071 and the display panel 10061 are used as two independent components to implement the input and output functions of the electronic device, in some embodiments the touch panel 10071 and the display panel 10061 may be integrated to implement those functions; this is not specifically limited here.
- the interface unit 1008 is an interface for connecting an external device and the electronic device 1000.
- the external device may include a wired or wireless headset port, an external power source (or battery charger) port, a wired or wireless data port, a memory card port, a port for connecting a device with an identification module, audio input/output (I/O) port, video I/O port, headphone port, etc.
- the interface unit 1008 can be used to receive input (for example, data information, power, etc.) from an external device and transmit the received input to one or more elements in the electronic device 1000, or can be used to transfer data between the electronic device 1000 and the external device.
- the memory 1009 can be used to store software programs and various data.
- the memory 1009 may mainly include a program storage area and a data storage area.
- the program storage area may store an operating system and application programs required by at least one function (such as a sound playback function or an image playback function); the data storage area may store data created according to the use of the mobile phone (such as audio data and a phone book).
- the memory 1009 may include a high-speed random access memory, and may also include a non-volatile memory, such as at least one magnetic disk storage device, a flash memory device, or another non-volatile solid-state storage device.
- the processor 1010 is the control center of the electronic device. It connects all parts of the electronic device through various interfaces and lines, and performs the various functions of the electronic device and processes data by running or executing software programs and/or modules stored in the memory 1009 and calling data stored in the memory 1009, thereby monitoring the electronic device as a whole.
- the processor 1010 may include one or more processing units; preferably, the processor 1010 may integrate an application processor and a modem processor, where the application processor mainly handles the operating system, user interface, application programs, and so on, and the modem processor mainly handles wireless communication. It can be understood that the modem processor may alternatively not be integrated into the processor 1010.
- the electronic device 1000 may also include a power source 1011 (such as a battery) for supplying power to the various components;
- the power source 1011 may be logically connected to the processor 1010 through a power management system, so that functions such as charging, discharging, and power-consumption management are handled through the power management system.
- the electronic device 1000 includes some functional modules not shown, which will not be repeated here.
- the embodiment of the present invention also provides an electronic device, including a processor 1010, a memory 1009, and a computer program stored in the memory 1009 and runnable on the processor 1010; when the computer program is executed by the processor 1010, each process of the above image processing method embodiment is implemented, and the same technical effect can be achieved; to avoid repetition, details are not repeated here.
- the embodiment of the present invention also provides a computer-readable storage medium on which a computer program is stored; when the computer program is executed by a processor, each process of the above image processing method embodiment is implemented, and the same technical effect can be achieved; to avoid repetition, details are not repeated here.
- the computer-readable storage medium such as read-only memory (Read-Only Memory, ROM), random access memory (Random Access Memory, RAM), magnetic disk, or optical disk, etc.
- the technical solution of the present invention, in essence or in the part contributing to the existing technology, can be embodied in the form of a software product; the computer software product is stored in a storage medium (such as a ROM/RAM, magnetic disk, or optical disc) and includes several instructions to cause a terminal (which may be a mobile phone, computer, server, air conditioner, network device, etc.) to execute the method described in each embodiment of the present invention.
Landscapes
- Engineering & Computer Science (AREA)
- Theoretical Computer Science (AREA)
- Physics & Mathematics (AREA)
- General Physics & Mathematics (AREA)
- General Engineering & Computer Science (AREA)
- Computer Vision & Pattern Recognition (AREA)
- Human Computer Interaction (AREA)
- User Interface Of Digital Computer (AREA)
Abstract
Description
Claims (10)
- 1. An image processing method, applied to an electronic device, comprising: receiving a user's first input for a first image; in response to the first input, identifying and extracting parameters of the first image; receiving a second input of the user; and in response to the second input, applying a target parameter among the parameters of the first image to a second image to obtain a target image.
- 2. The method according to claim 1, wherein the parameters of the first image include at least one of object information, contour information, and image style information.
- 3. The method according to claim 2, wherein, in a case where the first image and the second image are the same image, the parameters of the first image include first object information and second object information, and the target parameter is the first object information, the receiving a second input of the user comprises: receiving the user's second input for the parameters of the first image; and the applying a target parameter among the parameters of the first image to a second image to obtain a target image comprises: applying feature information contained in the target parameter to feature information contained in the second object information to obtain the target image.
- 4. The method according to claim 1, wherein before the receiving a second input of the user, the method further comprises: receiving the user's third input for the parameters of the first image; and in response to the third input, processing the target parameter among the parameters of the first image and saving the processed target parameter.
- 5. The method according to claim 4, wherein the receiving a second input of the user comprises: receiving the user's second input for the second image; and the applying a target parameter among the parameters of the first image to a second image to obtain a target image comprises: applying the saved processed target parameter to the second image to obtain the target image.
- 6. An electronic device, comprising: a first acquisition module, configured to receive a user's first input for a first image; a first response module, configured to identify and extract parameters of the first image in response to the first input; a second acquisition module, configured to receive a second input of the user; and a second response module, configured to apply a target parameter among the parameters of the first image to a second image in response to the second input to obtain a target image.
- 7. The electronic device according to claim 6, wherein the parameters of the first image include at least one of object information, contour information, and image style information.
- 8. The electronic device according to claim 7, wherein, in a case where the first image and the second image are the same image, the parameters of the first image include first object information and second object information, and the target parameter is the first object information, the second acquisition module comprises: a first acquisition unit, configured to receive the user's second input for the parameters of the first image; and the second response module comprises: a first response unit, configured to apply feature information contained in the target parameter to feature information contained in the second object information to obtain the target image.
- 9. The electronic device according to claim 6, further comprising: a third acquisition module, configured to receive the user's third input for the parameters of the first image; and a third response module, configured to process the target parameter among the parameters of the first image in response to the third input and save the processed target parameter.
- 10. The electronic device according to claim 9, wherein the second acquisition module comprises: a second acquisition unit, configured to receive the user's second input for the second image; and the second response module comprises: a second response unit, configured to apply the saved processed target parameter to the second image to obtain the target image.
Applications Claiming Priority (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202010207477.2A CN111402273A (zh) | 2020-03-23 | 2020-03-23 | Image processing method and electronic device
CN202010207477.2 | 2020-03-23 |
Publications (1)
Publication Number | Publication Date |
---|---|
WO2021190351A1 (zh) | 2021-09-30 |
Family
ID=71413436
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
PCT/CN2021/081022 WO2021190351A1 (zh) | 2020-03-23 | 2021-03-16 | 图像处理方法和电子设备 |
Country Status (2)
Country | Link |
---|---|
CN (1) | CN111402273A (zh) |
WO (1) | WO2021190351A1 (zh) |
Families Citing this family (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN111402273A (zh) * | 2020-03-23 | 2020-07-10 | Vivo Mobile Communication (Hangzhou) Co., Ltd. | Image processing method and electronic device |
Citations (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN106657793A (zh) * | 2017-01-11 | 2017-05-10 | Vivo Mobile Communication Co., Ltd. | Image processing method and mobile terminal |
US20180365807A1 (en) * | 2017-06-16 | 2018-12-20 | Guangdong Oppo Mobile Telecommunications Corp., Ltd. | Method, device and nonvolatile computer-readable medium for image composition |
CN109104566A (zh) * | 2018-06-28 | 2018-12-28 | Vivo Mobile Communication Co., Ltd. | Image display method and terminal device |
CN109993711A (zh) * | 2019-03-25 | 2019-07-09 | Vivo Mobile Communication Co., Ltd. | Image processing method and terminal device |
CN111402273A (zh) * | 2020-03-23 | 2020-07-10 | Vivo Mobile Communication (Hangzhou) Co., Ltd. | Image processing method and electronic device |
Family Cites Families (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN109089042B (zh) * | 2018-08-30 | 2020-12-15 | Guangdong Oppo Mobile Telecommunications Corp., Ltd. | Image processing mode recognition method and apparatus, storage medium, and mobile terminal |
CN109461124A (zh) * | 2018-09-21 | 2019-03-12 | Vivo Mobile Communication (Hangzhou) Co., Ltd. | Image processing method and terminal device |
CN110223237A (zh) * | 2019-04-23 | 2019-09-10 | Vivo Mobile Communication Co., Ltd. | Method for adjusting image parameters and terminal device |
-
2020
- 2020-03-23 CN CN202010207477.2A patent/CN111402273A/zh active Pending
-
2021
- 2021-03-16 WO PCT/CN2021/081022 patent/WO2021190351A1/zh active Application Filing
Also Published As
Publication number | Publication date |
---|---|
CN111402273A (zh) | 2020-07-10 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
WO2021098678A1 (zh) | Screen projection control method and electronic device | |
WO2021078116A1 (zh) | Video processing method and electronic device | |
WO2021036542A1 (zh) | Screen recording method and mobile terminal | |
US20220365641A1 (en) | Method for displaying background application and mobile terminal | |
CN109461117B (zh) | Image processing method and mobile terminal | |
WO2021104321A1 (zh) | Image display method and electronic device | |
WO2021190428A1 (zh) | Image capturing method and electronic device | |
WO2021147779A1 (zh) | Configuration information sharing method, terminal device, and computer-readable storage medium | |
WO2021136159A1 (zh) | Screenshot method and electronic device | |
WO2021190429A1 (zh) | Image processing method and electronic device | |
WO2020182035A1 (zh) | Image processing method and terminal device | |
CN107592459A (zh) | Photographing method and mobile terminal | |
WO2021004426A1 (zh) | Content selection method and terminal | |
WO2020220990A1 (zh) | Receiver control method and terminal | |
WO2021077908A1 (zh) | Parameter adjustment method and electronic device | |
WO2021104160A1 (zh) | Editing method and electronic device | |
CN109819168B (zh) | Camera starting method and mobile terminal | |
WO2021197165A1 (zh) | Picture processing method and electronic device | |
WO2021036553A1 (zh) | Icon display method and electronic device | |
WO2021190387A1 (zh) | Method for outputting detection results, electronic device, and medium | |
CN109448069B (zh) | Template generation method and mobile terminal | |
WO2020011080A1 (zh) | Display control method and terminal device | |
CN109618218B (zh) | Video processing method and mobile terminal | |
WO2021104159A1 (zh) | Display control method and electronic device | |
WO2021129818A1 (zh) | Video playback method and electronic device |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
121 | Ep: the epo has been informed by wipo that ep was designated in this application |
Ref document number: 21774245 Country of ref document: EP Kind code of ref document: A1 |
|
NENP | Non-entry into the national phase |
Ref country code: DE |
|
122 | Ep: pct application non-entry in european phase |
Ref document number: 21774245 Country of ref document: EP Kind code of ref document: A1 |
|
32PN | Ep: public notification in the ep bulletin as address of the adressee cannot be established |
Free format text: NOTING OF LOSS OF RIGHTS PURSUANT TO RULE 112(1) EPC (EPO FORM 1205A DATED 23.02.2023) |
|
122 | Ep: pct application non-entry in european phase |
Ref document number: 21774245 Country of ref document: EP Kind code of ref document: A1 |