WO2021190351A1 - Image processing method and electronic device - Google Patents

Image processing method and electronic device

Info

Publication number
WO2021190351A1
WO2021190351A1 PCT/CN2021/081022
Authority
WO
WIPO (PCT)
Prior art keywords
image
input
parameters
target
user
Prior art date
Application number
PCT/CN2021/081022
Other languages
English (en)
French (fr)
Inventor
彭业
Original Assignee
维沃移动通信有限公司
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 维沃移动通信有限公司
Publication of WO2021190351A1

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/10Segmentation; Edge detection
    • G06T7/11Region-based segmentation
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/048Interaction techniques based on graphical user interfaces [GUI]
    • G06F3/0487Interaction techniques based on graphical user interfaces [GUI] using specific features provided by the input device, e.g. functions controlled by the rotation of a mouse with dual sensing arrangements, or of the nature of the input device, e.g. tap gestures based on pressure sensed by a digitiser
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T3/00Geometric image transformations in the plane of the image
    • G06T3/04Context-preserving transformations, e.g. by using an importance map
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T3/00Geometric image transformations in the plane of the image
    • G06T3/40Scaling of whole images or parts thereof, e.g. expanding or contracting
    • G06T3/4038Image mosaicing, e.g. composing plane images from plane sub-images
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T5/00Image enhancement or restoration
    • G06T5/50Image enhancement or restoration using two or more images, e.g. averaging or subtraction

Definitions

  • the present invention relates to the field of communication technology, in particular to an image processing method and electronic equipment.
  • users can only process images with the styles built into the system and cannot define custom image styles; moreover, if a user is interested in the template of a certain photo, the user cannot apply that favorite template
  • the embodiment of the present invention provides an image processing method and electronic device, to solve the problem that images can only be processed with the image styles and templates built into the system, so that image customization cannot be realized
  • the present invention is implemented as follows:
  • an embodiment of the present invention provides an image processing method applied to an electronic device, including:
  • the target parameter in the parameters of the first image is applied to the second image to obtain the target image.
  • an embodiment of the present invention also provides an electronic device, including:
  • the first acquisition module is configured to receive the first input of the user for the first image
  • the first response module is configured to identify and extract the parameters of the first image in response to the first input
  • the second acquisition module is configured to receive the second input of the user
  • the second response module is configured to apply the target parameter in the parameters of the first image to the second image in response to the second input to obtain the target image.
  • FIG. 1 shows a flowchart of an image processing method according to an embodiment of the present invention;
  • FIG. 2 shows the first schematic diagram of a first image according to an embodiment of the present invention;
  • FIG. 3 shows a schematic diagram of parameters of a first image according to an embodiment of the present invention;
  • FIG. 4 shows the first schematic diagram of a target image according to an embodiment of the present invention;
  • FIG. 5 shows the second schematic diagram of a first image according to an embodiment of the present invention;
  • FIG. 6 shows the second schematic diagram of a target image according to an embodiment of the present invention;
  • FIG. 7 shows the third schematic diagram of a first image according to an embodiment of the present invention;
  • FIG. 8 shows a schematic diagram of a second image according to an embodiment of the present invention;
  • FIG. 9 shows a block diagram of the modules of an electronic device according to an embodiment of the present invention;
  • FIG. 10 shows a schematic structural diagram of an electronic device according to an embodiment of the present invention.
  • Image parameters: the parameters contained in each part of an image, such as the sky, sea, and beach in a landscape photo, or the background and the person in a portrait photo; the person can be further subdivided into features such as head, upper body, and lower body.
  • Image segmentation: dividing an image into different regions with special meaning. These regions do not intersect each other, and each region satisfies some similarity criterion for features such as grayscale, texture, and color. Image segmentation is one of the most important steps in image analysis, and the segmented regions can serve as target regions for subsequent feature extraction.
  • an embodiment of the present invention provides an image processing method applied to an electronic device, including:
  • Step 11: receive a user's first input for the first image.
  • the first image may be any picture, and the first input may be a sliding operation, an operation of pressing the first image for longer than a preset time, pressing an artificial intelligence (AI) key, or the like; this is not specifically limited here.
  • Step 12: in response to the first input, identify and extract the parameters of the first image.
  • the parameters of the first image include but are not limited to at least one of object information, contour information, and image style information.
  • image segmentation is performed on the first image, and the parameters contained in each part of the first image are identified and extracted, such as objects (e.g., people) in the first image; the objects can be further subdivided into features such as head, upper body, and lower body.
  • Step 13: receive a second input from the user.
  • Step 14: in response to the second input, apply the target parameter among the parameters of the first image to the second image to obtain the target image.
  • the target parameter of the first image can be applied to any second image, thereby achieving the effect of customizing the parameters of an image; the second image may be multiple images, stickers, etc., which is not specifically limited here.
  • the target parameter is the contour information of the first image (that is, the six-slot photo frame in FIG. 2); the photo frame of the first image is extracted, and the extracted frame is shown in FIG. 3;
  • the second image (including multiple pictures) is filled into the frame of the first image to form a new image, that is, the target image, and the obtained target image is shown in FIG. 4.
  • in this way, at least one parameter in an image can be identified, and the target parameter among the parameters can be processed, thereby realizing customization of the image.
  • the step 13 may specifically include:
  • the step 14 may specifically include:
  • the feature information contained in the target parameter is applied to the feature information contained in the second object information to obtain a target image.
  • the user may, through a second input on the target parameter (the first object information), replace the feature information of the second object information with the feature information of the first object information, thereby obtaining the target image.
  • the first image includes the first object information (the solid triangle in FIG. 5) and the second object information (the solid circle in FIG. 5); by pressing the first object information, the user obtains its feature information (the dashed triangle indicated by the arrow from the solid triangle in FIG. 5), which floats on the current display interface, and by pressing the second object information, the user obtains its feature information (the dashed circle indicated by the arrow from the solid circle in FIG. 5), which also floats on the current display interface; the user can then drag the feature information of the first object information to the position of the second object information and apply it to the feature information of the second object information, that is, replace the feature information of the second object information with that of the first object information, thereby obtaining the target image shown in FIG. 6.
  • the feature information may be color information and the like.
  • the method may further include:
  • the target parameter in the parameters of the first image is processed, and the processed target parameter is saved.
  • some or all of the parameters of the first image may be processed as target parameters, and the processed target parameters saved.
  • the identified and extracted parameters of the first image are displayed below the first image; area A in FIG. 7 is the area where the first image is located, and the extracted parameters of the first image, namely the object information, the contour information, and the image style information, are displayed below area A. The user can select and save one of the parameters by clicking on it.
  • if the user selects the image style information, and it includes first image style information and second image style information, the user can save the two separately, or process them and mix the two styles to form a new image style information and save it, so that users can save the image styles they like in real time; users can also give the saved image style information a custom name for convenient later use.
  • the third input operation may be a user's click operation on the parameters of the first image, etc., which is not specifically limited here.
  • step 13 may specifically include:
  • the step 14 may specifically include:
  • the saved processed target parameters are applied to the second image to obtain the target image.
  • the user can enter the image editing interface through a second input (such as a pressing operation) on the selected second image; the user can then select a first area of the second image and edit that first area, or select a first area and edit a second area of the second image other than the first area, to form an edited target image.
  • area B is the area where the second image is located.
  • options available for editing, such as image styles, filters, and stickers, can be displayed below area B; by clicking an image style (which may be a style built into the system or a style saved by the user) or the like, the second image is edited to form the target image.
  • the display position of the option for editing is not limited.
  • the target image is obtained; at least one parameter in an image can be identified and the target parameter among the parameters processed, thereby realizing customization of the image.
  • an embodiment of the present invention also provides an electronic device 90, including:
  • the first acquisition module 91 is configured to receive the first input of the user for the first image
  • the first response module 92 is configured to identify and extract the parameters of the first image in response to the first input;
  • the second obtaining module 93 is configured to receive the second input of the user
  • the second response module 94 is configured to apply the target parameter in the parameters of the first image to the second image in response to the second input to obtain the target image.
  • the parameters of the first image include at least one of object information, contour information, and image style information.
  • the second acquiring module 93 includes:
  • the first acquiring unit is configured to receive a second input of the parameter of the first image from the user;
  • the second response module includes:
  • the first response unit is configured to apply the characteristic information contained in the target parameter to the characteristic information contained in the second object information to obtain a target image.
  • the electronic device 90 further includes:
  • the third acquisition module is configured to receive a user's third input of the parameters of the first image
  • the third response module is configured to process the target parameter in the parameters of the first image in response to the third input, and save the processed target parameter.
  • the second obtaining module 93 includes:
  • the second acquiring unit is configured to receive a second input of the user on the second image
  • the second response module includes:
  • the second response unit is configured to apply the saved processed target parameters to the second image to obtain the target image.
  • the electronic device 90 can implement each process implemented by the electronic device in the method embodiments of FIGS. 1 to 8. To avoid repetition, details are not described herein again.
  • the parameters of the first image are identified and extracted by the first response module 92, and the target parameter among the parameters is applied to the second image by the second response module 94 to obtain the target image; at least one parameter in an image can be identified, and the target parameter among the parameters processed, so as to realize customization of the image.
  • the electronic device 1000 includes but is not limited to: a radio frequency unit 1001, a network module 1002, an audio output unit 1003, an input unit 1004, a sensor 1005, and a display unit 1006, a user input unit 1007, an interface unit 1008, a memory 1009, a processor 1010, a power supply 1011 and other components.
  • the electronic device may include more or fewer components than those shown in the figure, or combine certain components, or arrange the components differently.
  • electronic devices include, but are not limited to, mobile phones, tablet computers, notebook computers, palmtop computers, vehicle-mounted terminals, wearable devices, pedometers, and the like.
  • the processor 1010 is used for:
  • the target parameter in the parameters of the first image is applied to the second image to obtain the target image.
  • the electronic device 1000 provided in the embodiment of the present invention can implement the various processes implemented by the electronic device in the method embodiments of FIG. 1 to FIG. 8; to avoid repetition, details are not described here again.
  • the electronic device recognizes and extracts the parameters of the first image through the processor 1010, and applies the target parameter among the parameters to the second image to obtain the target image; at least one parameter in an image can be identified and the target parameter among the parameters processed, so as to realize customization of the image.
  • the radio frequency unit 1001 can be used to receive and send signals during information transmission or a call; specifically, it receives downlink data from the base station and passes it to the processor 1010 for processing, and sends uplink data to the base station.
  • the radio frequency unit 1001 includes, but is not limited to, an antenna, at least one amplifier, a transceiver, a coupler, a low noise amplifier, a duplexer, and the like.
  • the radio frequency unit 1001 can also communicate with the network and other devices through a wireless communication system.
  • the electronic device provides users with wireless broadband Internet access through the network module 1002, such as helping users to send and receive emails, browse web pages, and access streaming media.
  • the audio output unit 1003 can convert the audio data received by the radio frequency unit 1001 or the network module 1002 or stored in the memory 1009 into audio signals and output them as sounds. Moreover, the audio output unit 1003 may also provide audio output related to a specific function performed by the electronic device 1000 (for example, call signal reception sound, message reception sound, etc.).
  • the audio output unit 1003 includes a speaker, a buzzer, a receiver, and the like.
  • the input unit 1004 is used to receive audio or video signals.
  • the input unit 1004 may include a graphics processing unit (GPU) 10041 and a microphone 10042; the graphics processor 10041 processes image data of still pictures or video obtained by an image capture device (such as a camera) in a video capture mode or an image capture mode.
  • the processed image frame can be displayed on the display unit 1006.
  • the image frame processed by the graphics processor 10041 may be stored in the memory 1009 (or other storage medium) or sent via the radio frequency unit 1001 or the network module 1002.
  • the microphone 10042 can receive sound, and can process such sound into audio data.
  • in a telephone call mode, the processed audio data can be converted into a format that can be sent to a mobile communication base station via the radio frequency unit 1001 and output.
  • the electronic device 1000 further includes at least one sensor 1005, such as a light sensor, a motion sensor, and other sensors.
  • the light sensor includes an ambient light sensor and a proximity sensor.
  • the ambient light sensor can adjust the brightness of the display panel 10061 according to the brightness of the ambient light
  • the proximity sensor can turn off the display panel 10061 and/or the backlight when the electronic device 1000 is moved to the ear.
  • the accelerometer sensor can detect the magnitude of acceleration in all directions (generally three axes), and can detect the magnitude and direction of gravity when stationary; it can be used to identify the posture of the electronic device (such as landscape/portrait switching, related games, and magnetometer attitude calibration) and for vibration-recognition related functions (such as a pedometer or tapping); the sensor 1005 may also include a fingerprint sensor, a pressure sensor, an iris sensor, a molecular sensor, a gyroscope, a barometer, a hygrometer, a thermometer, an infrared sensor, etc., which are not repeated here.
  • the display unit 1006 is used to display information input by the user or information provided to the user.
  • the display unit 1006 may include a display panel 10061, and the display panel 10061 may be configured in the form of a liquid crystal display (LCD), an organic light-emitting diode (OLED), or the like.
  • the user input unit 1007 can be used to receive inputted numeric or character information, and generate key signal input related to user settings and function control of the electronic device.
  • the user input unit 1007 includes a touch panel 10071 and other input devices 10072.
  • the touch panel 10071, also called a touch screen, can collect the user's touch operations on or near it (for example, operations performed by the user on or near the touch panel 10071 with a finger, a stylus, or any other suitable object or accessory).
  • the touch panel 10071 may include two parts, a touch detection device and a touch controller.
  • the touch detection device detects the user's touch position and the signal brought by the touch operation, and transmits the signal to the touch controller; the touch controller receives the touch information from the touch detection device, converts it into contact coordinates, sends them to the processor 1010, and receives and executes commands sent by the processor 1010.
  • the touch panel 10071 can be implemented in multiple types such as resistive, capacitive, infrared, and surface acoustic wave.
  • the user input unit 1007 may also include other input devices 10072.
  • other input devices 10072 may include, but are not limited to, a physical keyboard, function keys (such as volume control buttons, switch buttons, etc.), trackball, mouse, and joystick, which will not be repeated here.
  • the touch panel 10071 can cover the display panel 10061.
  • when the touch panel 10071 detects a touch operation on or near it, it transmits the operation to the processor 1010 to determine the type of the touch event, and the processor 1010 then provides corresponding visual output on the display panel 10061 according to the type of the touch event.
  • although the touch panel 10071 and the display panel 10061 are used as two independent components to implement the input and output functions of the electronic device, in some embodiments the touch panel 10071 and the display panel 10061 may be integrated to implement the input and output functions of the electronic device; this is not specifically limited here.
  • the interface unit 1008 is an interface for connecting an external device and the electronic device 1000.
  • the external device may include a wired or wireless headset port, an external power source (or battery charger) port, a wired or wireless data port, a memory card port, a port for connecting a device with an identification module, audio input/output (I/O) port, video I/O port, headphone port, etc.
  • the interface unit 1008 can be used to receive input (for example, data information or power) from an external device and transmit the received input to one or more elements within the electronic device 1000, or can be used to transfer data between the electronic device 1000 and an external device.
  • the memory 1009 can be used to store software programs and various data.
  • the memory 1009 may mainly include a program storage area and a data storage area.
  • the program storage area may store an operating system, an application program required by at least one function (such as a sound playback function or an image playback function), and the like; the data storage area may store data created according to the use of the mobile phone (such as audio data or a phone book).
  • the memory 1009 may include a high-speed random access memory, and may also include a non-volatile memory, such as at least one magnetic disk storage device, a flash memory device, or other volatile solid-state storage devices.
  • the processor 1010 is the control center of the electronic device; it connects the various parts of the entire electronic device using various interfaces and lines, and performs the various functions of the electronic device and processes data by running or executing software programs and/or modules stored in the memory 1009 and by calling data stored in the memory 1009, thereby monitoring the electronic device as a whole.
  • the processor 1010 may include one or more processing units; preferably, the processor 1010 may integrate an application processor and a modem processor, where the application processor mainly handles the operating system, user interface, application programs, and the like, and the modem processor mainly handles wireless communication. It can be understood that the modem processor may also not be integrated into the processor 1010.
  • the electronic device 1000 may also include a power source 1011 (such as a battery) for supplying power to the various components; preferably, the power source 1011 may be logically connected to the processor 1010 through a power management system, so as to implement functions such as charging management, discharging management, and power consumption management through the power management system.
  • the electronic device 1000 includes some functional modules not shown, which will not be repeated here.
  • the embodiment of the present invention also provides an electronic device, including a processor 1010, a memory 1009, and a computer program stored in the memory 1009 and executable on the processor 1010; when the computer program is executed by the processor 1010, each process of the above image processing method embodiments is implemented, with the same technical effect.
  • the embodiment of the present invention also provides a computer-readable storage medium on which a computer program is stored; when the computer program is executed by a processor, each process of the above image processing method embodiments is implemented, and the same technical effect can be achieved; to avoid repetition, it is not repeated here.
  • the computer-readable storage medium such as read-only memory (Read-Only Memory, ROM), random access memory (Random Access Memory, RAM), magnetic disk, or optical disk, etc.
  • the technical solution of the present invention, in essence or in the part contributing to the existing technology, can be embodied in the form of a software product; the computer software product is stored in a storage medium (such as a ROM/RAM, a magnetic disk, or an optical disc) and includes several instructions to make a terminal (which can be a mobile phone, a computer, a server, an air conditioner, or a network device, etc.) execute the method described in each embodiment of the present invention.

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Human Computer Interaction (AREA)
  • User Interface Of Digital Computer (AREA)

Abstract

An image processing method and an electronic device. The method includes: receiving a user's first input for a first image (11); in response to the first input, identifying and extracting parameters of the first image (12); receiving a second input from the user (13); and in response to the second input, applying a target parameter among the parameters of the first image to a second image to obtain a target image (14).

Description

Image processing method and electronic device
Cross-Reference to Related Applications
This application claims priority to Chinese Patent Application No. 202010207477.2, filed in China on March 23, 2020, the entire contents of which are incorporated herein by reference.
Technical Field
The present invention relates to the field of communication technology, and in particular to an image processing method and an electronic device.
Background
At present, users can only process images with the styles built into the system and cannot define custom image styles; moreover, if a user is interested in the template of a certain photo, the user cannot apply that favorite template himself or herself.
Summary
Embodiments of the present invention provide an image processing method and an electronic device, to solve the problem that images can only be processed with the image styles and templates built into the system, so that image customization cannot be realized.
To solve the above technical problem, the present invention is implemented as follows:
In a first aspect, an embodiment of the present invention provides an image processing method, applied to an electronic device, including:
receiving a user's first input for a first image;
in response to the first input, identifying and extracting parameters of the first image;
receiving a second input from the user;
in response to the second input, applying a target parameter among the parameters of the first image to a second image to obtain a target image.
In a second aspect, an embodiment of the present invention further provides an electronic device, including:
a first acquisition module, configured to receive a user's first input for a first image;
a first response module, configured to identify and extract parameters of the first image in response to the first input;
a second acquisition module, configured to receive a second input from the user;
a second response module, configured to apply a target parameter among the parameters of the first image to a second image in response to the second input, to obtain a target image.
In this way, in the embodiments of the present invention, by identifying and extracting the parameters of the first image and applying the target parameter among those parameters to the second image to obtain the target image, at least one parameter in an image can be identified and the target parameter among the parameters processed, thereby realizing customization of the image.
Brief Description of the Drawings
To describe the technical solutions of the embodiments of the present invention more clearly, the accompanying drawings needed in the description of the embodiments are briefly introduced below. Obviously, the drawings described below are only some embodiments of the present invention, and a person of ordinary skill in the art may derive other drawings from them without creative effort.
FIG. 1 is a flowchart of an image processing method according to an embodiment of the present invention;
FIG. 2 is the first schematic diagram of a first image according to an embodiment of the present invention;
FIG. 3 is a schematic diagram of parameters of a first image according to an embodiment of the present invention;
FIG. 4 is the first schematic diagram of a target image according to an embodiment of the present invention;
FIG. 5 is the second schematic diagram of a first image according to an embodiment of the present invention;
FIG. 6 is the second schematic diagram of a target image according to an embodiment of the present invention;
FIG. 7 is the third schematic diagram of a first image according to an embodiment of the present invention;
FIG. 8 is a schematic diagram of a second image according to an embodiment of the present invention;
FIG. 9 is a block diagram of the modules of an electronic device according to an embodiment of the present invention;
FIG. 10 is a schematic structural diagram of an electronic device according to an embodiment of the present invention.
Detailed Description
The technical solutions in the embodiments of the present invention will be described clearly and completely below with reference to the accompanying drawings in the embodiments of the present invention. Obviously, the described embodiments are only some rather than all of the embodiments of the present invention. All other embodiments obtained by a person of ordinary skill in the art based on the embodiments of the present invention without creative effort shall fall within the protection scope of the present invention.
Before describing the embodiments of the present invention, some concepts used in the following description are first explained.
Image parameters: the parameters contained in each part of an image, such as the sky, sea, and beach in a landscape photo, or the background and the person in a portrait photo; the person can be further subdivided into features such as head, upper body, and lower body.
Image segmentation: dividing an image into different regions with special meaning. These regions do not intersect each other, and each region satisfies some similarity criterion for features such as grayscale, texture, and color. Image segmentation is one of the most important steps in image analysis, and the segmented regions can serve as target regions for subsequent feature extraction.
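As a purely illustrative sketch of this segmentation concept (the patent does not prescribe any particular algorithm), a minimal intensity-threshold segmentation in Python could look as follows; the function names and the threshold value are assumptions made for the example:

```python
import numpy as np

def segment_by_intensity(image, threshold=128):
    """Toy region-based segmentation: split an RGB image (HxWx3 uint8
    numpy array) into two non-intersecting regions by mean intensity."""
    gray = image.mean(axis=2)      # per-pixel mean over the 3 channels
    bright = gray >= threshold     # boolean mask, region 1
    dark = ~bright                 # boolean mask, region 2 (disjoint by construction)
    return {"bright": bright, "dark": dark}

def region_mean_color(image, mask):
    """Feature extraction on a segmented region: its average RGB color."""
    return image[mask].mean(axis=0)
```

Each returned mask satisfies the non-intersection property described above and can serve as the target region for subsequent feature extraction.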
As shown in FIG. 1, an embodiment of the present invention provides an image processing method, applied to an electronic device, including:
Step 11: receive a user's first input for a first image.
Specifically, the first image may be any picture, and the first input may be a sliding operation, an operation of pressing the first image for longer than a preset time, pressing an artificial intelligence (AI) key, or the like; this is not specifically limited here.
Step 12: in response to the first input, identify and extract parameters of the first image.
Optionally, the parameters of the first image include but are not limited to at least one of object information, contour information, and image style information.
Specifically, image segmentation is performed on the first image, and the parameters contained in each part of the first image are identified and extracted, such as objects (e.g., people) in the first image; an object can be further subdivided into features such as head, upper body, and lower body.
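One hypothetical way to hold such extracted parameters is a simple keyed structure; the sketch below only mirrors the three optional parameter kinds named above, and none of the field names come from the patent:

```python
# Hypothetical shape of the extracted parameters of a first image;
# every key name here is an illustrative assumption.
first_image_params = {
    "object_info": [
        {"label": "person",
         "parts": ["head", "upper_body", "lower_body"]},  # subdivided features
    ],
    "contour_info": {"frame_slots": 6},                   # e.g. a six-slot photo frame
    "style_info": {"filter": "warm", "saturation": 1.2},
}

# The second input would then select one entry as the target parameter:
target_parameter = first_image_params["contour_info"]
```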
Step 13: receive a second input from the user.
Step 14: in response to the second input, apply a target parameter among the parameters of the first image to a second image to obtain a target image.
Specifically, the target parameter of the first image can be applied to any second image, thereby achieving the effect of customizing the parameters of an image; the second image may be multiple images, stickers, or the like, which is not specifically limited here.
For example, as shown in FIG. 2, the target parameter is the contour information of the first image (that is, the six-slot photo frame in FIG. 2); the photo frame of the first image is extracted, and the extracted frame is shown in FIG. 3. The second image (comprising multiple pictures) is filled into the frame of the first image to form a new image, that is, the target image; the resulting target image is shown in FIG. 4.
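A minimal sketch of this frame-filling example, assuming the extracted contour information reduces to a list of rectangular slots (an assumption for illustration; the patent does not restrict the frame geometry):

```python
from PIL import Image

def fill_frame(frame_boxes, pictures, canvas_size):
    """Paste one picture into each slot of an extracted photo frame.
    frame_boxes: list of (left, top, right, bottom) slot rectangles,
    assumed to come from the contour information of the first image."""
    canvas = Image.new("RGB", canvas_size, "white")
    for box, pic in zip(frame_boxes, pictures):
        slot_size = (box[2] - box[0], box[3] - box[1])        # slot width/height
        canvas.paste(pic.resize(slot_size), (box[0], box[1])) # fit picture to slot
    return canvas
```

With six boxes and six pictures this yields the kind of target image shown in FIG. 4.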
In the embodiments of the present invention, by identifying and extracting the parameters of the first image and applying the target parameter among those parameters to the second image to obtain the target image, at least one parameter in an image can be identified and the target parameter among the parameters processed, thereby realizing customization of the image.
Further, as shown in FIG. 5 and FIG. 6, in the case where the first image and the second image are the same image, the parameters of the first image include first object information and second object information, and the target parameter is the first object information, step 13 may specifically include:
receiving a second input of the user for the parameters of the first image;
and step 14 may specifically include:
applying the feature information contained in the target parameter to the feature information contained in the second object information, to obtain the target image.
Specifically, if the first image and the second image are the same image, the first image includes first object information and second object information, and the target parameter is the first object information, then through a second input on the target parameter (the first object information) the user can replace the feature information of the second object information with the feature information of the first object information, thereby obtaining the target image.
For example, the first image includes first object information (the solid triangle in FIG. 5) and second object information (the solid circle in FIG. 5). By pressing the first object information, the user obtains the feature information of the first object information (the dashed triangle indicated by the arrow from the solid triangle in FIG. 5), which floats on the current display interface; by pressing the second object information, the user obtains the feature information of the second object information (the dashed circle indicated by the arrow from the solid circle in FIG. 5), which also floats on the current display interface. The user can then drag the feature information of the first object information to the position of the second object information and apply it to the feature information of the second object information, that is, replace the feature information of the second object information with that of the first object information, thereby obtaining the target image shown in FIG. 6. The feature information may be color information or the like.
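When the feature information is color, one plausible reading of "applying" it is to transfer the average color of the first object's region onto the second object's region; the sketch below assumes boolean masks produced by a prior segmentation step:

```python
import numpy as np

def apply_region_color(image, src_mask, dst_mask):
    """Replace the color of the destination region with the mean color
    of the source region. image: HxWx3 numpy array; masks: boolean HxW."""
    out = image.copy()
    mean_color = image[src_mask].mean(axis=0)       # feature info: average RGB
    out[dst_mask] = mean_color.astype(image.dtype)  # apply it to the other object
    return out
```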
Further, before step 13, the method may further include:
receiving a third input of the user for the parameters of the first image;
in response to the third input, processing the target parameter among the parameters of the first image, and saving the processed target parameter.
Specifically, the target parameter (some or all) of the parameters of the first image may be processed, and the processed target parameter saved.
For example, as shown in FIG. 7, the identified and extracted parameters of the first image are displayed below the first image. Area A in FIG. 7 is the area where the first image is located; the extracted parameters of the first image, namely the object information, the contour information, and the image style information, are displayed below area A. The user can select and save one of the parameters by clicking on it.
For example, if the user selects the image style information, and the image style information includes first image style information and second image style information, the user can save the first image style information and the second image style information separately, or process the two and mix the two styles to form a new image style information and save it, so that users can save information such as their favorite image styles in real time; users can also give the saved image style information a custom name for convenient later use. The third input operation may be a click operation of the user on the parameters of the first image, or the like, which is not specifically limited here.
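The patent leaves open how two styles are "mixed and matched"; one assumed interpretation is linear interpolation of the numeric style values the two styles share, saved under a user-chosen name:

```python
import json

def mix_styles(style_a, style_b, weight=0.5):
    """Blend two style dicts of numeric values by linear interpolation,
    keeping only the keys present in both styles."""
    return {k: weight * style_a[k] + (1 - weight) * style_b[k]
            for k in style_a.keys() & style_b.keys()}

def save_style(style, name, path="saved_styles.json"):
    """Persist a processed style under a user-chosen name for later reuse."""
    try:
        with open(path) as f:
            styles = json.load(f)
    except FileNotFoundError:
        styles = {}
    styles[name] = style
    with open(path, "w") as f:
        json.dump(styles, f, indent=2)

# e.g. save_style(mix_styles({"saturation": 1.2}, {"saturation": 0.8}), "my_mix")
```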
Further, step 13 may specifically include:
receiving a second input of the user for the second image;
and step 14 may specifically include:
applying the saved processed target parameter to the second image to obtain the target image.
Specifically, through a second input (such as a pressing operation) on the selected second image, the user can enter an image editing interface. After entering the image editing interface, the user can select a first area of the second image and then edit the first area of the second image; alternatively, the user can select a first area of the second image and then edit a second area of the second image other than the first area, so as to form the edited target image.
For example, as shown in FIG. 8, area B is the area where the second image is located. Options available for editing, such as image styles, filters, and stickers, can be displayed below area B; by clicking an image style (which may be a style built into the system or a style saved by the user) or the like, the second image is edited to form the target image. The display position of the editing options is not limited.
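As one assumed form a saved style could take when it is applied to the second image (per-channel gain factors are invented for the example; the patent leaves the contents of a style open):

```python
import numpy as np

def apply_saved_style(image, style):
    """Apply a saved style to the selected image (HxWx3 uint8 array).
    Here the style is assumed to carry only per-channel gains."""
    out = image.astype(np.float32)
    for c, gain in enumerate(style.get("rgb_gains", (1.0, 1.0, 1.0))):
        out[..., c] *= gain                     # scale the R, G, B channels
    return np.clip(out, 0, 255).astype(np.uint8)
```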
In summary, in the embodiments of the present invention, by identifying and extracting the parameters of the first image and applying the target parameter among the parameters of the first image to the second image to obtain the target image, at least one parameter in an image can be identified and the target parameter among the parameters processed, thereby realizing customization of the image.
As shown in FIG. 9, an embodiment of the present invention further provides an electronic device 90, including:
a first acquisition module 91, configured to receive a user's first input for a first image;
a first response module 92, configured to identify and extract parameters of the first image in response to the first input;
a second acquisition module 93, configured to receive a second input from the user;
a second response module 94, configured to apply a target parameter among the parameters of the first image to a second image in response to the second input, to obtain a target image.
Optionally, the parameters of the first image include at least one of object information, contour information, and image style information.
Optionally, in the case where the first image and the second image are the same image, the parameters of the first image include first object information and second object information, and the target parameter is the first object information, the second acquisition module 93 includes:
a first acquisition unit, configured to receive a second input of the user for the parameters of the first image;
and the second response module includes:
a first response unit, configured to apply the feature information contained in the target parameter to the feature information contained in the second object information, to obtain the target image.
Optionally, the electronic device 90 further includes:
a third acquisition module, configured to receive a third input of the user for the parameters of the first image;
a third response module, configured to process the target parameter among the parameters of the first image in response to the third input, and save the processed target parameter.
Optionally, the second acquisition module 93 includes:
a second acquisition unit, configured to receive a second input of the user for the second image;
and the second response module includes:
a second response unit, configured to apply the saved processed target parameter to the second image, to obtain the target image.
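Reading this module structure as plain code, a schematic, non-normative rendering might be (all class and method names are illustrative):

```python
class ElectronicDevice90:
    """Sketch of electronic device 90; not an implementation of the patent."""

    def __init__(self, extract_params, apply_param):
        self.extract_params = extract_params  # role of first response module 92
        self.apply_param = apply_param        # role of second response module 94
        self.params = None

    def on_first_input(self, first_image):    # role of first acquisition module 91
        self.params = self.extract_params(first_image)

    def on_second_input(self, target_key, second_image):  # second acquisition module 93
        target_param = self.params[target_key]
        return self.apply_param(target_param, second_image)  # the target image
```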
The electronic device 90 can implement each process implemented by the electronic device in the method embodiments of FIG. 1 to FIG. 8; to avoid repetition, details are not described here again.
In the embodiments of the present invention, the parameters of the first image are identified and extracted by the first response module 92, and the target parameter among the parameters is applied to the second image by the second response module 94 to obtain the target image; at least one parameter in an image can be identified and the target parameter among the parameters processed, thereby realizing customization of the image.
FIG. 10 is a schematic diagram of the hardware structure of an electronic device implementing the embodiments of the present invention. The electronic device 1000 includes but is not limited to: a radio frequency unit 1001, a network module 1002, an audio output unit 1003, an input unit 1004, a sensor 1005, a display unit 1006, a user input unit 1007, an interface unit 1008, a memory 1009, a processor 1010, a power supply 1011, and other components. A person skilled in the art will understand that the structure shown in FIG. 10 does not constitute a limitation on the electronic device; the electronic device may include more or fewer components than shown, combine certain components, or arrange the components differently. In the embodiments of the present invention, electronic devices include but are not limited to mobile phones, tablet computers, notebook computers, palmtop computers, vehicle-mounted terminals, wearable devices, pedometers, and the like.
The processor 1010 is configured to:
receive a user's first input for a first image;
in response to the first input, identify and extract parameters of the first image;
receive a second input from the user;
in response to the second input, apply a target parameter among the parameters of the first image to a second image to obtain a target image.
The electronic device 1000 provided in the embodiment of the present invention can implement each process implemented by the electronic device in the method embodiments of FIG. 1 to FIG. 8; to avoid repetition, details are not described here again.
It can be seen that the electronic device identifies and extracts the parameters of the first image through the processor 1010 and applies the target parameter among the parameters to the second image to obtain the target image; at least one parameter in an image can be identified and the target parameter among the parameters processed, thereby realizing customization of the image.
It should be understood that, in the embodiments of the present invention, the radio frequency unit 1001 may be used to receive and send signals during information transmission or a call; specifically, it receives downlink data from a base station and passes it to the processor 1010 for processing, and sends uplink data to the base station. Generally, the radio frequency unit 1001 includes but is not limited to an antenna, at least one amplifier, a transceiver, a coupler, a low-noise amplifier, a duplexer, and the like. In addition, the radio frequency unit 1001 can also communicate with a network and other devices through a wireless communication system.
The electronic device provides users with wireless broadband Internet access through the network module 1002, for example helping users send and receive e-mails, browse web pages, and access streaming media.
The audio output unit 1003 can convert audio data received by the radio frequency unit 1001 or the network module 1002, or stored in the memory 1009, into an audio signal and output it as sound. Moreover, the audio output unit 1003 can also provide audio output related to a specific function performed by the electronic device 1000 (for example, a call signal reception sound or a message reception sound). The audio output unit 1003 includes a speaker, a buzzer, a receiver, and the like.
The input unit 1004 is used to receive audio or video signals. The input unit 1004 may include a graphics processing unit (GPU) 10041 and a microphone 10042; the graphics processor 10041 processes image data of still pictures or video obtained by an image capture device (such as a camera) in a video capture mode or an image capture mode. The processed image frames can be displayed on the display unit 1006. The image frames processed by the graphics processor 10041 can be stored in the memory 1009 (or another storage medium) or sent via the radio frequency unit 1001 or the network module 1002. The microphone 10042 can receive sound and process it into audio data; in a telephone call mode, the processed audio data can be converted into a format that can be sent to a mobile communication base station via the radio frequency unit 1001 and output.
The electronic device 1000 further includes at least one sensor 1005, such as a light sensor, a motion sensor, and other sensors. Specifically, the light sensor includes an ambient light sensor and a proximity sensor: the ambient light sensor can adjust the brightness of the display panel 10061 according to the ambient light, and the proximity sensor can turn off the display panel 10061 and/or the backlight when the electronic device 1000 is moved to the ear. As one kind of motion sensor, an accelerometer sensor can detect the magnitude of acceleration in all directions (generally three axes), and can detect the magnitude and direction of gravity when stationary; it can be used to identify the posture of the electronic device (such as landscape/portrait switching, related games, and magnetometer attitude calibration) and for vibration-recognition related functions (such as a pedometer or tapping). The sensor 1005 may also include a fingerprint sensor, a pressure sensor, an iris sensor, a molecular sensor, a gyroscope, a barometer, a hygrometer, a thermometer, an infrared sensor, etc., which are not repeated here.
The display unit 1006 is used to display information input by the user or information provided to the user. The display unit 1006 may include a display panel 10061, which may be configured in the form of a liquid crystal display (LCD), an organic light-emitting diode (OLED), or the like.
The user input unit 1007 can be used to receive input numeric or character information and to generate key signal inputs related to user settings and function control of the electronic device. Specifically, the user input unit 1007 includes a touch panel 10071 and other input devices 10072. The touch panel 10071, also called a touch screen, can collect the user's touch operations on or near it (for example, operations performed by the user on or near the touch panel 10071 with a finger, a stylus, or any other suitable object or accessory). The touch panel 10071 may include two parts: a touch detection device and a touch controller. The touch detection device detects the user's touch position and the signal brought by the touch operation, and transmits the signal to the touch controller; the touch controller receives the touch information from the touch detection device, converts it into contact coordinates, sends them to the processor 1010, and receives and executes commands sent by the processor 1010. In addition, the touch panel 10071 can be implemented in multiple types such as resistive, capacitive, infrared, and surface acoustic wave. Besides the touch panel 10071, the user input unit 1007 may also include other input devices 10072, which may include but are not limited to a physical keyboard, function keys (such as volume control keys and switch keys), a trackball, a mouse, and a joystick; these are not repeated here.
Further, the touch panel 10071 can cover the display panel 10061. When the touch panel 10071 detects a touch operation on or near it, it transmits the operation to the processor 1010 to determine the type of the touch event, and the processor 1010 then provides corresponding visual output on the display panel 10061 according to the type of the touch event. Although in FIG. 10 the touch panel 10071 and the display panel 10061 are implemented as two independent components to realize the input and output functions of the electronic device, in some embodiments the touch panel 10071 and the display panel 10061 may be integrated to realize the input and output functions of the electronic device; this is not specifically limited here.
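The touch pipeline just described (detection device, controller, processor, display) can be mirrored by a small self-contained sketch; every class, method, and threshold here is invented for illustration and is not a real driver API:

```python
class TouchController:
    """Stand-in for the touch controller: converts the raw signal from the
    touch detection device into contact coordinates."""
    def to_coordinates(self, raw):
        return raw["x"], raw["y"]

class Processor:
    """Stand-in for processor 1010: determines the type of the touch event."""
    def classify(self, coords, duration_ms):
        return "long_press" if duration_ms > 500 else "tap"

def handle_touch(raw):
    coords = TouchController().to_coordinates(raw)
    event = Processor().classify(coords, raw["duration_ms"])
    print(f"visual output for {event} at {coords}")  # display panel 10061 step

handle_touch({"x": 120, "y": 300, "duration_ms": 650})
```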
The interface unit 1008 is an interface for connecting an external device to the electronic device 1000. For example, the external device may include a wired or wireless headset port, an external power supply (or battery charger) port, a wired or wireless data port, a memory card port, a port for connecting a device having an identification module, an audio input/output (I/O) port, a video I/O port, an earphone port, and the like. The interface unit 1008 can be used to receive input (for example, data information or power) from an external device and transmit the received input to one or more elements within the electronic device 1000, or can be used to transfer data between the electronic device 1000 and an external device.
The memory 1009 can be used to store software programs and various data. The memory 1009 may mainly include a program storage area and a data storage area: the program storage area may store an operating system, an application program required by at least one function (such as a sound playback function or an image playback function), and the like; the data storage area may store data created according to the use of the mobile phone (such as audio data or a phone book). In addition, the memory 1009 may include a high-speed random access memory, and may also include a non-volatile memory, such as at least one magnetic disk storage device, a flash memory device, or another solid-state storage device.
The processor 1010 is the control center of the electronic device; it connects the various parts of the entire electronic device using various interfaces and lines, and performs the various functions of the electronic device and processes data by running or executing software programs and/or modules stored in the memory 1009 and by calling data stored in the memory 1009, thereby monitoring the electronic device as a whole. The processor 1010 may include one or more processing units; preferably, the processor 1010 may integrate an application processor and a modem processor, where the application processor mainly handles the operating system, user interface, application programs, and the like, and the modem processor mainly handles wireless communication. It can be understood that the modem processor may also not be integrated into the processor 1010.
The electronic device 1000 may also include a power supply 1011 (such as a battery) for supplying power to the various components; preferably, the power supply 1011 may be logically connected to the processor 1010 through a power management system, so as to implement functions such as charging management, discharging management, and power consumption management through the power management system.
In addition, the electronic device 1000 includes some functional modules not shown, which are not repeated here.
Preferably, an embodiment of the present invention further provides an electronic device, including a processor 1010, a memory 1009, and a computer program stored in the memory 1009 and executable on the processor 1010. When the computer program is executed by the processor 1010, each process of the above image processing method embodiments is implemented, and the same technical effect can be achieved; to avoid repetition, details are not repeated here.
An embodiment of the present invention further provides a computer-readable storage medium storing a computer program. When the computer program is executed by a processor, each process of the above image processing method embodiments is implemented, and the same technical effect can be achieved; to avoid repetition, details are not repeated here. The computer-readable storage medium may be, for example, a read-only memory (ROM), a random access memory (RAM), a magnetic disk, or an optical disc.
It should be noted that, in this document, the terms "comprise", "include", or any other variant thereof are intended to cover a non-exclusive inclusion, so that a process, method, article, or apparatus that includes a series of elements includes not only those elements but also other elements not explicitly listed, or elements inherent to such a process, method, article, or apparatus. Without further limitation, an element defined by the phrase "including a ..." does not exclude the existence of other identical elements in the process, method, article, or apparatus that includes the element.
Through the description of the above embodiments, a person skilled in the art can clearly understand that the methods of the above embodiments can be implemented by means of software plus the necessary general-purpose hardware platform, and of course also by hardware, although in many cases the former is the better implementation. Based on such an understanding, the technical solution of the present invention, in essence or in the part contributing to the prior art, can be embodied in the form of a software product; the computer software product is stored in a storage medium (such as a ROM/RAM, a magnetic disk, or an optical disc) and includes several instructions for causing a terminal (which may be a mobile phone, a computer, a server, an air conditioner, a network device, or the like) to execute the methods described in the embodiments of the present invention.
The embodiments of the present invention have been described above with reference to the accompanying drawings, but the present invention is not limited to the specific implementations described above; the specific implementations are merely illustrative rather than restrictive. Enlightened by the present invention, a person of ordinary skill in the art can devise many other forms without departing from the spirit of the present invention and the scope protected by the claims, all of which fall within the protection of the present invention.

Claims (10)

  1. An image processing method, applied to an electronic device, comprising:
    receiving a user's first input for a first image;
    in response to the first input, identifying and extracting parameters of the first image;
    receiving a second input from the user;
    in response to the second input, applying a target parameter among the parameters of the first image to a second image to obtain a target image.
  2. The method according to claim 1, wherein the parameters of the first image comprise at least one of object information, contour information, and image style information.
  3. The method according to claim 2, wherein, in a case where the first image and the second image are the same image, the parameters of the first image comprise first object information and second object information, and the target parameter is the first object information, the receiving a second input from the user comprises:
    receiving a second input of the user for the parameters of the first image;
    and the applying a target parameter among the parameters of the first image to a second image to obtain a target image comprises:
    applying feature information contained in the target parameter to feature information contained in the second object information, to obtain the target image.
  4. The method according to claim 1, wherein, before the receiving a second input from the user, the method further comprises:
    receiving a third input of the user for the parameters of the first image;
    in response to the third input, processing the target parameter among the parameters of the first image, and saving the processed target parameter.
  5. The method according to claim 4, wherein the receiving a second input from the user comprises:
    receiving a second input of the user for the second image;
    and the applying a target parameter among the parameters of the first image to a second image to obtain a target image comprises:
    applying the saved processed target parameter to the second image to obtain the target image.
  6. An electronic device, comprising:
    a first acquisition module, configured to receive a user's first input for a first image;
    a first response module, configured to identify and extract parameters of the first image in response to the first input;
    a second acquisition module, configured to receive a second input from the user;
    a second response module, configured to apply a target parameter among the parameters of the first image to a second image in response to the second input, to obtain a target image.
  7. The electronic device according to claim 6, wherein the parameters of the first image comprise at least one of object information, contour information, and image style information.
  8. The electronic device according to claim 7, wherein, in a case where the first image and the second image are the same image, the parameters of the first image comprise first object information and second object information, and the target parameter is the first object information, the second acquisition module comprises:
    a first acquisition unit, configured to receive a second input of the user for the parameters of the first image;
    and the second response module comprises:
    a first response unit, configured to apply feature information contained in the target parameter to feature information contained in the second object information, to obtain the target image.
  9. The electronic device according to claim 6, further comprising:
    a third acquisition module, configured to receive a third input of the user for the parameters of the first image;
    a third response module, configured to process the target parameter among the parameters of the first image in response to the third input, and save the processed target parameter.
  10. The electronic device according to claim 9, wherein the second acquisition module comprises:
    a second acquisition unit, configured to receive a second input of the user for the second image;
    and the second response module comprises:
    a second response unit, configured to apply the saved processed target parameter to the second image, to obtain the target image.
PCT/CN2021/081022 2020-03-23 2021-03-16 Image processing method and electronic device WO2021190351A1 (zh)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN202010207477.2A CN111402273A (zh) 2020-03-23 2020-03-23 Image processing method and electronic device
CN202010207477.2 2020-03-23

Publications (1)

Publication Number Publication Date
WO2021190351A1 (zh)

Family

ID=71413436

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2021/081022 WO2021190351A1 (zh) 2020-03-23 2021-03-16 图像处理方法和电子设备

Country Status (2)

Country Link
CN (1) CN111402273A (zh)
WO (1) WO2021190351A1 (zh)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111402273A (zh) * 2020-03-23 2020-07-10 维沃移动通信(杭州)有限公司 一种图像处理方法和电子设备

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106657793A (zh) * 2017-01-11 2017-05-10 维沃移动通信有限公司 Image processing method and mobile terminal
US20180365807A1 (en) * 2017-06-16 2018-12-20 Guangdong Oppo Mobile Telecommunications Corp., Ltd. Method, device and nonvolatile computer-readable medium for image composition
CN109104566A (zh) * 2018-06-28 2018-12-28 维沃移动通信有限公司 Image display method and terminal device
CN109993711A (zh) * 2019-03-25 2019-07-09 维沃移动通信有限公司 Image processing method and terminal device
CN111402273A (zh) * 2020-03-23 2020-07-10 维沃移动通信(杭州)有限公司 Image processing method and electronic device

Family Cites Families (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109089042B (zh) * 2018-08-30 2020-12-15 Oppo广东移动通信有限公司 Image processing mode identification method and apparatus, storage medium, and mobile terminal
CN109461124A (zh) * 2018-09-21 2019-03-12 维沃移动通信(杭州)有限公司 Image processing method and terminal device
CN110223237A (zh) * 2019-04-23 2019-09-10 维沃移动通信有限公司 Method for adjusting image parameters and terminal device

Also Published As

Publication number Publication date
CN111402273A (zh) 2020-07-10

Similar Documents

Publication Publication Date Title
WO2021098678A1 (zh) Screen-casting control method and electronic device
WO2021078116A1 (zh) Video processing method and electronic device
WO2021036542A1 (zh) Screen recording method and mobile terminal
US20220365641A1 (en) Method for displaying background application and mobile terminal
CN109461117B (zh) Image processing method and mobile terminal
WO2021104321A1 (zh) Image display method and electronic device
WO2021190428A1 (zh) Image capturing method and electronic device
WO2021147779A1 (zh) Configuration information sharing method, terminal device, and computer-readable storage medium
WO2021136159A1 (zh) Screenshot method and electronic device
WO2021190429A1 (zh) Image processing method and electronic device
WO2020182035A1 (zh) Image processing method and terminal device
CN107592459A (zh) Photographing method and mobile terminal
WO2021004426A1 (zh) Content selection method and terminal
WO2020220990A1 (zh) Receiver control method and terminal
WO2021077908A1 (zh) Parameter adjustment method and electronic device
WO2021104160A1 (zh) Editing method and electronic device
CN109819168B (zh) Camera starting method and mobile terminal
WO2021197165A1 (zh) Picture processing method and electronic device
WO2021036553A1 (zh) Icon display method and electronic device
WO2021190387A1 (zh) Detection result output method, electronic device, and medium
CN109448069B (zh) Template generation method and mobile terminal
WO2020011080A1 (zh) Display control method and terminal device
CN109618218B (zh) Video processing method and mobile terminal
WO2021104159A1 (zh) Display control method and electronic device
WO2021129818A1 (zh) Video playback method and electronic device

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 21774245

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 21774245

Country of ref document: EP

Kind code of ref document: A1

32PN Ep: public notification in the EP bulletin as address of the addressee cannot be established

Free format text: NOTING OF LOSS OF RIGHTS PURSUANT TO RULE 112(1) EPC (EPO FORM 1205A DATED 23.02.2023)
