CN115857749A - Processing method, intelligent terminal and storage medium - Google Patents

Processing method, intelligent terminal and storage medium

Info

Publication number
CN115857749A
CN115857749A (application number CN202211574246.0A)
Authority
CN
China
Prior art keywords
target
user
image
selection operation
optionally
Prior art date
Legal status
Pending
Application number
CN202211574246.0A
Other languages
Chinese (zh)
Inventor
漆伟
施富凯
Current Assignee
Shanghai Chuanying Information Technology Co Ltd
Original Assignee
Shanghai Chuanying Information Technology Co Ltd
Priority date
Filing date
Publication date
Application filed by Shanghai Chuanying Information Technology Co Ltd filed Critical Shanghai Chuanying Information Technology Co Ltd
Priority to CN202211574246.0A priority Critical patent/CN115857749A/en
Publication of CN115857749A publication Critical patent/CN115857749A/en
Pending legal-status Critical Current

Abstract

The application provides a processing method, an intelligent terminal and a storage medium. The processing method can be applied to an intelligent terminal and comprises the following steps: determining a target image element in response to an element selection operation; and, in response to an object selection operation for a target rendering object, rendering the corresponding element of the target rendering object into the target image element. With this technical scheme, when the user selects a target image element and a target rendering object on the video call interface, the corresponding element of the target rendering object can be rendered into the target image element, and the user can see the target rendering object after element rendering without the target image element being segmented and fused. This solves the problem that image replacement cannot be performed and thus improves user experience.

Description

Processing method, intelligent terminal and storage medium
Technical Field
The present application relates to the field of image processing technologies, and in particular, to a processing method, an intelligent terminal, and a storage medium.
Background
At present, the social content of users is increasingly diversified, and person-image replacement technology has been introduced to make user social interaction more interesting.
In the course of conceiving and implementing the present application, the inventors found at least the following problems: in some implementations, person-image replacement generally employs a region segmentation technique and/or an image fusion technique. When the image of a person is replaced using region segmentation, jagged edges appear around the segmented elements, making the replaced picture look rough. If image fusion is used instead, the result is easily affected by noise and the edges of the fused image are blurred, so a lifelike fusion effect cannot be achieved. In addition, because hair-style replacement requires the images to be segmented and fused, a long background processing time is needed and the person's image cannot be replaced visibly during a video call, resulting in poor user experience.
The foregoing description is provided for general background information and is not admitted to be prior art.
Disclosure of Invention
In view of the above technical problems, the present application provides a processing method, an intelligent terminal and a storage medium, so that a user can realize image replacement, and user experience is improved.
The application provides a processing method, which can be applied to an intelligent terminal and comprises the following steps:
S10: determining a target image element in response to an element selection operation;
S20: in response to an object selection operation for a target rendering object, rendering the corresponding element of the target rendering object into the target image element.
Optionally, before step S10, the method further includes: displaying at least one local image element in response to a local mode selection operation.
Optionally, the local image element is stored in a local device or a corresponding server.
Optionally, the local image element comprises at least one of an image element template and an image element color.
Optionally, before step S10, the method further includes:
in response to an interactive mode selection operation, performing image processing on each person to partition the image element area of each person.
Optionally, step S10 includes:
in response to an element selection operation, taking the selected image element region as the target image element.
Optionally, after step S10, the method further includes:
displaying an avatar element determined or generated based on the target image element.
Optionally, step S20 includes: in response to a region selection operation on the avatar element, taking the person corresponding to the region where the selected avatar element is located as the target rendering object;
rendering the corresponding element of the target rendering object into the avatar element.
Optionally, the target rendering object comprises at least one portrait area.
Optionally, the method further comprises: classifying each portrait, attaching a classification label to each portrait, and arranging the classification labels to form classification options.
Optionally, step S20 includes:
in response to an object selection operation for a target classification option among the classification options, confirming the portrait corresponding to the target classification option as the target rendering object;
rendering the corresponding element of the target rendering object into the target image element.
Optionally, after step S20, at least one of the following is further included:
in response to a storage operation for the rendered target rendering object, storing an image of the rendered target rendering object;
in response to a sharing selection operation for the rendered target rendering object, sharing the image of the rendered target rendering object with the contact corresponding to the sharing selection operation.
The application also provides an intelligent terminal, including: the device comprises a memory and a processor, wherein the memory stores a processing program, and the processing program realizes the steps of any one of the processing methods when being executed by the processor.
The present application also provides a storage medium storing a computer program which, when executed by a processor, implements the steps of the processing method as described in any one of the above.
As described above, the processing method of the present application, applicable to an intelligent terminal, comprises the steps of: determining a target image element in response to an element selection operation; and, in response to an object selection operation for a target rendering object, rendering the corresponding element of the target rendering object into the target image element. With this technical scheme, when the user selects a target image element and a target rendering object on the video call interface, the corresponding element of the target rendering object can be rendered into the target image element, and the user can see the target rendering object after element rendering without the target image element being segmented and fused. This solves the problem that image replacement cannot be performed and thus improves user experience.
Drawings
The accompanying drawings, which are incorporated in and constitute a part of this specification, illustrate embodiments consistent with the present application and together with the description, serve to explain the principles of the application. In order to more clearly illustrate the technical solutions of the embodiments of the present application, the drawings needed to be used in the description of the embodiments will be briefly described below, and it is obvious for those skilled in the art to obtain other drawings based on these drawings without inventive exercise.
Fig. 1 is a schematic hardware structure diagram of a mobile terminal implementing various embodiments of the present application;
fig. 2 is a communication network system architecture diagram according to an embodiment of the present application;
FIG. 3 is a flowchart illustrating a processing method according to a first embodiment;
fig. 4 is a first scene diagram when a user is engaged in a video call according to the processing method shown in the first embodiment;
fig. 5 is a second scene diagram when the user is engaged in a video call according to the processing method shown in the first embodiment;
fig. 6 is a third scene diagram when a user is engaged in a video call according to the processing method shown in the second embodiment;
fig. 7 is a fourth scene diagram when the user is making a video call according to the processing method shown in the second embodiment;
fig. 8 is a fifth scene diagram when a user is making a video call according to the processing method shown in the third embodiment;
fig. 9 is a sixth scene diagram at the time of a video call by a user according to the processing method shown in the third embodiment;
fig. 10 is a seventh scene diagram at the time of a video call by a user according to the processing method shown in the third embodiment;
fig. 11 is an eighth scene diagram when a user is making a video call according to the processing method shown in the third embodiment;
fig. 12 is a first scene diagram at the time of photographing by the user according to the processing method shown in the fourth embodiment;
fig. 13 is a second scene diagram at the time of photographing by the user according to the processing method shown in the fourth embodiment;
fig. 14 is a third scene diagram at the time of photographing by the user according to the processing method shown in the fourth embodiment;
fig. 15 is a fourth scene diagram at the time of photographing by the user according to the processing method shown in the fourth embodiment;
fig. 16 is a fifth scene diagram at the time of photographing by the user according to the processing method shown in the fourth embodiment;
fig. 17 is a sixth scene diagram at the time of photographing by the user according to the processing method shown in the fourth embodiment.
The implementation, functional features and advantages of the objectives of the present application will be further explained with reference to the accompanying drawings. With the above figures, there are shown specific embodiments of the present application, which will be described in more detail below. These drawings and written description are not intended to limit the scope of the inventive concepts in any manner, but rather to illustrate the inventive concepts to those skilled in the art by reference to specific embodiments.
Detailed Description
Reference will now be made in detail to the exemplary embodiments, examples of which are illustrated in the accompanying drawings. When the following description refers to the accompanying drawings, like numbers in different drawings represent the same or similar elements unless otherwise indicated. The embodiments described in the following exemplary embodiments do not represent all embodiments consistent with the present application. Rather, they are merely examples of apparatus and methods consistent with certain aspects of the application, as detailed in the appended claims.
It should be noted that, in this document, the terms "comprises," "comprising," or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element defined by the phrase "comprising a … …" does not exclude the presence of additional like elements in a process, method, article, or apparatus that comprises the element, and further, components, features, elements, and/or steps that may be similarly named in various embodiments of the application may or may not have the same meaning, unless otherwise specified by its interpretation in the embodiment or by context with further embodiments.
It should be understood that although the terms first, second, third, etc. may be used herein to describe various information, such information should not be limited to these terms. These terms are only used to distinguish one type of information from another. For example, first information may also be referred to as second information, and similarly, second information may also be referred to as first information, without departing from the scope herein. The word "if", as used herein, may be interpreted as "when" or "upon" or "in response to a determination", depending on the context. Also, as used herein, the singular forms "a", "an" and "the" are intended to include the plural forms as well, unless the context indicates otherwise. It will be further understood that the terms "comprises", "comprising", "includes" and/or "including", when used in this specification, specify the presence of stated features, steps, operations, elements, components, items, species, and/or groups, but do not preclude the presence or addition of one or more other features, steps, operations, elements, components, items, species, and/or groups thereof. As used herein, the terms "or", "and/or", "including at least one of the following", and the like, are to be construed as inclusive, meaning any one or any combination. For example, "includes at least one of: A, B, C" means "any of the following: A; B; C; A and B; A and C; B and C; A and B and C"; as a further example, "A, B or C" or "A, B and/or C" means "any of the following: A; B; C; A and B; A and C; B and C; A and B and C". An exception to this definition will occur only when a combination of elements, functions, steps or operations is inherently mutually exclusive in some way.
It should be understood that, although the steps in the flowcharts in the embodiments of the present application are shown in order as indicated by the arrows, the steps are not necessarily performed in order as indicated by the arrows. The steps are not performed in the exact order shown and may be performed in other orders unless explicitly stated herein. Moreover, at least some of the steps in the figures may include multiple sub-steps or multiple stages that are not necessarily performed at the same time, but may be performed at different times, in different orders, and may be performed alternately or at least partially with respect to other steps or sub-steps of other steps.
The word "if", as used herein, may be interpreted as "when" or "upon" or "in response to a determination" or "in response to a detection", depending on the context. Similarly, the phrases "if it is determined" or "if (a stated condition or event) is detected" may be interpreted as "when it is determined" or "in response to a determination" or "when (a stated condition or event) is detected" or "in response to a detection (of a stated condition or event)", depending on the context.
It should be noted that step numbers such as S10 and S20 are used herein for the purpose of more clearly and briefly describing the corresponding contents, and do not constitute a substantial limitation on the sequence, and those skilled in the art may perform S20 first and then S10 in the specific implementation, but these should be within the protection scope of the present application.
It should be understood that the specific embodiments described herein are merely illustrative of the present application and are not intended to limit the present application.
In the following description, suffixes such as "module", "component", or "unit" used to denote elements are used only for convenience of description of the present application and have no specific meaning in themselves. Thus, "module", "component" and "unit" may be used interchangeably.
The intelligent terminal may be implemented in various forms. For example, the smart terminal described in the present application may include smart terminals such as a mobile phone, a tablet computer, a notebook computer, a palmtop computer, a Personal Digital Assistant (PDA), a Portable Media Player (PMP), a navigation device, a wearable device, a smart band, a pedometer, and the like, and fixed terminals such as a Digital TV, a desktop computer, and the like.
The following description will be given by way of example of a mobile terminal, and it will be understood by those skilled in the art that the configuration according to the embodiment of the present application can be applied to a fixed type terminal, in addition to elements particularly used for mobile purposes.
Referring to fig. 1, which is a schematic diagram of a hardware structure of a mobile terminal for implementing various embodiments of the present application, the mobile terminal 100 may include: RF (Radio Frequency) unit 101, wiFi module 102, audio output unit 103, a/V (audio/video) input unit 104, sensor 105, display unit 106, user input unit 107, interface unit 108, memory 109, processor 110, and power supply 111. Those skilled in the art will appreciate that the mobile terminal architecture shown in fig. 1 is not intended to be limiting of mobile terminals, which may include more or fewer components than those shown, or some components may be combined, or a different arrangement of components.
The following describes each component of the mobile terminal in detail with reference to fig. 1:
the radio frequency unit 101 may be configured to receive and transmit signals during information transmission and reception or during a call, and specifically, receive downlink information of a base station and then process the downlink information to the processor 110; in addition, the uplink data is transmitted to the base station. Typically, radio frequency unit 101 includes, but is not limited to, an antenna, at least one amplifier, a transceiver, a coupler, a low noise amplifier, a duplexer, and the like. In addition, the radio frequency unit 101 can also communicate with a network and other devices through wireless communication. The wireless communication may use any communication standard or protocol, including but not limited to GSM (Global System for Mobile communications), GPRS (General Packet Radio Service), CDMA2000 (Code Division Multiple Access 2000 ), WCDMA (Wideband Code Division Multiple Access), TD-SCDMA (Time Division-Synchronous Code Division Multiple Access), FDD-LTE (Frequency Division duplex-Long Term Evolution), TDD-LTE (Time Division duplex-Long Term Evolution, time Division Long Term Evolution), 5G, and so on.
WiFi belongs to short-distance wireless transmission technology, and the mobile terminal can help a user to receive and send e-mails, browse webpages, access streaming media and the like through the WiFi module 102, and provides wireless broadband internet access for the user. Although fig. 1 shows the WiFi module 102, it is understood that it does not belong to the essential constitution of the mobile terminal, and can be omitted entirely as needed within the scope not changing the essence of the invention.
The audio output unit 103 may convert audio data received by the radio frequency unit 101 or the WiFi module 102 or stored in the memory 109 into an audio signal and output as sound when the mobile terminal 100 is in a call signal reception mode, a call mode, a recording mode, a voice recognition mode, a broadcast reception mode, or the like. Also, the audio output unit 103 may also provide audio output related to a specific function performed by the mobile terminal 100 (e.g., a call signal reception sound, a message reception sound, etc.). The audio output unit 103 may include a speaker, a buzzer, and the like.
The a/V input unit 104 is used to receive audio or video signals. The a/V input Unit 104 may include a Graphics Processing Unit (GPU) 1041 and a microphone 1042, the Graphics processor 1041 Processing image data of still pictures or video obtained by an image capturing device (e.g., a camera) in a video capturing mode or an image capturing mode. The processed image frames may be displayed on the display unit 106. The image frames processed by the graphic processor 1041 may be stored in the memory 109 (or other storage medium) or transmitted via the radio frequency unit 101 or the WiFi module 102. The microphone 1042 may receive sounds (audio data) via the microphone 1042 in a phone call mode, a recording mode, a voice recognition mode, or the like, and may be capable of processing such sounds into audio data. The processed audio (voice) data may be converted into a format output transmittable to a mobile communication base station via the radio frequency unit 101 in case of a phone call mode. The microphone 1042 may implement various types of noise cancellation (or suppression) algorithms to cancel (or suppress) noise or interference generated in the course of receiving and transmitting audio signals.
The mobile terminal 100 also includes at least one sensor 105, such as a light sensor, a motion sensor, and other sensors. Optionally, the light sensor includes an ambient light sensor that may adjust the brightness of the display panel 1061 according to the brightness of ambient light, and a proximity sensor that may turn off the display panel 1061 and/or the backlight when the mobile terminal 100 is moved to the ear. As one of the motion sensors, the accelerometer sensor can detect the magnitude of acceleration in each direction (generally, three axes), can detect the magnitude and direction of gravity when stationary, and can be used for applications of recognizing the gesture of the mobile phone (such as horizontal and vertical screen switching, related games, magnetometer gesture calibration), vibration recognition related functions (such as pedometer and tapping), and the like; as for other sensors such as a fingerprint sensor, a pressure sensor, an iris sensor, a molecular sensor, a gyroscope, a barometer, a hygrometer, a thermometer, and an infrared sensor, which can be configured on the mobile phone, further description is omitted here.
The display unit 106 is used to display information input by a user or information provided to the user. The Display unit 106 may include a Display panel 1061, and the Display panel 1061 may be configured in the form of a Liquid Crystal Display (LCD), an Organic Light-Emitting Diode (OLED), or the like.
The user input unit 107 may be used to receive input numeric or character information and generate key signal inputs related to user settings and function control of the mobile terminal. Alternatively, the user input unit 107 may include a touch panel 1071 and other input devices 1072. The touch panel 1071, also referred to as a touch screen, may collect a touch operation by a user (e.g., an operation of the user on or near the touch panel 1071 using a finger, a stylus pen, or any other suitable object or accessory) thereon or nearby and drive a corresponding connection device according to a preset program. The touch panel 1071 may include two parts of a touch detection device and a touch controller. Optionally, the touch detection device detects a touch orientation of a user, detects a signal caused by a touch operation, and transmits the signal to the touch controller; the touch controller receives touch information from the touch sensing device, converts the touch information into touch point coordinates, sends the touch point coordinates to the processor 110, and can receive and execute commands sent by the processor 110. In addition, the touch panel 1071 may be implemented in various types, such as resistive, capacitive, infrared, and surface acoustic wave. The user input unit 107 may include other input devices 1072 in addition to the touch panel 1071. Optionally, other input devices 1072 may include, but are not limited to, one or more of a physical keyboard, function keys (e.g., volume control keys, switch keys, etc.), a trackball, a mouse, a joystick, and the like, and are not limited thereto.
Alternatively, the touch panel 1071 may cover the display panel 1061, and when the touch panel 1071 detects a touch operation thereon or nearby, the touch panel is transmitted to the processor 110 to determine the type of the touch event, and then the processor 110 provides a corresponding visual output on the display panel 1061 according to the type of the touch event. Although the touch panel 1071 and the display panel 1061 are illustrated as two separate components in fig. 1 to implement the input and output functions of the mobile terminal, in some embodiments, the touch panel 1071 and the display panel 1061 may be integrated to implement the input and output functions of the mobile terminal, and is not limited herein.
The interface unit 108 serves as an interface through which at least one external device is connected to the mobile terminal 100. For example, the external device may include a wired or wireless headset port, an external power supply (or battery charger) port, a wired or wireless data port, a memory card port, a port for connecting a device having an identification module, an audio input/output (I/O) port, a video I/O port, an earphone port, and the like. The interface unit 108 may be used to receive input (e.g., data information, power, etc.) from external devices and transmit the received input to one or more elements within the mobile terminal 100 or may be used to transmit data between the mobile terminal 100 and external devices.
The memory 109 may be used to store software programs as well as various data. The memory 109 may mainly include a program storage area and a data storage area, and optionally, the program storage area may store an operating system, an application program (such as a sound playing function, an image playing function, and the like) required by at least one function, and the like; the storage data area may store data (such as audio data, a phonebook, etc.) created according to the use of the cellular phone, and the like. Further, the memory 109 may include high speed random access memory, and may also include non-volatile memory, such as at least one magnetic disk storage device, flash memory device, or other volatile solid state storage device.
The processor 110 is a control center of the mobile terminal, connects various parts of the entire mobile terminal using various interfaces and lines, and performs various functions of the mobile terminal and processes data by operating or executing software programs and/or modules stored in the memory 109 and calling data stored in the memory 109, thereby performing overall monitoring of the mobile terminal. Processor 110 may include one or more processing units; preferably, the processor 110 may integrate an application processor and a modem processor, optionally the application processor primarily handles operating systems, user interfaces, application programs, etc., and the modem processor primarily handles wireless communications. It will be appreciated that the modem processor described above may not be integrated into the processor 110.
The mobile terminal 100 may further include a power supply 111 (e.g., a battery) for supplying power to various components, and preferably, the power supply 111 may be logically connected to the processor 110 via a power management system, so as to manage charging, discharging, and power consumption management functions via the power management system.
Although not shown in fig. 1, the mobile terminal 100 may further include a bluetooth module or the like, which is not described in detail herein.
In order to facilitate understanding of the embodiments of the present application, a communication network system on which the mobile terminal of the present application is based is described below.
Referring to fig. 2, fig. 2 is an architecture diagram of a communication network system according to an embodiment of the present disclosure. The communication network system is an LTE system of universal mobile telecommunications technology, and the LTE system includes a UE (User Equipment) 201, an E-UTRAN (Evolved UMTS Terrestrial Radio Access Network) 202, an EPC (Evolved Packet Core) 203, and an operator's IP service 204, which are communicatively connected in sequence.
Optionally, the UE201 may be the terminal 100 described above, and is not described herein again.
The E-UTRAN202 includes eNodeB2021 and other eNodeBs 2022, among others. Alternatively, the eNodeB2021 may be connected with other enodebs 2022 through a backhaul (e.g., X2 interface), the eNodeB2021 is connected to the EPC203, and the eNodeB2021 may provide the UE201 access to the EPC 203.
The EPC203 may include an MME (Mobility Management Entity) 2031, an HSS (Home Subscriber Server) 2032, other MMEs 2033, an SGW (Serving Gateway) 2034, a PGW (PDN Gateway) 2035, a PCRF (Policy and Charging Rules Function) 2036, and the like. Optionally, the MME2031 is a control node that handles signaling between the UE201 and the EPC203 and provides bearer and connection management. The HSS2032 is used to provide registers for managing functions such as the home location register (not shown) and holds user-specific information about service characteristics, data rates, etc. All user data may be sent through the SGW2034, the PGW2035 may provide IP address assignment for the UE201 and other functions, and the PCRF2036 is the policy and charging control decision point for service data flows and IP bearer resources, which selects and provides available policy and charging control decisions for a policy and charging enforcement function (not shown).
The IP services 204 may include the internet, intranets, IMS (IP Multimedia Subsystem), or other IP services, among others.
Although the LTE system is described as an example, it should be understood by those skilled in the art that the present application is not limited to the LTE system, but may also be applied to other wireless communication systems, such as GSM, CDMA2000, WCDMA, TD-SCDMA, 5G, and future new network systems (e.g. 6G), and the like.
Based on the above mobile terminal hardware structure and communication network system, various embodiments of the present application are provided.
First embodiment
The embodiment of the application provides a processing method, which comprises the following steps:
Step S10: determining a target image element in response to an element selection operation;
Step S20: in response to an object selection operation for a target rendering object, rendering the corresponding element of the target rendering object into the target image element.
Alternatively, the processing method may be applied to an intelligent terminal.
Optionally, the smart terminal may be a mobile terminal such as a mobile phone and a tablet.
Optionally, the mobile phone may be a dual-screen terminal, and may also be a folding screen terminal.
The application scene of this embodiment may be a video call. When a target image element and a target rendering object are selected on the video call interface, the corresponding element of the target rendering object can be rendered into the target image element, and the user can see the target rendering object after element rendering without the target image element being segmented and fused; this solves the problem that the image cannot be replaced and improves user experience.
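As a rough illustration of this S10/S20 flow, the sketch below wires the two operations to selection callbacks on the call interface. It is a minimal Python sketch; the class, the callback names and the injected GAN renderer are assumptions for illustration, not identifiers from the patent.

```python
# Minimal sketch of the S10/S20 flow; all names here are illustrative assumptions.
class VideoCallRenderer:
    def __init__(self, gan_render):
        self.gan_render = gan_render   # callable that swaps one element (e.g. a hair style)
        self.target_element = None     # set by step S10

    def on_element_selected(self, element_image):
        # S10: respond to the element selection operation and record the target image element.
        self.target_element = element_image

    def on_object_selected(self, frame, portrait_region):
        # S20: respond to the object selection operation and render the corresponding element
        # of the target rendering object into the target image element.
        if self.target_element is None:
            return frame               # nothing selected yet; show the frame unchanged
        return self.gan_render(frame, portrait_region, self.target_element)
```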
The method comprises the following specific steps:
Step S10: determining a target image element in response to an element selection operation.
Optionally, the image element may be a hair style, eyes, makeup, a piece of jewelry to be worn, or the like, and is not specifically limited.
Optionally, the target character element is an element selected by the user from at least one character element to be rendered.
Optionally, the target image element is selected by the user through an element selection operation.
Alternatively, the user selects the target character element through an element selection operation when starting the video call.
Optionally, the user triggers the element selection operation during the video call to select the target image element.
Optionally, as shown in fig. 4, text such as "interesting image" or "image rendering" may be displayed on the floating ball that provides the hair-style rendering function, which is not specifically limited.
Optionally, if it is detected that the user touches the floating ball, the floating ball is highlighted.
Optionally, as shown in fig. 5, the local mode and/or the interactive mode may be displayed in a floating frame, or displayed directly as floating text on the current interface, which is not specifically limited.
Optionally, the image elements displayed in the local mode and in the interactive mode are different.
Optionally, the target image element selected by the user through the element selection operation in the local mode is different from the target image element selected by the user through the element selection operation in the interactive mode.
Optionally, the triggering manner of the element selection operation in the local mode and/or the interactive mode is different.
Step S20: in response to an object selection operation for a target rendering object, rendering the corresponding element of the target rendering object into the target image element.
Optionally, the target rendering object comprises at least one portrait area.
Optionally, the target rendering object may be the user, the call object, or both the user and the call object, and is not specifically limited.
Optionally, after the user selects the target image element, the target rendering object is selected through an object selection operation.
Optionally, the object selection operations in the local mode and/or the interactive mode are different.
Optionally, the target rendering object selected by the user through the object selection operation in the local mode is different from the target rendering object selected by the user through the object selection operation in the interactive mode.
Optionally, the triggering manner of the object selection operation in the local mode and/or the interactive mode is different.
Alternatively, the corresponding element of the target rendering object, that is, the element of the target rendering object corresponding to the target image element, may be the user's hair style when the target image element is a hair style, or the user's eyes when the target image element is eyes.
Optionally, after the user selects the target image element and the target rendering object, the target rendering object whose corresponding element has been rendered into the target image element can be seen.
Optionally, the hair style of the user is rendered into the corresponding target image element;
optionally, the hair style of the call object is rendered into the corresponding target image element;
optionally, the hair styles of both the user and the call object are each rendered into the corresponding target image elements.
Optionally, the intelligent terminal may render the corresponding element of the target rendering object into the target image element based on a preset GAN (Generative Adversarial Network) model.
Optionally, the preset GAN model is obtained by training a model to be trained on a training set composed of portraits and image elements, with the portraits whose image elements have been replaced serving as labels.
Optionally, the intelligent terminal inputs the image of the portrait area where the target rendering object is located and/or the target image element into the preset GAN model, so that the corresponding element of the target rendering object is rendered into the target image element and an image of the target rendering object after rendering is obtained.
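A hedged sketch of that GAN inference step is given below. The patent only states that the portrait-region image and/or the target image element are fed to a preset GAN model; the generator interface, the six-channel conditioning and the normalization used here are assumptions made for illustration.

```python
import torch

def render_with_gan(generator: torch.nn.Module,
                    portrait_region: torch.Tensor,   # (1, 3, H, W), normalized to [-1, 1]
                    target_element: torch.Tensor     # (1, 3, H, W), e.g. a hair-style reference
                    ) -> torch.Tensor:
    """Assumed generator that takes the portrait crop and the target element concatenated
    along the channel axis and outputs the portrait with the corresponding element replaced."""
    generator.eval()
    with torch.no_grad():
        rendered = generator(torch.cat([portrait_region, target_element], dim=1))
    return rendered.clamp(-1.0, 1.0)
```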
Compared with the region segmentation technique and/or image fusion technique adopted in the prior art, this approach requires no long background processing time, does not segment or re-fuse the image, and does not produce jagged edges or suffer from noise; it can therefore replace a person's image and improve user experience.
Optionally, after the intelligent terminal renders the corresponding element of the target rendering object into the target image element, the rendered image of the target rendering object is stored through a storage operation for the rendered target rendering object.
Optionally, the intelligent terminal responds to a storage operation of the user for the rendered target rendering object, and stores the image of the rendered target rendering object for the user to view or share in the future.
Optionally, after the intelligent terminal renders the corresponding element of the target rendering object into the target image element, the rendered image of the target rendering object is shared, through a sharing selection operation for the rendered target rendering object, with the contact corresponding to the sharing selection operation.
Optionally, the intelligent terminal responds to a sharing selection operation of a user for the rendered target rendering object, and shares the rendered image of the target rendering object to a contact corresponding to the sharing selection operation, so that the user can share the image.
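The post-rendering store and share operations could be handled along the lines of the sketch below; the save path and the share_to_contact callback are placeholders, not APIs defined by the patent or any SDK.

```python
import os
import time

def on_store(rendered_image_bytes: bytes, save_dir: str = "./rendered") -> str:
    # Store the image of the rendered target rendering object for later viewing or sharing.
    os.makedirs(save_dir, exist_ok=True)
    path = os.path.join(save_dir, f"rendered_{int(time.time())}.png")
    with open(path, "wb") as f:
        f.write(rendered_image_bytes)
    return path

def on_share(rendered_image_bytes: bytes, contact_id: str, share_to_contact) -> None:
    # share_to_contact is an assumed callback supplied by the messaging layer.
    share_to_contact(contact_id, rendered_image_bytes)
```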
During the video call, when the user selects a target image element and a target rendering object on the video call interface, the corresponding element of the target rendering object can be rendered into the target image element, and the user can see the target rendering object after element rendering. Optionally, the corresponding element of the target rendering object is rendered with a preset GAN model, which requires no long background processing time and no segmentation or fusion of the target image element, so the problems caused by segmentation and fusion do not arise; the rendering effect is improved, the problem that image replacement cannot be performed is solved, and user experience is improved. After rendering is completed, the user can store or share the image of the rendered target rendering object, further meeting the user's social needs.
Second embodiment
Based on the first embodiment in the present application, another embodiment of the present application is provided, in which a method for performing avatar rendering using a local mode when a user performs a video call is discussed.
Before step S10, the method includes:
step A1: in response to a local mode selection operation, displaying at least one local image element, optionally stored on a local device or a corresponding server.
Optionally, in response to the local mode selection operation, that is, if the smart terminal detects that the user selects the icon of the local mode in the mode selection operation, the icon of the local mode is highlighted.
Alternatively, as shown in fig. 6, if the smart terminal detects that the user selects the local mode in the mode selection operation, the icon of the local mode is highlighted.
Optionally, at least one local character element is displayed in response to a local mode selection operation.
Optionally, the local figure element comprises at least one of a figure element template and a figure element color.
Optionally, the local image element is stored in the local device or the corresponding server, which is not limited specifically.
Optionally, if the intelligent terminal detects that the user selects the local mode in the mode selection operation, highlighting an icon of the local mode and displaying an icon of an image element template derived from the local.
Optionally, if the intelligent terminal detects that the user selects the local mode in the mode selection operation, highlighting the icon of the local mode and displaying the icon of the locally derived image element color.
Alternatively, as shown in fig. 6, if the smart terminal detects that the user selects the local mode in the mode selection operation, the smart terminal highlights an icon of the local mode and displays an icon of an avatar element template derived from the local and an icon of an avatar element color.
Optionally, if the intelligent terminal detects a touch operation of the user on an icon of the image element template, highlighting the icon of the image element template selected by the user based on the local mode;
optionally, if the intelligent terminal detects a touch operation of the user on an icon of an image element color, the icon of the image element color selected by the user based on the local mode is highlighted.
Alternatively, the target image element determined in response to the element selection operation may be the combination of the highlighted image element template and image element color selected by the user based on the local mode.
Alternatively, as shown in fig. 7, in response to an object selection operation for the target rendering object, the user's hair, rendered into the selected hair style and hair color, is displayed directly on the call interface.
Optionally, in response to an object selection operation for the target rendering object, the call object's hair, rendered into the selected hair style and hair color, is displayed directly on the call interface.
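One way the local-mode template and color could be combined into the target image element is sketched below; the alpha-mask tinting scheme is purely an assumption, since the patent does not specify how template and color are merged.

```python
import numpy as np

def build_target_element(template_rgba: np.ndarray, hair_color_rgb: tuple) -> np.ndarray:
    """template_rgba: (H, W, 4) hair-style template whose alpha channel marks the hair region;
    returns an RGB element image tinted with the selected hair color."""
    alpha = template_rgba[..., 3:4].astype(np.float32) / 255.0
    tinted = alpha * np.array(hair_color_rgb, dtype=np.float32)              # color inside the hair mask
    untouched = (1.0 - alpha) * template_rgba[..., :3].astype(np.float32)    # keep the rest of the template
    return (tinted + untouched).astype(np.uint8)
```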
Through the local mode, the user can change his or her own hair style during the video call. This enriches the hair-style rendering function during video calls, makes video calls more interesting, and beautifies the person's image.
Third embodiment
Based on the first embodiment and the second embodiment in the present application, another embodiment of the present application is provided, in which a method for performing avatar rendering using an interactive mode when a user performs a video call is discussed.
Before step S10, the method includes:
Step B1: in response to an interactive mode selection operation, performing image processing on each person and partitioning the image element area of each person;
optionally, in response to the interactive mode selection operation, that is, if the intelligent terminal detects that the user selects the icon of the interactive mode in the mode selection operation, the icon of the interactive mode is highlighted.
Optionally, as shown in fig. 8, if the intelligent terminal detects that the user selects the interaction mode in the mode selection operation, the icon of the interaction mode is highlighted, and an element selection focus is displayed.
Optionally, in response to the interactive mode selection operation, the intelligent terminal performs image processing on each person to partition an image element area of each person.
Optionally, at least one user and at least one call object may be present on the video call interface.
For example, the intelligent terminal performs image processing on two people and partitions the image element areas of the two people on the video call interface: one is the image element area corresponding to the user, and the other is the image element area corresponding to the call object.
Optionally, the image element area corresponding to the user includes an eye area of the user, a hair style area of the user, and the like, which is not limited specifically.
Optionally, the image element area corresponding to the call object includes an eye area of the call object, a hair style area of the call object, and the like, and is not limited specifically.
Optionally, in response to the element selection operation, the image element region corresponding to the selected user is taken as the target image element.
Optionally, in response to the element selection operation, the image element area corresponding to the selected call object is taken as the target image element.
Optionally, the user may select the target image element by touching and dragging the element selection focus, or by first touching the element selection focus and then touching the target position to move the focus there, and the like, which is not specifically limited.
Optionally, if the user moves the element selection focus to the area where the hair style of the user is located, the hair style of the user is taken as the target image element.
Optionally, if the user moves the element selection focus to the area where the eyes of the call object are located, the eyes of the call object are used as the target image element.
Optionally, the highlighting may be implemented as brightness highlighting, display in a different color, display in a different size, or the like, which is not specifically limited.
Alternatively, as shown in fig. 8, the element selection focus may be a finger-shaped focus, an arrow-shaped focus, or the like, and is not limited in detail.
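The sketch below illustrates one possible shape for this interactive-mode step: each person's image element areas are kept as rectangles in interface coordinates, and the element selection focus is hit-tested against them. The data layout and the region format are assumptions; the patent does not prescribe either.

```python
from typing import Dict, Optional, Tuple

Region = Tuple[int, int, int, int]   # x, y, w, h in interface coordinates

def pick_target_element(regions_per_person: Dict[str, Dict[str, Region]],
                        focus_xy: Tuple[int, int]) -> Optional[Tuple[str, str]]:
    """regions_per_person: e.g. {"user": {"hair": (...), "eyes": (...)}, "call_object": {...}}.
    Returns (person, element_name) for the region under the element selection focus."""
    fx, fy = focus_xy
    for person, regions in regions_per_person.items():
        for element_name, (x, y, w, h) in regions.items():
            if x <= fx <= x + w and y <= fy <= y + h:
                return person, element_name   # this region becomes the target image element
    return None                               # focus is not over any image element area
```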
After step S10, the method comprises:
Step B2: displaying an avatar element determined or generated based on the target image element;
optionally, if the intelligent terminal detects that the user selects the target image element, displaying the virtual image element determined or generated based on the target image element;
optionally, to determine or generate the avatar element based on the target image element, the image of the image element region of the target rendering object may be input to a preset neural network model, which determines or generates the corresponding avatar element.
Optionally, the preset neural network model is obtained by training a model to be trained on sample data that pairs images of image element regions with the corresponding avatar elements.
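A minimal sketch of this avatar-element generation step is shown below; the cropping convention, tensor layout and model interface are assumptions, since the patent only states that the image of the image element region is input to a preset neural network model.

```python
import torch

def generate_avatar_element(model: torch.nn.Module,
                            frame: torch.Tensor,   # (3, H, W) current video frame
                            region: tuple          # (x, y, w, h) of the selected image element region
                            ) -> torch.Tensor:
    x, y, w, h = region
    crop = frame[:, y:y + h, x:x + w].unsqueeze(0)   # (1, 3, h, w) image of the element region
    model.eval()
    with torch.no_grad():
        avatar_element = model(crop)                 # e.g. a virtual hair style to display on the interface
    return avatar_element
```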
Alternatively, as shown in fig. 9, if it is detected that the user selects the hair style of the user on the current video call interface as the target image element, the intelligent terminal displays the virtual image element determined or generated based on the hair style of the user.
Optionally, the element selection focus (such as a cursor) is moved and finally falls on the area where the image element of the portrait is located on the current video call interface, and then the avatar element is determined or generated based on the image element of the portrait and displayed on the current video call interface.
Optionally, if the intelligent terminal detects that the element selection focus is dragged and finally falls in a hair style area of a call object on the current video call interface, determining or generating a virtual hair style based on the hair style of the call object, and displaying the virtual hair style on the current video call interface.
Optionally, if the intelligent terminal detects that the element selection focus is dragged and finally falls in a hair style area of the user on the current interface, the intelligent terminal determines or generates a virtual hair style based on the hair style of the user and displays the virtual hair style on the current video call interface.
Alternatively, as shown in fig. 10, the user may move the element selection focus onto the avatar element and select it, after which the avatar element can be moved.
Optionally, the target rendering object comprises at least one portrait area.
Optionally, the target rendering object may be the user, the call object, or both the user and the call object.
Optionally, the target rendering object may be selected according to the image element region to which the avatar element is moved: the portrait corresponding to that region is the target rendering object.
Optionally, as shown in fig. 11, when the intelligent terminal detects that the avatar element is moved to the hairstyle area of the call object, rendering the hairstyle of the call object as the avatar element based on a preset GAN model;
optionally, when detecting that the avatar element is moved to the eye area of the user, the smart terminal renders the eyes of the user as the avatar element based on a preset GAN model.
Optionally, when the intelligent terminal detects that the avatar element is moved to the hair style area of the call object and confirms rendering, rendering the hair style of the call object as the avatar element based on a preset GAN model;
optionally, when detecting that the avatar element is moved to the eye area of the user and confirming rendering, the smart terminal renders the eyes of the user as the avatar element based on a preset GAN model.
Optionally, as shown in fig. 11, when it is detected that the avatar element is moved to the hair style area of the call object, the smart terminal renders the hair style of the call object to the avatar element determined or generated based on the hair style of the user based on a preset GAN model, and displays an image of the target rendered object after rendering the hair style;
optionally, when detecting that the avatar element is moved to the hair style area of the user, the smart terminal renders the hair style of the user to the avatar element determined or generated based on the hair style of the call object based on a preset GAN model, and displays an image of the target rendering object after rendering the hair style.
Alternatively, the target rendering object may also be selected between step S10 and step S20 as follows:
Step C1: classifying each portrait and attaching a classification label to each portrait;
optionally, each portrait is classified and given a classification label; for example, without specific limitation, the user's own portrait at either end of the video call may be labeled "oneself" or with the user's own WeChat nickname, and the call object's portrait may be labeled "call object" or with the call object's WeChat nickname, and the like.
Step C2: arranging the classification labels to form classification options;
optionally, the classification labels are arranged to form the classification options; the labels may be displayed near the avatar element in a floating frame, or embedded as text in the corresponding portrait areas.
Optionally, in response to an object selection operation for a target classification option among the classification options, the portrait corresponding to the target classification option is confirmed as the target rendering object, and the corresponding element of the target rendering object is rendered into the target image element.
Optionally, if the user triggers the nickname of the user's own WeChat, the user is determined to be a target rendering object, and the user's hairstyle is rendered into the target image element.
Optionally, if the user triggers a nickname for the WeChat of the call object, the call object is determined as a target rendering object, and the hairstyle of the call object is rendered into the target image element.
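Steps C1/C2 and the label-based object selection could look roughly like the sketch below; the label texts follow the examples above, while the dictionaries and function names are illustrative assumptions.

```python
from typing import Any, Dict

def build_classification_options(portraits: Dict[str, Any]) -> Dict[str, str]:
    """portraits maps "self"/"peer" to portrait regions; returns displayed label -> portrait key."""
    label_text = {"self": "oneself", "peer": "call object"}   # or the corresponding WeChat nicknames
    return {label_text.get(key, key): key for key in portraits}

def on_option_selected(options: Dict[str, str], tapped_label: str, portraits: Dict[str, Any]) -> Any:
    # The portrait behind the tapped classification option becomes the target rendering object for step S20.
    return portraits[options[tapped_label]]
```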
Through the interactive mode, the user can change his or her own hair style or the call object's hair style during the video call. This enriches the hair-style rendering function during video calls, makes video calls more interesting, and beautifies the person's image.
Fourth embodiment
Based on the first embodiment, the second embodiment, and the third embodiment, the present application further provides another embodiment, and in this embodiment, a processing method when a user takes a picture is discussed.
Optionally, in addition to the above-mentioned character image rendering during the video call, the present embodiment also provides a processing method for the user during the photographing.
Alternatively, as shown in fig. 12, when the user takes a picture, the element template selection icon is displayed in response to the user's trigger operation on the hair-change function on the photographing interface.
Optionally, the hair-change function may be preset as a function of the smart terminal's own camera, or may be hair-change software or a hair-change program installed in the smart terminal, and the like, which is not specifically limited.
Alternatively, as shown in fig. 12, the hair-change function is the AI (Artificial Intelligence) hair-change function among the functions of the smart terminal's own camera.
Optionally, the trigger operation may be sliding to the hair-change function or tapping the hair-change function, and the like, which is not specifically limited.
Optionally, the element template selection icon may be displayed in a floating manner, or displayed in an embedded interface, and the like, and is not limited specifically.
Alternatively, as shown in fig. 13, when the user takes a picture, the element template selection icon is displayed in response to the user's trigger operation on the hair-change function on the photographing interface; in response to the user's trigger operation on the element template selection icon, the element template selection icon is highlighted and the image element template icons are displayed.
Optionally, the intelligent terminal responds to the selection operation of the user on the image element template icon, takes the target image element template selected by the user as an avatar element, and highlights the avatar element.
Optionally, as shown in fig. 14, after the intelligent terminal detects that the target image element template is selected, an element transition icon is displayed.
Alternatively, as shown in fig. 15, the smart terminal highlights the element transition icon in response to a user's trigger operation for the element transition icon.
Optionally, as shown in fig. 16, in response to the user's trigger operation on the element transition icon, the smart terminal highlights the element transition icon and displays a mask image of the target rendering object with the hair style taken as the avatar element, so that the user can see the mask image of the hair-style transition.
Optionally, as shown in fig. 16, in response to the user's touch operation on the photographing key on the interface, the hair style of the photographed object is rendered into the target image element template based on the preset GAN model, and an image of the target rendering object after hair-style rendering is displayed;
optionally, in response to the user's touch operation on the photographing key on the interface, the eyes of the photographed object are rendered into the avatar element based on the preset GAN model, and the rendered image with the changed eyes is displayed.
Optionally, in response to a touch operation of a user on a photographing key on an interface, rendering the hair style of the call object as the avatar element based on a preset GAN model, and displaying a rendered interface after the hair style of the call object is changed.
Optionally, the photographed object may be the user or another person in the frame, and is not specifically limited.
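The photographing flow in this embodiment could be sketched as below: overlay a mask preview of the element to be replaced, then render when the photographing key is pressed. The mask source, the overlay color and the gan_render callable are assumed helpers, not functions defined by the patent.

```python
import numpy as np

def preview_mask(frame: np.ndarray, hair_mask: np.ndarray) -> np.ndarray:
    """Overlay the hair-style mask so the user sees which region will be replaced (cf. fig. 16)."""
    overlay = frame.copy()
    highlight = overlay[hair_mask > 0] * 0.5 + np.array([0, 255, 0], dtype=np.float32) * 0.5
    overlay[hair_mask > 0] = highlight.astype(np.uint8)
    return overlay

def on_shutter(frame: np.ndarray, target_template: np.ndarray, gan_render) -> np.ndarray:
    # gan_render: callable implementing the preset GAN rendering step described in the first embodiment.
    return gan_render(frame, target_template)
```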
According to this embodiment, the hair style of the photographed object can be changed when the user takes a picture. This enriches the hair-style rendering function during photographing, makes photographing more interesting, and beautifies the person's image.
Optionally, in an actual implementation, a combination judgment may also be performed according to an actual situation, as shown in table 1 below.
TABLE 1 (combination examples; the table body appears as images in the original publication)
For example, for combination example 1, the scene information (such as a user making a video call), the rendering mode (such as the local mode), step S10 and step S20 may be combined into a technical solution.
Optionally, the corresponding technical solution is: in response to a local mode selection operation, displaying at least one local image element, wherein the local image element is stored in the local device or in a corresponding server.
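A minimal sketch of how the local mode could gather the image elements to display is given below; the folder name, file format, and the optional server fetch are assumptions made for illustration and are not taken from this application.

from pathlib import Path
from typing import Callable, List, Optional

def list_local_elements(template_dir: str = "avatar_templates") -> List[str]:
    """Collect image element templates stored on the local device (assumed to be PNG files)."""
    folder = Path(template_dir)
    if not folder.is_dir():
        return []
    return sorted(p.stem for p in folder.glob("*.png"))

def list_server_elements(fetch: Optional[Callable[[], List[str]]] = None) -> List[str]:
    """`fetch` stands in for a request to the corresponding server; fall back to local-only display on failure."""
    try:
        return list(fetch()) if fetch else []
    except Exception:
        return []

if __name__ == "__main__":
    elements = list_local_elements() + list_server_elements()
    print("local image elements to display:", elements or ["<none found>"])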
For another example, for combination example 2, the scene information (e.g., the user is in a video call), the rendering mode (e.g., the interactive mode), step S10, and step S20 may be combined to form a technical solution.
Optionally, the corresponding technical solution is: in response to an element selection operation, determining a target image element; in response to an interactive mode selection operation, performing image processing on each person and dividing out an image element region for each person; and, in response to the element selection operation, taking the selected image element region as the target image element.
As a further example, for combination example 3, the scene information (e.g., the user is in a video call), the avatar element, the rendering mode (e.g., the interactive mode), step S10, and step S20 may be combined to form a technical solution.
Optionally, the corresponding technical solution is: in response to an element selection operation, determining a target image element; displaying an avatar element determined or generated based on the target image element; in response to a region selection operation on the avatar element, taking the person corresponding to the region where the selected avatar element is located as the target rendering object; and rendering the corresponding element of the target rendering object into the avatar element.
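The interactive-mode flow of combination example 3 can be pictured with the following sketch, in which the segmentation result and the hit-testing helpers are stand-ins invented for illustration: tapping an image element region selects the target image element, and tapping the region where an avatar element is displayed selects the corresponding person as the target rendering object.

from dataclasses import dataclass
from typing import List, Tuple

@dataclass
class ElementRegion:
    person_id: int
    element_type: str                # e.g. "hairstyle"
    bbox: Tuple[int, int, int, int]  # x1, y1, x2, y2 in frame coordinates

def segment_people_stub() -> List[ElementRegion]:
    """Placeholder segmentation result: two people, one hairstyle region each."""
    return [
        ElementRegion(0, "hairstyle", (100, 50, 300, 200)),
        ElementRegion(1, "hairstyle", (700, 60, 900, 210)),
    ]

def hit_test(regions: List[ElementRegion], tap: Tuple[int, int]) -> ElementRegion:
    """Return the element region containing the tapped point."""
    x, y = tap
    for r in regions:
        x1, y1, x2, y2 = r.bbox
        if x1 <= x <= x2 and y1 <= y <= y2:
            return r
    raise ValueError("tap did not land on any element region")

if __name__ == "__main__":
    regions = segment_people_stub()
    target_element = hit_test(regions, (150, 100))  # element selection operation
    target_object = hit_test(regions, (750, 100))   # region selection on the avatar element
    print("target image element taken from person", target_element.person_id)
    print("target rendering object is person", target_object.person_id)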
Through the above combination schemes, a suitable technical solution can be determined more accurately and/or intelligently according to the actual scenario, thereby further improving the user experience.
The above list is only a set of reference examples; to avoid redundancy, not all combinations are enumerated here. In actual development or application, the features may be flexibly combined according to actual needs, and any such combination belongs to the technical solutions of the present application and falls within its protection scope.
The embodiment of the present application further provides an intelligent terminal, which includes a memory and a processor, where the memory stores a processing program, and the processing program, when executed by the processor, implements the steps of the processing method in any of the embodiments.
The present application further provides a storage medium, in which a processing program is stored, and the processing program, when executed by a processor, implements the steps of the processing method in any of the above embodiments.
The embodiments of the intelligent terminal and the storage medium provided in the present application may include all technical features of any of the above processing method embodiments; their expanded descriptions are substantially the same as those of the method embodiments and are not repeated here.
Embodiments of the present application further provide a computer program product, which includes computer program code, when the computer program code runs on a computer, the computer is caused to execute the method as in the above various possible embodiments.
Embodiments of the present application further provide a chip, which includes a memory and a processor, where the memory is used to store a computer program, and the processor is used to call and run the computer program from the memory, so that a device in which the chip is installed executes the method in the above various possible embodiments.
It is to be understood that the foregoing scenarios are only examples, and do not constitute a limitation on application scenarios of the technical solutions provided in the embodiments of the present application, and the technical solutions of the present application may also be applied to other scenarios. For example, as can be known by those skilled in the art, with the evolution of system architecture and the emergence of new service scenarios, the technical solution provided in the embodiments of the present application is also applicable to similar technical problems.
The above-mentioned serial numbers of the embodiments of the present application are merely for description and do not represent the merits of the embodiments.
The steps in the method of the embodiment of the application can be sequentially adjusted, combined and deleted according to actual needs.
The units in the device in the embodiment of the application can be merged, divided and deleted according to actual needs.
In the present application, the same or similar terms, technical solutions, and/or application scenario descriptions are generally described in detail only at their first occurrence; for brevity, the detailed description is not repeated when they appear again. When understanding the technical solutions of the present application, reference may be made to the earlier detailed descriptions for any such terms, solutions, or scenarios that are not described in detail later.
In the present application, each embodiment is described with emphasis, and reference may be made to the description of other embodiments for parts that are not described or illustrated in any embodiment.
All possible combinations of the technical features in the embodiments are not described in the present application for the sake of brevity, but should be considered as the scope of the present application as long as there is no contradiction between the combinations of the technical features.
Through the above description of the embodiments, those skilled in the art will clearly understand that the methods of the above embodiments can be implemented by software plus a necessary general-purpose hardware platform, and can certainly also be implemented by hardware, but in many cases the former is the better implementation. Based on such understanding, the technical solutions of the present application may be embodied in the form of a software product, which is stored in a storage medium as above (e.g., ROM/RAM, magnetic disk, optical disk) and includes several instructions for causing a terminal device (e.g., a mobile phone, a computer, a server, a controlled terminal, or a network device) to execute the method of each embodiment of the present application.
In the above embodiments, the implementation may be realized wholly or partially by software, hardware, firmware, or any combination thereof. When implemented in software, it may be realized wholly or partially in the form of a computer program product. The computer program product includes one or more computer instructions. When the computer program is loaded and executed on a computer, the procedures or functions according to the embodiments of the present application are generated wholly or partially. The computer may be a general-purpose computer, a special-purpose computer, a computer network, or another programmable device. The computer instructions may be stored in a storage medium or transmitted from one storage medium to another; for example, the computer instructions may be transmitted from one website, computer, server, or data center to another website, computer, server, or data center by wire (e.g., coaxial cable, optical fiber, digital subscriber line) or wirelessly (e.g., infrared, radio, microwave, etc.). The storage medium may be any available medium that can be accessed by a computer, or a data storage device such as a server or data center integrating one or more available media. The usable medium may be a magnetic medium (e.g., floppy disk, hard disk, magnetic tape), an optical medium (e.g., DVD), or a semiconductor medium (e.g., Solid State Disk (SSD)), among others.
The above description is only a preferred embodiment of the present application and is not intended to limit the scope of the present application. Any equivalent structural or process modification made using the contents of the specification and drawings of the present application, or any direct or indirect application thereof in other related technical fields, is likewise included within the protection scope of the present application.

Claims (10)

1. A method of processing, comprising the steps of:
s10: responding to element selection operation, and determining a target image element;
s20: in response to an object selection operation for a target rendering object, rendering a corresponding element of the target rendering object into the target avatar element.
2. The method of claim 1, wherein step S10 is preceded by:
displaying at least one local image element in response to a local mode selection operation.
3. The method of claim 2, wherein the local image element comprises at least one of an image element template and an image element color.
4. The method according to any one of claims 1 to 3, wherein step S10 is preceded by:
in response to an interactive mode selection operation, performing image processing on each person, and dividing out an image element region of each person;
the step S10 includes:
and in response to an element selection operation, taking the selected image element region as a target image element.
5. The method of claim 4, wherein after step S10, further comprising:
displaying the determined or generated avatar element based on the target avatar element;
the step S20 includes:
in response to a region selection operation on the avatar element, taking the person corresponding to the region where the selected avatar element is located as the target rendering object;
rendering the corresponding element of the target rendering object into the avatar element.
6. The method according to any one of claims 1 to 3, wherein the target rendering object comprises at least one portrait area.
7. The method of claim 6, wherein the method further comprises:
classifying the portraits, attaching classification labels to the portraits, and arranging the classification labels to form classification options;
step S20 includes:
in response to an object selection operation for a target classification option among the classification options, determining the portrait corresponding to the target classification option as the target rendering object;
rendering corresponding elements of the target rendering object into the target avatar elements.
8. The method according to any of claims 1 to 3, wherein after step S20, further comprising at least one of:
in response to a storage operation for the rendered target rendering object, storing an image of the rendered target rendering object;
and in response to a sharing selection operation for the rendered target rendering object, sharing the image of the rendered target rendering object to the contact corresponding to the sharing selection operation.
9. An intelligent terminal, comprising: memory, processor, wherein the memory has stored thereon a processing program which, when executed by the processor, implements the steps of the processing method of any of claims 1 to 8.
10. A storage medium, characterized in that the storage medium has stored thereon a computer program which, when being executed by a processor, carries out the steps of the processing method according to any one of claims 1 to 8.
CN202211574246.0A 2022-12-08 2022-12-08 Processing method, intelligent terminal and storage medium Pending CN115857749A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202211574246.0A CN115857749A (en) 2022-12-08 2022-12-08 Processing method, intelligent terminal and storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202211574246.0A CN115857749A (en) 2022-12-08 2022-12-08 Processing method, intelligent terminal and storage medium

Publications (1)

Publication Number Publication Date
CN115857749A true CN115857749A (en) 2023-03-28

Family

ID=85671258

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202211574246.0A Pending CN115857749A (en) 2022-12-08 2022-12-08 Processing method, intelligent terminal and storage medium

Country Status (1)

Country Link
CN (1) CN115857749A (en)


Legal Events

Date Code Title Description
PB01 Publication