CN115022270A - Image processing method, intelligent terminal and storage medium

Info

Publication number: CN115022270A
Application number: CN202210590910.4A
Authority: CN (China)
Prior art keywords: intelligent terminal, target picture, group, editing, intelligent
Legal status: Pending
Other languages: Chinese (zh)
Inventor: 郑欢 (Zheng Huan)
Original and current assignee: Shanghai Chuanying Information Technology Co Ltd
Application filed by Shanghai Chuanying Information Technology Co Ltd; priority to CN202210590910.4A; publication of CN115022270A

Classifications

    • H04L 51/10: Multimedia information in user-to-user messaging in packet-switching networks, e.g. e-mail
    • H04M 1/72439: User interfaces for mobile telephones with interactive means for internal management of messages, for image or video messaging
    • H04W 4/021: Services related to particular areas, e.g. point of interest [POI] services, venue services or geofences
    • H04W 4/06: Selective distribution of broadcast services, e.g. multimedia broadcast multicast service [MBMS]; services to user groups; one-way selective calling services
    • H04W 4/12: Messaging; mailboxes; announcements
    • H04W 84/12: WLAN [Wireless Local Area Networks]

Abstract

The application provides an image processing method, an intelligent terminal and a storage medium. The method is applied to a first intelligent terminal in a group on the same local area network and comprises the following steps: when a user clicks a first target picture in a picture list area, generating an editing instruction for the first target picture; in response to the editing instruction, jumping to an editing interface of the first target picture; when the user finishes editing and clicks a save button, generating a saving instruction; in response to the saving instruction, saving the edited first target picture on the first intelligent terminal; and, also in response to the saving instruction, broadcasting the edited first target picture to the other intelligent terminals in the group over the local area network where the group is located. By broadcasting the edited first target picture to the other intelligent terminals in the group, the method improves picture processing efficiency and, in particular, retouching efficiency when multiple users edit the same photo.

Description

Image processing method, intelligent terminal and storage medium
Technical Field
The present application relates to the field of communications technologies, and in particular, to an image processing method, an intelligent terminal, and a storage medium.
Background
Young people frequently take pictures at parties or on trips to share their lives. After shooting, however, a user often wants to retouch the photo to look better in it. In the prior art, the user can open the photo to be retouched on an intelligent terminal with retouching software and use the functions the software provides to retouch it.
However, at a party or on a trip, young people typically take photos of multiple people. When such a photo is processed, several users may each want to modify their own part of it. For this scenario, the inventor found that the prior art suffers at least from low retouching efficiency for group photos and the like.
The foregoing description is provided for general background information and is not admitted to be prior art.
Disclosure of Invention
In view of the above technical problems, the present application provides an image processing method, an intelligent terminal and a storage medium, so that multiple users can obtain, modify and upload photos simultaneously over a local area network; the efficiency with which multiple users retouch the same photo is improved, and the operation is simple and convenient.
To solve the above technical problem, the present application provides an image processing method, applied to a first intelligent terminal in a group on the same local area network, comprising:
S101: in response to an editing instruction for a first target picture, entering an editing interface of the first target picture;
S102: in response to a saving instruction for the first target picture, broadcasting the edited first target picture to the other intelligent terminals in the group.
Optionally, step S102 specifically includes:
S1021: acquiring the editing area of the first target picture;
S1022: in response to the saving instruction for the first target picture, overlaying the editing area onto the corresponding area of the first target picture on the first intelligent terminal;
S1023: in response to the saving instruction for the first target picture, broadcasting the editing area so that the other intelligent terminals in the group obtain it and overlay it onto the corresponding area of the first target picture.
Optionally, the method further includes:
S103: displaying, in the editing interface of the first target picture, the terminal information of the intelligent terminals in the group that are editing the first target picture.
Optionally, the method further comprises:
S201: creating a local area network hotspot and generating verification information so that a second intelligent terminal can join the group through the verification information.
Optionally, S201 further includes:
S2011: acquiring a first position range;
S2012: when the second intelligent terminal that inputs the verification information is within the first position range, adding the second intelligent terminal to the group.
Optionally, the method further comprises:
S202: acquiring an adding instruction generated by selecting a third intelligent terminal in a map interface;
S203: in response to the adding instruction, adding the third intelligent terminal to the group when the third intelligent terminal is not already in the group.
Optionally, the method further comprises:
S104: in response to a control instruction generated by clicking the authority control switch of another intelligent terminal in the group, turning the authority of that intelligent terminal on or off;
and/or:
S105: displaying the terminal information of the intelligent terminals in the group in a user list area of a display interface;
S106: when another intelligent terminal is in the editing interface, marking the terminal information of that intelligent terminal.
Optionally, the method further comprises:
S107: in response to a user-picture viewing instruction generated by clicking a fourth intelligent terminal in the user list area, displaying the pictures edited by the fourth intelligent terminal on a first viewing interface.
Optionally, the method further includes:
S108: displaying, in a picture list area of a display interface, the pictures uploaded to the local area network where the group is located;
S109: in response to a picture viewing instruction for a fourth target picture in the picture list area, displaying, on a second viewing interface, the fourth target picture and the terminal information of the intelligent terminals that edited the fourth target picture.
Optionally, the method further comprises:
S110: in response to a downloading instruction for a second target picture in the picture list area, downloading the second target picture from the cache to local storage;
and/or:
S111: in response to an uploading instruction for a third target picture, broadcasting the third target picture to the other intelligent terminals in the group so that they store the third target picture in their caches.
The present application also provides an intelligent terminal, including a memory and a processor, wherein the memory stores an image processing program which, when executed by the processor, implements the steps of the method described above.
The present application also provides a computer storage medium storing a computer program which, when executed by a processor, implements the steps of the method as described above.
As described above, the image processing method of the present application is applied to a first intelligent terminal in a group on the same local area network and includes the steps of: when a user clicks a first target picture in a picture list area, generating an editing instruction for the first target picture; in response to the editing instruction, jumping to an editing interface of the first target picture; when the user finishes editing and clicks a save button, generating a saving instruction; in response to the saving instruction, saving the edited first target picture on the first intelligent terminal; and, also in response to the saving instruction, broadcasting the edited first target picture to the other intelligent terminals in the group over the local area network where the group is located. With this technical scheme, data transmission among the users in the group is carried over the local area network, the users in the group can retouch the first target picture at the same time, retouching efficiency in the multi-user scenario is improved, users no longer need to take turns waiting to make their modifications, and user experience is improved.
Drawings
The accompanying drawings, which are incorporated in and constitute a part of this specification, illustrate embodiments consistent with the present application and, together with the description, serve to explain the principles of the application. To illustrate the technical solutions of the embodiments more clearly, the drawings needed for describing the embodiments are briefly introduced below; those skilled in the art can obtain other drawings from these drawings without inventive effort.
Fig. 1 is a schematic diagram of a hardware structure of an intelligent terminal for implementing various embodiments of the present application;
fig. 2 is a communication network system architecture diagram according to an embodiment of the present application;
fig. 3 is a flowchart illustrating an image processing method according to a first embodiment;
fig. 4 is an interface flow diagram illustrating an image processing method according to the first embodiment;
fig. 5 is an interface flow diagram illustrating an image processing method according to the first embodiment;
FIG. 6 is a flowchart illustrating an image processing method according to a second embodiment;
fig. 7 is a schematic diagram of two-dimensional code sharing of an image processing method shown according to a second embodiment.
The implementation, functional features and advantages of the present application will be further explained with reference to the embodiments and the accompanying drawings. The above figures show specific embodiments of the present application, which are described in more detail below. The drawings and the written description are not intended to limit the scope of the inventive concepts in any manner but to illustrate the concepts of the application to those skilled in the art by reference to specific embodiments.
Detailed Description
Reference will now be made in detail to the exemplary embodiments, examples of which are illustrated in the accompanying drawings. When the following description refers to the accompanying drawings, like numbers in different drawings denote the same or similar elements unless otherwise indicated. The implementations described in the following exemplary embodiments do not represent all implementations consistent with the present application; rather, they are merely examples of apparatus and methods consistent with certain aspects of the present application, as detailed in the appended claims.
It should be noted that, in this document, the terms "comprises," "comprising," or any other variation thereof are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements includes not only those elements but may also include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, the recitation of an element by the phrase "comprising a(n)" does not exclude the presence of additional like elements in the process, method, article, or apparatus that comprises the element. Further, where similarly named elements, features, or components appear in different embodiments of the disclosure, they may have the same meaning or different meanings; the particular meaning is determined by their interpretation in, or the context of, the specific embodiment.
It should be understood that although the terms first, second, third, etc. may be used herein to describe various information, such information should not be limited by these terms. These terms are only used to distinguish one type of information from another. For example, first information may also be referred to as second information, and similarly, second information may also be referred to as first information, without departing from the scope herein. The word "if" as used herein may be interpreted as "when" or "upon" or "in response to a determination," depending on the context. Also, as used herein, the singular forms "a," "an," and "the" are intended to include the plural forms as well, unless the context indicates otherwise. It will be further understood that the terms "comprises," "comprising," "includes," and/or "including," when used in this specification, specify the presence of stated features, steps, operations, elements, components, items, species, and/or groups, but do not preclude the presence or addition of one or more other features, steps, operations, elements, components, items, species, and/or groups thereof. The terms "or," "and/or," and "including at least one of the following," as used herein, are to be construed as inclusive, meaning any one or any combination. For example, "includes at least one of A, B, C" means any of the following: A; B; C; A and B; A and C; B and C; A and B and C. Likewise, "A, B or C" or "A, B and/or C" means any of those same combinations. An exception to this definition occurs only when a combination of elements, functions, steps, or operations is inherently mutually exclusive in some way.
It should be understood that, although the steps in the flowcharts in the embodiments of the present application are shown in the order indicated by the arrows, they are not necessarily performed strictly in that order. Unless explicitly stated herein, there is no strict ordering constraint on these steps, and they may be performed in other orders. Moreover, at least some of the steps in the figures may include multiple sub-steps or multiple stages that are not necessarily performed at the same moment but may be performed at different times and in different orders, alternately or at least partly in alternation with other steps or with sub-steps or stages of other steps.
The words "if", as used herein, may be interpreted as "at … …" or "at … …" or "in response to a determination" or "in response to a detection", depending on the context. Similarly, the phrases "if determined" or "if detected (a stated condition or event)" may be interpreted as "when determined" or "in response to a determination" or "when detected (a stated condition or event)" or "in response to a detection (a stated condition or event)", depending on the context.
It should be noted that step numbers such as S101 and S102 are used herein only to describe the corresponding contents clearly and briefly and do not impose any substantive limitation on their order; in specific implementations, those skilled in the art may perform S102 before S101, and such implementations remain within the scope of the present application.
It should be understood that the specific embodiments described herein are merely illustrative of the present application and are not intended to limit the present application.
In the following description, suffixes such as "module", "component", or "unit" used to denote elements are employed only for convenience of description and have no specific meaning in themselves. Thus, "module", "component", and "unit" may be used interchangeably.
The electronic device may be implemented in various forms. For example, the electronic devices described in the present application may include mobile devices such as a mobile phone, a tablet computer, a notebook computer, a palmtop computer, a PDA (Personal Digital Assistant), a PMP (Portable Media Player), a navigation device, a wearable device, a smart band, or a pedometer, as well as fixed terminals such as a digital TV or a desktop computer. Accordingly, the present application involves two kinds of execution bodies: the intelligent terminal and the server.
While the following description takes a mobile electronic device as an example, those skilled in the art will understand that, apart from elements used specifically for mobile purposes, the configuration according to the embodiments of the present application can also be applied to fixed terminals.
Referring to fig. 1, which is a schematic diagram of a hardware structure of an electronic device for implementing various embodiments of the present application, the electronic device 100 may include: RF (Radio Frequency) unit 101, WiFi module 102, audio output unit 103, a/V (audio/video) input unit 104, sensor 105, display unit 106, user input unit 107, interface unit 108, memory 109, processor 110, and power supply 111. Those skilled in the art will appreciate that the electronic device configuration shown in fig. 1 does not constitute a limitation of the electronic device, and that the electronic device may include more or fewer components than shown, or some components may be combined, or a different arrangement of components.
The following describes each component of the electronic device in detail with reference to fig. 1:
the radio frequency unit 101 may be configured to receive and transmit signals during information transmission and reception or during a call, and specifically, receive downlink information of a base station and then process the downlink information to the processor 110; in addition, the uplink data is transmitted to the base station. Typically, radio frequency unit 101 includes, but is not limited to, an antenna, at least one amplifier, a transceiver, a coupler, a low noise amplifier, a duplexer, and the like. In addition, the radio frequency unit 101 can also communicate with a network and other devices through wireless communication. The wireless communication may use any communication standard or protocol, including but not limited to GSM (Global System for Mobile communications), GPRS (General Packet Radio Service), CDMA2000(Code Division Multiple Access 2000), WCDMA (Wideband Code Division Multiple Access), TD-SCDMA (Time Division-Synchronous Code Division Multiple Access), FDD-LTE (Frequency Division duplex-Long Term Evolution), TDD-LTE (Time Division duplex-Long Term Evolution, Time Division Long Term Evolution), 5G, and so on.
WiFi is a short-distance wireless transmission technology. Through the WiFi module 102, the electronic device can help the user send and receive e-mails, browse web pages, access streaming media, and so on, providing wireless broadband Internet access. Although fig. 1 shows the WiFi module 102, it is not an essential part of the electronic device and may be omitted as needed without changing the essence of the invention.
The audio output unit 103 may convert audio data received by the radio frequency unit 101 or the WiFi module 102, or stored in the memory 109, into an audio signal and output it as sound when the electronic device 100 is in a call signal reception mode, a call mode, a recording mode, a voice recognition mode, a broadcast reception mode, or the like. The audio output unit 103 may also provide audio output related to a specific function performed by the electronic device 100 (e.g., a call signal reception sound or a message reception sound) and may include a speaker, a buzzer, and the like.
The A/V input unit 104 is used to receive audio or video signals. It may include a GPU (Graphics Processing Unit) 1041 and a microphone 1042. The graphics processor 1041 processes image data of still pictures or video obtained by an image capture device (such as a camera) in a video capture mode or an image capture mode. The processed image frames may be displayed on the display unit 106, stored in the memory 109 (or another storage medium), or transmitted via the radio frequency unit 101 or the WiFi module 102. The microphone 1042 may receive sounds (audio data) in a phone call mode, a recording mode, a voice recognition mode, or the like, and process them into audio data. In the phone call mode, the processed audio (voice) data may be converted into a format that can be transmitted to a mobile communication base station via the radio frequency unit 101. The microphone 1042 may implement various types of noise cancellation (or suppression) algorithms to cancel (or suppress) noise or interference generated while receiving and transmitting audio signals.
The electronic device 100 also includes at least one sensor 105, such as a light sensor, a motion sensor, and other sensors. Optionally, the light sensor includes an ambient light sensor and a proximity sensor: the ambient light sensor may adjust the brightness of the display panel 1061 according to the ambient light, and the proximity sensor may turn off the display panel 1061 and/or the backlight when the electronic device 100 is moved to the ear. As one kind of motion sensor, an accelerometer can detect the magnitude of acceleration in each direction (generally three axes) and the magnitude and direction of gravity when stationary, and can be used for recognizing the posture of the mobile phone (e.g., landscape/portrait switching, related games, magnetometer posture calibration), vibration-recognition functions (e.g., pedometer, tapping), and the like. Other sensors such as a fingerprint sensor, pressure sensor, iris sensor, molecular sensor, gyroscope, barometer, hygrometer, thermometer, and infrared sensor may also be configured on the mobile phone and are not described further here.
The display unit 106 is used to display information input by a user or information provided to the user. The Display unit 106 may include a Display panel 1061, and the Display panel 1061 may be configured in the form of an LCD (Liquid Crystal Display), an OLED (Organic Light-Emitting Diode), or the like.
The user input unit 107 may be used to receive input numeric or character information and to generate key signal inputs related to user settings and function control of the electronic device. Optionally, the user input unit 107 may include a touch panel 1071 and other input devices 1072. The touch panel 1071, also called a touch screen, may collect touch operations performed by the user on or near it (e.g., operations with a finger, a stylus, or any other suitable object or accessory) and drive the corresponding connection device according to a preset program. The touch panel 1071 may include a touch detection device and a touch controller: the touch detection device detects the position of the user's touch and the signal caused by the touch operation and passes the signal to the touch controller; the touch controller receives the touch information from the touch detection device, converts it into touch point coordinates, sends them to the processor 110, and can receive and execute commands sent by the processor 110. The touch panel 1071 may be implemented in resistive, capacitive, infrared, surface acoustic wave, and other types. Besides the touch panel 1071, the user input unit 107 may include other input devices 1072, which may include, but are not limited to, one or more of a physical keyboard, function keys (such as volume control keys and switch keys), a trackball, a mouse, a joystick, and the like.
Optionally, the touch panel 1071 may cover the display panel 1061. When the touch panel 1071 detects a touch operation on or near it, it passes the operation to the processor 110 to determine the type of the touch event, and the processor 110 then provides the corresponding visual output on the display panel 1061 according to that type. Although in fig. 1 the touch panel 1071 and the display panel 1061 are shown as two separate components implementing the input and output functions of the electronic device, in some embodiments they may be integrated to implement both functions; no limitation is imposed here.
The interface unit 108 serves as an interface through which at least one external device is connected to the electronic apparatus 100. For example, the external device may include a wired or wireless headset port, an external power supply (or battery charger) port, a wired or wireless data port, a memory card port, a port for connecting a device having an identification module, an audio input/output (I/O) port, a video I/O port, an earphone port, and the like. The interface unit 108 may be used to receive input (e.g., data information, power, etc.) from an external device and transmit the received input to one or more elements within the electronic apparatus 100 or may be used to transmit data between the electronic apparatus 100 and the external device.
The memory 109 may be used to store software programs as well as various data. It may mainly include a program storage area and a data storage area; optionally, the program storage area may store an operating system, application programs required for at least one function (such as a sound playing function or an image playing function), and the like, while the data storage area may store data created according to the use of the mobile phone (such as audio data, a phonebook, etc.). Further, the memory 109 may include high-speed random access memory and may also include non-volatile memory, such as at least one magnetic disk storage device, flash memory device, or other non-volatile solid-state storage device.
The processor 110 is a control center of the electronic device, connects various parts of the entire electronic device using various interfaces and lines, performs various functions of the electronic device and processes data by operating or executing software programs and/or modules stored in the memory 109 and calling data stored in the memory 109, thereby performing overall monitoring of the electronic device. Processor 110 may include one or more processing units; preferably, the processor 110 may integrate an application processor and a modem processor, optionally, the application processor mainly handles operating systems, user interfaces, application programs, etc., and the modem processor mainly handles wireless communications. It will be appreciated that the modem processor described above may not be integrated into the processor 110.
The electronic device 100 may further include a power source 111 (such as a battery) for supplying power to each component, and preferably, the power source 111 may be logically connected to the processor 110 through a power management system, so as to implement functions of managing charging, discharging, and power consumption through the power management system.
Although not shown in fig. 1, the electronic device 100 may further include a bluetooth module or the like, which is not described herein.
In order to facilitate understanding of the embodiments of the present application, a communication network system on which the electronic device of the present application is based is described below.
Referring to fig. 2, fig. 2 is an architecture diagram of a communication network system according to an embodiment of the present disclosure. The communication network system is an LTE system, which includes a UE (User Equipment) 201, an E-UTRAN (Evolved UMTS Terrestrial Radio Access Network) 202, an EPC (Evolved Packet Core) 203, and an operator's IP services 204, communicatively connected in sequence.
Optionally, the UE201 may be the terminal 100 described above, and details are not described here.
The E-UTRAN 202 includes an eNodeB 2021 and other eNodeBs 2022, among others. Optionally, the eNodeB 2021 may be connected with the other eNodeBs 2022 through a backhaul (e.g., an X2 interface), the eNodeB 2021 is connected to the EPC 203, and the eNodeB 2021 may provide the UE 201 with access to the EPC 203.
The EPC 203 may include an MME (Mobility Management Entity) 2031, an HSS (Home Subscriber Server) 2032, other MMEs 2033, an SGW (Serving Gateway) 2034, a PGW (PDN Gateway) 2035, a PCRF (Policy and Charging Rules Function) 2036, and the like. Optionally, the MME 2031 is a control node that handles signaling between the UE 201 and the EPC 203, providing bearer and connection management. The HSS 2032 provides registers for managing functions such as the home location register (not shown) and holds subscriber-specific information about service characteristics, data rates, etc. All user data may be sent through the SGW 2034; the PGW 2035 may provide IP address allocation and other functions for the UE 201; and the PCRF 2036 is the policy and charging control decision point for service data flows and IP bearer resources, selecting and providing available policy and charging control decisions for the policy and charging enforcement function (not shown).
The IP services 204 may include the internet, intranets, IMS (IP Multimedia Subsystem), or other IP services, among others.
Although the LTE system is described as an example, it should be understood by those skilled in the art that the present application is not limited to the LTE system, but may also be applied to other wireless communication systems, such as GSM, CDMA2000, WCDMA, TD-SCDMA, and future new network systems (e.g. 5G), and the like.
Based on the above intelligent terminal hardware structure and communication network system, various embodiments of the present application are provided.
First embodiment
Referring to fig. 3, fig. 3 is a flowchart of an image processing method according to an embodiment of the present disclosure. On the basis of the embodiments shown in fig. 1 and fig. 2, as shown in fig. 3, with the first intelligent terminal as an execution subject, the method of this embodiment may include the following steps:
S101: in response to an editing instruction for the first target picture, entering an editing interface of the first target picture.
In this embodiment, the first intelligent terminal may display a picture list area in its display interface. As shown in fig. 4(a), a plurality of pictures may be arranged in sequence in the picture list area, and the user may select one of them as the first target picture. When the user clicks the first target picture, the first intelligent terminal may generate an editing instruction for it. For example, as shown in fig. 4(a), when the user clicks "fig. 6", an editing instruction for "fig. 6" is generated. In response to the editing instruction, the display interface of the first intelligent terminal may jump to the editing interface of the first target picture, which may be as shown in fig. 4(b) and includes the editing tools and the displayed first target picture. After selecting a tool from the editing tools, the user may edit the first target picture with it.
Optionally, the method may further include step S103. In the editing interface of the first target picture, the first intelligent terminal may obtain the terminal information of the intelligent terminals in the group that are editing the first target picture and display it in the editing interface, so that the user knows who is editing the first target picture together. The terminal information may be the terminal number, IP address, user name, avatar, or similar information, and may be used to uniquely identify an intelligent terminal in the group. For example, as shown in fig. 4(b), the terminal information of the first intelligent terminal may be "intelligent terminal 1", that of the second intelligent terminal "intelligent terminal 2", and that of the fifth intelligent terminal "intelligent terminal 5".
Optionally, when the user clicks a piece of terminal information displayed in the editing interface, the first intelligent terminal may determine which intelligent terminal the user selected and generate a corresponding editing-view instruction. For example, when the user clicks "terminal 5", the first intelligent terminal generates an editing-view instruction and jumps to a view of the fifth intelligent terminal's editing interface. While the first intelligent terminal displays the fifth intelligent terminal's editing interface, the user cannot edit in it; the user can only watch, on the first intelligent terminal, the editing process of the fifth intelligent terminal's user.
S102: in response to a saving instruction for the first target picture, broadcasting the edited first target picture to the other intelligent terminals in the group.
In this embodiment, the first intelligent terminal may perform the corresponding saving operation upon obtaining the saving instruction generated when the user clicks the save button. In response to the saving instruction, the first intelligent terminal can broadcast the edited first target picture to the other intelligent terminals in the group over the local area network where the group is located.
Optionally, in a scene such as a party or a trip, after a photo is taken the users usually sit together to edit it, so they are typically within the range of one local area network. When the first target picture is a group photo, several users in the group may each want to modify the part of the first target picture that belongs to them, according to their own preferences. In this face-to-face scene, after agreeing on which part each will revise, the users in the group can each edit their own part of the first target picture. On each intelligent terminal, the user enters the editing interface of the first target picture by clicking it; the region the user edits there is the editing area. When the user clicks the save button of the editing interface, the intelligent terminal obtains the editing area the user edited in the first target picture. The first intelligent terminal can then process the editing area of the first target picture in response to the saving instruction.
Optionally, picture processing is usually implemented by an application program, and during picture processing the application is usually running. Therefore, to make reading and writing convenient for the application, the pictures in the picture list can be stored in the cache of the first intelligent terminal.
Optionally, for the first intelligent terminal in the group, the retouch-and-save process may specifically include the following steps (a minimal sketch follows the steps):
S1021: the first intelligent terminal obtains the editing area in the first target picture.
S1022: in response to the saving instruction for the first target picture, the first intelligent terminal overlays the editing area onto the corresponding area of the first target picture in the cache.
S1023: in response to the saving instruction for the first target picture, the first intelligent terminal sends the editing area of the first target picture to the other intelligent terminals in the group by broadcast. After obtaining the editing area, each of the other intelligent terminals overlays it onto the corresponding area of the first target picture stored in its own cache, so that the first target picture is synchronized across all the intelligent terminals of the group.
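A minimal Kotlin sketch of steps S1021 to S1023, assuming the cached pictures are ARGB pixel arrays and the group communicates over a raw UDP broadcast; the EditRegion layout, the port number, and the single-datagram transport are illustrative assumptions, not anything specified by the disclosure:

    import java.net.DatagramPacket
    import java.net.DatagramSocket
    import java.net.InetAddress
    import java.nio.ByteBuffer

    // The rectangle a user actually modified, plus its encoded pixel data.
    // (Hypothetical layout; the application does not define a wire format.)
    data class EditRegion(val pictureId: Int, val x: Int, val y: Int,
                          val width: Int, val height: Int, val pixels: ByteArray)

    class PictureSession(private val cache: MutableMap<Int, IntArray>,
                         private val imageWidth: Int) {

        // S1022: overwrite only the edited rectangle in the cached copy.
        fun applyToCache(region: EditRegion) {
            val picture = cache[region.pictureId] ?: return
            val patch = ByteBuffer.wrap(region.pixels).asIntBuffer()
            for (row in 0 until region.height) {
                for (col in 0 until region.width) {
                    val dst = (region.y + row) * imageWidth + (region.x + col)
                    picture[dst] = patch.get(row * region.width + col)
                }
            }
        }

        // S1023: broadcast the region so every peer can apply the same patch.
        // A real implementation would chunk regions larger than one datagram.
        fun broadcast(region: EditRegion, port: Int = 48888) {
            val header = ByteBuffer.allocate(20)
                .putInt(region.pictureId).putInt(region.x).putInt(region.y)
                .putInt(region.width).putInt(region.height).array()
            val payload = header + region.pixels
            DatagramSocket().use { socket ->
                socket.broadcast = true
                val addr = InetAddress.getByName("255.255.255.255")
                socket.send(DatagramPacket(payload, payload.size, addr, port))
            }
        }
    }

Because only the edited rectangle travels over the network, each save costs bandwidth proportional to the patch rather than the whole photo, which is what makes simultaneous multi-user editing practical on a local area network.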
Optionally, when the first intelligent terminal initiates the group, the user list display area of the first intelligent terminal may also display an authority control switch for each of the other intelligent terminals in the group. The method then further includes step S104: in response to a control instruction generated by clicking the authority control switch of another intelligent terminal in the group, the first intelligent terminal turns that terminal's authority on or off. For example, as shown in fig. 5(a), an authority control switch may appear to the left of each intelligent terminal's terminal information. One switch may control one or more authorities; alternatively, several switches may appear next to each terminal, each controlling a single authority. When the user clicks the authority control switch of the sixth intelligent terminal, the first intelligent terminal generates a control instruction that turns the sixth intelligent terminal's authority on or off.
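A small sketch of the authority control in step S104, assuming one boolean per (terminal, authority) pair; the Permission names and the default-on policy are assumptions made for illustration:

    // Hypothetical authority kinds; the application does not enumerate them.
    enum class Permission { EDIT, UPLOAD, DOWNLOAD }

    class PermissionTable {
        private val table = mutableMapOf<Pair<String, Permission>, Boolean>()

        // Called when the group owner taps an authority control switch;
        // returns the new state, which would then be broadcast to the group.
        fun toggle(terminalId: String, permission: Permission): Boolean {
            val key = terminalId to permission
            val next = !(table[key] ?: true)   // authorities default to "on"
            table[key] = next
            return next
        }

        fun isAllowed(terminalId: String, permission: Permission): Boolean =
            table[terminalId to permission] ?: true
    }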
Optionally, the first intelligent terminal may further display the terminal information and editing state of all the intelligent terminals in the group in a user list area of the display interface. The specific steps may include:
S105: displaying the terminal information of the intelligent terminals in the group in the user list area of the display interface. For example, as shown in fig. 5(a), the terminal information of "intelligent terminal 1" through "intelligent terminal 6" may be displayed in the user list area.
S106: when another intelligent terminal is in an editing interface, marking that terminal's information. When another intelligent terminal of the group enters the editing page of some picture, its terminal information is marked with an editing state; for example, as shown in fig. 5(a), intelligent terminal 3 and intelligent terminal 4 are in the editing state. Note that while the first intelligent terminal is showing the user list area, it has not itself entered the editing page of any picture, i.e., it is not in an editing interface.
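A minimal sketch of the user list marking in steps S105/S106, where a null editingPictureId means the terminal is not in any editing interface; the record fields are hypothetical:

    // Hypothetical presence record exchanged within the group.
    data class TerminalStatus(val terminalId: String, val userName: String,
                              val editingPictureId: Int?)

    // Render the user list area, marking terminals that are currently editing.
    fun renderUserList(statuses: List<TerminalStatus>): List<String> =
        statuses.map { s ->
            if (s.editingPictureId != null) "${s.userName} [editing]" else s.userName
        }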
Optionally, the method may further include step S107. The first intelligent terminal may obtain a user-picture viewing instruction generated when the user clicks a fourth intelligent terminal in the user list area, and jump to a first viewing interface that displays the pictures edited by the fourth intelligent terminal. For example, as shown in fig. 5(b), the fourth intelligent terminal has edited "fig. 1", "fig. 5", "fig. 6", and "fig. 10".
Optionally, when the first intelligent terminal initiates the group, a delete button of another intelligent terminal in the group may be further displayed in the user list display area of the first intelligent terminal. And when the user clicks a delete button of the seventh intelligent terminal, the first intelligent terminal executes a delete operation to delete the seventh intelligent terminal from the group.
Optionally, the display interface of the first intelligent terminal may further include a picture list area. The first intelligent terminal can be further used for executing the following steps:
and S108, the first intelligent terminal displays the pictures uploaded to the local area network where the group is located in the picture list area of the display interface.
S109, the first intelligent terminal can also generate a picture viewing instruction when the user clicks a fourth target picture in the picture list area of the display interface. The first intelligent terminal can respond to the picture viewing instruction of the fourth target picture and enter a second viewing interface. And displaying the fourth target picture and the terminal information of the intelligent terminal for editing the fourth target picture in the second viewing interface.
Optionally, the first intelligent terminal may further include step S110. The first intelligent terminal can also generate a downloading instruction of the second target picture when the user clicks a downloading button of the second target picture. Or, the first intelligent terminal may further click a download button after the user selects the plurality of target pictures, so as to generate a download instruction of the plurality of target pictures. The first intelligent terminal can respond to the downloading instruction and download a second target picture or a plurality of target pictures corresponding to the downloading instruction from the cache to the local.
Optionally, the first intelligent terminal may further include step S111. The first intelligent terminal can also display a local picture list of the first intelligent terminal after clicking an uploading button. The user may select a third target picture or a plurality of target pictures from the local picture list. After the user finishes selecting the third target picture or the plurality of target pictures, the user can click the determination button to generate an uploading instruction. The first intelligent terminal can respond to the uploading instruction, and broadcast the third target picture or the multiple target pictures corresponding to the uploading instruction to other intelligent terminals in the group, so that the other intelligent terminals store the target pictures in the cache of the other intelligent terminals.
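A compact sketch of steps S110 and S111, assuming the group cache and the local gallery are simply directories of JPEG files; the paths and names are illustrative assumptions:

    import java.io.File

    class PictureStore(private val cacheDir: File, private val localDir: File) {

        // S110: copy a picture from the cache into local (persistent) storage.
        fun download(pictureId: Int) {
            val src = File(cacheDir, "$pictureId.jpg")
            if (src.exists()) src.copyTo(File(localDir, "$pictureId.jpg"), overwrite = true)
        }

        // S111: read a local picture and hand its bytes to a broadcaster
        // (e.g. the UDP sender sketched earlier), so every peer can place
        // the same bytes into its own cache.
        fun upload(pictureId: Int, send: (ByteArray) -> Unit) {
            val src = File(localDir, "$pictureId.jpg")
            if (src.exists()) send(src.readBytes())
        }
    }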
According to the image processing method above, when a user clicks a first target picture in the picture list area, the first intelligent terminal generates an editing instruction for the first target picture and, in response, jumps to its editing interface. When the user finishes editing and clicks the save button, the first intelligent terminal generates a saving instruction, in response to which it saves the edited first target picture locally and, at the same time, broadcasts the edited first target picture to the other intelligent terminals in the group over the local area network where the group is located. By broadcasting the edited first target picture within the group's local area network, the edited picture is synchronized across the group, several users can edit the first target picture at the same time, and no user has to finish editing and then send the picture on to another user; picture processing efficiency and multi-user retouching efficiency are thereby improved.
Second embodiment
Referring to fig. 6, fig. 6 is a flowchart of an image processing method according to an embodiment of the present disclosure. On the basis of the embodiments shown in fig. 1 to fig. 5, as shown in fig. 6, with a first intelligent terminal in the group and a second intelligent terminal not yet in the group as the executing subjects, the method of this embodiment may include the following steps:
S201: the first intelligent terminal creates a local area network hotspot and generates verification information so that the second intelligent terminal can join the group through the verification information.
In this embodiment, a user corresponding to the first intelligent terminal may be a creator of the group, and the first intelligent terminal may be a master intelligent terminal in the group. The user can create a local area network hotspot in the first intelligent terminal and generate verification information of the local area network hotspot. Optionally, the verification information may be a two-dimensional code or a verification code. For example, the two-dimensional code displayed by the first intelligent terminal may be as shown in fig. 7 (a).
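A brief sketch of generating the verification information of step S201, assuming the two-dimensional code simply encodes the hotspot SSID, its password, and a random token; the URI scheme and field names are invented for illustration:

    import java.security.SecureRandom

    data class GroupInvite(val ssid: String, val password: String, val token: String)

    fun createInvite(ssid: String, password: String): GroupInvite {
        val bytes = ByteArray(16).also { SecureRandom().nextBytes(it) }
        val token = bytes.joinToString("") { "%02x".format(it) }
        return GroupInvite(ssid, password, token)
    }

    // The resulting string would be rendered as a two-dimensional code for
    // face-to-face scanning, or the token alone read out as a verification code.
    fun toQrPayload(invite: GroupInvite): String =
        "groupedit://join?ssid=${invite.ssid}&pwd=${invite.password}&token=${invite.token}"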
The second intelligent terminal can complete identity authentication by using the authentication information and join the group. In the process that the second intelligent terminal joins the group, the interaction process between the first intelligent terminal and the second intelligent terminal specifically comprises the following steps:
s301, the second intelligent terminal sends a group joining request through the verification information.
In this embodiment, the second intelligent terminal may be an intelligent terminal that is not yet in the group, and its user may be face to face with the user of the first intelligent terminal. The user can use the second intelligent terminal to obtain the verification information generated by the first intelligent terminal. Optionally, when the verification information is a two-dimensional code, the user obtains it by scanning the code with the second intelligent terminal; for example, as shown in fig. 7(b), the second intelligent terminal may scan the two-dimensional code of the first intelligent terminal. Optionally, when the verification information is a verification code, the user obtains it by entering the verification code on the second intelligent terminal.
After the second intelligent terminal obtains the verification information, the second intelligent terminal may generate a group join request correspondingly. The second intelligent terminal can send the group joining request to the first intelligent terminal through the local area network created by the first intelligent terminal. The group join request may include the terminal information and the authentication information of the second intelligent terminal.
S302, the first intelligent terminal obtains a group joining request and adds the second intelligent terminal to the group.
In this embodiment, the first intelligent terminal may obtain the group join request sent by the second intelligent terminal and add the second intelligent terminal to the group according to the request. Because the coverage of the local area network is limited, face-to-face code scanning effectively ensures that the terminal requesting to join is close to the first intelligent terminal. Likewise, when the verification information is a verification code learned through oral notification or the like, the distance between the requesting terminal and the first intelligent terminal is usually within an effective range; by contrast, if the first intelligent terminal had to send the verification code to the second intelligent terminal over a network, the second intelligent terminal might not be within the coverage of the local area network at all.
Optionally, in view of the local area network's coverage, a first position range may also be set on the first intelligent terminal, which can judge whether the second intelligent terminal meets the range requirement through the following steps:
S2011: the first intelligent terminal obtains the first position range. The first position range is smaller than the coverage area of the local area network and may be adjusted according to the user's needs; the user completes its setting on the first intelligent terminal.
S2012: when the first intelligent terminal receives the group join request sent by the second intelligent terminal, it may also obtain the second intelligent terminal's position information and determine from it the distance between the second intelligent terminal and itself. When the distance is within the first position range, the first intelligent terminal adds the second intelligent terminal to the group; otherwise, it may refuse the join request.
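A minimal sketch of the range check in steps S2011/S2012, assuming terminals report GPS coordinates; the haversine formula itself is standard, but its use here is an assumption, since the application does not state how distance is measured:

    import kotlin.math.*

    fun distanceMeters(lat1: Double, lon1: Double, lat2: Double, lon2: Double): Double {
        val r = 6_371_000.0                      // mean Earth radius in metres
        val dLat = Math.toRadians(lat2 - lat1)
        val dLon = Math.toRadians(lon2 - lon1)
        val a = sin(dLat / 2).pow(2) +
                cos(Math.toRadians(lat1)) * cos(Math.toRadians(lat2)) * sin(dLon / 2).pow(2)
        return 2 * r * asin(sqrt(a))
    }

    // S2012: accept the join request only if the requester is inside the range.
    fun withinFirstRange(ownLat: Double, ownLon: Double,
                         peerLat: Double, peerLon: Double,
                         firstRangeMeters: Double): Boolean =
        distanceMeters(ownLat, ownLon, peerLat, peerLon) <= firstRangeMeters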
Optionally, when the first intelligent terminal rejects the join request of the second intelligent terminal, the second intelligent terminal may receive rejection information fed back by the first intelligent terminal.
S202, the first intelligent terminal obtains an adding instruction generated by selecting a third intelligent terminal in a map interface.
In this embodiment, the first intelligent terminal may further display a map interface showing the terminal information and positions of the other intelligent terminals within the coverage of the first intelligent terminal's local area network. After viewing the map interface, the user can select a third intelligent terminal in it. The first intelligent terminal obtains this selection and generates the corresponding adding instruction, which instructs the first intelligent terminal to add the third intelligent terminal to the group.
Optionally, in response to the adding instruction, the first intelligent terminal may send an addition request asking the third intelligent terminal to join the group. Upon receiving it, the third intelligent terminal can display the addition request and then feed the processing result back to the first intelligent terminal. The processing result may be either approval or rejection: the third intelligent terminal generates an approval result when its user clicks the agree button, or a rejection result when its user clicks the reject button.
S203, the first intelligent terminal responds to the adding instruction, and when the third intelligent terminal is not in the group, the third intelligent terminal is added into the group.
In this embodiment, the first intelligent terminal may directly add the third intelligent terminal to the group in response to the addition instruction.
Optionally, the first intelligent terminal may further obtain the processing result fed back by the third intelligent terminal. When the processing result is approval, the first intelligent terminal adds the third intelligent terminal to the group; otherwise, the first intelligent terminal prompts that the addition failed.
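The S202/S203 handshake can be summarized in a few lines; the blocking request/response exchange and all names below are hypothetical simplifications:

    enum class InviteResult { ACCEPTED, REJECTED }

    fun handleAddInstruction(
        target: String,
        isInGroup: (String) -> Boolean,
        sendAddRequest: (String) -> InviteResult,  // displayed on the third terminal
        addToGroup: (String) -> Unit
    ) {
        if (isInGroup(target)) return              // already a member: nothing to do
        when (sendAddRequest(target)) {            // the third terminal's user agrees or rejects
            InviteResult.ACCEPTED -> addToGroup(target)
            InviteResult.REJECTED -> println("addition failed")  // prompt per S203's fallback
        }
    }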
According to the image processing method above, the first intelligent terminal creates the local area network hotspot and generates the verification information; the second intelligent terminal obtains the verification information and sends a group join request, upon which the first intelligent terminal adds it to the group. The first intelligent terminal can also obtain an adding instruction generated when the user selects a third intelligent terminal on the map interface and add that terminal to the group. In the present application, other intelligent terminals join the group either by sharing the verification information or by being actively pulled in; this improves group creation efficiency and ensures that group members are within the coverage of the local area network, thereby improving data transmission efficiency and picture processing efficiency.
The application also provides an intelligent terminal, which comprises a memory and a processor, wherein an image processing program is stored in the memory, and the image processing program, when executed by the processor, implements the steps of the image processing method in any of the above embodiments.
The present application further provides a computer-readable storage medium, on which an image processing program is stored, and the image processing program, when executed by a processor, implements the steps of the image processing method in any of the above embodiments.
The embodiments of the intelligent terminal and the computer-readable storage medium provided in the present application may include all the technical features of any of the embodiments of the image processing method; their expanded and explanatory content is substantially the same as that of the method embodiments and is not repeated here.
Embodiments of the present application also provide a computer program product, which includes computer program code; when the computer program code runs on a computer, the computer is caused to execute the method in the various possible embodiments above.
Embodiments of the present application further provide a chip, which includes a memory and a processor, where the memory is used to store a computer program, and the processor is used to call and run the computer program from the memory, so that a device in which the chip is installed executes the method in the above various possible embodiments.
It is to be understood that the foregoing scenarios are only examples, and do not constitute a limitation on application scenarios of the technical solutions provided in the embodiments of the present application, and the technical solutions of the present application may also be applied to other scenarios. For example, as can be known by those skilled in the art, with the evolution of system architecture and the emergence of new service scenarios, the technical solution provided in the embodiments of the present application is also applicable to similar technical problems.
The above-mentioned serial numbers of the embodiments of the present application are merely for description and do not represent the merits of the embodiments.
The steps in the method of the embodiment of the application can be sequentially adjusted, combined and deleted according to actual needs.
The units in the device in the embodiment of the application can be merged, divided and deleted according to actual needs.
In the present application, the same or similar term concepts, technical solutions and/or application scenario descriptions are generally described in detail only at their first occurrence; for brevity, the detailed description is not repeated later, and when understanding the technical solutions of the present application, the earlier detailed description may be consulted for any term concept, technical solution and/or application scenario description that is not described in detail again.
In the present application, each embodiment is described with an emphasis on the description, and reference may be made to the description of other embodiments for parts that are not described or recited in any embodiment.
All possible combinations of the technical features in the embodiments are not described in the present application for the sake of brevity, but should be considered as the scope of the present application as long as there is no contradiction between the combinations of the technical features.
Through the above description of the embodiments, those skilled in the art will clearly understand that the method of the above embodiments can be implemented by software plus a necessary general hardware platform, and certainly can also be implemented by hardware, but in many cases, the former is a better implementation manner. Based on such understanding, the technical solutions of the present application may be embodied in the form of a software product, which is stored in a storage medium (e.g., ROM/RAM, magnetic disk, optical disk) and includes instructions for enabling a terminal device (e.g., a mobile phone, a computer, a server, a controlled terminal, or a network device) to execute the method of each embodiment of the present application.
In the above embodiments, the implementation may be realized wholly or partially by software, hardware, firmware, or any combination thereof. When implemented in software, the embodiments may be realized wholly or partially in the form of a computer program product. The computer program product includes one or more computer instructions. When the computer program instructions are loaded and executed on a computer, the procedures or functions according to the embodiments of the present application are generated in whole or in part. The computer may be a general-purpose computer, a special-purpose computer, a computer network, or another programmable device. The computer instructions may be stored on a computer-readable storage medium or transmitted from one computer-readable storage medium to another; for example, the computer instructions may be transmitted from one website, computer, server, or data center to another by wire (e.g., coaxial cable, optical fiber, digital subscriber line) or wirelessly (e.g., infrared, radio, microwave). The computer-readable storage medium may be any available medium that a computer can access, or a data storage device such as a server or data center that integrates one or more available media. The available medium may be a magnetic medium (e.g., a floppy disk, hard disk, or magnetic tape), an optical medium (e.g., a DVD), or a semiconductor medium (e.g., a solid state disk (SSD)), among others.
The above description is only a preferred embodiment of the present application, and not intended to limit the scope of the present application, and all modifications of equivalent structures and equivalent processes, which are made by the contents of the specification and the drawings of the present application, or which are directly or indirectly applied to other related technical fields, are included in the scope of the present application.

Claims (10)

1. An image processing method, applied to a first intelligent terminal in a group on the same local area network, the method comprising the following steps:
S101, in response to an editing instruction for a first target picture, entering an editing interface of the first target picture;
and S102, in response to a saving instruction for the first target picture, broadcasting the edited first target picture to the other intelligent terminals in the group.
2. The method according to claim 1, wherein step S102 specifically comprises:
s1021, acquiring an editing area of the first target picture;
s1022, in response to the saving instruction of the first target picture, covering the editing area of the first target picture to a corresponding area in the first target picture in the first intelligent terminal;
s1023, responding to the saving instruction of the first target picture, broadcasting the editing area so that other intelligent terminals in the group can acquire the editing area and cover the editing area to the corresponding area in the first target picture.
3. The method of claim 1, further comprising:
s103, displaying the equipment information of the intelligent terminal editing the first target picture in the group in the editing interface of the first target picture.
4. The method according to any one of claims 1-3, further comprising:
s201, creating a local area network hotspot and generating verification information so that a second intelligent terminal can join the group through the verification information.
5. The method according to claim 4, wherein the S201 further comprises:
s2011, acquiring a first position range;
s2012, when the second intelligent terminal which inputs the verification information is in the first position range, adding the second intelligent terminal to the group.
6. The method of claim 5, further comprising:
s202, acquiring an adding instruction generated by selecting a third intelligent terminal in a map interface;
s203, responding to the adding instruction, and adding the third intelligent terminal to the group when the third intelligent terminal is not in the group.
7. The method according to any one of claims 1-3, further comprising:
s104, responding to a control instruction generated by clicking each authority control switch of the other intelligent terminals in the group, and opening or closing the authority of the other intelligent terminals;
and/or the presence of a gas in the gas,
s105, displaying terminal information of the intelligent terminals in the group in a user list area of a display interface;
and S106, when the other intelligent terminals are in the editing interface, marking the terminal information of the other intelligent terminals.
8. The method according to any one of claims 1-3, further comprising:
s110, responding to a downloading instruction of a second target picture in the picture list area, and downloading the second target picture to the local from a cache;
and/or the presence of a gas in the gas,
s111, responding to an uploading instruction of a third target picture, broadcasting the third target picture to other intelligent terminals in the group, and enabling the other intelligent terminals to store the third target picture in caches of the other intelligent terminals.
9. An intelligent terminal, characterized in that the intelligent terminal comprises: a memory and a processor, wherein the memory has stored thereon a computer program which, when executed by the processor, implements the steps of the image processing method according to any one of claims 1 to 8.
10. A readable storage medium, characterized in that the readable storage medium has stored thereon a computer program which, when executed by a processor, implements the steps of the image processing method according to any one of claims 1 to 8.
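By way of non-limiting illustration of the editing-area overlay recited in claim 2, the following Python sketch uses the Pillow library. The rectangular editing area and the (left, upper) paste box are assumptions: the claims do not prescribe a region shape, an image library, or a payload format, only that the editing area rather than the whole picture is broadcast and overlaid onto the corresponding area of each member's local copy.

```python
from PIL import Image

def apply_edit_region(local_path, region_img, box):
    """S1023 on a receiving terminal: paste the broadcast editing area onto the
    corresponding area of the locally stored first target picture.

    box is the (left, upper) corner, in the coordinate system of the original
    picture, of the area that was cut out on the editing terminal (S1021).
    """
    picture = Image.open(local_path)
    picture.paste(region_img, box)  # overlay only the edited region
    picture.save(local_path)
```

Broadcasting only the editing area keeps traffic on the local area network small, which is consistent with the transmission-efficiency rationale given in the description.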
CN202210590910.4A 2022-05-27 2022-05-27 Image processing method, intelligent terminal and storage medium Pending CN115022270A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210590910.4A CN115022270A (en) 2022-05-27 2022-05-27 Image processing method, intelligent terminal and storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202210590910.4A CN115022270A (en) 2022-05-27 2022-05-27 Image processing method, intelligent terminal and storage medium

Publications (1)

Publication Number Publication Date
CN115022270A true CN115022270A (en) 2022-09-06

Family

ID=83071689

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210590910.4A Pending CN115022270A (en) 2022-05-27 2022-05-27 Image processing method, intelligent terminal and storage medium

Country Status (1)

Country Link
CN (1) CN115022270A (en)


Legal Events

Date Code Title Description
PB01 Publication