CN116861010A - Processing method, processing apparatus, and storage medium - Google Patents

Processing method, processing apparatus, and storage medium

Info

Publication number
CN116861010A
Authority
CN
China
Prior art keywords
processing
scheme
data processing
sub
target data
Prior art date
Legal status
Pending
Application number
CN202310946108.9A
Other languages
Chinese (zh)
Inventor
马隽斌
王涣森
李壮
罗义龙
黄毅山
Current Assignee
Shanghai Chuanying Information Technology Co Ltd
Original Assignee
Shanghai Chuanying Information Technology Co Ltd
Priority date
Filing date
Publication date
Application filed by Shanghai Chuanying Information Technology Co Ltd

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00: Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/50: Information retrieval of still image data
    • G06F16/51: Indexing; Data structures therefor; Storage structures
    • G06F16/54: Browsing; Visualisation therefor

Abstract

The application provides a processing method, a processing device, and a storage medium. The processing method includes: acquiring or determining at least one target data processing scheme according to a first object and/or a second object, and processing a third object according to the target data processing scheme. Because the data processing scheme is determined from the first object and/or the second object, and the third object to be processed is then processed with that scheme, data objects can be processed automatically and quickly, without requiring the user to make tedious adjustments to the operating parameters used for data processing, and a good processing result can still be obtained.

Description

Processing method, processing apparatus, and storage medium
Technical Field
The application relates to the technical field of artificial intelligence, and in particular to a processing method, a processing device, and a storage medium.
Background
In some current implementations, processing a data object typically requires the user to preset and/or adjust various operating parameters on their own.
In the course of conceiving and implementing the present application, the inventors found at least the following problem: when the user has high requirements on the data processing effect, adjusting the operating parameters is often very tedious, and even fine adjustments of the operating parameters can easily cause the data processing algorithm to produce distorted or deficient results. Consequently, when the user sets and/or adjusts the operating parameters to process data on their own, the final processing effect is often poor.
The foregoing description is provided for general background information and does not necessarily constitute prior art.
Disclosure of Invention
In view of the above technical problems, the application provides a processing method, a processing device, and a storage medium, which can process data objects automatically and quickly, so that a good processing result can be obtained without requiring the user to make tedious adjustments to the operating parameters used for data processing.
The application provides a processing method, which can be applied to a processing device (such as an intelligent terminal or a server), and which comprises the following steps:
at least one target data processing scheme is acquired or determined according to the first object and/or the second object, and a third object is processed according to the target data processing scheme.
Optionally, the method further comprises:
processing is performed on the first object to obtain or determine the second object.
Optionally, the processing performed on the first object includes at least one of the following:
the first object is processed according to at least one initial data processing scheme.
Optionally, the acquiring or determining at least one target data processing scheme according to the first object and/or the second object includes at least one of the following:
invoking a preset determination model to perform conversion processing on the first object and the second object, so as to acquire or determine at least one target data processing scheme.
Optionally, the processing the third object according to the target data processing scheme includes:
processing at least one sub-object to be processed in the third object according to the target data processing scheme.
Optionally, the sub-object to be processed is obtained or determined in at least one of the following ways:
first mode: acquiring or determining at least one sub-object to be processed according to at least one application range indication of the target data processing scheme;
second mode: acquiring or determining at least one sub-object to be processed according to the first object and/or the second object.
Optionally, the second mode includes at least one of:
determining a sub-object shared by the first object and the second object as the sub-object to be processed;
acquiring each second sub-object in the third object, and determining a second sub-object identical to the first object as the sub-object to be processed.
Optionally, the method further comprises:
acquiring or determining at least one scheme adjustment parameter;
adjusting the target data processing scheme according to the scheme adjustment parameter to obtain an adjusted processing scheme;
executing the step of processing the third object according to the target data processing scheme by using the adjusted processing scheme instead.
Optionally, the data type of each of the first object, the second object, and/or the third object includes: image, text, animation, video, and/or audio.
Optionally, when the data types of the first object, the second object and the third object are all images, at least one of the following is further included:
the first object or the third object is a photographed image or an image stored at a preset storage location;
the second object is an image stored at a preset storage location;
the method further comprises: displaying the image obtained after the third object has been processed.
The present application also provides a processing device, comprising a memory and a processor, wherein the memory stores a processing program which, when executed by the processor, implements the steps of any one of the processing methods described above.
The present application also provides a storage medium storing a computer program which, when executed by a processor, implements the steps of any of the processing methods described above.
As described above, the processing method of the present application is applicable to a processing device and includes: acquiring or determining at least one target data processing scheme according to the first object and/or the second object, and processing a third object according to the target data processing scheme. With this technical solution, data objects can be processed automatically and quickly, so that the user does not need to make tedious adjustments to the operating parameters used for data processing, and a good processing result can still be obtained.
Drawings
The accompanying drawings, which are incorporated in and constitute a part of this specification, illustrate embodiments consistent with the application and together with the description, serve to explain the principles of the application. In order to more clearly illustrate the technical solutions of the embodiments of the present application, the drawings that are needed in the description of the embodiments will be briefly described below, and it will be obvious to those skilled in the art that other drawings can be obtained from these drawings without inventive effort.
Fig. 1 is a schematic diagram of a hardware structure of a mobile terminal implementing various embodiments of the present application;
fig. 2 is a schematic diagram of a communication network system according to an embodiment of the present application;
FIG. 3 is a flowchart illustrating steps of an embodiment of a processing method according to an embodiment of the present application;
fig. 4 is a schematic application flow chart of an embodiment of a processing method according to an embodiment of the present application.
The achievement of the objects, functional features, and advantages of the present application will be further described with reference to the accompanying drawings in conjunction with the embodiments. Specific embodiments of the application are shown in the drawings and will be described in more detail below. The drawings and the written description are not intended to limit the scope of the inventive concepts in any way, but rather to illustrate the inventive concepts to those skilled in the art by reference to the specific embodiments.
Detailed Description
Reference will now be made in detail to exemplary embodiments, examples of which are illustrated in the accompanying drawings. When the following description refers to the accompanying drawings, the same numbers in different drawings denote the same or similar elements, unless otherwise indicated. The implementations described in the following exemplary embodiments do not represent all implementations consistent with the application; rather, they are merely examples of apparatus and methods consistent with aspects of the application as detailed in the appended claims.
It should be noted that, in this document, the terms "comprises", "comprising", or any other variation thereof are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements includes not only those elements but may also include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element preceded by the phrase "comprising a ..." does not exclude the presence of additional identical elements in the process, method, article, or apparatus that comprises the element. In addition, elements having the same name in different embodiments of the application may have the same meaning or different meanings, the particular meaning being determined by its interpretation in the specific embodiment or further in connection with the context of that embodiment.
It should be understood that although the terms first, second, third, etc. may be used herein to describe various information, this information should not be limited by these terms. These terms are only used to distinguish one type of information from another. For example, first information may also be referred to as second information, and similarly, second information may also be referred to as first information, without departing from the scope herein. Furthermore, as used herein, the singular forms "a", "an", and "the" are intended to include the plural forms as well, unless the context indicates otherwise. It will be further understood that the terms "comprises", "comprising", "includes", and/or "including" specify the presence of stated features, steps, operations, elements, components, items, categories, and/or groups, but do not preclude the presence or addition of one or more other features, steps, operations, elements, components, items, categories, and/or groups. The terms "or", "and/or", and "including at least one of" as used herein are to be construed as inclusive, meaning any one or any combination. For example, "including at least one of: A, B, C" means "any one of the following: A; B; C; A and B; A and C; B and C; A and B and C"; likewise, "A, B or C" or "A, B and/or C" means "any one of the following: A; B; C; A and B; A and C; B and C; A and B and C". An exception to this definition occurs only when a combination of elements, functions, steps, or operations is in some way inherently mutually exclusive.
It should be understood that, although the steps in the flowcharts in the embodiments of the present application are shown in sequence as indicated by the arrows, these steps are not necessarily performed in the order indicated. Unless explicitly stated herein, the order of the steps is not strictly limited, and they may be performed in other orders. Moreover, at least some of the steps in the figures may include multiple sub-steps or stages that are not necessarily performed at the same moment but may be performed at different moments, and their order of execution is not necessarily sequential: they may be performed in turns or alternately with at least some of the sub-steps or stages of other steps.
The word "if", as used herein, may be interpreted as "when" or "upon" or "in response to determining" or "in response to detecting", depending on the context. Similarly, the phrase "if it is determined" or "if (a stated condition or event) is detected" may be interpreted as "when it is determined" or "in response to determining" or "when (the stated condition or event) is detected" or "in response to detecting (the stated condition or event)", depending on the context.
It should be noted that step numbers such as S10 are used herein to describe the corresponding content more clearly and concisely, and do not constitute a substantive limitation on the sequence; when implementing the present application, those skilled in the art may perform other steps first and then perform S10, and this still falls within the scope of protection of the present application.
It should be understood that the specific embodiments described herein are for purposes of illustration only and are not intended to limit the scope of the application.
In the following description, suffixes such as "module", "component", or "unit" used to denote elements are adopted only to facilitate the description of the present application and have no specific meaning in themselves. Thus, "module", "component", and "unit" may be used interchangeably.
The processing device may be implemented in various forms, such as a smart terminal (e.g., a mobile phone) or a server. For example, the processing devices described in the present application may include mobile devices such as cell phones, tablet computers, notebook computers, palmtop computers, personal digital assistants (Personal Digital Assistant, PDA), portable media players (Portable Media Player, PMP), navigation devices, wearable devices, smart bracelets, and pedometers, as well as fixed terminals such as digital TVs and desktop computers.
The following description takes a mobile terminal as an example; those skilled in the art will understand that, apart from elements used specifically for mobile purposes, the configuration according to the embodiments of the present application can also be applied to fixed-type terminals.
Referring to fig. 1, which is a schematic diagram of a hardware structure of a mobile terminal implementing various embodiments of the present application, the mobile terminal 100 may include: an RF (Radio Frequency) unit 101, a WiFi module 102, an audio output unit 103, an A/V (audio/video) input unit 104, a sensor 105, a display unit 106, a user input unit 107, an interface unit 108, a memory 109, a processor 110, and a power supply 111. Those skilled in the art will appreciate that the mobile terminal structure shown in fig. 1 does not constitute a limitation of the mobile terminal; the mobile terminal may include more or fewer components than shown, combine certain components, or arrange the components differently.
The following describes the components of the mobile terminal in detail with reference to fig. 1:
The radio frequency unit 101 may be used for receiving and transmitting signals during information reception and transmission or during a call; specifically, it delivers downlink information received from the base station to the processor 110 for processing, and transmits uplink data to the base station. Typically, the radio frequency unit 101 includes, but is not limited to, an antenna, at least one amplifier, a transceiver, a coupler, a low noise amplifier, a duplexer, and the like. Optionally, the radio frequency unit 101 may also communicate with networks and other devices via wireless communication. The wireless communication may use any communication standard or protocol, including but not limited to GSM (Global System for Mobile Communications), GPRS (General Packet Radio Service), CDMA2000 (Code Division Multiple Access 2000), WCDMA (Wideband Code Division Multiple Access), TD-SCDMA (Time Division-Synchronous Code Division Multiple Access), FDD-LTE (Frequency Division Duplexing-Long Term Evolution), TDD-LTE (Time Division Duplexing-Long Term Evolution), 5G, 6G, and the like.
WiFi is a short-range wireless transmission technology. Through the WiFi module 102, the mobile terminal can help the user send and receive e-mail, browse web pages, access streaming media, and the like, providing the user with wireless broadband Internet access. Although fig. 1 shows the WiFi module 102, it is understood that it is not an essential component of the mobile terminal and may be omitted entirely as needed without changing the essence of the invention.
The audio output unit 103 may convert audio data received by the radio frequency unit 101 or the WiFi module 102 or stored in the memory 109 into an audio signal and output as sound when the mobile terminal 100 is in a call signal reception mode, a talk mode, a recording mode, a voice recognition mode, a broadcast reception mode, or the like. Also, the audio output unit 103 may also provide audio output (e.g., a call signal reception sound, a message reception sound, etc.) related to a specific function performed by the mobile terminal 100. The audio output unit 103 may include a speaker, a buzzer, and the like.
The A/V input unit 104 is used to receive an audio or video signal. The A/V input unit 104 may include a graphics processor (Graphics Processing Unit, GPU) 1041 and a microphone 1042; the graphics processor 1041 processes image data of still pictures or video obtained by an image capturing device (e.g., a camera) in a video capturing mode or an image capturing mode. The processed image frames may be displayed on the display unit 106. The image frames processed by the graphics processor 1041 may be stored in the memory 109 (or other storage medium) or transmitted via the radio frequency unit 101 or the WiFi module 102. The microphone 1042 can receive sound (audio data) in a phone call mode, a recording mode, a voice recognition mode, and the like, and can process such sound into audio data. In the phone call mode, the processed audio (voice) data may be converted into a format that can be transmitted to a mobile communication base station via the radio frequency unit 101. The microphone 1042 may implement various types of noise cancellation (or suppression) algorithms to cancel (or suppress) noise or interference generated in the course of receiving and transmitting the audio signal.
The mobile terminal 100 also includes at least one sensor 105, such as a light sensor, a motion sensor, and other sensors. Optionally, the light sensor includes an ambient light sensor and a proximity sensor: the ambient light sensor may adjust the brightness of the display panel 1061 according to the brightness of ambient light, and the proximity sensor may turn off the display panel 1061 and/or the backlight when the mobile terminal 100 is moved to the ear. As one type of motion sensor, an accelerometer sensor can detect the magnitude of acceleration in all directions (generally three axes), can detect the magnitude and direction of gravity when stationary, and can be used in applications that recognize the posture of the mobile phone (such as landscape/portrait switching, related games, and magnetometer posture calibration) and in vibration-recognition-related functions (such as pedometer and tapping); other sensors such as fingerprint sensors, pressure sensors, iris sensors, molecular sensors, gyroscopes, barometers, hygrometers, thermometers, and infrared sensors may also be configured in the mobile phone, and will not be described in detail here.
The display unit 106 is used to display information input by a user or information provided to the user. The display unit 106 may include a display panel 1061, and the display panel 1061 may be configured in the form of a liquid crystal display (Liquid Crystal Display, LCD), an Organic Light-Emitting Diode (OLED), or the like.
The user input unit 107 may be used to receive input numeric or character information and to generate key signal inputs related to user settings and function control of the mobile terminal. Optionally, the user input unit 107 may include a touch panel 1071 and other input devices 1072. The touch panel 1071, also referred to as a touch screen, may collect touch operations by the user on or near it (e.g., operations performed on or near the touch panel 1071 using any suitable object or accessory such as a finger or a stylus) and drive the corresponding connection device according to a predetermined program. The touch panel 1071 may include two parts: a touch detection device and a touch controller. Optionally, the touch detection device detects the position of the user's touch, detects the signal brought by the touch operation, and transmits the signal to the touch controller; the touch controller receives the touch information from the touch detection device, converts it into touch point coordinates, sends the coordinates to the processor 110, and can receive and execute commands sent by the processor 110. Optionally, the touch panel 1071 may be implemented in various types, such as resistive, capacitive, infrared, and surface acoustic wave. In addition to the touch panel 1071, the user input unit 107 may include other input devices 1072. Optionally, the other input devices 1072 may include, but are not limited to, at least one of a physical keyboard, function keys (e.g., volume control keys, switch keys), a trackball, a mouse, and a joystick, which are not specifically limited here.
Optionally, the touch panel 1071 may cover the display panel 1061. When the touch panel 1071 detects a touch operation on or near it, it transmits the operation to the processor 110 to determine the type of the touch event, and the processor 110 then provides a corresponding visual output on the display panel 1061 according to the type of the touch event. Although in fig. 1 the touch panel 1071 and the display panel 1061 are two independent components implementing the input and output functions of the mobile terminal, in some embodiments the touch panel 1071 may be integrated with the display panel 1061 to implement the input and output functions, which is not limited here.
The interface unit 108 serves as an interface through which at least one external device can be connected with the mobile terminal 100. For example, the external devices may include a wired or wireless headset port, an external power (or battery charger) port, a wired or wireless data port, a memory card port, a port for connecting a device having an identification module, an audio input/output (I/O) port, a video I/O port, an earphone port, and the like. The interface unit 108 may be used to receive input (e.g., data information, power, etc.) from an external device and transmit the received input to one or more elements within the mobile terminal 100 or may be used to transmit data between the mobile terminal 100 and an external device.
Memory 109 may be used to store software programs as well as various data. The memory 109 may mainly include a program storage area and a data storage area; optionally, the program storage area may store an operating system, an application program required for at least one function (such as a sound playing function or an image playing function), and the like, while the data storage area may store data created according to the use of the handset (such as audio data and a phonebook). Optionally, the memory 109 may include high-speed random access memory, and may also include non-volatile memory, such as at least one magnetic disk storage device, flash memory device, or other non-volatile solid-state storage device.
The processor 110 is the control center of the mobile terminal. It connects the various parts of the entire mobile terminal using various interfaces and lines, and performs the various functions of the mobile terminal and processes data by running or executing the software programs and/or modules stored in the memory 109 and calling the data stored in the memory 109, thereby monitoring the mobile terminal as a whole. The processor 110 may include one or more processing units; preferably, the processor 110 may integrate an application processor and a modem processor, where the application processor mainly handles the operating system, user interface, application programs, and the like, and the modem processor mainly handles wireless communication. It will be appreciated that the modem processor may also not be integrated into the processor 110.
The mobile terminal 100 may further include a power source 111 (e.g., a battery) for supplying power to the respective components, and preferably, the power source 111 may be logically connected to the processor 110 through a power management system, so as to perform functions of managing charging, discharging, and power consumption management through the power management system.
Although not shown in fig. 1, the mobile terminal 100 may further include a bluetooth module or the like, which is not described herein.
In order to facilitate understanding of the embodiments of the present application, a communication network system on which the mobile terminal of the present application is based will be described below.
Referring to fig. 2, fig. 2 is a schematic diagram of a communication network system according to an embodiment of the present application. The communication network system is an LTE system of the universal mobile telecommunications technology, and includes a UE (User Equipment) 201, an E-UTRAN (Evolved UMTS Terrestrial Radio Access Network) 202, an EPC (Evolved Packet Core) 203, and an operator's IP services 204, which are communicatively connected in sequence.
Optionally, the UE 201 may be the mobile terminal 100 described above, which will not be described again here.
The E-UTRAN 202 includes an eNodeB 2021, other eNodeBs 2022, and the like. Optionally, the eNodeB 2021 may connect with the other eNodeBs 2022 over a backhaul (e.g., an X2 interface); the eNodeB 2021 is connected to the EPC 203 and may provide the UE 201 with access to the EPC 203.
The EPC 203 may include an MME (Mobility Management Entity) 2031, an HSS (Home Subscriber Server) 2032, other MMEs 2033, an SGW (Serving Gateway) 2034, a PGW (PDN Gateway) 2035, a PCRF (Policy and Charging Rules Function) 2036, and so on. Optionally, the MME 2031 is a control node that handles signaling between the UE 201 and the EPC 203, providing bearer and connection management. The HSS 2032 is used to provide registers to manage functions such as the home location register (not shown) and to hold user-specific information about service characteristics, data rates, and the like. All user data may be sent through the SGW 2034; the PGW 2035 may provide IP address allocation and other functions for the UE 201; and the PCRF 2036 is the policy and charging control decision point for service data flows and IP bearer resources, which selects and provides available policy and charging control decisions for the policy and charging enforcement function (not shown).
The IP services 204 may include the Internet, intranets, an IMS (IP Multimedia Subsystem), or other IP services.
Although the above description takes the LTE system as an example, those skilled in the art should appreciate that the present application is not limited to the LTE system and may also be applied to other wireless communication systems, such as GSM, CDMA2000, WCDMA, TD-SCDMA, 5G, and future new network systems (e.g., 6G).
The application provides a processing method, which can acquire or determine at least one target data processing scheme according to a first object and a second object having an association relationship with the first object, and process a third object according to the target data processing scheme. In this way, data objects can be processed automatically and quickly without requiring the user to make tedious adjustments to the operating parameters used for data processing, so that a good processing effect can be obtained.
Various embodiments of the processing method of the present application are set forth below in turn.
First embodiment
In this embodiment, the execution body of the processing method of the present application may be the processing device described above, or a cluster formed by a plurality of such processing devices, where the processing device may be an intelligent terminal (such as the aforementioned mobile terminal 100) or a server. In this first embodiment of the processing method, the method is described with the processing device as the execution body.
As shown in fig. 3, in this embodiment, the processing method includes:
step S10: at least one target data processing scheme is acquired or determined according to the first object and/or the second object, and a third object is processed according to the target data processing scheme.
In the present embodiment, the case in which at least one target data processing scheme is acquired or determined from both the first object and the second object is described as an example. In actual implementation, the target data processing scheme may be determined in at least one of the following ways (a sketch of these modes follows the list):
acquiring or determining at least one target data processing scheme according to the first object;
acquiring or determining at least one target data processing scheme according to the second object;
at least one target data processing scheme is acquired or determined from the first object and the second object.
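As a minimal illustration only, the following Python sketch shows one way the dispatch over these three modes could look; the analyze_* helpers and the representation of objects as bytes are assumptions standing in for the analysis model described below, not part of the disclosure.

    from typing import Optional


    def analyze_single(obj: bytes) -> str:
        # Placeholder: a real device would run its analysis model here.
        return "scheme derived from one reference object"


    def analyze_pair(first: bytes, second: bytes) -> str:
        # Placeholder: a real model would infer the first -> second transformation.
        return "scheme describing the first-to-second transformation"


    def determine_target_schemes(first: Optional[bytes],
                                 second: Optional[bytes]) -> list[str]:
        """Return at least one target data processing scheme as descriptive text."""
        if first is not None and second is not None:
            return [analyze_pair(first, second)]   # from both objects
        if first is not None:
            return [analyze_single(first)]         # from the first object only
        if second is not None:
            return [analyze_single(second)]        # from the second object only
        return []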
In this embodiment, the processing device acquires or determines at least one target data processing scheme according to the first object and a second object having an association relationship with the first object, and then processes the third object according to the target data processing scheme.
Optionally, the processing device obtains the first object and a second object having an association relationship with the first object, and analyzes the two to acquire or determine at least one target data processing scheme for processing other data objects. Once the processing device has acquired or determined at least one target data processing scheme by analyzing the first object and/or the second object, whenever data processing subsequently needs to be performed on any third object, the processing device can, with authorization, automatically process the third object according to the target data processing scheme.
Alternatively, the first object and/or the second object are each data objects, for example, the first object and/or the second object may be data of which the data type is image, text, animation, video and/or audio.
Optionally, the processing device may obtain the first object and/or the second object through a preset human-machine interaction interface. The processing device outputs prompt information to the user via the human-machine interaction interface provided for the user and presents one or more data storage locations from which the first object and/or the second object can be selected. Following the prompt information and the data storage locations, the user then uses the human-machine interaction interface to input at least one first object and/or at least one second object having an association relationship with the first object. In this way, after determining that at least one first object and at least one second object have been obtained, the processing device may invoke a preconfigured data optimization analysis model to analyze the first object and/or the second object, thereby acquiring or determining at least one target data processing scheme for optimizing other data objects.
Optionally, the processing device may output the prompt information to the user in the form of a bullet-screen message, a dialog box, an H5 page, and/or voice information. Optionally, the data optimization analysis model may be a data analysis model based on AIGC (AI-Generated Content, also referred to as generative AI) technology that the processing device has preloaded and trained; by analyzing the first object and/or the second object, it can acquire or derive the data optimization scheme by which the first object is optimized to obtain the second object.
Optionally, the third object is also a data object of the same type as the first object and/or the second object, for example, the third object may also be data of the data type image, text, animation, video and/or audio.
Optionally, the target data processing scheme may be a piece of descriptive text that instructs the processing device how to perform the data processing operation on the third object. Illustratively, the third object is a picture taken by the user (or a picture the user downloaded from a cloud data platform over the data network), and the target data processing scheme acquired or determined by the processing device through analysis reads: "in the front-face state of the user with id xx: remove and cover about 70% of the acne marks on both cheeks; tighten the jawline of the facial contour by about 5%; increase the overall contrast of the face by about 5%; and, when the background is dim, increase the facial brightness by 10%", together with similar aesthetic and imaging specifications. While beautifying the third object, the processing device can then identify the user with id xx in the third object according to the target data processing scheme and, in the front-face state of that user: remove and cover about 70% of the acne marks on both cheeks; tighten the jawline of the facial contour by about 5%; increase the overall contrast of the face by about 5%; and/or increase the facial brightness by 10% when the background is dim.
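Purely for illustration, such a descriptive-text scheme might be held in a structured form before being handed to an image model, as in the Python sketch below; the field names and the operation vocabulary are assumptions, not part of the disclosure.

    from dataclasses import dataclass


    @dataclass
    class SchemeOperation:
        target: str          # which sub-object the operation applies to
        operation: str       # what to do to that sub-object
        strength: float      # fraction of the effect to apply (0.0 - 1.0)
        condition: str = ""  # optional precondition, e.g. "background_dim"


    # Structured counterpart of the descriptive scheme quoted above.
    TARGET_SCHEME = [
        SchemeOperation("face:user_xx", "remove_acne_marks", 0.70),
        SchemeOperation("face:user_xx", "tighten_jawline", 0.05),
        SchemeOperation("face:user_xx", "increase_contrast", 0.05),
        SchemeOperation("face:user_xx", "increase_brightness", 0.10,
                        condition="background_dim"),
    ]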
In this embodiment, in the processing method of the present application, the processing device obtains the first object and a second object having an association relationship with the first object, analyzes the two, and thereby acquires or determines at least one target data processing scheme for processing other data objects. Subsequently, whenever data processing needs to be performed on any third object, the processing device can, with authorization, automatically process the third object according to the target data processing scheme.
Therefore, compared with the traditional approach, in which the user determines a scheme by presetting and repeatedly adjusting the operating parameters used to optimize data and then optimizes the data object with that scheme, the application directly obtains a target data processing scheme for optimizing data objects by analyzing and learning from the first object and the second object having an association relationship with it, and then optimizes the third object based on that scheme. This achieves automatic and rapid processing of data objects without requiring the user to make tedious adjustments to the operating parameters used for data processing, so that a good processing result can be obtained.
Second embodiment
Based on the first embodiment, a second embodiment of the application is presented, in which the inventive processing method is still performed by the processing device. In this embodiment, the processing method of the present application further includes:
step A: processing is performed on the first object to obtain or determine the second object.
In this embodiment, the processing device processes a first object to acquire or determine a second object having an association relationship with the first object.
Optionally, before acquiring or determining the target data processing scheme used to process other data objects, the processing device first acquires the first object and performs an optimization operation on it, so as to acquire or determine a second object having an association relationship with the first object. In this way, the processing device can acquire or determine at least one target data processing scheme for processing other data objects by analyzing the first object and the second object.
Optionally, the processing method of the present application processes the first object, which may be:
the first object is processed according to at least one initial data processing scheme.
In this embodiment, the processing device acquires or determines at least one initial data processing scheme and processes the first object according to that scheme.
Optionally, when the processing device acquires the first object and performs the optimization operation on it so as to acquire or determine the second object having an association relationship with the first object, it first acquires or determines at least one initial data processing scheme for optimizing the first object, and then optimizes the first object according to that initial scheme.
Optionally, the initial data processing scheme may be a scheme indicating how to optimize the first object, obtained after the user configures and adjusts operating parameters through the human-machine interaction interface provided by the processing device. The initial data processing scheme may also be a piece of descriptive text, such as: "in the front-face state of the user with id xx in the first object: remove and cover about 70% of the acne marks on both of the user's cheeks; tighten the jawline of the facial contour by about 5%; increase the overall contrast of the face by about 5%; increase the facial brightness by 10% when the background is dim", together with similar aesthetic and imaging specifications. In this way, after the processing device has optimized the first object according to the initial data processing scheme to obtain the second object having an association relationship with the first object, it further invokes the data optimization processing model to analyze and learn from the first object and/or the second object, so that the initial data processing scheme can be recovered as the target data processing scheme indicating how to optimize the third object, that is, a textual expression identical to the initial data processing scheme is obtained (a sketch of this flow follows below).
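A minimal sketch of this flow, with a string-based stand-in for the device's optimizer (apply_initial_scheme is hypothetical), might look as follows.

    # The first object is processed with a user-configured initial scheme to
    # obtain the second object; the resulting pair is what the determination
    # model later analyzes to recover the scheme as text.

    def apply_initial_scheme(obj: str, scheme: str) -> str:
        # Placeholder: a real device would run the actual optimization here.
        return f"{obj} processed with [{scheme}]"


    initial_scheme = "remove ~70% of acne marks; tighten jawline ~5%"
    first_object = "original_photo"
    second_object = apply_initial_scheme(first_object, initial_scheme)
    analysis_pair = (first_object, second_object)  # input to the determination model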
Optionally, the acquiring or determining at least one target data processing scheme according to the first object and/or the second object may be:
invoking a preset determination model to perform conversion processing on the first object and the second object, so as to acquire or determine at least one target data processing scheme.
In this embodiment, the processing device performs conversion processing on the first object and/or the second object by invoking a preset determination model, so as to acquire or determine at least one target data processing scheme. Optionally, the operating parameters of the target data processing scheme are the same as the operating parameters of the initial data processing scheme.
Optionally, the determination model may be a model based on AIGC (Artificial Intelligence Generated Content) technology. For example, when the data types of the first object, the second object, and the third object are all images, the determination model may be a GPT (Generative Pre-trained Transformer) image-to-language conversion model fine-tuned in advance by the processing device.
Optionally, after obtaining the first object and the second object having an association relationship with the first object, the processing device may analyze and learn from them by invoking the determination model, converting the image-type first object and/or second object into a precise language description, so as to acquire or determine the target data processing scheme indicating how to optimize the third object, that is, a textual expression whose description is identical to the initial data processing scheme.
Optionally, the processing device analyzes and learns from the first object and/or the second object through the determination model, so that each operating parameter in the target data processing scheme obtained through image-to-language conversion, which indicates how the third object is to be processed, is identical to the corresponding operating parameter in the initial data processing scheme that the processing device adopted when processing the first object.
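The shape of such an invocation is sketched below, again purely as an illustration; the DeterminationModel class and its method are assumptions standing in for the fine-tuned image-to-language model.

    class DeterminationModel:
        """Stand-in for a fine-tuned image-to-language conversion model."""

        def describe_transformation(self, first_object: str,
                                    second_object: str) -> str:
            # A real model would compare the two objects and verbalize the
            # edit with the same operating parameters as the initial scheme.
            return ("in the front-face state of user xx: remove ~70% of acne "
                    "marks on both cheeks; tighten the jawline ~5%; raise "
                    "facial contrast ~5%")


    model = DeterminationModel()
    target_scheme = model.describe_transformation("original_photo",
                                                  "retouched_photo")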
In this embodiment, in the processing method of the present application, the processing device processes the first object according to an initial data processing scheme formed by the user configuring and/or adjusting operating parameters, to obtain a second object having an association relationship with the first object; it then performs conversion processing on the first object and/or the second object through the determination model to obtain a precise textual description of the initial data processing scheme, which it uses as the target data processing scheme indicating how to optimize the third object. In this way, the processing method obtains an accurately described target data processing scheme that reflects the user's habitual way of optimizing data objects, in combination with the fine-tuned determination model, so that the processing device's optimization of other data objects better matches the user's expectations while also providing a professional-grade data processing experience.
Third embodiment
Based on any of the embodiments, a third embodiment of the present application is presented, in which the inventive processing method is still performed by the processing device. In this embodiment, the processing the third object according to the target data processing scheme may be:
processing at least one sub-object to be processed in the third object according to the target data processing scheme.
In this embodiment, the third object includes at least one sub-object, and, when authorized, the processing device may process at least one sub-object to be processed in the third object according to the target data processing scheme while optimizing the third object with that scheme.
Optionally, when the data type of the third object is an image, the third object may include sub-objects such as a male face, a female face, and a landscape. During the optimization of the third object, the processing device, having obtained the user's authorization, determines sub-objects such as a male face, a female face, or a landscape from among the sub-objects of the third object as sub-objects to be processed, and optimizes those sub-objects according to the target data processing scheme.
Optionally, the sub-object to be processed may be obtained or determined in at least one of the following ways:
first mode: acquiring or determining at least one sub-object to be processed according to at least one application range indication.
In this embodiment, the processing device may acquire or determine at least one sub-object to be processed according to at least one application range indication of the target data processing scheme.
Optionally, the processing device may acquire at least one application range indication of the target data processing scheme set by the user through human-machine interaction, so as to acquire or determine, according to that indication, at least one sub-object in the third object that needs to be optimized according to the target data processing scheme.
Optionally, the processing device may output prompt information to the user through the human-machine interaction interface, receive the user's response to the prompt, and let the user autonomously configure the application range of the target data processing scheme. Optionally, the processing device may also determine the application range of the target data processing scheme by performing text semantic parsing on the scheme itself. For example, the target data processing scheme is described by the following text: "in the front-face state of a female user, remove and cover about 70% of the acne marks on both cheeks, tighten the jawline of the facial contour by about 5%, increase the overall contrast of the face by about 5%, and increase the facial brightness by 10% when the background is dim". By performing text semantic parsing on this scheme, the processing device can determine that its application range is the female users in the third object, and further determine all female users in the third object as the sub-objects to be processed that need to be optimized according to the target data processing scheme.
second mode: acquiring or determining at least one sub-object to be processed according to the first object and/or the second object.
In this embodiment, the processing device may also perform recognition analysis on the first object and/or the second object, so as to determine, based on the sub-objects contained in the first object and/or the second object, at least one sub-object in the third object that needs to be optimized according to the target data processing scheme.
Optionally, the second mode includes at least one of:
determining a first sub-object shared by the first object and the second object as the sub-object to be processed;
acquiring each second sub-object in the third object, and determining a second sub-object identical to the first sub-object as the sub-object to be processed.
In this embodiment, the processing device may directly determine at least one first sub-object shared by the first object and the second object as a sub-object to be processed; and/or the processing device may acquire each second sub-object in the third object and then determine, among those second sub-objects, at least one second sub-object identical to a first sub-object as a sub-object to be processed.
Optionally, when the processing device obtains or determines the sub-objects to be processed in the third object by recognizing and analyzing the first object and/or the second object, it may, after recognizing the sub-objects shared by the first object and the second object, detect whether those shared sub-objects also exist in the third object; if it detects that they do, it directly determines them as at least one sub-object to be processed in the third object that needs to be optimized according to the target data processing scheme.
Optionally, when the processing device obtains or determines the sub-objects to be processed in the third object by recognition analysis of the first object and/or the second object, it may first recognize and analyze each second sub-object in the third object, then detect one or more second sub-objects that are identical to the first object, in particular identical to first sub-objects within the first object, and finally determine the detected second sub-objects as at least one sub-object to be processed in the third object that needs to be optimized according to the target data processing scheme.
Optionally, the processing device may also combine the first mode and the second mode to obtain or determine, from the third object, at least one sub-object to be processed that needs to be optimized according to the target data processing scheme. Optionally, the processing device may first, in the second mode, recognize and analyze one or more second sub-objects in the third object that are identical to the first object or to sub-objects within the first object, and then further detect whether each such second sub-object is one to which the application range indication of the current target data processing scheme points; if so, the second sub-object is determined as at least one sub-object to be processed in the third object that needs to be optimized according to the target data processing scheme (a sketch combining both modes follows below).
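The sketch below condenses both modes and their combination, modeling sub-objects as plain labels; a real device would use detection and recognition results, so the set operations here are an assumed simplification.

    from typing import Optional


    def select_sub_objects(first: set, second: set, third: set,
                           application_range: Optional[set] = None) -> set:
        shared = first & second                 # mode 2: shared first sub-objects
        candidates = third & (shared | first)   # second sub-objects matching them
        if application_range is not None:       # mode 1: application range check
            candidates &= application_range
        return candidates


    to_process = select_sub_objects(
        first={"female_face", "landscape"},
        second={"female_face"},
        third={"female_face", "male_face", "landscape"},
        application_range={"female_face"},
    )
    print(to_process)  # {'female_face'}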
In this embodiment, in the processing method provided by the present application, while the processing device optimizes the third object using the target data processing scheme, it processes, when authorized, at least one sub-object to be processed in the third object according to that scheme. Based on the user's needs, the processing device can, for example, optimize only the user's own portrait, preventing unintended beautification of the scenery or of other figures in the image from degrading the look and feel of the rest of the picture. The processing device can equally obtain or determine a plurality of sub-objects to be processed, for optimizing multiple portraits and/or scenes, or for locally optimizing different parts of the image frame.
Fourth embodiment
Based on any of the embodiments, a fourth embodiment of the present application is presented, in which the inventive processing method is still performed by the processing device. In this embodiment, the processing method may further include step B and/or step C:
and (B) step (B): acquiring or determining a new processing scheme according to at least one scheme adjustment parameter and the target data processing scheme;
Step C: and processing the third object according to the new processing scheme.
In this embodiment, the processing device may acquire or determine at least one scheme adjustment parameter and then adjust the target data processing scheme according to that parameter to obtain an adjusted new processing scheme, so that whenever data processing subsequently needs to be performed on any third object, the third object can, with authorization, be processed automatically according to the new processing scheme.
Optionally, after the processing device acquires or determines at least one target data processing scheme through step S10 and/or the steps it comprises, it may further acquire or determine at least one scheme adjustment parameter through the human-machine interaction interface provided for the user, and adjust the target data processing scheme accordingly to obtain the adjusted processing scheme, so that in the subsequent processing of any third object, the processing device can, with authorization, automatically process the third object according to the adjusted new processing scheme.
Optionally, the target data processing scheme acquired or determined by the processing device through step S10 and/or the steps it comprises is: "in the front-face state of the first user with id xx, remove and cover about 70% of the acne marks on both cheeks, tighten the jawline of the facial contour by about 5%, increase the overall contrast of the face by about 5%, and increase the facial brightness by 10% when the background is dim". If the user wants to apply this target scheme only to the facial states of all female users, the user can input one or more scheme adjustment parameters into the processing device by editing and adjusting the target data processing scheme through the human-machine interaction interface provided by the processing device. Having received the scheme adjustment parameters through that interface, the processing device may adjust the target data processing scheme to: "in the front-face state of a female user, remove and cover about 70% of the acne marks on both cheeks, tighten the jawline of the facial contour by about 5%, increase the overall contrast of the face by about 5%, and increase the facial brightness by 10% when the background is dim". Then, in response to an instruction initiated by the user to optimize the third object (an image), the processing device can recognize and determine all female users in the third object as sub-objects to be processed and, using AIGC technology, apply the adjusted processing scheme to all of them (a sketch of the adjustment step follows below).
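A minimal sketch of the adjustment step, assuming plain text substitution as the editing mechanism (an illustrative assumption only), could look like this.

    def adjust_scheme(scheme: str, adjustments: dict) -> str:
        # Apply each user-supplied scheme adjustment parameter in turn.
        for old, new in adjustments.items():
            scheme = scheme.replace(old, new)
        return scheme


    target_scheme = ("in the front-face state of the first user with id xx, "
                     "remove and cover about 70% of the acne marks on both cheeks")
    new_scheme = adjust_scheme(
        target_scheme,
        {"the first user with id xx": "a female user"},
    )
    # The third object is subsequently processed with new_scheme instead.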
Optionally, the first object is a photograph taken by the user. After the processing device outputs prompt information to the user, and the user inputs the first object and/or the second object so that the target data processing scheme is obtained, the processing device marks that scheme as the first beautification scheme. Following the prompt information, the user then further inputs the photograph together with a photograph A obtained by beautifying the scenery in the photograph. At this point, the processing device can analyze the original photograph and photograph A by invoking the determination model based on AIGC technology, to obtain another target data processing scheme, the second beautification scheme. Subsequently, when the processing device recognizes that the user has started the photographing function and that the photograph taken by the user contains both a female user and scenery, it invokes the two schemes: using AIGC technology, it beautifies the female user in the photograph according to the first beautification scheme and the scenery according to the second beautification scheme, and finally outputs the beautified photograph (that is, with the female user's face and the scenery both beautified).
Optionally, the processing device may display the marks of the first beautification scheme and the second beautification scheme on the interface showing the photograph taken by the user, so that the user can trigger the corresponding beautification instruction by touching the corresponding mark, by voice, or by other means; the processing device then invokes only the scheme selected by the user, according to the beautification instruction, to perform the corresponding beautification processing on the photograph.
Optionally, after receiving a first photograph input by the user together with a fifth photograph in which the person in the first photograph has been beautified, and executing step S10 and/or its sub-steps, the processing device may obtain a further target data processing scheme, namely a third beautification scheme. Then, when the processing device recognizes that the user has enabled the photographing function and that the photograph taken by the user contains the person from the first photograph, it displays the first beautification scheme and the third beautification scheme on the interface showing the photograph for the user to choose from, and, according to the user's selection, applies the chosen third beautification scheme to that person in the photograph using the AIGC technique.
Optionally, in each of the above embodiments, the data type of the first object, the second object and/or the third object may include: image, text, animation, video and/or audio.
Optionally, when the first object and the second object are both images, the target data processing scheme may be a scheme for beautifying image elements such as persons and/or scenery in the images; when the first object is an image and the second object is text and/or audio, the target data processing scheme may be a scheme for converting the image into text and/or audio descriptive content; conversely, when the first object is text and/or audio descriptive content and the second object is an image, the target data processing scheme may be a scheme for converting the text and/or audio descriptive content into an image. A type-dispatch sketch follows.
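A minimal sketch of this type-based selection follows; the scheme-kind names are illustrative assumptions, since the patent describes the mapping only in prose.

```python
# Minimal sketch: choose the kind of target data processing scheme from
# the data types of the first and second objects. Kind names are made up.
def infer_scheme_kind(first_type: str, second_type: str) -> str:
    if first_type == "image" and second_type == "image":
        return "image_beautification"
    if first_type == "image" and second_type in ("text", "audio"):
        return "image_to_description"
    if first_type in ("text", "audio") and second_type == "image":
        return "description_to_image"
    raise ValueError(f"unsupported type pair: {first_type}, {second_type}")
```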
Optionally, the data types of the first object, the second object and/or the third object may be any other types not mentioned here. However the data types vary, any method that determines a data processing scheme based on analysis of the first object and/or the second object and processes the third object based on that scheme falls within the scope of protection claimed for the processing method of the present application.
Optionally, when the data types of the first object, the second object and the third object are all images, the processing method of the present application may further include at least one of the following:
the first object, the second object or the third object is a photographed image;
the first object, the second object, or the third object is an image stored at a preset storage location.
In this embodiment, the first object, which the processing device analyzes to obtain or determine the target data processing scheme, may be an image obtained through an image capturing operation performed by the user, an image captured by the user in advance and stored at a preset storage location, or an image stored at a preset storage location by another user.
Likewise, the second object may be an image captured by the user in advance and stored at a preset storage location, or an image stored at a preset storage location by another user. Alternatively, like the first object, the second object may be an image obtained through an image capturing operation performed by the user.
Similarly, the third object, on which the processing device needs to perform the optimization processing operation, may be an image obtained through an image capturing operation performed by the user, an image captured by the user in advance and stored at the preset storage location, or an image stored at the preset storage location by another user.
Optionally, when the first object and the second object are images taken by the user, for example a photograph taken in a first light environment (the first object) and a photograph taken in a second light environment (the second object), both containing user A, and the user prefers the imaging effect of user A in the second light environment, step S10 and its sub-steps can be executed to obtain a target data processing scheme that optimizes the image of user A in the first object toward the image of user A in the second object. User A can then apply the same optimization, according to the target data processing scheme, to other images such as a third object that is newly taken or stored at the preset storage location, to obtain the preferred imaging effect. A sketch of deriving and reapplying such a scheme follows.
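The sketch below illustrates, under stated assumptions, how such a scheme might be derived from a preferred/non-preferred image pair and reapplied. DummyModel only stands in for the AIGC determination model; the patent does not fix a concrete model or API.

```python
# Minimal sketch; DummyModel and its two methods are assumptions made
# only to keep the example runnable, not the patent's actual interface.
class DummyModel:
    def describe_transformation(self, source, target):
        return "scheme: brighten face, soften skin"   # placeholder text

    def apply(self, image, scheme):
        return image                                   # placeholder

def derive_scheme(first_image, second_image, model):
    """Describe, as a target data processing scheme, the change from the
    first image (current effect) to the second image (preferred effect)."""
    return model.describe_transformation(source=first_image,
                                         target=second_image)

def process_third_object(third_image, scheme, model):
    """Apply the derived scheme to any further image of the same user."""
    return model.apply(third_image, scheme)
```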
Optionally, when the first object and the second object are both images stored by the user at the preset storage location, if both contain user A and the user prefers the imaging effect of user A in the second object, step S10 and its sub-steps can likewise be executed to obtain a target data processing scheme that optimizes the image of user A in the first object toward the image of user A in the second object. User A can then apply the same optimization to other images, such as a third object that is newly taken or stored at the preset storage location, to obtain the preferred imaging effect.
Optionally, the same applies when the first object is an image taken by the user and the second object is an image stored at the preset storage location: if both contain user A and the user prefers the imaging effect of user A in the second object, step S10 and its sub-steps can be executed to obtain the target data processing scheme from the two images, and user A can then apply it to other images such as the third object, whether newly taken or stored at the preset storage location, to obtain the preferred imaging effect.
Optionally, the preset storage location may be a data platform to which the processing device is connected through a network, such as a cloud server or a cloud platform; alternatively, it may be one or more data storage areas configured locally on the processing device, such as running memory and/or a solid-state disk.
Optionally, when the first object or the third object is an image stored at a preset storage location, the processing device may obtain it by downloading network data or by directly performing a data read operation.
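A minimal sketch of the two retrieval paths, assuming the preset storage location is given either as an HTTP(S) URL on a cloud platform or as a local file path:

```python
# Minimal sketch using only the Python standard library.
from urllib.request import urlopen
from pathlib import Path

def load_object(location: str) -> bytes:
    """Fetch a stored object by network download or direct data read."""
    if location.startswith(("http://", "https://")):
        with urlopen(location) as response:   # network download
            return response.read()
    return Path(location).read_bytes()        # direct local read
```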
Optionally, the processing method may further include:
and displaying the image obtained after the third object is processed.
In this embodiment, after the processing device performs optimization processing, according to the target data processing scheme, on a third object whose data type is image, it may also display the processed image to the user through the man-machine interaction interface provided to the user.
Optionally, as shown in fig. 4, when the third object is a picture acquired while the user is shooting, after the processing device performs the beauty-optimization processing on the image to obtain a beautified picture, the beautified picture may be shown as a preview, and the images produced at different stages of the beauty-optimization processing may be combined and displayed to the user for preview. Optionally, the processing device may further determine a video model by calling the fine-tuned CPT based on the previous frame (the original image) and the next frame (the beautified image) captured by the user, together with descriptive language input autonomously by the user, and then use the video model to convert the beauty-optimized images into a smooth video showing the beauty algorithm being applied, for the user to view.
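As a rough stand-in for that video model, the sketch below simply cross-fades the original frame into the beautified frame; this is an illustrative assumption, not the patent's actual model, and it presumes both frames are same-shaped numpy arrays.

```python
# Minimal sketch: linear cross-fade between original and beautified frames.
import numpy as np

def transition_frames(original: np.ndarray, beautified: np.ndarray,
                      steps: int = 24) -> list:
    """Interpolate intermediate frames so the beautification appears
    to be applied smoothly rather than as an abrupt switch."""
    frames = []
    for i in range(steps + 1):
        alpha = i / steps
        frame = (1.0 - alpha) * original.astype(np.float32) \
                + alpha * beautified.astype(np.float32)
        frames.append(frame.astype(original.dtype))
    return frames
```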
In this embodiment, the processing method determines a precise description text of the beautification scheme from an image input in advance by the user and the corresponding beautified image, and then determines the automatically beautified photograph by combining that description text with the image to be beautified. The user can thus have images beautified automatically, with a good beautification effect, without tedious setting and adjustment of beautification parameters.
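Tying the pieces together, a minimal end-to-end sketch under the same assumptions as the earlier DummyModel sketch (none of these names come from the patent):

```python
def beautify_automatically(before_image, after_image, new_image, model):
    # 1. Derive the precise description text of the beautification scheme
    #    from the user's before/after image pair.
    scheme_text = derive_scheme(before_image, after_image, model)
    # 2. Combine that description with the image to be beautified and
    #    return the automatically beautified result.
    return process_third_object(new_image, scheme_text, model)

result = beautify_automatically("before.jpg", "after.jpg", "new.jpg",
                                DummyModel())
```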
The embodiment of the application also provides a processing device, which includes a memory and a processor; a processing program is stored in the memory, and when executed by the processor, the processing program implements the steps of the processing method of any of the above embodiments.
The embodiment of the application also provides a storage medium, and a processing program is stored on the storage medium, and when the processing program is executed by a processor, the steps of the processing method in any embodiment are realized.
The embodiments of the processing device and the storage medium provided by the application may include all technical features of any embodiment of the processing method; their expanded explanations are substantially the same as those of the method embodiments and are not repeated here.
Embodiments of the present application also provide a computer program product comprising computer program code which, when run on a computer, causes the computer to perform the method as in the various possible embodiments described above.
The embodiment of the application also provides a chip, which comprises a memory and a processor, wherein the memory is used for storing a computer program, and the processor is used for calling and running the computer program from the memory, so that the device provided with the chip executes the method in the various possible implementation manners.
It can be understood that the above-mentioned scenario is merely an example, and does not constitute a limitation on the application scenario of the technical solution provided in the embodiment of the present application, and the technical solution of the present application may also be applied to other scenarios. For example, as one of ordinary skill in the art can know, with the evolution of the system architecture and the appearance of new service scenarios, the technical solution provided by the embodiment of the present application is also applicable to similar technical problems.
The embodiment numbers of the present application are merely for description and do not represent advantages or disadvantages of the embodiments.
The steps in the method of the embodiment of the application can be sequentially adjusted, combined and deleted according to actual needs.
The units in the device of the embodiment of the application can be combined, divided and deleted according to actual needs.
In the present application, the same or similar term concepts, technical solutions and/or application scenario descriptions are generally described in detail only at their first occurrence. For brevity, repeated occurrences are generally not described again; for anything not detailed later, reference may be made to the earlier related detailed description.
In the present application, each embodiment is described with its own emphasis; for parts not detailed or described in one embodiment, reference may be made to the related descriptions of the other embodiments.
The technical features of the technical solution of the present application may be combined arbitrarily. For brevity of description, not all possible combinations of the technical features in the embodiments are described; however, as long as a combination of technical features involves no contradiction, it should be considered within the scope of this description.
From the above description of the embodiments, it will be clear to those skilled in the art that the example method may be implemented by software plus a necessary general-purpose hardware platform, or by hardware, although in many cases the former is the preferred implementation. Based on this understanding, the technical solution of the present application, in essence or in the part contributing to the prior art, may be embodied in the form of a software product stored in a storage medium as above (e.g. ROM/RAM, magnetic disk, optical disk), including several instructions for causing a terminal device (which may be a mobile phone, a computer, a server, a controlled terminal, a network device, etc.) to perform the method of each embodiment of the present application.
In the described embodiments, the method may be implemented in whole or in part by software, hardware, firmware, or any combination thereof. When implemented in software, it may be realized in whole or in part in the form of a computer program product. The computer program product includes one or more computer instructions. When the computer program instructions are loaded and executed on a computer, the processes or functions according to the embodiments of the present application are produced in whole or in part. The computer may be a general-purpose computer, a special-purpose computer, a computer network, or another programmable device. The computer instructions may be stored in a storage medium or transmitted from one storage medium to another, for example from one website, computer, server, or data center to another website, computer, server, or data center by wired (e.g. coaxial cable, optical fiber, digital subscriber line) or wireless (e.g. infrared, radio, microwave) means. The storage medium may be any available medium accessible to a computer, or a data storage device such as a server or data center that integrates one or more available media. Usable media may be magnetic media (e.g., floppy disk, hard disk, magnetic tape), optical media (e.g., DVD), or semiconductor media (e.g., a solid state disk (SSD)), etc.
The foregoing description covers only the preferred embodiments of the present application and is not intended to limit the scope of the application; any equivalent structural or process transformation made using the contents of this description, applied directly or indirectly in other related technical fields, is likewise included within the scope of protection of the present application.

Claims (10)

1. A method of processing, comprising:
at least one target data processing scheme is acquired or determined according to the first object and/or the second object, and a third object is processed according to the target data processing scheme.
2. The method of processing according to claim 1, further comprising:
processing is performed on the first object to obtain or determine the second object.
3. The processing method according to claim 2, wherein processing for the first object comprises:
the first object is processed according to at least one initial data processing scheme.
4. The processing method of claim 1, comprising at least one of:
the obtaining or determining at least one target data processing scheme according to the first object and/or the second object comprises the following steps: invoking a preset determination model to perform conversion processing on the first object and the second object so as to acquire or determine at least one target data processing scheme;
the processing of the third object according to the target data processing scheme comprises: processing at least one sub-object to be processed in the third object according to the target data processing scheme.
5. The processing method according to claim 4, wherein the obtaining or determining manner of the sub-object to be processed includes at least one of:
a first mode: acquiring or determining at least one sub-object to be processed according to at least one application range indication;
a second mode: acquiring or determining at least one sub-object to be processed according to the first object and/or the second object.
6. The processing method of claim 5, wherein the second mode comprises at least one of:
determining a first sub-object shared by the first object and the second object as the sub-object to be processed;
and determining a second sub-object identical to the first sub-object as the sub-object to be processed.
7. The processing method of any one of claims 1 to 6, further comprising at least one of:
acquiring or determining a new processing scheme according to at least one scheme adjustment parameter and the target data processing scheme, and processing a third object according to the new processing scheme;
The data type of each of the first object, the second object, and/or the third object comprises at least one of an image, text, animation, video, and/or audio.
8. The processing method of claim 7, wherein, when the data types of the first object, the second object, and the third object are all images, the method further comprises at least one of:
the first object, the second object or the third object is a photographed image;
the first object, the second object or the third object is an image stored at a preset storage location;
and displaying the image obtained after the third object is processed.
9. A processing apparatus, comprising: memory, a processor, wherein the memory has stored thereon a processing program which, when executed by the processor, implements the steps of the processing method according to any of claims 1 to 8.
10. A storage medium having stored thereon a computer program which, when executed by a processor, implements the steps of the processing method according to any of claims 1 to 8.
CN202310946108.9A 2023-07-28 2023-07-28 Processing method, processing apparatus, and storage medium Pending CN116861010A (en)

Priority Applications (1)

Application Number: CN202310946108.9A; Priority Date: 2023-07-28; Filing Date: 2023-07-28; Title: Processing method, processing apparatus, and storage medium

Publications (1)

Publication Number: CN116861010A; Publication Date: 2023-10-10

Family ID: 88234103

Country Status (1)

Country: CN; Link: CN116861010A (en)


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination