CN112578965A - Processing method and device and electronic equipment - Google Patents

Processing method and device and electronic equipment

Info

Publication number
CN112578965A
Authority
CN
China
Prior art keywords
input, information, user, target, feature
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202011563380.1A
Other languages
Chinese (zh)
Inventor
贾杰
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Vivo Mobile Communication Co Ltd
Original Assignee
Vivo Mobile Communication Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Vivo Mobile Communication Co Ltd filed Critical Vivo Mobile Communication Co Ltd
Priority to CN202011563380.1A priority Critical patent/CN112578965A/en
Publication of CN112578965A publication Critical patent/CN112578965A/en
Pending legal-status Critical Current

Links

Images

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/048Interaction techniques based on graphical user interfaces [GUI]
    • G06F3/0481Interaction techniques based on graphical user interfaces [GUI] based on specific properties of the displayed interaction object or a metaphor-based environment, e.g. interaction with desktop elements like windows or icons, or assisted by a cursor's changing behaviour or appearance
    • G06F3/0482Interaction with lists of selectable items, e.g. menus
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/048Interaction techniques based on graphical user interfaces [GUI]
    • G06F3/0484Interaction techniques based on graphical user interfaces [GUI] for the control of specific functions or operations, e.g. selecting or manipulating an object, an image or a displayed text element, setting a parameter value or selecting a range

Abstract

The application discloses a processing method, a processing apparatus, and an electronic device in the technical field of communication, addressing the problem that processing data information on an electronic device involves cumbersome steps and therefore reduces the efficiency with which the user operates the device. The method comprises the following steps: receiving a first input of a user to a first object; in response to the first input, acquiring feature information of the first object and removing a target feature of the first object to obtain a second object; wherein the feature information indicates N features of the first object, and the target feature is at least one of the N features.

Description

Processing method and device and electronic equipment
Technical Field
The application belongs to the technical field of communication, and particularly relates to a processing method, a processing device and electronic equipment.
Background
With the development of electronic technology, more and more users transmit data information with electronic devices. To improve transmission efficiency, a user can send data information with one tap by forwarding it or by copying and pasting it.
When forwarding or copying and pasting data information, however, part of the content may be unwanted, and the user cannot edit the information directly during these operations.
As a result, processing data information on an electronic device involves cumbersome steps, which reduces the efficiency with which the user operates the device.
Disclosure of Invention
Embodiments of the application aim to provide a processing method, a processing apparatus, and an electronic device that solve the problem that processing data information on an electronic device involves cumbersome steps and therefore reduces the efficiency with which the user operates the device.
In order to solve the technical problem, the present application is implemented as follows:
in a first aspect, an embodiment of the present application provides a processing method, where the method includes: receiving a first input of a user to a first object; in response to the first input, acquiring feature information of the first object and removing a target feature of the first object to obtain a second object; wherein the feature information indicates N features of the first object, and the target feature is at least one of the N features.
In a second aspect, an embodiment of the present application provides a processing apparatus comprising a receiving module, an obtaining module, and a generating module. The receiving module is configured to receive a first input of a user to a first object; the generating module is configured to, in response to the first input received by the receiving module, acquire feature information of the first object and remove a target feature of the first object to obtain a second object; wherein the feature information indicates N features of the first object, and the target feature is at least one of the N features.
In a third aspect, an embodiment of the present application provides an electronic device, which includes a processor, a memory, and a program or instructions stored on the memory and executable on the processor, and when executed by the processor, the program or instructions implement the steps of the method according to the first aspect.
In a fourth aspect, embodiments of the present application provide a readable storage medium, on which a program or instructions are stored, which when executed by a processor implement the steps of the method according to the first aspect.
In a fifth aspect, an embodiment of the present application provides a chip, where the chip includes a processor and a communication interface, where the communication interface is coupled to the processor, and the processor is configured to execute a program or instructions to implement the method according to the first aspect.
In the embodiments of the application, after receiving a first input of a user to a first object comprising N features, an electronic device may acquire feature information indicating the N features, remove at least one of them (i.e., a target feature), and generate a second object. Because the target feature is removed automatically as soon as the first object is acquired, the first object does not have to be processed again afterwards; this reduces the steps the user must perform, saves time, and improves the efficiency of using the electronic device.
Drawings
FIG. 1 is a schematic flow chart of a processing method provided in an embodiment of the present application;
fig. 2 is a schematic diagram of an interface applied by a processing method according to an embodiment of the present disclosure;
fig. 3 is a second schematic diagram of an interface applied by a processing method according to an embodiment of the present application;
fig. 4 is a third schematic view of an interface applied by a processing method according to an embodiment of the present application;
fig. 5 is a fourth schematic view of an interface applied by a processing method according to an embodiment of the present disclosure;
fig. 6 is a schematic structural diagram of a processing apparatus according to an embodiment of the present disclosure;
fig. 7 is a schematic structural diagram of an electronic device according to an embodiment of the present disclosure;
fig. 8 is a second schematic structural diagram of an electronic device according to an embodiment of the present application.
Detailed Description
The technical solutions in the embodiments of the present application will be clearly and completely described below with reference to the drawings in the embodiments of the present application, and it is obvious that the described embodiments are some, but not all, embodiments of the present application. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present application.
The terms "first", "second", and the like in the description and claims of the present application are used to distinguish between similar elements and not necessarily to describe a particular sequential or chronological order. It will be appreciated that data so used may be interchanged under appropriate circumstances, so that the embodiments of the application may be practiced in sequences other than those illustrated or described herein. The terms "first", "second", and the like are used in a generic sense and do not limit number; for example, a first object may be one object or more than one. In addition, "and/or" in the specification and claims means at least one of the connected objects, and the character "/" generally indicates an "or" relationship between the objects before and after it.
The processing method provided by the embodiment of the present application is described in detail below with reference to the accompanying drawings through specific embodiments and application scenarios thereof.
The processing method provided by the embodiments of the application can be applied to any scenario in which a local feature of an object is to be removed.
Consider such a scenario: user A needs to copy digital content from a table file 1 in a table-editing application on an electronic device 1. The file content of file 1 comprises digital content and format content; specifically, 3 numbers are distributed across 3 groups of merged cells, where the 3 numbers are the digital content and the manner in which the cells are merged is the format content. After selecting and copying the 3 numbers together with the region of cells in which they are distributed, user A opens a table file 2 and pastes the copied content there, so that the digital content is presented in file 2 in the same display mode as in file 1, i.e., with the 3 numbers distributed across 3 groups of merged cells. For user A, however, the format content did not need to be copied, and the user must now adjust the format in file 2 manually. Because everything in the selected region of file 1 is copied, the user has to edit the file a second time, which increases the operation steps and reduces the efficiency of using the electronic device.
In the embodiments of the application, by contrast, after the electronic device receives the user's input pasting the copied 3 numbers and their cells into table file 2, it acquires the feature information of the copied region, comprising the content information of the 3 numbers and the format information of the cells in which they are distributed. According to a default removal rule for format information configured in the electronic device, the format information is removed automatically, file content containing only the 3 numbers is generated, and only the 3 numbers are pasted into table file 2. In this way, the pasted content does not need to be reformatted afterwards, which reduces the user's processing steps, saves time, and improves the efficiency of using the electronic device.
An embodiment of the application provides a processing method which, as shown in fig. 1, is applied to an electronic device and includes the following steps 301 and 302:
step 301: the processing device receives a first input to a first object.
In the embodiments of the application, the first object is an arbitrary file object in the electronic device. The file object may be a file generated by any application in the electronic device, for example a file generated by a document-editing application; it may also be information in any application in the electronic device, for example voice information in a chat application.
Optionally, in this embodiment of the present application, the first input is used to duplicate the first object, that is, to copy the first object.
In this embodiment of the application, the first input may be a touch input, for example, a click input, a voice input, or an input of a special gesture, which is not limited in this embodiment of the application.
In the embodiments of the application, the first input may also serve to instruct the electronic device to extract the feature information in the first object.
In one example, the processing device may display a floating window containing an option to extract feature information in the current display interface after receiving the first input. The floating window may be used to ask the user whether feature information in the first object needs to be extracted.
Step 302: in response to the first input, the processing device obtains feature information of the first object, removes a target feature of the first object, and obtains a second object.
In the embodiments of the application, the feature information indicates N features of the first object, and the target feature is at least one of the N features.
In the embodiment of the present application, the first object includes N features, and each feature is composed of data information. Wherein each of the N features includes different data information.
In an example, when the first object is a file generated by an arbitrary application, the N features may be a format feature of the file and a content feature of the file, where data information constituting the format feature and the content feature are different.
In another example, when the first object is information in any application in an electronic device, the N features may be different N sub-features constituting the information, where data information constituting each sub-feature is different.
In the embodiments of the application, the processing device may preset the type of local information to be removed, so that the same type of local information is removed each time such an input is received. For example, when the first object is a file generated by any application, the N features may be the format feature and the content feature of the file; if the local information to be removed is preset as format information, the format information in the file is removed automatically each time the corresponding input is received for a file generated by any application.
In an example, the preset type of local information to be removed may be user-defined, or may be configured in the electronic device in advance, for example at the factory, i.e. a default type of local information to be removed is set in the electronic device.
In this embodiment, the second object is the object with the target feature removed by the processing device.
In this embodiment of the application, after obtaining the second object, the processing device may directly display the second object on a display screen of the electronic device, or may not display the second object on the display screen of the electronic device, which is not limited in this embodiment of the application.
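For illustration only, the following minimal sketch (in Python, with hypothetical names; the embodiments do not prescribe any particular implementation) models steps 301 and 302: the first object is represented as a set of named features, and a preset target feature type is removed to produce the second object.

```python
# Minimal sketch of steps 301-302 (hypothetical names, not the actual
# implementation of the embodiments). A first object is modeled as a
# mapping from feature names to data; a preset list of target feature
# types is removed to produce the second object.

def get_feature_info(first_object: dict) -> list:
    """Return the names of the N features that make up the object."""
    return list(first_object.keys())

def remove_target_features(first_object: dict, target_types: list) -> dict:
    """Produce the second object by dropping every preset target feature."""
    return {name: data for name, data in first_object.items()
            if name not in target_types}

# Example: a table file with a content feature and a format feature,
# where the removal type is preset to "format" (as in the scenario above).
first_object = {
    "content": ["3.14", "42", "7"],              # digital content
    "format":  {"merged_cells": [(0, 0, 0, 2)]}  # merging of cells
}
second_object = remove_target_features(first_object, target_types=["format"])
print(get_feature_info(first_object))  # ['content', 'format']  -> N = 2
print(second_object)                   # {'content': ['3.14', '42', '7']}
```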
In the processing method provided by the embodiments of the application, after receiving a first input of a user to a first object comprising N features, the processing device may acquire feature information indicating the N features, remove at least one of them (i.e., the target feature), and generate a second object. The target feature is thus removed automatically as soon as the first object is acquired, so that only the features the user actually needs are retained in the generated second object.
Optionally, in the embodiments of the application, the step 301 may include the following step A1:
step A1: the processing device receives a copy-paste input for the first object.
On the basis of step A1, after the second object is obtained in step 302, the processing method may further include the following step A2:
step A2: the processing device displays the second object at the target location.
Illustratively, the copy-paste input is for copy-pasting the first object.
For example, the copy-and-paste input may be a touch input, for example, a click input, a drag input, a long-press input, a voice input, or a special gesture input, which is not limited in this embodiment of the present application.
It is understood that, in actual operation, the copy-paste input may be discontinuous, for example with the copy operation and the paste operation completed by two separate click inputs, or continuous, for example with both operations completed by a single drag input.
Illustratively, the target position is the position at which the processing device receives the user's copy-paste input on the first object, i.e. the position at which the second object is to be displayed.
For example, the target position may be a default position preset by the electronic device, or may be a position customized by the user.
In an example, when the copy-paste input is a touch input, the target position may be the position on the display screen at which the touch input is finally released; it may also be a position selected while the touch input is being received, which is not limited in the embodiments of the application.
Example 1: taking the first object as a table document as an example, as shown in (a) of fig. 2, in the table document 31 shown in (a) of fig. 2, 4 numbers are distributed in different tables, and the table in which each number is located has format information of the merge cell. Assuming that the user needs to copy the 4 numbers into the form document 32, the user selects the 4 numbers in the form document by framing the form document to form a dashed box 32, at this time, an option window 33 is displayed in a floating manner in an interface of the form document 31, the user performs a click input on a "copy" option in the option window, then, as shown in (b) of fig. 2, the user opens the form document 34, selects the 4 numbers in the form document 34, the option window is displayed in a floating manner in an interface of the form document 34, the user selects a "paste" option in the option window (i.e., the copy-paste input), the electronic device automatically obtains the digital information of the 4 numbers in the form document and the format information corresponding to the 4 numbers (i.e., the feature information of the first object), and simultaneously automatically removes the format information, the 4 numbers (i.e., the second object) are displayed only in 4 tables in the table document 34.
In this way, after receiving the user's first input, the electronic device can automatically remove the local feature information and display the second object, from which that information has been removed, on its display screen. The user therefore needs no cumbersome extra operations when part of the information in the first object is unwanted, which reduces processing steps and improves the efficiency of using the electronic device.
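As a concrete rendering of steps A1 and A2 (again with hypothetical names, not code from the embodiments), the sketch below shows a paste handler that strips the format feature of a copied region and keeps only its content, as in Example 1.

```python
# Hypothetical sketch of steps A1/A2: a paste handler that strips the
# format feature before displaying the copied cells. Names illustrative.

class Clipboard:
    def __init__(self):
        self.payload = None  # holds the copied features after a copy

    def copy(self, numbers, cell_format):
        self.payload = {"numbers": numbers, "format": cell_format}

    def paste_without_format(self):
        # Acquire the feature information of the copied region, then
        # remove the format feature, keeping only the digital content.
        if self.payload is None:
            return None
        return {"numbers": self.payload["numbers"]}

clipboard = Clipboard()
clipboard.copy(numbers=[1, 2, 3, 4], cell_format={"merged": True})
second_object = clipboard.paste_without_format()
# Display the second object at the target position, e.g. the cells the
# user selected in the destination document.
print(second_object)  # {'numbers': [1, 2, 3, 4]} -- format removed
```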
Optionally, in the embodiments of the application, the step 301 may include the following step B1:
step B1: the processing device receives an input to the first object that is forwarded to the target user.
On the basis of step B1, after the second object is obtained in step 302, the processing method may further include the following step B2:
step B2: and the processing device sends the second object to the target user.
For example, the target user may correspond to another electronic device capable of exchanging information with the electronic device in which the processing apparatus is located.
For example, the first object may include text information, voice information, and multimedia information, which is not limited in this embodiment of the application.
In an example, where the first object includes a first voice, the processing method provided by the embodiments further includes the following step B3 after step B1 and before step B2:
step B3: the processing device identifies the first voice and divides the first voice into N voice sections.
Illustratively, the N features are the N speech segments.
For example, the first voice may be voice information, or a voice information portion in video information, or a voice information portion in other multimedia files, which is not limited in this embodiment of the application.
The processing device may analyze the start and end points of the N speech segments in the first voice using an endpoint detection technique; in particular, it may identify speech regions and silence regions. After the recognition is finished, the voice data of each speech region is extracted and recognized with a speech recognition technique to obtain corresponding result information. The result information may be text, i.e., the voice information can be converted into text and displayed to the user.
In an example, a preset volume may be stored in the electronic device in advance: sound at or below the preset volume corresponds to a silence region, and sound above it corresponds to a speech region. Example 2: as shown in fig. 3, which is a speech recognition diagram of a speech segment whose content is "happy new year", the sound level at the beginning and end of each word is below the preset volume, so those parts are silence regions and the remaining parts are the speech regions 41, 42, 43, and 44.
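The volume-threshold endpoint detection described above can be sketched as follows; the frame length, threshold value, and array representation are illustrative assumptions, not parameters given by the embodiments.

```python
import numpy as np

def detect_speech_regions(samples: np.ndarray, frame_len: int = 400,
                          preset_volume: float = 0.02):
    """Split audio into speech regions by a simple energy threshold.

    Frames with RMS energy at or below `preset_volume` are treated as
    silence; each contiguous run of louder frames is one speech region
    (start_frame, end_frame), mirroring regions 41-44 in Example 2.
    """
    n_frames = len(samples) // frame_len
    frames = samples[:n_frames * frame_len].reshape(n_frames, frame_len)
    rms = np.sqrt((frames ** 2).mean(axis=1))
    loud = rms > preset_volume

    regions, start = [], None
    for i, is_loud in enumerate(loud):
        if is_loud and start is None:
            start = i                      # a speech region begins
        elif not is_loud and start is not None:
            regions.append((start, i))     # silence ends the region
            start = None
    if start is not None:
        regions.append((start, n_frames))
    return regions

# Synthetic check: two bursts of noise separated by silence -> two regions.
rng = np.random.default_rng(0)
audio = np.concatenate([rng.normal(0, 0.1, 4000), np.zeros(4000),
                        rng.normal(0, 0.1, 4000)])
print(detect_speech_regions(audio))  # [(0, 10), (20, 30)]
```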
Illustratively, the N speech segments are speech segments that are decomposed by the processing device in a predetermined manner.
In one example, the processing device may decompose the voice information obtained through endpoint detection and speech recognition according to semantic grammar rules. For example, after the electronic device recognizes that the first voice includes the voice information "Xiao Ming goes to class every day", it may decompose that information into "Xiao Ming", "every day", and "goes to class", corresponding respectively to the subject, the adverbial, and the predicate; that is, the electronic device may divide the voice into 3 voice segments accordingly and present them to the user on the display interface.
It is understood that the semantic grammar rules described above are pre-stored in the electronic device.
In another example, the processing device may also recognize the N different speech segments using only endpoint detection analysis and speech recognition, without semantic decomposition.
For example, the first input may further trigger the processing device to recognize the first voice and extract N voice segments from it.
Example 2: taking the first object as the voice information as an example, in the chat application, when the user a wants to use the electronic device 1 to forward the voice information 1 "xiaowang" and celebrate your happy new year "to the electronic device 2 of the user B, as shown in (a) in fig. 4, after the user clicks and inputs a forwarding option in an option bar 42 displayed beside the voice information 1 on the chat interface 41 (i.e., the first input), the electronic device 1 analyzes the voice information 1 by using an endpoint detection technique and a voice recognition technique to extract 8 voice segments in the voice information 1, which are" xiao "," wang "," you "," new "," yearly "," happy ", and" happy ", respectively, and as shown in (B) in fig. 4, characters corresponding to the 8 voice segments are displayed on the user identification interface 43.
Therefore, when the first object is voice information, after receiving the user's first input the electronic device can extract a plurality of voice segments using speech recognition and endpoint detection, decomposing the voice information into a plurality of features so that local information can be removed subsequently.
Optionally, in the embodiments of the application, the step 302 may include the following steps C1 to C3:
step C1: in response to the first input, the processing device obtains feature information of the first object.
Step C2: the processing device displays N feature marks corresponding to the feature information.
Step C3: and the processing device receives a second input of a target feature identifier in the N feature identifiers, and removes the target feature corresponding to the target feature identifier in the first object to obtain a second object.
For example, the above feature information may refer to the foregoing description, and will not be described herein again.
Illustratively, the feature identifiers are used to indicate different features of the first object.
For example, the feature identifier may be an identifier in the form of a control, which may be used to select different features.
For example, the feature identifier may be a text identifier, a picture identifier, or a multimedia identifier, which is not limited in this embodiment of the present application.
For example, the target feature identifier may identify a feature that the user wants to remove.
In this embodiment of the application, the second input may be a touch input, for example, a click input, a voice input, or an input of a special gesture, which is not limited in this embodiment of the application.
In this embodiment, the second input is used to select feature information that needs to be retained in the first object, or to select feature information that needs to be removed from the first object.
In an example, after receiving the first input, the processing device may acquire the feature information of the first object, display a floating window, and show in the floating window the feature identifiers corresponding to that information. Specifically, the identifiers may be displayed in the form of the N features, and the user may perform a second input on some or all of them so as to select the feature information to be retained in the first object, or the feature information to be removed from it.
In one example, when the second input selects the feature information to be retained in the first object, the target feature is a feature not selected by the second input; when the second input selects the feature information to be removed from the first object, the target feature is a feature selected by the second input.
Example 4: in combination with Example 3, when the user clicks the characters "小" and "王" among those corresponding to the 8 voice segments, the electronic device 1 removes the voice segments corresponding to "小" and "王" and generates new voice information whose corresponding text is "祝你新年快乐" ("wishing you a happy new year").
In this way, the electronic device can display the N features of the first object as identifiers, making it convenient for the user to select among them and generate the second object that the user requires.
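Steps C1 to C3 can be pictured with the following sketch (hypothetical names throughout): one identifier is displayed per feature, and the second input selects the identifiers of the features to remove or to retain, as in Example 4.

```python
# Illustrative sketch of steps C1-C3 (all names hypothetical). One
# identifier is displayed per feature; the second input selects which
# identifiers mark features to remove (or, symmetrically, to retain).

def display_feature_identifiers(features: dict) -> list:
    identifiers = list(features.keys())
    for i, ident in enumerate(identifiers, 1):
        print(f"[{i}] {ident}")  # e.g. rendered as tappable controls
    return identifiers

def remove_by_identifier(features: dict, selected: set,
                         selection_means_remove: bool = True) -> dict:
    """Apply the second input: selected identifiers are removed when
    selection_means_remove is True, otherwise they are the ones kept."""
    if selection_means_remove:
        return {k: v for k, v in features.items() if k not in selected}
    return {k: v for k, v in features.items() if k in selected}

# Example 4 in this sketch: dropping the segments for "xiao" and "wang"
# leaves the segments for "wishing you a happy new year".
segments = {c: f"<audio:{c}>" for c in
            ["xiao", "wang", "zhu", "ni", "xin", "nian", "kuai", "le"]}
display_feature_identifiers(segments)
second_object = remove_by_identifier(segments, selected={"xiao", "wang"})
print(list(second_object))  # ['zhu', 'ni', 'xin', 'nian', 'kuai', 'le']
```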
Optionally, in the embodiments of the application, removing the target feature of the first object to obtain the second object in step 302 may include the following step D:
step D: the processing device removes the first user information contained in the first object to obtain a second object.
For example, the first user information may be information indicating electronic devices other than the one corresponding to the processing apparatus, for example the user names, device information, and contact information of those other electronic devices.
For example, the processing device may identify the first user information in different manners according to different forms of the first object, and further remove the first user information included in the first object to obtain the second object.
In one example, when the first object is text information, the processing device may identify the first user information in the first object according to a semantic grammar rule preset in the electronic device in advance through a semantic identification technology, and obtain a second object with the first user information removed.
In one example, when the first object is voice information, the processing device may identify the first user information in the first object by using a voice recognition technology in combination with a semantic grammar rule preset in the electronic device in advance, and obtain a second object from which the first user information is removed.
In one example, when the first object is video information, the processing device may identify the first user information in the first object by using a speech recognition technology in combination with a semantic grammar rule preset in the electronic device in advance, and obtain a second object from which the first user information is removed. For example, video frames of the first object containing the first user information content are removed.
In this way, when the processing device recognizes that the first object contains user information, it can remove that information automatically, making further sending or other operations convenient, saving the user's processing steps, and improving the efficiency of using the electronic device.
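A toy rendering of step D for a text-type first object follows, assuming the preset semantic grammar rules reduce to pattern matching; the regular expressions and names are illustrative only, not rules given by the embodiments.

```python
import re

# Toy sketch of step D for a text first object. Regular expressions stand
# in for the preset semantic grammar rules; the patterns are illustrative.
USER_INFO_PATTERNS = [
    r"\b\+?\d{7,15}\b",           # contact numbers
    r"\b[\w.]+@[\w.]+\.\w+\b",    # e-mail style contact information
    r"(?m)^(?:From|To):\s.*$",    # user-name header lines
]

def remove_first_user_info(text: str) -> str:
    """Produce the second object by scrubbing recognized user information."""
    for pattern in USER_INFO_PATTERNS:
        text = re.sub(pattern, "", text)
    return " ".join(text.split())  # tidy leftover whitespace

first_object = "From: Xiao Wang\nCall 13800000000, happy new year!"
print(remove_first_user_info(first_object))  # 'Call , happy new year!'
```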
Optionally, in the embodiments of the application, removing the target feature of the first object to obtain the second object in step D may include the following step E:
step E: the processing device removes the first user information contained in the first object, and adds the content containing the second user information to obtain a second object.
For example, the form of the content containing the second user information may match the form of the first user information in the first object: if the first user information of the first object is voice-type content, the second user information is also provided as voice-type content; if the first user information is text-type content, the second user information is provided as text-type content as well.
For example, where the first object is of a voice type, i.e. the first object is a target voice, the first user information of the first object is a target voice segment, and the content containing the second user information is a first voice segment, step E may include the following steps Ea and Eb:
step Ea: the processing means generates a first speech segment.
Step Eb: the processing device replaces the target voice segment in the target voice with the first voice segment to generate a second voice.
Illustratively, the first speech segment includes the first user information.
Illustratively, the first voice segment is a voice segment available on the electronic device; it may be pre-stored in the electronic device or may be user-defined.
Illustratively, the target feature is a feature indicated by the target feature identifier.
For example, the first speech segment may be automatically generated by the electronic device, or may be manually recorded.
In an example, the first speech segment may be a speech segment automatically generated by the electronic device according to the first user information that the electronic device recognizes itself.
Further, when the second user information is picture information, the electronic device may recognize the text information in the picture by using Optical Character Recognition (OCR), and then automatically convert the recognized text information into speech by using a voice conversion technology.
Furthermore, when the second user information is already character information, the electronic device can use the character information directly and automatically convert it into speech by using the voice conversion technology.
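The picture-to-speech path in the two paragraphs above could be prototyped as follows; pytesseract (OCR) and pyttsx3 (speech conversion) are stand-in libraries chosen for illustration, not tools named by the embodiments.

```python
# Hypothetical prototype of the OCR + voice conversion path described
# above. pytesseract and pyttsx3 are illustrative stand-ins only.
from PIL import Image
import pytesseract   # OCR: picture information -> character information
import pyttsx3       # voice conversion: characters -> speech

def user_info_to_speech(source, out_path="first_segment.wav"):
    if isinstance(source, str):
        text = source                               # already character info
    else:
        text = pytesseract.image_to_string(source)  # OCR on picture info
    engine = pyttsx3.init()
    engine.save_to_file(text, out_path)             # synthesize the segment
    engine.runAndWait()
    return out_path

# e.g. user_info_to_speech(Image.open("user_b_name.png"))
# or   user_info_to_speech("Xiao Li")
```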
For example, after the first voice segment is obtained, the processing device may use a speech synthesis technique to replace the target voice segment with the first voice segment.
Illustratively, prior to step Ea above, the processing means may display an option to ask the user if the first speech segment needs to be replaced.
In one example, the option of asking the user whether the first speech segment needs to be replaced may be displayed in a floating window, and the option may be displayed in the form of a control.
For example, the second user corresponding to the second user information may be any user with whom the electronic device can exchange information.
Illustratively, the target speech segment is a speech segment matching the user information of the target user from among the N speech segments.
For example, the target voice segment may be recognized automatically by the electronic device from the content of the first voice segment in combination with semantic grammar rules. For example, when the content of the first voice segment is a noun, the electronic device can search for the corresponding noun in the speech content of the target voice and replace it accordingly.
It is understood that the replacement process may be implemented by the electronic device first removing the target voice segment and then adding the first voice segment at that segment's position in the target voice.
Example 3: with reference to the example 2, as shown in (a) of fig. 5, after the user clicks and inputs the user B to be forwarded (i.e., the second user corresponding to the second user information), as shown in (B) of fig. 5, the user clicks and inputs the option 51 of "replacing the voice in the voice information", so that the electronic device automatically recognizes the text information of the user B, converts the user B into the voice by using the voice conversion technology, automatically synthesizes the voice into the first voice segment by using the voice synthesis technology, generates the second voice segment, and sends the second voice segment to the electronic device 2 of the user B.
In this way, the processing device can remove a local feature by replacing a voice segment in the voice information, directly updating the first voice into the second voice the user requires and sending it to the target user; this saves the user the time of re-editing the voice information and improves the efficiency of sending voice information with the electronic device.
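Steps Ea and Eb can be viewed as a splice over the segment boundaries found by endpoint detection: the target voice segment is cut out and the generated first voice segment is inserted at its position. The sketch below makes this concrete under the assumption that the voices are sample arrays; a real implementation would use a speech synthesis engine.

```python
import numpy as np

def replace_speech_segment(target_voice: np.ndarray,
                           target_span: tuple,
                           first_segment: np.ndarray) -> np.ndarray:
    """Steps Ea/Eb as a splice: remove the target voice segment and add
    the first voice segment at its position to generate the second voice.

    target_span is the (start, end) sample range of the target segment,
    as found by endpoint detection; all names here are illustrative.
    """
    start, end = target_span
    return np.concatenate([target_voice[:start],  # speech before the target
                           first_segment,         # replacement segment
                           target_voice[end:]])   # speech after the target

# Toy usage: replace samples 100..200 of a voice with a new segment.
voice = np.zeros(1000)
new_segment = np.ones(150)
second_voice = replace_speech_segment(voice, (100, 200), new_segment)
print(len(second_voice))  # 1050 = 1000 - 100 + 150
```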
In the processing method provided by the embodiments of the present application, the execution body may be a processing apparatus, or a control module in the processing apparatus for executing the processing method. In the embodiments of the application, a processing apparatus executing the processing method is taken as an example to describe the processing apparatus provided by the embodiments.
Fig. 6 is a schematic diagram of a possible structure of a processing apparatus provided in an embodiment of the present application. As shown in fig. 6, the processing apparatus 600 includes a receiving module 601 and a generating module 602. The receiving module 601 is configured to receive a first input to a first object; the generating module 602 is configured to, in response to the first input received by the receiving module 601, acquire feature information of the first object and remove a target feature of the first object to obtain a second object; wherein the feature information indicates N features of the first object, and the target feature is at least one of the N features.
In the processing apparatus provided by the embodiments of the application, after receiving a first input of a user to a first object comprising N features, the apparatus may acquire feature information indicating the N features, remove at least one of them (i.e., the target feature), and generate a second object. The target feature is thus removed automatically as soon as the first object is acquired, so that only the features the user actually needs are retained in the generated second object.
Optionally, in the embodiments of the application, the apparatus 600 further includes a display module 603; the receiving module 601 is specifically configured to receive a copy-paste input on a first object, and the display module 603 is configured to display, at a target position, the second object generated by the generating module 602.
Optionally, in the embodiments of the application, the apparatus 600 further includes a sending module 604; the receiving module 601 is specifically configured to receive an input for forwarding the first object to a target user, and the sending module 604 is configured to send the second object generated by the generating module 602 to the target user.
Optionally, in the embodiments of the application, the apparatus 600 further includes a display module 603 and an obtaining module 605; the obtaining module 605 is configured to acquire the feature information of the first object in response to the first input received by the receiving module 601; the display module 603 is further configured to display N feature identifiers corresponding to the feature information acquired by the obtaining module 605; and the generating module 602 is specifically configured to receive a second input on a target feature identifier among the N feature identifiers and remove the target feature corresponding to that identifier from the first object to obtain a second object.
Optionally, in this embodiment of the application, the generating module 602 is specifically configured to remove the first user information included in the first object to obtain a second object.
Optionally, in this embodiment of the application, the generating module 602 is specifically configured to remove first user information included in the first object, and add content including second user information to obtain a second object.
The processing device in the embodiment of the present application may be a device, and may also be a component, an integrated circuit, or a chip in a terminal. The device can be mobile electronic equipment or non-mobile electronic equipment. By way of example, the mobile electronic device may be a mobile phone, a tablet computer, a notebook computer, a palm top computer, a vehicle-mounted electronic device, a wearable device, an ultra-mobile personal computer (UMPC), a netbook or a Personal Digital Assistant (PDA), and the like, and the non-mobile electronic device may be a server, a Network Attached Storage (NAS), a Personal Computer (PC), a Television (TV), a teller machine or a self-service machine, and the like, and the embodiments of the present application are not particularly limited.
The processing device in the embodiments of the present application may be a device having an operating system. The operating system may be an Android operating system, an iOS operating system, or another possible operating system, which is not specifically limited in the embodiments of the application.
The processing device provided in the embodiment of the present application can implement each process implemented by the method embodiments of fig. 1 to fig. 5, and is not described here again to avoid repetition.
It should be noted that, as shown in fig. 6, modules that are necessarily included in the processing apparatus 600 are illustrated by solid line boxes, such as a receiving module 601; modules that may or may not be included in the processing device 600 are illustrated with dashed boxes, such as the display module 603.
Optionally, as shown in fig. 7, an electronic device 700 is further provided in this embodiment of the present application, and includes a processor 701, a memory 702, and a program or an instruction stored in the memory 702 and executable on the processor 701, where the program or the instruction is executed by the processor 701 to implement each process of the processing method embodiment, and can achieve the same technical effect, and no further description is provided here to avoid repetition.
It should be noted that the electronic device in the embodiment of the present application includes the mobile electronic device and the non-mobile electronic device described above.
Fig. 8 is a schematic diagram of a hardware structure of an electronic device implementing an embodiment of the present application.
The electronic device 100 includes, but is not limited to: a radio frequency unit 101, a network module 102, an audio output unit 103, an input unit 104, a sensor 105, a display unit 106, a user input unit 107, an interface unit 108, a memory 109, and a processor 110. The user input unit 107 includes a touch panel 1071 and other input devices 1072; the display unit 106 includes a display panel 1061; the input unit 104 includes a graphics processor 1041 and a microphone 1042; and the memory 109 may be used to store software programs (e.g., an operating system and application programs needed for at least one function) and various data.
Those skilled in the art will appreciate that the electronic device 100 may further comprise a power source (e.g., a battery) for supplying power to various components, and the power source may be logically connected to the processor 110 through a power management system, so as to implement functions of managing charging, discharging, and power consumption through the power management system. The electronic device structure shown in fig. 8 does not constitute a limitation of the electronic device, and the electronic device may include more or less components than those shown, or combine some components, or arrange different components, and thus, the description is omitted here.
The user input unit 107 is configured to receive a first input to a first object; the processor 110 is configured to, in response to the first input received by the user input unit 107, acquire feature information of the first object and remove a target feature of the first object to obtain a second object; wherein the feature information indicates N features of the first object, and the target feature is at least one of the N features.
According to the electronic device provided by the embodiments of the application, after receiving a first input of a user to a first object comprising N features, the electronic device may acquire feature information indicating the N features, remove at least one of them (i.e., the target feature), and generate a second object. The target feature is thus removed automatically once the first object is acquired, so that only the features the user actually needs are retained in the generated second object.
Optionally, the user input unit 107 is specifically configured to receive copy and paste input for the first object; the display unit 106 is configured to display the second object at a target position.
Optionally, the user input unit 107 is specifically configured to receive an input for forwarding the first object to a target user; the radio frequency unit 101 is configured to send the second object processed by the processor 110 to the target user.
Optionally, the processor 110 is specifically configured to, in response to the first input received by the user input unit 107, obtain feature information of the first object; the display unit 106 is further configured to display N feature identifiers corresponding to the feature information; the processor 110 is specifically configured to receive a second input of a target feature identifier of the N feature identifiers, and remove a target feature corresponding to the target feature identifier from the first object to obtain a second object.
Optionally, the processor 110 is specifically configured to remove the first user information included in the first object to obtain a second object.
Optionally, the processor 110 is specifically configured to remove first user information included in the first object, and add content including second user information to obtain a second object.
It should be understood that, in the embodiment of the present application, the input Unit 104 may include a Graphics Processing Unit (GPU) 1041 and a microphone 1042, and the Graphics Processing Unit 1041 processes image data of a still picture or a video obtained by an image capturing device (such as a camera) in a video capturing mode or an image capturing mode. The display unit 106 may include a display panel 1061, and the display panel 1061 may be configured in the form of a liquid crystal display, an organic light emitting diode, or the like. The user input unit 107 includes a touch panel 1071 and other input devices 1072. The touch panel 1071 is also referred to as a touch screen. The touch panel 1071 may include two parts of a touch detection device and a touch controller. Other input devices 1072 may include, but are not limited to, a physical keyboard, function keys (e.g., volume control keys, switch keys, etc.), a trackball, a mouse, and a joystick, which are not described in detail herein. The memory 109 may be used to store software programs as well as various data including, but not limited to, application programs and an operating system. The processor 110 may integrate an application processor, which primarily handles operating systems, user interfaces, applications, etc., and a modem processor, which primarily handles wireless communications. It will be appreciated that the modem processor described above may not be integrated into the processor 110.
The embodiment of the present application further provides a readable storage medium, where a program or an instruction is stored on the readable storage medium, and when the program or the instruction is executed by a processor, the program or the instruction implements each process of the processing method embodiment, and can achieve the same technical effect, and in order to avoid repetition, details are not repeated here.
The processor is the processor in the electronic device described in the above embodiment. The readable storage medium includes a computer readable storage medium, such as a Read-Only Memory (ROM), a Random Access Memory (RAM), a magnetic disk or an optical disk, and so on.
The embodiment of the present application further provides a chip, where the chip includes a processor and a communication interface, the communication interface is coupled to the processor, and the processor is configured to execute a program or an instruction to implement each process of the processing method embodiment, and can achieve the same technical effect, and the details are not repeated here to avoid repetition.
It should be understood that the chips mentioned in the embodiments of the present application may also be referred to as system-on-chip, system-on-chip or system-on-chip, etc.
It should be noted that, in this document, the terms "comprises", "comprising", or any other variation thereof are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements includes not only those elements but may also include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element preceded by "comprising a ..." does not exclude the presence of other like elements in the process, method, article, or apparatus that comprises the element. Further, it should be noted that the scope of the methods and apparatus of the embodiments of the present application is not limited to performing the functions in the order illustrated or discussed, but may include performing the functions in a substantially simultaneous manner or in a reverse order depending on the functions involved; e.g., the methods described may be performed in an order different from that described, and various steps may be added, omitted, or combined. In addition, features described with reference to certain examples may be combined in other examples.
Through the above description of the embodiments, those skilled in the art will clearly understand that the method of the above embodiments can be implemented by software plus a necessary general hardware platform, and certainly can also be implemented by hardware, but in many cases, the former is a better implementation manner. Based on such understanding, the technical solutions of the present application may be embodied in the form of a software product, which is stored in a storage medium (such as ROM/RAM, magnetic disk, optical disk) and includes instructions for enabling a terminal (such as a mobile phone, a computer, a server, an air conditioner, or a network device) to execute the method according to the embodiments of the present application.
While the present embodiments have been described with reference to the accompanying drawings, it is to be understood that the invention is not limited to the precise embodiments described above, which are meant to be illustrative and not restrictive, and that various changes may be made therein by those skilled in the art without departing from the spirit and scope of the invention as defined by the appended claims.

Claims (14)

1. A method of processing, the method comprising:
receiving a first input to a first object;
in response to the first input, acquiring feature information of the first object, and removing a target feature of the first object to obtain a second object;
wherein the feature information is used to indicate N features of the first object, and the target feature is at least one of the N features.
2. The method of claim 1, wherein receiving the first input to the first object comprises:
receiving a copy-and-paste input for a first object;
after obtaining the second object, the method further comprises:
displaying the second object at a target location.
3. The method of claim 1, wherein receiving the first input to the first object comprises:
receiving an input to the first object forwarded to the target user;
after obtaining the second object, the method further comprises:
and sending the second object to the target user.
4. The method of claim 1, wherein the obtaining feature information of the first object and removing a target feature of the first object in response to the first input to obtain a second object comprises:
in response to the first input, obtaining feature information of the first object;
displaying N feature identifiers corresponding to the feature information;
and receiving a second input of a target feature identifier in the N feature identifiers, and removing the target feature corresponding to the target feature identifier in the first object to obtain a second object.
5. The method of claim 1, wherein removing the target feature of the first object to obtain a second object comprises:
and removing the first user information contained in the first object to obtain a second object.
6. The method of claim 5, wherein removing the first user information included in the first object to obtain a second object comprises:
and removing the first user information contained in the first object, and adding the content containing the second user information to obtain a second object.
7. A processing apparatus, characterized in that the apparatus comprises: the device comprises a receiving module, an obtaining module and a generating module;
the receiving module is used for receiving a first input to a first object;
the generating module is used for responding to the first input received by the receiving module, acquiring the characteristic information of the first object, and removing the target characteristic of the first object to obtain a second object;
wherein the feature information is used to indicate N features of the first object, and the target feature is at least one of the N features.
8. The apparatus of claim 7, further comprising a display module;
the receiving module is specifically used for receiving copy and paste input of the first object;
the display module is configured to display the second object generated by the generation module at a target position.
9. The apparatus of claim 7, further comprising a transmitting module;
the receiving module is specifically configured to receive an input of the first object forwarded to the target user;
the sending module is configured to send the second object generated by the generating module to the target user.
10. The apparatus of claim 7, further comprising an acquiring module and a display module;
wherein the acquiring module is configured to acquire the feature information of the first object in response to the first input received by the receiving module;
the display module is configured to display N feature identifiers corresponding to the feature information acquired by the acquiring module;
and the generating module is specifically configured to receive a second input on a target feature identifier of the N feature identifiers, and to remove, from the first object, the target feature corresponding to the target feature identifier to obtain the second object.
11. The apparatus of claim 7,
wherein the generating module is specifically configured to remove first user information contained in the first object to obtain the second object.
12. The apparatus of claim 11,
wherein the generating module is specifically configured to remove the first user information contained in the first object and add content containing second user information to obtain the second object.
13. An electronic device, comprising a processor, a memory, and a program or instructions stored in the memory and executable on the processor, wherein the program or instructions, when executed by the processor, implement the steps of the processing method according to any one of claims 1 to 6.
14. A readable storage medium, characterized in that it stores thereon a program or instructions which, when executed by a processor, implement the steps of the processing method according to any one of claims 1 to 6.
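For orientation only, the following is a minimal sketch of how the method of claims 1 to 6 above might look in code. It assumes a chat-message scenario of the kind the embodiments describe; every name in it (Message, acquire_feature_info, remove_features, swap_user_info) is a hypothetical illustration rather than terminology from the patent, and the sketch is not the claimed implementation.

```python
from dataclasses import dataclass, replace


@dataclass
class Message:
    """A hypothetical 'first object': a message whose metadata are its features."""
    content: str
    sender: str       # first user information, a removable feature
    timestamp: str    # another removable feature
    quote: str = ""   # e.g. quoted reply context


def acquire_feature_info(obj: Message) -> dict:
    """Acquire the feature information: the N features of the first object."""
    return {"sender": obj.sender, "timestamp": obj.timestamp, "quote": obj.quote}


def remove_features(obj: Message, targets: list) -> Message:
    """Remove the target feature(s) to obtain the second object (claim 1)."""
    features = acquire_feature_info(obj)
    second = replace(obj)  # work on a copy; the first object is left untouched
    for name in targets:
        if name in features:
            setattr(second, name, "")  # blank out the selected feature
    return second


def swap_user_info(obj: Message, second_user: str) -> Message:
    """Claims 5-6: remove the first user's information, add the second user's."""
    second = remove_features(obj, ["sender"])
    second.sender = second_user
    return second


# Claim 2 flavor: a copy-and-paste input yields a cleaned object to paste/display.
original = Message("Meet at 10:00", sender="Alice", timestamp="2020-12-25 09:00")
pasted = remove_features(original, ["sender", "timestamp"])

# Claim 3 flavor: a forward input yields an object re-attributed to the target user.
forwarded = swap_user_info(original, second_user="Bob")
print(pasted, forwarded, sep="\n")
```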
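A matching sketch of the apparatus of claims 7 to 12, with the receiving, acquiring, display, and generating modules modeled as plain classes and the user's second input reduced to a callback. Again, all class and function names are hypothetical, and the first object is assumed, for illustration, to be a small dictionary of content plus features.

```python
from typing import Callable, Dict, List


class ReceivingModule:
    """Receives the first input carrying the first object."""
    def receive(self, first_object: Dict[str, str]) -> Dict[str, str]:
        return dict(first_object)  # work on a copy


class AcquiringModule:
    """Acquires the feature information: the N features of the first object."""
    def acquire(self, obj: Dict[str, str]) -> List[str]:
        return [k for k in obj if k != "content"]  # the content itself is not a feature here


class DisplayModule:
    """Displays the N feature identifiers and, at the end, the second object."""
    def show_identifiers(self, identifiers: List[str]) -> None:
        for i, name in enumerate(identifiers):
            print(f"[{i}] {name}")

    def show_object(self, obj: Dict[str, str]) -> None:
        print("second object:", obj)


class GeneratingModule:
    """Removes the target features selected by the second input."""
    def generate(self, obj: Dict[str, str], targets: List[str]) -> Dict[str, str]:
        return {k: v for k, v in obj.items() if k not in targets}


class ProcessingApparatus:
    """Wires the modules together, mirroring the structure of claims 7-12."""
    def __init__(self, second_input: Callable[[List[str]], List[str]]):
        self.receiving = ReceivingModule()
        self.acquiring = AcquiringModule()
        self.display = DisplayModule()
        self.generating = GeneratingModule()
        self.second_input = second_input  # stands in for the user's selection

    def process(self, first_object: Dict[str, str]) -> Dict[str, str]:
        obj = self.receiving.receive(first_object)
        identifiers = self.acquiring.acquire(obj)
        self.display.show_identifiers(identifiers)
        targets = self.second_input(identifiers)   # second input (claims 4 and 10)
        second_object = self.generating.generate(obj, targets)
        self.display.show_object(second_object)
        return second_object


# Example: the "user" selects the sender and timestamp identifiers for removal.
apparatus = ProcessingApparatus(second_input=lambda ids: ["sender", "timestamp"])
apparatus.process({"content": "Meet at 10:00", "sender": "Alice", "timestamp": "2020-12-25"})
```

Modeling the second input as a callback keeps the module wiring testable without a UI; on a real device it would correspond to a touch selection on the displayed identifiers.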
CN202011563380.1A 2020-12-25 2020-12-25 Processing method and device and electronic equipment Pending CN112578965A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202011563380.1A CN112578965A (en) 2020-12-25 2020-12-25 Processing method and device and electronic equipment

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202011563380.1A CN112578965A (en) 2020-12-25 2020-12-25 Processing method and device and electronic equipment

Publications (1)

Publication Number Publication Date
CN112578965A (en) 2021-03-30

Family

ID=75140531

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202011563380.1A Pending CN112578965A (en) 2020-12-25 2020-12-25 Processing method and device and electronic equipment

Country Status (1)

Country Link
CN (1) CN112578965A (en)


Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103051518A (en) * 2012-12-14 2013-04-17 Dongguan Yulong Communication Technology Co., Ltd. Information forwarding method and communication terminal thereof
US20150143255A1 (en) * 2013-11-15 2015-05-21 Motorola Mobility Llc Name Composition Assistance in Messaging Applications
CN104881279A (en) * 2015-05-12 2015-09-02 Guangdong Oppo Mobile Telecommunications Corp., Ltd. Mass messaging method and device
CN108173745A (en) * 2017-12-26 2018-06-15 Fujian Zhongjin Online Information Technology Co., Ltd. Forwarding method and device based on instant messaging
CN108809809A (en) * 2018-06-08 2018-11-13 Tencent Technology (Wuhan) Co., Ltd. Message sending method, computer equipment and storage medium
CN110399232A (en) * 2019-06-21 2019-11-01 Ping An Puhui Enterprise Management Co., Ltd. Paste processing method, device, equipment and computer-readable storage medium
CN112035878A (en) * 2020-08-31 2020-12-04 Vivo Mobile Communication (Hangzhou) Co., Ltd. Information display method and device and electronic equipment

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113079086A (en) * 2021-04-07 2021-07-06 Vivo Mobile Communication Co., Ltd. Message transmitting method, message transmitting device, electronic device, and storage medium
CN113079086B (en) * 2021-04-07 2023-06-27 Vivo Mobile Communication Co., Ltd. Message transmission method, message transmission device, electronic device, and storage medium

Similar Documents

Publication Publication Date Title
CN109782997B (en) Data processing method, device and storage medium
CN102984050A (en) Method, client and system for searching voices in instant messaging
CN105069013A (en) Control method and device for providing input interface in search interface
CN113342435A (en) Expression processing method and device, computer equipment and storage medium
CN112181253A (en) Information display method and device and electronic equipment
CN112383662B (en) Information display method and device and electronic equipment
CN112578965A (en) Processing method and device and electronic equipment
CN109246299A (en) Rapid Speech recording method, device, mobile terminal and computer storage medium
CN105491237A (en) Contact information display method and terminal
CN110992958B (en) Content recording method, content recording apparatus, electronic device, and storage medium
CN113055529B (en) Recording control method and recording control device
WO2023045922A1 (en) Information input method and apparatus
WO2022213986A1 (en) Voice recognition method and apparatus, electronic device, and readable storage medium
CN113157966B (en) Display method and device and electronic equipment
CN112653919B (en) Subtitle adding method and device
CN113079086B (en) Message transmission method, message transmission device, electronic device, and storage medium
CN113238686B (en) Document processing method and device and electronic equipment
CN112288835A (en) Image text extraction method and device and electronic equipment
CN114024929A (en) Voice message processing method and device, electronic equipment and medium
CN113778595A (en) Document generation method and device and electronic equipment
CN113593614A (en) Image processing method and device
CN112417095A (en) Voice message processing method and device
CN113573096A (en) Video processing method, video processing device, electronic equipment and medium
CN113138676A (en) Expression symbol display method and device
CN112764551A (en) Vocabulary display method and device and electronic equipment

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication

Application publication date: 20210330