CN109254712B - Information processing method and electronic equipment - Google Patents

Information processing method and electronic equipment

Info

Publication number
CN109254712B
CN109254712B
Authority
CN
China
Prior art keywords
picture
copied
interactive interface
user
instruction
Prior art date
Legal status
Active
Application number
CN201811158298.3A
Other languages
Chinese (zh)
Other versions
CN109254712A (en)
Inventor
薛朋岳
Current Assignee
Lenovo Beijing Ltd
Original Assignee
Lenovo Beijing Ltd
Priority date
Filing date
Publication date
Application filed by Lenovo Beijing Ltd filed Critical Lenovo Beijing Ltd
Priority to CN201811158298.3A priority Critical patent/CN109254712B/en
Publication of CN109254712A publication Critical patent/CN109254712A/en
Priority to GB1912803.2A priority patent/GB2577989B/en
Priority to US16/584,365 priority patent/US11491396B2/en
Priority to DE102019125937.1A priority patent/DE102019125937A1/en
Application granted granted Critical
Publication of CN109254712B publication Critical patent/CN109254712B/en
Legal status: Active

Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00 - Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01 - Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/048 - Interaction techniques based on graphical user interfaces [GUI]
    • G06F3/0484 - Interaction techniques based on graphical user interfaces [GUI] for the control of specific functions or operations, e.g. selecting or manipulating an object, an image or a displayed text element, setting a parameter value or selecting a range
    • G06F3/04845 - Interaction techniques based on graphical user interfaces [GUI] for the control of specific functions or operations, e.g. selecting or manipulating an object, an image or a displayed text element, setting a parameter value or selecting a range for image manipulation, e.g. dragging, rotation, expansion or change of colour
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00 - Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01 - Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/048 - Interaction techniques based on graphical user interfaces [GUI]
    • G06F3/0481 - Interaction techniques based on graphical user interfaces [GUI] based on specific properties of the displayed interaction object or a metaphor-based environment, e.g. interaction with desktop elements like windows or icons, or assisted by a cursor's changing behaviour or appearance
    • G06F3/0482 - Interaction with lists of selectable items, e.g. menus

Abstract

The embodiment of the application discloses an information processing method and electronic equipment. When a user operates a picture to generate a processing instruction, the picture is processed to obtain at least one object of the picture, and the user selects a target object from the at least one object to operate on, so that the user does not have to manually extract the object from the picture, which simplifies the user operation.

Description

Information processing method and electronic equipment
Technical Field
The present application relates to the field of information processing technologies, and in particular, to an information processing method and an electronic device.
Background
When a user operates an electronic device (e.g., a computer, a mobile communication terminal, etc.), some pictures may be used, and in some cases only part of the content of a picture is needed. In that case, the user has to import the picture into dedicated picture processing software, manually cut out the needed part of the content with that software, and then copy and paste the cut-out content to the desired position. Moreover, when the required picture content differs, the user has to process the pictures with different software, which requires the user to be familiar with several picture processing programs.
Therefore, the conventional way of processing pictures makes the user's operation rather cumbersome.
Disclosure of Invention
The present application is directed to an information processing method and an electronic device, which at least partially overcome the technical problems in the prior art.
In order to achieve the above purpose, the present application provides the following technical solutions:
an information processing method comprising:
acquiring a processing instruction for a picture, wherein the picture comprises a plurality of objects;
in response to the processing instruction, processing the picture to obtain at least one object of the picture; wherein a type of a first object of the at least one object is consistent with a type of the picture;
generating an interactive interface, wherein the interactive interface comprises the at least one object; the interactive interface is used for a user to select a target object.
In the above method, preferably, the at least one object includes two objects of different types.
In the above method, preferably, the interactive interface further includes the picture.
The above method, preferably, the processing the picture to obtain at least one object of the picture includes:
carrying out contour detection on the picture to obtain at least one contour; obtaining at least one first object based on the at least one contour;
or,
and carrying out foreground and background segmentation on the picture to obtain a foreground picture and a background picture.
The above method, preferably, the processing the picture to obtain at least one object of the picture includes:
and performing text recognition on the picture to obtain at least one text part, wherein text objects contained in different text parts are different.
The above method, preferably, further comprises:
saving at least one object of the picture in association with the picture.
Preferably, the method for generating an interactive interface includes:
obtaining an input operation for an input area;
generating the interactive interface, wherein the interactive interface further comprises an operation control for instructing execution of a paste instruction.
The above method, preferably, further comprises:
inputting the at least one object within the input area in response to the paste instruction;
or,
and inputting, in the input area in response to the paste instruction, a target object selected by the user from the at least one object.
An electronic device, comprising:
a display unit for displaying information;
a memory for storing at least one set of instructions;
a processor for invoking and executing the set of instructions in the memory, by executing the set of instructions:
acquiring a processing instruction for a picture, wherein the picture comprises a plurality of objects;
in response to the processing instruction, processing the picture to obtain at least one object of the picture; wherein a type of a first object of the at least one object is consistent with a type of the picture;
generating an interactive interface, wherein the interactive interface comprises the at least one object; the interactive interface is used for a user to select a target object.
In the above electronic device, preferably, the at least one object includes two objects of different types.
According to the above scheme, when the user operates the picture to generate the processing instruction, the picture is processed to obtain at least one object of the picture, and the user selects a target object from the at least one object to operate on, so that the user does not have to manually extract the object from the picture, which simplifies the user operation.
Drawings
In order to more clearly illustrate the embodiments of the present invention or the technical solutions in the prior art, the drawings used in the description of the embodiments or the prior art are briefly described below. Obviously, the drawings in the following description are only some embodiments of the present invention, and those skilled in the art can obtain other drawings from these drawings without creative effort.
Fig. 1 is a flowchart of an implementation of an information processing method according to an embodiment of the present application;
fig. 2 is a schematic diagram of a picture A provided in an embodiment of the present application;
fig. 3 is a schematic diagram of three objects obtained by processing the picture A according to an embodiment of the present application;
fig. 4 is an exemplary diagram of an interactive interface displayed after a user long-presses an input box of a session interface according to an embodiment of the present application;
fig. 5 is an exemplary diagram of a session interface after sending the picture a according to an embodiment of the present application;
fig. 6 is a schematic structural diagram of an electronic device according to an embodiment of the present application.
The terms "first," "second," "third," "fourth," and the like in the description and in the claims, as well as in the drawings described above, if any, are used for distinguishing between similar elements and not necessarily for describing a particular sequential or chronological order. It should be understood that the data so used may be interchanged under appropriate circumstances such that embodiments of the application described herein may be practiced otherwise than as specifically illustrated.
Detailed Description
The technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are only a part of the embodiments of the present invention, and not all of the embodiments. All other embodiments, which can be obtained by a person skilled in the art based on the embodiments of the present invention without inventive step, are within the scope of the present invention.
The information processing method provided by the embodiment of the application is applied to electronic equipment, and the electronic equipment can be a computer, such as a desktop computer, a notebook computer, a tablet personal computer and the like, and can also be a mobile communication terminal, such as a smart phone and the like.
Referring to fig. 1, fig. 1 is a flowchart illustrating an implementation of an information processing method according to an embodiment of the present disclosure. The information processing method provided by the embodiment of the application can comprise the following steps:
step S11: a processing instruction for a picture is acquired, wherein the picture comprises a plurality of objects.
The plurality of objects may be of only one type, or may include at least two types of objects. For example, the plurality of objects may all be images or graphics, or the plurality of objects may include both images and text, and may also include graphics.
Step S12: in response to the processing instruction, processing the picture to obtain at least one object of the picture; wherein the type of a first object of the at least one object is consistent with the type of the picture.
In the embodiment of the present application, the picture is processed to separate at least one object from it, so as to obtain at least one object that is independent of the picture. The type of at least one of these objects is consistent with the type of the picture, and such a first object can be considered a sub-image of the picture.
Step S13: generating an interactive interface, wherein the interactive interface comprises the at least one object; the interactive interface is used for a user to select a target object.
The interactive interface can be generated directly after the at least one object is obtained, and after the user selects the target object from the at least one object, the user can also process the selected target object, for example by copying or cutting it. The user can then paste the copied or cut content to a desired place, such as an editable document (e.g., a Word document or a PPT document), or to an input box, e.g., the input box of a dialog interface of an instant messaging application (such as WeChat, 263, or SMS).
The interactive interface may be generated after the user performs a preset operation.
The interactive interface may include only the at least one object, and may further include other selectable objects, for example, at least one object obtained by processing other pictures. That is, several objects obtained by processing at least two pictures may be included in the interactive interface. The at least one object obtained by processing the current picture is arranged in front of the at least one object obtained by processing other pictures, so that a user can preferentially see the at least one object obtained by processing the current picture.
According to the information processing method, when the user operates the picture to generate the processing instruction, the picture is processed to obtain the at least one object of the picture, namely the at least one object is separated from the picture, and the user selects the target object from the at least one object to operate, so that the user is prevented from manually extracting the object from the picture, and the user operation is simplified.
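Purely as an illustration of the steps S11 to S13 described above (the patent itself does not provide code), the flow can be sketched in Python as follows; all names, types and the stubbed separation step are hypothetical.

```python
# Hypothetical sketch of steps S11-S13; names and structure are illustrative only.
from dataclasses import dataclass
from typing import Any, List


@dataclass
class ExtractedObject:
    kind: str     # "image" for a sub-picture of the same type as the picture, "text" for recognized text
    payload: Any  # pixel data or a recognized string


def process_picture(picture: Any) -> List[ExtractedObject]:
    # Step S12: separate at least one object from the picture. A real implementation
    # could use contour detection, foreground/background segmentation or OCR (see the
    # later sketches); as a placeholder we return the whole picture as one first object.
    return [ExtractedObject(kind="image", payload=picture)]


def handle_processing_instruction(picture: Any) -> List[ExtractedObject]:
    # Step S11: a processing instruction for the picture has been acquired.
    objects = process_picture(picture)
    # Step S13: the returned list would populate an interactive interface from which
    # the user selects a target object.
    return objects
```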
In an optional embodiment, the at least one object includes two objects of different types. For example, the two objects of different types may include text and a pure picture (i.e., a picture containing no text), and the content of the pure picture may be an image or a graphic. That is, the picture is processed to obtain objects of different types.
In an optional embodiment, the interactive interface may further include the picture, that is, the user may select an original picture as the target object, or may select an object separated from the original picture as the target object.
In an optional embodiment, one implementation manner of processing the picture to obtain the at least one object of the picture may be:
the first method is as follows: carrying out contour detection on the picture to obtain at least one contour; each contour corresponds to a first object, and at least one first object is obtained based on the at least one contour. After the contours are detected, the picture may be segmented such that each segmented region contains only one contour, i.e. each region contains only one object. Further, before segmentation, in order to make the obtained object more accurate, the detected object in the contour may be compared with known objects in the database, and an object most similar to the object in the contour is determined, so that the contour is modified according to the most similar object, and the object in the contour is more complete. For example, assuming that the detected object in the first contour is most similar to the known human body object, and the object in the first contour is found to lack a hair part by comparison, the hair detection can be performed on the edge of the first contour in the above picture, and then the detected contour of the hair is merged with the first contour to obtain the complete human body contour.
Alternatively,
The second method comprises the following steps: carrying out foreground and background segmentation on the picture to obtain a foreground picture and a background picture, wherein the foreground picture and the background picture are both first objects.
In some embodiments, the above two modes can be configured simultaneously, and a user can select one mode to process the picture according to actual needs to obtain at least one object of the picture.
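As a hedged illustration of the two modes just described (this is not the patent's implementation), the sketch below uses OpenCV 4.x: mode one crops one sub-image per detected external contour, and mode two uses GrabCut for foreground/background segmentation. The Otsu threshold, the caller-supplied rectangle and the iteration count are assumptions.

```python
# Illustrative sketch of mode one (contour detection) and mode two
# (foreground/background segmentation); assumes OpenCV 4.x and numpy.
import cv2
import numpy as np


def objects_by_contour(img):
    """Mode one: return one cropped sub-image (first object) per external contour."""
    gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
    _, binary = cv2.threshold(gray, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
    contours, _ = cv2.findContours(binary, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    crops = []
    for c in contours:
        x, y, w, h = cv2.boundingRect(c)           # each region contains one contour
        crops.append(img[y:y + h, x:x + w].copy())
    return crops


def objects_by_grabcut(img, rect):
    """Mode two: split the picture into a foreground picture and a background picture."""
    mask = np.zeros(img.shape[:2], np.uint8)
    bgd_model = np.zeros((1, 65), np.float64)
    fgd_model = np.zeros((1, 65), np.float64)
    cv2.grabCut(img, mask, rect, bgd_model, fgd_model, 5, cv2.GC_INIT_WITH_RECT)
    fg = np.where((mask == cv2.GC_FGD) | (mask == cv2.GC_PR_FGD), 1, 0).astype("uint8")
    return img * fg[:, :, None], img * (1 - fg)[:, :, None]
```

The contour-refinement step described above (comparing a detected contour with known objects in a database and merging, for example, a missing hair contour) would require an additional object database and is not sketched here.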
In an optional embodiment, if the picture further includes a text, the picture may be processed to obtain a first object, and the picture may be further subjected to text recognition to obtain at least one text portion, where different text portions include different text objects. That is, at least one object obtained by processing the picture includes a second object in addition to the first object, and the second object is text of a different type from the picture.
For example, the above-mentioned picture may be subjected to Optical Character Recognition (OCR) to obtain a text object.
All text in the picture may be recognized as a whole, i.e. the recognized text is treated as a text portion. The recognized text may also be divided into a plurality of text portions according to the position, size, color or shape of the text in the picture. Namely:
the display areas of the text objects contained in different text parts in the picture are different; or
the shapes of the text objects contained in different text parts in the picture are different; or
the sizes of the text objects contained in different text parts in the picture are different; or
the colors of the text objects contained in different text parts in the picture are different.
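As an illustrative sketch only (the patent does not prescribe a library), OCR with per-word position data can be used to split the recognized text into portions; here words are grouped by the line they appear on, i.e. by display area, which is just one of the criteria listed above. The sketch assumes pytesseract and a local Tesseract installation.

```python
# Illustrative OCR sketch; assumes pytesseract and Tesseract are installed.
from collections import defaultdict

import pytesseract
from PIL import Image


def text_portions(image_path):
    """Group recognized words into text portions by the line they appear on."""
    data = pytesseract.image_to_data(Image.open(image_path),
                                     output_type=pytesseract.Output.DICT)
    lines = defaultdict(list)
    for i, word in enumerate(data["text"]):
        if word.strip():  # skip empty detections
            lines[(data["block_num"][i], data["line_num"][i])].append(word)
    return [" ".join(words) for _, words in sorted(lines.items())]
```

For a picture like the one in fig. 2, such grouping might yield portions such as "Super Mario" and "Run", though the exact split depends on the recognizer.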
In an optional embodiment, after obtaining the at least one object, the at least one object may be further saved in association with the picture. Before processing the picture to obtain at least one object of the picture, the method may further include:
judging whether the picture is associated with at least one object;
and if the judgment result is negative, processing the picture to obtain at least one object of the picture.
And if so, acquiring at least one object associated with the picture.
In the embodiment of the present application, if the picture has been processed to obtain at least one object, when the same processing instruction for the picture is received again, the at least one object may be directly read without processing the picture again, and the at least one object may be quickly obtained.
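A minimal sketch of this associate-and-reuse behaviour (hypothetical names, not from the patent) is a cache keyed by the picture content: on a repeated processing instruction, the associated objects are read back instead of re-processing the picture.

```python
# Hypothetical association cache; illustrative only.
import hashlib

_object_cache = {}  # picture digest -> objects previously separated from that picture


def get_objects(picture_bytes, separate):
    """Return the objects associated with the picture, processing it only on a cache miss."""
    key = hashlib.sha256(picture_bytes).hexdigest()
    if key not in _object_cache:                      # judge whether the picture is associated
        _object_cache[key] = separate(picture_bytes)  # if not, process and save the association
    return _object_cache[key]                         # if so, read the associated objects
```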
In an alternative embodiment, one implementation manner of generating the interactive interface may be:
an input operation for an input area is obtained. The input area may be any area where information can be input, for example, an editable document interface, or an input box. The input operation may be a single click, a long press, or the like.
Generating an interactive interface, wherein the interactive interface further comprises an operation control for instructing execution of the paste instruction.
After the user selects the target object, the user can operate the operation control to generate a paste instruction and input the selected target object into the input area.
In addition, if the user does not select a target object, a paste instruction is automatically generated after the display duration of the interactive interface exceeds a certain duration, and all objects in the interactive interface are pasted into the input area as target objects.
That is to say, after the interactive interface is generated, the information processing method provided by the present application may further include:
at least one object in the interactive interface is input within the input area in response to the paste instruction. That is, all selectable objects in the interactive interface are entered within the input area in response to the paste instruction.
Alternatively,
a target object selected by the user from the at least one object is input within the input area in response to the paste instruction.
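To make the two paste behaviours above concrete, here is a hypothetical sketch (not the patent's implementation): a user selection pastes only the target object, while an interface that has been displayed past a timeout without a selection pastes every candidate; the 5-second timeout is an arbitrary placeholder.

```python
# Hypothetical paste handling for the interactive interface; illustrative only.
from typing import List, Optional


def content_to_paste(candidates: List[str],
                     selected_index: Optional[int],
                     display_seconds: float,
                     timeout_seconds: float = 5.0) -> List[str]:
    """Return what a paste instruction should insert into the input area."""
    if selected_index is not None:
        # The user picked a target object: paste only that object.
        return [candidates[selected_index]]
    if display_seconds >= timeout_seconds:
        # No selection and the interface has been shown long enough:
        # automatically paste all selectable objects as target objects.
        return list(candidates)
    return []  # otherwise keep waiting for a selection
```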
Fig. 2 is a schematic diagram of a picture (for convenience of description, referred to as picture A) provided in an embodiment of the present application, where the picture A includes Mario, a game character, and the text "Super Mario Run". A user may trigger generation of a processing instruction for the picture A through a control provided for the picture A (such as the control pointed to by the finger in fig. 2). After the user triggers generation of the processing instruction for the picture A, the picture A is divided into three parts, namely a picture a and two text parts, as shown in fig. 3, which is a schematic diagram of the three objects obtained by processing the picture A. The picture a includes only Mario (i.e., an image of Mario) and no text, and the two text parts are "Super Mario" and "Run", respectively. In application, the interface shown in fig. 3 may or may not be displayed, or may be displayed only when the user triggers a viewing instruction.
After the picture A has been divided into the picture a and the two text parts, the user may long-press in the input box of a session interface to display an interactive interface, as shown in fig. 4, which is an exemplary diagram of the interactive interface displayed after the user long-presses in the input box of the session interface. The interactive interface comprises the picture a (marked as "Image" in fig. 4) and the two text parts (namely "Super Mario" and "Run") obtained by splitting the picture, and further comprises the picture A (marked as "Screen" in fig. 4) and a "Paste" control.
Suppose that the user selects the item "Image" and clicks the "Paste" control; the picture a is then displayed in the input box, and after the user triggers a sending instruction, the picture a is displayed in the dialog box, as shown in fig. 5, which is an exemplary diagram of the session interface after the picture a is sent.
Corresponding to the embodiment of the method, the application also provides the electronic equipment. A schematic structural diagram of the electronic device provided in the present application is shown in fig. 6, and may include:
a display unit 61, a memory 62, and a processor 63; wherein,
the display unit 61 is used for displaying information;
the memory 62 is used for storing at least one set of instructions;
the processor 63 is configured to call and execute a set of instructions in the memory 62, and by executing the set of instructions:
acquiring a processing instruction for a picture, wherein the picture comprises a plurality of objects;
in response to the processing instruction, processing the picture to obtain at least one object of the picture; wherein the type of a first object of the at least one object is consistent with the type of the picture;
generating an interactive interface, wherein the interactive interface comprises the at least one object; the interactive interface is used for a user to select a target object.
With the above electronic device, when a user operates the picture to generate the processing instruction, the picture is processed to obtain at least one object of the picture, that is, the at least one object is separated from the picture, and the user selects a target object from the at least one object to operate on, so that the user does not have to manually extract the object from the picture, which simplifies the user operation.
In an optional embodiment, the at least one object includes two objects of different types.
In an optional embodiment, the interactive interface may further include: the above pictures.
In an optional embodiment, when the processor 63 processes the picture to obtain at least one object of the picture, it may specifically be configured to:
carrying out contour detection on the picture to obtain at least one contour; at least one first object is derived based on the at least one contour.
Alternatively,
and carrying out foreground and background segmentation on the picture to obtain a foreground picture and a background picture.
In an optional embodiment, when the processor 63 processes the picture to obtain at least one object of the picture, it may further be configured to:
and performing text recognition on the picture to obtain at least one text part, wherein text objects contained in different text parts are different.
In an alternative embodiment, the processor 63 may be further configured to:
and storing at least one object of the picture in association with the picture.
In an optional embodiment, when the processor 63 generates the interaction interface, it may specifically be configured to:
obtaining an input operation for an input area;
generating the interactive interface, wherein the interactive interface further comprises: and the operation control is used for indicating to execute the pasting instruction.
In an alternative embodiment, the processor 63 may be further configured to:
inputting the at least one object in the input area in response to a paste instruction;
or,
and inputting, in the input area in response to the paste instruction, a target object selected by the user from the at least one object.
The method and the electronic device provided by the embodiment of the invention can realize "exploding" a picture: the user performs a picture-exploding operation on the picture, and in response to that operation the processing instruction corresponding to the picture is responded to and all objects in the picture are separated. For example, if the picture shows Mario and a mushroom, then Mario, the mushroom, boxes and the background can be separated in one processing pass. The separated objects are then displayed for the user to select from.
In another embodiment, if the electronic device has responded to a picture-exploding operation, then in response to a paste operation the electronic device presents the plurality of objects separated from the picture for the user to select from. The content displayed in response to the paste operation may include, besides the plurality of objects separated from the picture, a paste operation control, so that the user can quickly generate a paste instruction after selecting the desired object; it may also include content that was previously copied or cut. If no picture-exploding operation has been responded to before, the displayed content includes only the previously copied or cut content and the paste operation control.
Those of ordinary skill in the art will appreciate that the various illustrative elements and algorithm steps described in connection with the embodiments disclosed herein may be implemented as electronic hardware or combinations of computer software and electronic hardware. Whether such functionality is implemented as hardware or software depends upon the particular application and design constraints imposed on the implementation. Skilled artisans may implement the described functionality in varying ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the present invention.
In the several embodiments provided in the present application, it should be understood that the disclosed system, apparatus and method may be implemented in other ways. In addition, the shown or discussed mutual coupling or direct coupling or communication connection may be an indirect coupling or communication connection through some interfaces, devices or units, and may be in an electrical, mechanical or other form.
The units described as separate parts may or may not be physically separate, and parts displayed as units may or may not be physical units, may be located in one place, or may be distributed on a plurality of network units. Some or all of the units can be selected according to actual needs to achieve the purpose of the solution of the embodiment.
In addition, functional units in the embodiments of the present invention may be integrated into one processing unit, or each unit may exist alone physically, or two or more units are integrated into one unit.
It should be understood that the above technical problems can be solved by combining features of the embodiments and of the claims with one another.
The functions, if implemented in the form of software functional units and sold or used as a stand-alone product, may be stored in a computer readable storage medium. Based on such understanding, the technical solution of the present invention may be embodied in the form of a software product, which is stored in a storage medium and includes instructions for causing a computer device (which may be a personal computer, a server, or a network device) to execute all or part of the steps of the method according to the embodiments of the present invention. And the aforementioned storage medium includes: a U-disk, a removable hard disk, a Read-Only Memory (ROM), a Random Access Memory (RAM), a magnetic disk or an optical disk, and other various media capable of storing program codes.
The previous description of the disclosed embodiments is provided to enable any person skilled in the art to make or use the present invention. Various modifications to these embodiments will be readily apparent to those skilled in the art, and the generic principles defined herein may be applied to other embodiments without departing from the spirit or scope of the invention. Thus, the present invention is not intended to be limited to the embodiments shown herein but is to be accorded the widest scope consistent with the principles and novel features disclosed herein.

Claims (8)

1. An information processing method characterized by comprising:
obtaining an input operation for an input area; the input operation represents a pasting operation;
generating an interactive interface based on the input operation;
if the interactive interface comprises a pasting operation control, a copied or cut picture and at least one object corresponding to the copied or cut picture, representing that the copied or cut picture responds to a processing instruction; the at least one object is an object obtained from the copied or cut picture in response to a processing instruction, the interactive interface is used for a user to select a target object, and the paste operation control is used for instructing to execute a paste instruction;
and if the interactive interface comprises a pasting operation control and does not comprise the copied or cut picture and at least one object corresponding to the copied or cut picture, representing that the copied or cut picture does not respond to a processing instruction.
2. The method of claim 1, wherein the at least one object comprises two objects of different types.
3. The method of claim 1, wherein obtaining at least one object from the picture by the copied or cut picture in response to the processing instruction comprises:
carrying out contour detection on the picture to obtain at least one contour; obtaining at least one first object based on the at least one contour;
or,
and carrying out foreground and background segmentation on the picture to obtain a foreground picture and a background picture.
4. The method of claim 3, wherein obtaining at least one object from the picture by the copied or cut picture in response to the processing instruction comprises:
and performing text recognition on the picture to obtain at least one text part, wherein text objects contained in different text parts are different.
5. The method of claim 1, further comprising:
saving at least one object of the picture in association with the picture.
6. The method of claim 1, further comprising:
if the user does not select the target object, automatically generating a paste instruction after the display duration of the interactive interface exceeds a certain duration so as to input the picture and the at least one object in the input area;
or,
and responding to the indication of the pasting operation control to execute a pasting instruction, and inputting a target object selected by the user in the at least one object in the input area.
7. An electronic device, comprising:
a display unit for displaying information;
a memory for storing at least one set of instructions;
a processor for invoking and executing the set of instructions in the memory, by executing the set of instructions:
obtaining an input operation for an input area; the input operation represents a pasting operation;
generating an interactive interface based on the input operation;
if the interactive interface comprises a paste operation control, a copied or cut picture and at least one object corresponding to the copied or cut picture, representing that the copied or cut picture responds to a processing instruction; the at least one object is an object obtained from the copied or cut picture in response to a processing instruction, the interactive interface is used for a user to select a target object, and the paste operation control is used for instructing to execute a paste instruction;
and if the interactive interface comprises a pasting operation control and does not comprise the copied or cut picture and at least one object corresponding to the copied or cut picture, representing that the copied or cut picture does not respond to a processing instruction.
8. The electronic device of claim 7, wherein the at least one object comprises two objects of different types.
CN201811158298.3A 2018-09-30 2018-09-30 Information processing method and electronic equipment Active CN109254712B (en)

Priority Applications (4)

Application Number Priority Date Filing Date Title
CN201811158298.3A CN109254712B (en) 2018-09-30 2018-09-30 Information processing method and electronic equipment
GB1912803.2A GB2577989B (en) 2018-09-30 2019-09-05 Information processing method and electronic device
US16/584,365 US11491396B2 (en) 2018-09-30 2019-09-26 Information processing method and electronic device
DE102019125937.1A DE102019125937A1 (en) 2018-09-30 2019-09-26 Information processing method and electronic device

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201811158298.3A CN109254712B (en) 2018-09-30 2018-09-30 Information processing method and electronic equipment

Publications (2)

Publication Number Publication Date
CN109254712A CN109254712A (en) 2019-01-22
CN109254712B true CN109254712B (en) 2022-05-31

Family

ID=65045832

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201811158298.3A Active CN109254712B (en) 2018-09-30 2018-09-30 Information processing method and electronic equipment

Country Status (1)

Country Link
CN (1) CN109254712B (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113835598A (en) * 2021-09-03 2021-12-24 维沃移动通信(杭州)有限公司 Information acquisition method and device and electronic equipment

Family Cites Families (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR100516638B1 (en) * 2001-09-26 2005-09-22 엘지전자 주식회사 Video telecommunication system
WO2004008750A1 (en) * 2002-06-05 2004-01-22 Seiko Epson Corporation Digital camera
CN104461474A (en) * 2013-09-12 2015-03-25 北京三星通信技术研究有限公司 Mobile terminal and screen-shooting method and device therefor
CN105704396A (en) * 2014-11-24 2016-06-22 中兴通讯股份有限公司 Picture processing method and device
CN104978146B (en) * 2015-06-30 2017-11-24 广东欧珀移动通信有限公司 A kind of picture operation method and mobile terminal
CN106020647A (en) * 2016-05-23 2016-10-12 珠海市魅族科技有限公司 Picture content automatic extracting method and system
US10649614B2 (en) * 2016-12-30 2020-05-12 Facebook, Inc. Image segmentation in virtual reality environments
CN107743193A (en) * 2017-09-26 2018-02-27 深圳市金立通信设备有限公司 Picture editor's way choice method, terminal and computer-readable recording medium
CN107908337A (en) * 2017-12-14 2018-04-13 广州三星通信技术研究有限公司 Share the method and apparatus of picture material

Also Published As

Publication number Publication date
CN109254712A (en) 2019-01-22


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant