CN113835598A - Information acquisition method and device and electronic equipment - Google Patents
- Publication number
- CN113835598A (application number CN202111034064.XA)
- Authority
- CN
- China
- Prior art keywords
- information
- image
- target information
- target
- character
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F3/00—Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
- G06F3/01—Input arrangements or combined input and output arrangements for interaction between user and computer
- G06F3/048—Interaction techniques based on graphical user interfaces [GUI]
- G06F3/0484—Interaction techniques based on graphical user interfaces [GUI] for the control of specific functions or operations, e.g. selecting or manipulating an object, an image or a displayed text element, setting a parameter value or selecting a range
- G06F3/04845—Interaction techniques based on graphical user interfaces [GUI] for the control of specific functions or operations, e.g. selecting or manipulating an object, an image or a displayed text element, setting a parameter value or selecting a range for image manipulation, e.g. dragging, rotation, expansion or change of colour
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F3/00—Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
- G06F3/01—Input arrangements or combined input and output arrangements for interaction between user and computer
- G06F3/048—Interaction techniques based on graphical user interfaces [GUI]
- G06F3/0487—Interaction techniques based on graphical user interfaces [GUI] using specific features provided by the input device, e.g. functions controlled by the rotation of a mouse with dual sensing arrangements, or of the nature of the input device, e.g. tap gestures based on pressure sensed by a digitiser
- G06F3/0488—Interaction techniques based on graphical user interfaces [GUI] using specific features provided by the input device, e.g. functions controlled by the rotation of a mouse with dual sensing arrangements, or of the nature of the input device, e.g. tap gestures based on pressure sensed by a digitiser using a touch-screen or digitiser, e.g. input of commands through traced gestures
- G06F3/04883—Interaction techniques based on graphical user interfaces [GUI] using specific features provided by the input device, e.g. functions controlled by the rotation of a mouse with dual sensing arrangements, or of the nature of the input device, e.g. tap gestures based on pressure sensed by a digitiser using a touch-screen or digitiser, e.g. input of commands through traced gestures for inputting data by handwriting, e.g. gesture or text
Abstract
The application discloses an information acquisition method, an information acquisition apparatus, and an electronic device, belonging to the technical field of image processing. The information acquisition method comprises the following steps: acquiring a first image; and extracting information from the first image based on target information features to obtain first target information, wherein the first target information is partial information in the first image.
Description
Technical Field
The application belongs to the technical field of image processing, and particularly relates to an information acquisition method, an information acquisition apparatus, and an electronic device.
Background
With the popularization of electronic devices, users shoot with their devices in more and more scenarios; for example, a user may use a mobile phone to photograph courseware, such as a slideshow (PowerPoint, PPT) or a Word document, presented by a teacher.
In the prior art, after a user photographs courseware such as a PPT or Word document with an electronic device, the user has to open the album and manually crop the captured image before obtaining the desired content.
Disclosure of Invention
The embodiments of the present application aim to provide an information acquisition method, an information acquisition apparatus, and an electronic device, solving the problem in the prior art that, in order to acquire the desired image content, the image must be cropped manually, which is cumbersome and time-consuming.
In a first aspect, an embodiment of the present application provides an information obtaining method, where the method includes:
acquiring a first image;
extracting information of the first image based on the target information characteristics to obtain first target information;
wherein the first target information is partial information in the first image.
In a second aspect, an embodiment of the present application provides an information acquiring apparatus, including:
the first acquisition module is used for acquiring a first image;
the first extraction module is used for extracting information of the first image based on the target information characteristics to obtain first target information;
wherein the first target information is partial information in the first image.
In a third aspect, an embodiment of the present application provides an electronic device, which includes a processor, a memory, and a program or instructions stored on the memory and executable on the processor, where the program or instructions, when executed by the processor, implement the steps of the method according to the first aspect.
In a fourth aspect, embodiments of the present application provide a readable storage medium, on which a program or instructions are stored, which when executed by a processor implement the steps of the method according to the first aspect.
In a fifth aspect, an embodiment of the present application provides a chip, where the chip includes a processor and a communication interface, where the communication interface is coupled to the processor, and the processor is configured to execute a program or instructions to implement the steps of the method according to the first aspect.
In a sixth aspect, the present application provides a computer program product comprising a computer program that, when executed by a processor, implements the steps of the method according to the first aspect.
In the embodiments of the application, by acquiring the first image and extracting information from the first image based on the target information features, the part of the information in the first image that the user needs can be extracted automatically according to the target information features, so that it is convenient for the user to view and store a file with the specific content.
Drawings
Fig. 1 is a schematic flowchart of an information acquisition method provided in an embodiment of the present application;
fig. 2 is a second schematic flowchart of an information obtaining method according to an embodiment of the present application;
fig. 3 is a third schematic flowchart of an information obtaining method according to an embodiment of the present application;
fig. 4 is a fourth schematic flowchart of an information obtaining method according to an embodiment of the present application;
fig. 5 is a fifth flowchart illustrating an information obtaining method according to an embodiment of the present application;
fig. 6 is a sixth schematic flowchart of an information acquisition method according to an embodiment of the present application;
fig. 7 is a first scene diagram of an information acquisition method provided in an embodiment of the present application;
fig. 8 is a second scene diagram of an information acquisition method provided in an embodiment of the present application;
fig. 9 is a third scene diagram of an information acquisition method provided in an embodiment of the present application;
fig. 10 is a fourth scene diagram of an information acquisition method provided in an embodiment of the present application;
fig. 11 is a fifth scene diagram of an information acquisition method provided in an embodiment of the present application;
fig. 12 is a sixth scene diagram of an information acquisition method provided in an embodiment of the present application;
fig. 13 is a seventh scene diagram of an information acquisition method provided in an embodiment of the present application;
fig. 14 is an eighth scene diagram of an information acquisition method provided in an embodiment of the present application;
fig. 15 is a seventh schematic flowchart of an information acquisition method provided in an embodiment of the present application;
fig. 16 is a ninth scene diagram of an information acquisition method provided in an embodiment of the present application;
fig. 17 is a tenth scene diagram of an information acquisition method provided in an embodiment of the present application;
fig. 18 is an eleventh scene diagram of an information acquisition method provided in an embodiment of the present application;
fig. 19 is a twelfth scene diagram of an information acquisition method provided in an embodiment of the present application;
fig. 20 is a thirteenth scene diagram of an information acquisition method provided in an embodiment of the present application;
fig. 21 is a schematic structural diagram of an information acquisition apparatus according to an embodiment of the present application;
fig. 22 is a schematic structural diagram of an electronic device provided in an embodiment of the present application;
fig. 23 is a schematic diagram of a hardware structure of an electronic device implementing an embodiment of the present application.
Detailed Description
The technical solutions in the embodiments of the present application will be described clearly below with reference to the drawings in the embodiments of the present application. Obviously, the described embodiments are some, but not all, of the embodiments of the present application. All other embodiments that can be derived by a person of ordinary skill in the art from the embodiments given herein fall within the scope of the present disclosure.
The terms "first", "second", and the like in the description and claims of the present application are used to distinguish between similar elements and are not necessarily used to describe a particular sequential or chronological order. It should be understood that the data so used may be interchanged under appropriate circumstances, so that the embodiments of the application can be practiced in sequences other than those illustrated or described herein. The objects distinguished by "first", "second", and the like are usually of one class, and their number is not limited; for example, the first object may be one object or a plurality of objects. In addition, "and/or" in the description and claims denotes at least one of the connected objects, and the character "/" generally indicates that the preceding and succeeding objects are in an "or" relationship.
The following describes in detail the information acquisition method provided by the embodiments of the present application through specific embodiments and application scenarios thereof, with reference to the accompanying drawings.
In order to solve the technical problems in the prior art, an embodiment of the present application provides an information acquisition method. By acquiring a first image and extracting information from it based on target information features, the method can automatically extract the part of the first image that the user needs according to the target information features, making it convenient for the user to view and store a file with the specific content and solving the prior-art problems of complex and time-consuming operation.
The information acquisition method provided by the embodiment of the application is at least suitable for the following two application scenarios:
the application scene one: a teacher speaks electronic documents such as PPT, word and the like on a computer or a projection screen, and highlights contents are prompted through special marks, wherein the electronic documents comprise characters and/or images, and the important contents can be characters and/or local images; when a user uses an electronic device such as a mobile phone to shoot an electronic document described by a teacher, the user only wants to acquire key contents. For the first application scenario, the embodiment of the application performs shooting on the current page of the electronic document based on the user instruction, performs image recognition based on the special mark, and acquires the identified key content of the electronic document, so that the user can directly view the key content of the current page.
Application scenario two: a teacher or a supervisor reviews documents, such as PPT, Word, or printouts, submitted by a user, marks the places that need to be modified, and adds some annotations; the document content includes characters and/or images. When the user photographs the document returned by the teacher or supervisor with an electronic device such as a mobile phone, the user only wants to acquire the content bearing the special marks. For this scenario, the embodiment of the application photographs the current page of the document based on a user instruction, performs image recognition based on the special marks, and acquires the recognized places that need to be modified, so that the user can directly view them on the current page.
For the above application scenarios, in the embodiments of the present application, a "scan mode" is added to the camera application on the electronic device. The application interface corresponding to the "scan mode" may be referred to as the "scan interface", and a control, which may be called the "scan" key, is added to the controls of the camera application. "Scan mode" means that, while the mode is selected (activated), the user can perform at least one selection operation on the "scan" key. Each time the user selects the "scan" key, the electronic device performs the following operations: calling the camera to capture (i.e., shoot) a preview image of the object currently being scanned, performing image recognition on the preview image based on preset information features, and acquiring and caching the specific information recognized in the scanned object. When the scan mode is terminated (ended), the electronic device has cumulatively extracted the specific information of at least one scanned object; at this point, the cached specific information of each scanned object is saved into one document, so that the user can conveniently view the specific information of all scanned objects in one place.
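The scan-mode workflow described above (capture per "scan" press, extract, cache, then flush the cache into one document on termination) can be sketched as follows. This is a minimal illustration, not the patented implementation; all names (`ScanSession`, `extract_info`) are invented for this sketch.

```python
class ScanSession:
    """Minimal sketch of the 'scan mode' workflow: each scan caches the
    extracted information; terminating the session flushes the cache into
    one document. `extract_info` stands in for the image-recognition step
    and is supplied by the caller."""

    def __init__(self, extract_info):
        self.extract_info = extract_info  # callable: image -> extracted info
        self._cache = []                  # one entry per scanned object

    def scan(self, preview_image):
        """Called once per selection of the 'scan' key."""
        info = self.extract_info(preview_image)
        self._cache.append(info)          # cache only; nothing persisted yet
        return info

    def terminate(self):
        """End scan mode: merge all cached information into one document."""
        document = "\n\n".join(self._cache)
        self._cache.clear()
        return document
```

A caller would plug in a real recognizer; here a dummy extractor (`str.upper`) illustrates the control flow only.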
Fig. 1 is a schematic flow diagram of an information obtaining method provided in an embodiment of the present application, and as shown in fig. 1, the method includes:
101, acquiring a first image.
Optionally, the first image may be an image obtained by shooting, or a cached preview image. Taking the first image being obtained by shooting as an example: a third input of the user is received. The third input may be a click input by the user, a voice instruction input by the user, or a specific gesture input by the user, which may be determined according to actual usage requirements; this is not limited in the embodiments of the application. The specific gesture in the embodiments of the application may be any one of a single-click gesture, a sliding gesture, a dragging gesture, a pressure-recognition gesture, a long-press gesture, an area-change gesture, a double-press gesture, and a double-click gesture; the click input may be a single-click input, a double-click input, a click input of any number of times, a long-press input, or a short-press input. The third input may also be a click, press, or touch operation by the user on a target control of the camera application on the electronic device. In response to the third input, the first image is captured. Optionally, the first image may be the current page (e.g., page 1) of a display object, where the display object includes a blackboard, a whiteboard, a slide projection (e.g., an electronic document such as a PPT or Word document), a paper printout, or the like.
102, extracting information of the first image based on the target information characteristics to obtain first target information; the first target information is partial information in the first image.
Optionally, the target information features may include at least one of: character features and image features. The character features may include at least one of: character mark features, character attribute features, and character type features. Optionally, the character marks include underlining, shading, and the like; the character attribute features include font, character size, whether the character is bold, whether the character is italic, and the like; the character type features include numbers, characters, symbols, English, links, and the like. The image features may include at least one of: image size features, image background features, and image foreground features. Optionally, the image size features include aspect-ratio features and the like; an aspect-ratio feature refers to an image of a particular ratio, for example an image with an aspect ratio of 4:3. The image background features include watermarks, borders, and the like; the image foreground features include two-dimensional codes and the like. Optionally, the target information features may adopt default values, or may be selected by the user according to his or her own needs and the actual scene.
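The feature taxonomy above (character mark/attribute/type features, image size/background/foreground features, with defaults or user selection) can be represented as a small configuration object. A minimal sketch using Python dataclasses; all class and field names are illustrative, not taken from the patent.

```python
from dataclasses import dataclass, field

@dataclass
class CharacterFeatures:
    marks: list = field(default_factory=list)       # e.g. "underline", "shading"
    attributes: list = field(default_factory=list)  # e.g. "bold", "italic", "font"
    types: list = field(default_factory=list)       # e.g. "number", "link"

@dataclass
class ImageFeatures:
    size: list = field(default_factory=list)        # e.g. "aspect_4_3"
    background: list = field(default_factory=list)  # e.g. "watermark", "border"
    foreground: list = field(default_factory=list)  # e.g. "qr_code"

@dataclass
class TargetInfoFeatures:
    character: CharacterFeatures = field(default_factory=CharacterFeatures)
    image: ImageFeatures = field(default_factory=ImageFeatures)

# A possible default value; the user may override it per scene.
DEFAULT_FEATURES = TargetInfoFeatures(
    character=CharacterFeatures(marks=["underline"]))
```

The point of the structure is only that the recognizer later receives one object describing which features to match, whether chosen by the user or defaulted.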
According to the information acquisition method provided by the embodiment of the application, the first image is acquired, and information is extracted from the first image based on the target information features to obtain the first target information, the first target information being partial information in the first image. In this way, the part of the information in the first image that the user needs can be extracted automatically according to the target information features, and it is convenient for the user to view and store a file with the specific content.
Optionally, after the first image is acquired, image recognition is performed on the first image based on the target information feature, and first target information matched with the target information feature in the content of the first image is identified. Optionally, when the user performs a selection operation on the "scan" key once, the specific information in the scanned object may be obtained by performing step 101 and step 102, and the specific information in the scanned object is cached; when the scanning mode is selected (activated), a user can select the scanning key for multiple times, so that specific information in multiple different scanning objects can be acquired and cached; when the scanning mode is terminated, the specific information of each scanning object in the cache is saved as a document for storage, so that a user can conveniently refer to the specific information of each scanning object. It should be noted that, by caching the specific information in the scanned object obtained once, the information access speed can be increased, and the memory usage can be reduced.
Fig. 2 is a second schematic flowchart of an information obtaining method according to an embodiment of the present application, and as shown in fig. 2, the method includes:
201, receiving a first input of a user; and 202, in response to the first input, determining the target information features.
Optionally, the first input may be a click input by the user, a voice instruction input by the user, or a specific gesture input by the user, which may be determined according to actual usage requirements; this is not limited in the embodiments of the application. The specific gesture may be any one of a single-click gesture, a sliding gesture, a dragging gesture, a pressure-recognition gesture, a long-press gesture, an area-change gesture, a double-press gesture, and a double-click gesture; the click input may be a single-click input, a double-click input, a click input of any number of times, a long-press input, or a short-press input. The first input may also be a click, press, or touch operation by the user on a target information feature option included in a tab window popped up at the edge of the user interface of the camera application on the electronic device. The target information feature options include options corresponding to character features and/or image features; the character features may include at least one of character mark features, character attribute features, and character type features; the image features may include at least one of image size features, image background features, and image foreground features.
Optionally, step 201 and step 202 describe an implementation manner of determining the target information features, and the user may select a specific category of the target information features based on the user's own needs and actual scenes.
Optionally, for the description and explanation of steps 203 and 204, reference may be made to steps 101 and 102 above; the same technical effects can be achieved, and details are not repeated here to avoid repetition.
According to the information acquisition method provided by the embodiment of the application, the specific categories of the target information features are selected according to the user's own needs and the actual scene; on this basis, the first scanned object is photographed based on a user instruction, the specific content the user needs in the first scanned object is extracted automatically according to the target information features, and it is convenient for the user to view and store a file containing the specific content.
Optionally, the implementation manner of step 102 of the information obtaining method shown in fig. 1 may include at least one of the following manners:
mode 1: and under the condition that the target information features are the character features, extracting the character information with the character features in the first image to obtain the first target information.
Optionally, in a case that the target information feature is a Character feature, recognizing all text information in the first image by using an Optical Character Recognition (OCR) technology, and outputting the recognized text information in a text form; and then extracting the character information with character features in the recognized character information, and taking the extracted character information with character features as first target information.
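Mode 1 above amounts to: run OCR over the whole image, then keep only the text that carries the target character features. A minimal sketch under the assumption that a (hypothetical) OCR engine returns text spans annotated with detected features; the span format and all names are invented for illustration.

```python
def extract_marked_text(ocr_spans, required_features):
    """Sketch of mode 1: keep only the recognized text spans whose detected
    features include all of the target character features, and join the
    matches in reading order to form the first target information."""
    matched = [span["text"] for span in ocr_spans
               if required_features <= span["features"]]
    return " ".join(matched)

# Hypothetical OCR output: each span carries its text plus detected features.
ocr_spans = [
    {"text": "Chapter 1",   "features": {"bold"}},
    {"text": "key formula", "features": {"bold", "underline"}},
    {"text": "footnote",    "features": set()},
]
```

With `required_features = {"underline"}`, only the underlined span survives; a real system would obtain the spans from an OCR engine rather than a literal list.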
Mode 2: and under the condition that the target information characteristic is the image characteristic, extracting the image information with the image characteristic in the first image to obtain the first target information.
Optionally, when the target information feature is the image feature, image recognition is performed on the first image by using an image recognition technology, an area image having the image feature in the first image is recognized, the recognized area image is captured, and the captured area image is used as the first target information.
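Mode 2 reduces to cropping the recognized area image out of the first image. A minimal sketch on a 2-D pixel grid; the bounding box is assumed to come from an image-recognition step that is not shown, and the function name is illustrative.

```python
def crop_region(image, box):
    """Sketch of mode 2: capture (crop) the recognized area image.
    `image` is a 2-D list of pixel values; `box` is a (top, left,
    bottom, right) bounding box, end-exclusive, assumed to be supplied
    by a recognition step that located the target image feature."""
    top, left, bottom, right = box
    return [row[left:right] for row in image[top:bottom]]

# Toy 3x3 "image" of pixel values.
image = [[0, 1, 2],
         [3, 4, 5],
         [6, 7, 8]]
```

The cropped region is then used directly as the first target information in this mode.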
Optionally, in a case that the target information feature includes both a character feature and an image feature, the specific character information having the character feature is obtained by recognizing the character information in the first image, and the area image having the image feature in the first image is captured as the specific image, where the first target information includes the specific character information and the specific image.
Fig. 3 is a third schematic flowchart of an information obtaining method according to an embodiment of the present application, and as shown in fig. 3, the method includes:
301, acquiring a first image; and 302, extracting information from the first image based on the target information features to obtain first target information.
Optionally, for the description and explanation of steps 301 and 302, reference may be made to steps 101 and 102 above; the same technical effects can be achieved, and details are not repeated here to avoid repetition.
And 303, displaying the first target information on a shooting preview interface.
Optionally, the first target information extracted from the first image is displayed on a shooting preview interface of the electronic device, and the user can view and check the extracted information in real time.
Compared with the prior-art scheme in which a photo is taken first, recognition is performed afterwards, and an additional pop-up window is used to display the result, the information acquisition method provided by the embodiment of the application displays the first target information directly on the shooting preview interface, and the scheme is simple to implement.
And 304, in the case of receiving a second input of the first target information by the user, responding to the second input, and updating the information content of the first target information.
Optionally, the second input may be a click operation of the user on the preview interface, or may also be a click operation of the user on a target control on the preview interface, where the target control may be an "edit" control. Optionally, the updating the information content of the first target information may include: and editing and typesetting the information content of the first target information.
Optionally, in a case that the user needs to modify the first target information displayed in the preview interface, the user may manually modify the first target information displayed in the preview interface through the second input.
According to the information acquisition method provided by the embodiment of the application, the first target information in the first image is automatically extracted according to the target information characteristics and displayed on the shooting preview interface, and the information content of the first target information can be updated by a user, so that the user can conveniently check and verify the information content in real time, and the reliability of the identification result can be improved.
Optionally, on the basis of displaying the first target information on the shooting preview interface in step 303, the embodiments of the present application further provide a manual cancellation mode and an automatic cancellation mode for cancelling the display of the first target information, specifically:
the manual display canceling mode comprises the following steps: and receiving a fourth input of the user, and canceling the display of the first target information on the shooting preview interface in response to the fourth input. Optionally, the fourth input may be a double-click operation performed by the user on the shooting preview interface, or a click operation performed by the user on a specific control on the shooting preview interface, where the specific control may be a "close" control.
The automatic display cancellation mode is as follows: and under the condition that the display duration of the first target information displayed on the shooting preview interface reaches the target duration, canceling the display of the first target information on the shooting preview interface. Optionally, timing is started when the first target information is displayed on the shooting preview interface, and the display of the first target information is automatically cancelled when the display duration reaches the target duration.
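The two cancellation modes above, a manual dismissal (the fourth input) and a timer that hides the overlay once the display duration reaches the target duration, can be sketched with a small state object. A clock callable is injected so the timing logic is testable; all names are illustrative, not from the patent.

```python
class OverlayDisplay:
    """Sketch of displaying the first target information on the preview
    interface with both cancellation modes: manual dismissal and automatic
    hiding once the display duration reaches `target_duration`."""

    def __init__(self, target_duration, clock):
        self.target_duration = target_duration
        self.clock = clock        # callable returning the current time
        self._shown_at = None
        self.visible = False

    def show(self, info):
        """Display the extracted information and start timing."""
        self.info = info
        self._shown_at = self.clock()
        self.visible = True

    def tick(self):
        """Called periodically by the UI loop; hides an expired overlay."""
        if self.visible and self.clock() - self._shown_at >= self.target_duration:
            self.visible = False

    def dismiss(self):
        """Manual cancellation (the 'fourth input')."""
        self.visible = False
```

In a real UI, `clock` would be wall-clock time and `tick` would be driven by the render loop; the test below drives a fake clock instead.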
According to the information acquisition method provided by the embodiment of the application, by providing the two display-cancellation modes and cancelling the display of the first target information in time, the display efficiency of the application interface is improved.
Fig. 4 is a fourth schematic flowchart of an information obtaining method provided in an embodiment of the present application, and as shown in fig. 4, the method includes:
401, acquiring a first image; 402, extracting information from the first image based on the target information features to obtain first target information; and 403, saving the first target information as a target file of a preset type.
Optionally, for the description and explanation of steps 401 and 402, reference may be made to steps 101 and 102 above; the same technical effects can be achieved, and details are not repeated here to avoid repetition.
Optionally, the target file includes file information, and the file information includes at least one of: information storage type, information storage format and information content layout; the information storage type comprises a document and an image; the information storage format comprises word, excel, pdf and the like; the layout of the information content comprises a typesetting mode and the like. The file information may be set by the user or may adopt a default setting.
According to the information acquisition method provided by the embodiment of the application, the first image is acquired, the acquired first image is automatically subjected to information extraction based on the target information characteristics to obtain the first target information, the first target information is stored as the preset type of target file, and the first target information is part of information in the first image, so that a user can conveniently look up specific content.
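Saving the first target information as a preset-type target file, per the file information above (storage type, storage format, content layout), can be sketched as follows. Real format conversion (Word, Excel, PDF) is out of scope; this only shows how the file-information settings drive the result, and every name is illustrative.

```python
def save_target_file(target_info, storage_type="document", fmt="pdf"):
    """Sketch of step 403: wrap the first target information together with
    its file-information settings. `storage_type` is 'document' or 'image';
    `fmt` selects the document storage format."""
    extensions = {"word": ".docx", "excel": ".xlsx", "pdf": ".pdf"}
    ext = ".png" if storage_type == "image" else extensions[fmt]
    return {"name": "target_info" + ext,   # hypothetical naming scheme
            "type": storage_type,
            "content": target_info}
```

The settings could come from user input or from defaults, matching the statement that the file information "may be set by the user or may adopt a default setting".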
Fig. 5 is a fifth schematic flowchart of an information obtaining method provided in the embodiment of the present application, and as shown in fig. 5, the method includes:
Optionally, for the description and explanation of steps 501-504, reference may be made to steps 201-204 above; the same technical effects can be achieved, and details are not repeated here to avoid repetition. The method then continues with: 505, acquiring a second image; and 506, extracting information from the second image based on the target information features to obtain second target information.
Examples are as follows: the first image corresponds to page k of the display object (which may be a PPT or the like), and the second image corresponds to page k + x of the display object. Optionally, the display object includes m pages, n of which contain the target information feature, where m is an integer greater than or equal to 2 and n is an integer greater than or equal to 2.
Optionally, the second target information and the first target information are stored in one file together, so that a plurality of target information respectively obtained from different pages of the document are stored in a centralized manner, and centralized display and viewing are facilitated.
According to the information acquisition method provided by the embodiment of the application, for a scenario in which the scanned object includes a plurality of pages, an image of each page is acquired and image recognition is performed on it, the target information corresponding to the target information feature in each page is recognized, and the recognized target information is stored together in one file, which makes it convenient for the user to view the specific content of each page and meets the user's information acquisition requirements.
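The multi-page flow described above can be sketched as a loop over an m-page display object that gathers target information only from the n pages containing the target information feature. The `extract` callable is a hypothetical per-page extractor standing in for the image-recognition step; names are illustrative, not the patent's.

```python
def collect_targets(pages, extract):
    """Scan each page; return {page_number: target_info} where extraction succeeds."""
    results = {}
    for k, page in enumerate(pages, start=1):
        info = extract(page)
        if info:  # only pages that actually contain the target feature
            results[k] = info
    return results
```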
Optionally, after information extraction is performed on the first image based on the target information features to obtain first target information, information extraction is performed on the second image based on the target information features to obtain second target information; then, the storing manner of the first target information and the second target information may include at least one of:
Storage mode a: in a case that the first target information includes first character information and the second target information includes second character information, the first character information and the second character information are stored in the extraction order of the information to obtain a first target file; for example, the first target file may be stored in segments according to the extraction order.
Storage mode b: in a case that the first target information includes a third image and the second target information includes a fourth image, image synthesis is performed on the third image and the fourth image to obtain a second target file.
Storage mode c: in a case that the first target information includes third character information and a fifth image, the third character information and the fifth image are stored in the same file to obtain a third target file.
Storage mode d: in a case that the first target information includes fourth text information and a sixth image, and the second target information includes fifth text information and a seventh image, the fourth text information, the sixth image, the fifth text information, and the seventh image are stored in the same file to obtain a fourth target file; a file display area of the fourth target file includes a first text display area and a first image display area, the fourth text information and the fifth text information are displayed in the first text display area, and the sixth image and the seventh image are displayed in the first image display area.
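Storage modes a and d can be sketched as follows, under the assumption that extracted target information is modeled as a simple container of character segments and image identifiers (the container and function names are illustrative, not the patent's):

```python
from dataclasses import dataclass, field

@dataclass
class TargetInfo:
    """Hypothetical container for extracted target information."""
    text: list = field(default_factory=list)    # character information segments
    images: list = field(default_factory=list)  # extracted region images (ids/paths)

def store_mode_a(first: TargetInfo, second: TargetInfo) -> str:
    """Mode a: store character information segment by segment, in extraction order."""
    return "\n\n".join(first.text + second.text)

def store_mode_d(first: TargetInfo, second: TargetInfo) -> dict:
    """Mode d: one file with a text display area and an image display area."""
    return {
        "text_area": first.text + second.text,    # fourth + fifth text information
        "image_area": first.images + second.images,  # sixth + seventh image
    }
```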
The following provides examples with reference to specific extraction processes:
the information acquisition method provided by the embodiment of the present application is described by taking storage mode a as an example, where the target information feature includes a character feature. Fig. 6 is a sixth schematic flowchart of an information acquisition method provided in an embodiment of the present application; figs. 7 to 14 are, respectively, first through eighth scene diagrams of the information acquisition method provided in an embodiment of the present application. As shown in fig. 6, the method includes:
Optionally, the first input may be a click operation by the user on a target information feature option included in a tab window popped up from the edge of an application user interface on the electronic device.
Optionally, the target information feature comprises a character feature. FIG. 7 illustrates the initial interface of the "scan" mode 701 of a camera application of the electronic device; the initial interface displays a plurality of controls, and in a case that the user selects a control, the camera application executes the corresponding function of the selected control. The target control may be the scan control 702. Fig. 8 shows a tab window 801 popped up from the edge of the user interface of the electronic device; the tab window 801 includes a plurality of information feature options, and the first input by the user may be a selection operation on the information feature option corresponding to "extract underlined content" included in the tab window 801. Optionally, the user may select only one type of target information feature through the selection operation, or may select multiple information feature options. Among the information feature options, "extract underlined content" extracts content with an underline feature, "extract content with shading" extracts content with a shading feature, "extract bold font content" extracts content with a bold-font feature, and "extract italic font content" extracts content with an italic-font feature.
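The selection of character feature options above can be sketched as a filter over recognized text spans. The span dictionaries and flag names are illustrative assumptions about what an OCR stage might report, not the application's actual format:

```python
# Map each user-selectable character feature option to a predicate on a span.
FEATURE_TESTS = {
    "underline": lambda span: span.get("underlined", False),
    "shading":   lambda span: span.get("shaded", False),
    "bold":      lambda span: span.get("bold", False),
    "italic":    lambda span: span.get("italic", False),
}

def extract_by_features(spans, selected_features):
    """Keep spans carrying at least one of the user-selected character features."""
    return [
        span["text"]
        for span in spans
        if any(FEATURE_TESTS[f](span) for f in selected_features)
    ]
```

Because the predicate table is keyed by option name, selecting multiple options (as the text allows) simply widens the filter.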
Optionally, the third input is a selection operation by the user on a target control of the camera application on the electronic device, and the selection operation may be a click operation, a press operation, a touch operation, or the like.
Optionally, fig. 9 shows the shooting preview interface after the electronic device starts the "scan" mode, which the user can start by long-pressing the scan control 902; a prompt window 901 is displayed on the user interface, and "scanning …" is displayed in the prompt window 901. Fig. 10 shows a shooting preview interface in which the text information "first page first segment", "first page second segment", and "first page third segment" is underlined and therefore has the target information feature of "extract underlined content". The user clicks the scan control 902 of the camera application; the electronic device shoots the first scanning object corresponding to the shooting preview interface to obtain a shot first image, identifies the text information in the first image, and determines the text information "first page first segment", "first page second segment", and "first page third segment" as the first target information.
Optionally, after the text information with the underline feature in fig. 10 ("first page first segment", "first page second segment", and "first page third segment") is identified, as shown in fig. 11, a recognition preview frame 1101 is set on the scan interface, and the first target information is displayed in the recognition preview frame 1101.
Optionally, the second input may be a click operation of the user on the identified preview box, or may also be a click operation of the user on a target control on the identified preview box.
Step 608, in response to the second input, the first target information displayed in the recognition preview frame is edited.
Optionally, fig. 12 shows a scenario in which the first target information displayed in the recognition preview frame 1101 is edited. When checking the first target information shown in fig. 12, the user finds that one character has been recognized incorrectly (the word "none" should be "paragraph"); therefore, the user edits the first target information displayed in the recognition preview frame 1101. After the editing is completed, the recognition preview frame may no longer be displayed on the scanning interface.
Step 609, in a case that the display duration of the recognition preview frame on the scanning interface reaches a target duration, the recognition preview frame is no longer displayed on the scanning interface.
Alternatively, the fifth input may be a click operation of the target control of the camera application by the user again.
Optionally, the fifth input may be a click operation by the user on the "scan" control of the camera application, received again after the document is turned from the current page (for example, page 1); at this time, a second scanning object (for example, page 2 of the document) corresponds to the preview content in the scanning interface. In response to the fifth input, the second scanning object (for example, page 2) corresponding to the preview content in the scanning interface is shot to acquire a second image. FIG. 13 illustrates a scene in which the user again clicks the scan control 902 of the camera application to shoot the second scanning object; image recognition is performed on the second image based on the target information feature, and the recognized second target information is acquired and cached.
Optionally, a recognition preview frame is set on the scanning interface, and the second target information and the first target information are displayed simultaneously in the recognition preview frame; the second target information and the first target information are stored together in one file. Fig. 14 illustrates a scenario in which the second target information and the first target information are displayed within a recognition preview frame 1401. In a case that the display duration of the recognition preview frame 1401 on the scan interface reaches the target duration, the recognition preview frame 1401 is no longer displayed on the scan interface. After the user has performed the information acquisition operation on all pages of the document, the user may long-press the scan control 902 to exit the "scan" mode, and the user interface of the camera application is restored to the initial interface of the "scan" mode.
According to the information acquisition method provided by the embodiment of the application, the first scanning object is shot based on the user instruction, and the first image corresponding to the first scanning object is subjected to image recognition based on the character characteristics, so that the specific content required by the user in the first scanning object is automatically acquired, the user does not need to enter an album to manually cut the shot image, and the user can conveniently check the specific content.
The information acquisition method provided by the embodiment of the present application is described below by taking an example in which the target information feature includes an image feature. Fig. 15 is a seventh schematic flowchart of an information acquisition method provided in an embodiment of the present application; figs. 16 to 20 are, respectively, ninth through thirteenth scene diagrams of the information acquisition method provided in an embodiment of the present application. As shown in fig. 15, the method includes:
in step 1501, a first input from a user is received.
Optionally, the first input may be a click operation by the user on a target information feature option included in a tab window popped up from the edge of an application user interface on the electronic device.
Optionally, fig. 16 shows a plurality of information feature options included in a tab window 1601 popped up from the edge of the user interface of the electronic device; the first input by the user may be a selection operation on the information feature option corresponding to "extract a framed image" included in the pop-up tab window 1601. Optionally, only one type of target information feature may be selected through the selection operation, or multiple information feature options may be selected. Among the information feature options, "extract a framed image" extracts an image with a frame feature, "extract a watermarked image" extracts an image with a watermark feature, "extract a two-dimensional code image" extracts an image with a two-dimensional-code feature, and "extract an image with a ratio of 4:3" extracts an image with a length-width ratio of 4:3. FIG. 17 illustrates the shooting preview interface after the electronic device has turned on the "scan" mode, which the user can turn on by long-pressing the scan control 902; a prompt window 901 is displayed on the user interface, and "scanning …" is displayed in the prompt window 901.
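The "ratio of 4:3" option above reduces to an aspect-ratio test on each detected image region. A minimal sketch follows; the tolerance parameter is an assumption added so that slightly imperfect crops still match, and is not part of the patent's description:

```python
def matches_ratio(width: int, height: int, ratio=(4, 3), tol: float = 0.02) -> bool:
    """Check whether a detected image region matches the selected aspect ratio."""
    target = ratio[0] / ratio[1]
    # Accept regions whose width/height deviates from the target by at most
    # `tol` (relative), so small cropping errors do not reject a 4:3 image.
    return abs(width / height - target) <= tol * target
```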
In step 1503, a third input from the user is received.
Optionally, the third input is a selection operation by the user on a target control of the camera application on the electronic device, and the selection operation may be a click operation, a press operation, a touch operation, or the like.
Optionally, the third input by the user may be an operation of clicking the scan control 902 as shown in fig. 18, so as to shoot the first scanning object and acquire the first image.
Optionally, picture 4 in fig. 18 has a "framed" feature; in the case that the first input of the user is that the user selects the information feature option corresponding to "extract the framed image" in the pop-up tab window 1601, the area image captured in step 1505 is picture 4, that is, the first target information is picture 4.
And step 1506, setting an identification preview frame on the scanning interface, and displaying the first target information in the identification preview frame.
Alternatively, fig. 19 shows a scene in which first target information, which is an area image, is displayed within the recognition preview frame 1401.
Optionally, the second input may be a click operation of the user on the identified preview box, or may also be a click operation of the user on a target control on the identified preview box.
Optionally, the editing means comprises at least one of: rotation, filter, graffiti and cutting.
In step 1510, a fifth input from the user is received.
Alternatively, the fifth input may be a click operation of the target control of the camera application by the user again.
Step 1512, performing image recognition on the second image based on the target information features, acquiring and caching the recognized second target information, and storing the second target information and the first target information in one file together.
Optionally, the second target information and the first target information may be stored in the same document; since the second target information and the first target information are both images in a case that the target information feature is an image feature, the second target information and the first target information may also be directly spliced into a single image.
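The splicing just described can be sketched as vertical stacking of two images. To keep the sketch self-contained, an image is represented here as a list of pixel rows (a real implementation would use an image library); the padding behavior for mismatched widths is an illustrative assumption:

```python
def splice_vertically(img_a, img_b, pad_value=255):
    """Stack two images (lists of pixel rows) top-to-bottom, padding narrower rows."""
    width = max(len(img_a[0]), len(img_b[0]))
    def pad(img):
        # Extend every row to the common width with the padding value (white).
        return [row + [pad_value] * (width - len(row)) for row in img]
    return pad(img_a) + pad(img_b)
```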
According to the information acquisition method provided by the embodiment of the application, the first scanning object is shot based on the user instruction, and the first image corresponding to the first scanning object is subjected to image recognition based on the image characteristics, so that the specific content in the first scanning object is acquired, a user can directly view the specific content required by the user in the first scanning object, and the user can conveniently look up and store the file of the specific content.
Optionally, taking storage mode d as an example, in which the target information feature includes both a character feature and an image feature, fig. 20 shows that the first scanning object includes both a character feature (e.g., underlined text) and an image feature (e.g., a framed image); the first target information includes fourth text information and a sixth image 2001, and the fourth text information is:
"first page first paragraph
First page second section
First page third section ".
And simultaneously storing the fourth text information and the sixth image in a fourth target file, wherein the fourth target file can be a document or an image.
It should be noted that, in the information acquisition method provided in the embodiments of the present application, the execution subject may be an information acquisition apparatus, such as a mobile phone, or a control module in the information acquisition apparatus for executing the information acquisition method. In the embodiments of the present application, the information acquisition apparatus executing the information acquisition method is taken as an example to describe the information acquisition apparatus provided in the embodiments of the present application.
The embodiment of the application also provides an information acquisition device. Fig. 21 is a schematic structural diagram of an information acquisition apparatus according to an embodiment of the present application, and as shown in fig. 21, the information acquisition apparatus 2100 includes: a first acquisition module 2101 and a first extraction module 2102; wherein,
a first obtaining module 2101 configured to obtain a first image;
a first extraction module 2102, configured to perform information extraction on the first image based on a target information feature to obtain first target information;
wherein the first target information is partial information in the first image.
According to the information acquisition device provided by the embodiment of the application, the first image is acquired, information extraction is carried out on the first image based on the target information characteristics, partial information required by a user in the first image can be automatically extracted according to the target information characteristics, and the user can conveniently look up and store files with specific contents.
Optionally, the apparatus further comprises:
the receiving module is used for receiving a first input of a user;
a determination module to determine the target information feature in response to the first input;
wherein the target information characteristics include at least one of: character features, image features; the first target information includes at least one of: character information and image information.
Optionally, the first extraction module 2102 is specifically configured to:
under the condition that the target information features are the character features, extracting character information with the character features in the first image to obtain first target information;
and under the condition that the target information feature is the image feature, extracting the image information with the image feature in the first image to obtain the first target information.
Optionally, the apparatus further comprises:
the display module is used for displaying the first target information on a shooting preview interface;
and the updating module is used for responding to a second input of the first target information by the user under the condition of receiving the second input of the first target information, and updating the information content of the first target information.
Optionally, the apparatus further comprises:
and the storage module is used for storing the first target information into a preset type of target file.
Optionally, the apparatus further comprises:
the second acquisition module is used for acquiring a second image;
the second extraction module is used for extracting information of the second image based on the target information characteristics to obtain second target information;
wherein the second target information is partial information in the second image.
The information acquisition device in the embodiment of the present application may be a device, or may be a component, an integrated circuit, or a chip in a terminal. The device can be mobile electronic equipment or non-mobile electronic equipment. By way of example, the mobile electronic device may be a mobile phone, a tablet computer, a notebook computer, a palm top computer, a vehicle-mounted electronic device, a wearable device, an ultra-mobile personal computer (UMPC), a netbook or a Personal Digital Assistant (PDA), and the like, and the non-mobile electronic device may be a server, a Network Attached Storage (NAS), a Personal Computer (PC), a Television (TV), a teller machine or a self-service machine, and the like, and the embodiments of the present application are not particularly limited.
The information acquisition device in the embodiment of the present application may be a device having an operating system. The operating system may be an Android (Android) operating system, an IOS operating system, or other possible operating systems, which is not specifically limited in the embodiments of the present application.
The information acquisition device provided in the embodiment of the present application can implement each process implemented by the information acquisition device in the method embodiments of fig. 1 to fig. 20, and can achieve the same technical effect, and for avoiding repetition, details are not repeated here.
As shown in fig. 22, an electronic device 2200 provided in this embodiment of the present application includes a processor 2201, a memory 2202, and a program or an instruction stored in the memory 2202 and executable on the processor 2201, where the program or the instruction when executed by the processor 2201 implements each process of the above-mentioned information obtaining method embodiment, and can achieve the same technical effect, and no further description is provided here to avoid repetition.
It should be noted that the electronic device in the embodiment of the present application includes the mobile electronic device and the non-mobile electronic device described above.
Optionally, fig. 23 is a schematic diagram of a hardware structure of an electronic device implementing the embodiment of the present application.
The electronic device 2300 includes, but is not limited to: radio frequency unit 2301, network module 2302, audio output unit 2303, input unit 2304, sensor 2305, display unit 2306, user input unit 2307, interface unit 2308, memory 2309, and processor 2310.
Those skilled in the art will appreciate that the electronic device 2300 may also include a power supply (e.g., a battery) that powers the various components; the power supply may be logically coupled to the processor 2310 via a power management system, so that functions such as charging, discharging, and power consumption management are handled by the power management system. The electronic device structure shown in fig. 23 does not constitute a limitation of the electronic device; the electronic device may include more or fewer components than those shown, combine some components, or arrange components differently, and details are not repeated here.
The processor 2310 is configured to obtain a first image; extracting information of the first image based on the target information characteristics to obtain first target information; wherein the first target information is partial information in the first image.
According to the electronic device provided by the embodiment of the application, the first image is obtained, information extraction is carried out on the first image based on the target information characteristics, partial information required by a user in the first image can be automatically extracted according to the target information characteristics, and the user can conveniently look up and store files with specific contents.
Optionally, the user input unit 2307, further configured to receive a first input by a user;
a processor 2310 further configured to determine, in response to the first input, the target information characteristic; wherein the target information characteristics include at least one of: character features, image features; the first target information includes at least one of: character information and image information.
The electronic device provided by the embodiment of the application allows the user to select the specific type of the target information feature according to the user's own requirements and the actual scenario; on this basis, the first scanning object is shot based on the user's instruction, the specific content in the first scanning object can be automatically extracted according to the target information feature, and it is convenient for the user to view the specific content.
Optionally, the processor 2310 is further configured to:
under the condition that the target information features are the character features, extracting character information with the character features in the first image to obtain first target information;
and under the condition that the target information feature is the image feature, extracting the image information with the image feature in the first image to obtain the first target information.
Optionally, the character features include at least one of: character marking characteristics, character attribute characteristics and character type characteristics;
the image features include at least one of: image size features, image background features, image foreground features.
Optionally, a display unit 2306, configured to display the first target information on a shooting preview interface;
the processor 2310 is further configured to, in a case that a second input of the first target information by the user is received, update information content of the first target information in response to the second input.
According to the electronic equipment provided by the embodiment of the application, the first target information in the first image is automatically extracted according to the target information characteristics and displayed on the shooting preview interface, and the information content of the first target information can be updated by a user, so that the user can check and verify the information content in real time, the reliability of the identification result is improved, and the information acquisition requirement of the user is met.
Optionally, the memory 2309 is further configured to store the first target information as a preset type of target file.
According to the electronic device provided by the embodiment of the application, the first image is obtained, the obtained first image is automatically subjected to information extraction based on the target information characteristics to obtain the first target information, the first target information is stored as the preset type of target file, and the first target information is part of information in the first image, so that a user can conveniently look up specific content.
Optionally, the processor 2310 is further configured to obtain a second image; extracting information of the second image based on the target information characteristics to obtain second target information; wherein the second target information is partial information in the second image.
According to the electronic equipment provided by the embodiment of the application, for a scene with a scanning object comprising a plurality of pages, the images of the pages are respectively acquired and image recognition is carried out, the target information corresponding to the target information characteristics in each page is recognized, the recognized target information is stored in one file, a user can conveniently check the specific content in each page, and the information acquisition requirement of the user can be met.
Optionally, the memory 2309, further to:
under the condition that the first target information comprises first character information and the second target information comprises second character information, storing the first character information and the second character information according to the extraction sequence of the information to obtain a first target file;
under the condition that the first target information comprises a third image and the second target information comprises a fourth image, carrying out image synthesis on the third image and the fourth image to obtain a second target file;
under the condition that the first target information comprises third text information and a fifth image, storing the third text information and the fifth image in the same file to obtain a third target file;
and under the condition that the first target information comprises fourth text information and a sixth image, and the second target information comprises fifth text information and a seventh image, storing the fourth text information, the sixth image, the fifth text information and the seventh image in the same file to obtain a fourth target file, wherein a file display area of the fourth target file comprises a first text display area and a first image display area, the fourth text information and the fifth text information are displayed in the first text display area, and the sixth image and the seventh image are displayed in the first image display area.
It should be understood that, in the embodiment of the present application, the input unit 2304 may include a graphics processing unit (GPU) 23041 and a microphone 23042; the graphics processor 23041 processes image data of still pictures or videos obtained by an image capturing device (such as a camera) in a video capturing mode or an image capturing mode. The display unit 2306 may include a display panel 23061, and the display panel 23061 may be configured in the form of a liquid crystal display, an organic light-emitting diode, or the like. The user input unit 2307 includes a touch panel 23071 and other input devices 23072. The touch panel 23071, also referred to as a touch screen, may include two parts: a touch detection device and a touch controller. Other input devices 23072 may include, but are not limited to, a physical keyboard, function keys (e.g., volume control keys, switch keys, etc.), a trackball, a mouse, and a joystick, which are not described in detail here. The memory 2309 may be used to store software programs and various data, including but not limited to application programs and an operating system. The processor 2310 may integrate an application processor and a modem processor; the application processor primarily handles the operating system, user interface, application programs, etc., and the modem processor primarily handles wireless communication. It is to be appreciated that the modem processor may also be separate from, rather than integrated into, the processor 2310.
The embodiment of the present application further provides a readable storage medium, where a program or an instruction is stored on the readable storage medium, and when the program or the instruction is executed by a processor, the program or the instruction implements each process of the above-mentioned information obtaining method embodiment, and can achieve the same technical effect, and in order to avoid repetition, details are not repeated here.
The processor is the processor in the electronic device described in the above embodiment. The readable storage medium includes a computer-readable storage medium, such as a read-only memory (ROM), a random access memory (RAM), a magnetic disk, or an optical disk.
The embodiment of the present application further provides a chip, where the chip includes a processor and a communication interface, the communication interface is coupled to the processor, and the processor is configured to run a program or an instruction to implement the processes of the above information acquisition method embodiment and achieve the same technical effect; details are not repeated here to avoid repetition.
It should be understood that the chip mentioned in the embodiments of the present application may also be referred to as a system-on-chip, a system chip, a chip system, or a system-on-a-chip.
The embodiment of the present application further provides a computer program product, which includes a computer program; when the computer program is executed by a processor, the processes of the above information acquisition method embodiment are implemented, and the same technical effect can be achieved; to avoid repetition, details are not repeated here.
It should be noted that, in this document, the terms "comprises," "comprising," or any other variation thereof are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements includes not only those elements but may also include other elements not expressly listed or inherent to such a process, method, article, or apparatus. Without further limitation, an element preceded by the phrase "comprising a …" does not exclude the presence of other identical elements in the process, method, article, or apparatus that comprises the element. Further, it should be noted that the scope of the methods and apparatuses of the embodiments of the present application is not limited to performing the functions in the order illustrated or discussed; the functions may also be performed in a substantially simultaneous manner or in a reverse order depending on the functions involved. For example, the described methods may be performed in an order different from the described order, and various steps may be added, omitted, or combined. In addition, features described with reference to certain examples may be combined in other examples.
Through the above description of the embodiments, those skilled in the art will clearly understand that the methods of the above embodiments can be implemented by software plus a necessary general-purpose hardware platform, and certainly also by hardware, although in many cases the former is the preferred implementation. Based on such understanding, the technical solutions of the present application may be embodied in the form of a computer software product that is stored in a storage medium (such as a ROM/RAM, a magnetic disk, or an optical disk) and includes instructions for enabling a terminal (such as a mobile phone, a computer, a server, or a network device) to execute the methods according to the embodiments of the present application.
While the present embodiments have been described with reference to the accompanying drawings, it is to be understood that the invention is not limited to the precise embodiments described above, which are meant to be illustrative and not restrictive, and that various changes may be made therein by those skilled in the art without departing from the spirit and scope of the invention as defined by the appended claims.
Claims (15)
1. An information acquisition method, comprising:
acquiring a first image;
extracting information from the first image based on a target information feature to obtain first target information;
wherein the first target information is partial information in the first image.
2. The information acquisition method according to claim 1, wherein before the acquiring of the first image, the method further comprises:
receiving a first input of a user;
determining the target information feature in response to the first input;
wherein the target information feature comprises at least one of: a character feature or an image feature; and the first target information comprises at least one of: character information or image information.
3. The information acquisition method according to claim 2, wherein the extracting information from the first image based on the target information feature to obtain the first target information comprises:
under the condition that the target information feature is the character feature, extracting the character information with the character feature in the first image to obtain the first target information;
and under the condition that the target information feature is the image feature, extracting the image information with the image feature in the first image to obtain the first target information.
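The two extraction branches of claim 3 can be sketched in Python as follows. Everything here is an illustrative assumption rather than the claimed implementation: a captured image is modelled as pre-located regions tagged with their features, standing in for the OCR and image-segmentation steps a real device would perform, and all names (`Region`, `CapturedImage`, `extract_target_info`) are hypothetical.

```python
from dataclasses import dataclass, field

# Hypothetical stand-ins: the claims do not specify data structures or
# extraction algorithms, so a captured image is modelled as regions that
# were already located, each tagged with the features it exhibits.
@dataclass
class Region:
    kind: str        # "character" or "image"
    features: set    # e.g. {"underlined"}, {"foreground"}
    content: str     # recognised text, or a reference to an image crop

@dataclass
class CapturedImage:
    regions: list = field(default_factory=list)

def extract_target_info(image: CapturedImage, target_feature: tuple) -> list:
    """Return only the regions matching the target information feature.

    target_feature is (kind, feature_name): a character feature selects
    text regions, an image feature selects image regions, mirroring the
    two branches of claim 3.
    """
    kind, feature = target_feature
    return [r.content for r in image.regions
            if r.kind == kind and feature in r.features]

# A frame containing underlined text, plain text, and a foreground object.
frame = CapturedImage(regions=[
    Region("character", {"underlined"}, "Meeting at 3 pm"),
    Region("character", set(), "page footer"),
    Region("image", {"foreground"}, "object_crop.png"),
])
print(extract_target_info(frame, ("character", "underlined")))
print(extract_target_info(frame, ("image", "foreground")))
```

Selecting a character feature returns only the matching text (the character-feature branch); selecting an image feature returns only the matching image regions.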
4. The information acquisition method according to claim 2, wherein the character feature comprises at least one of: a character marking feature, a character attribute feature, or a character type feature;
and the image feature comprises at least one of: an image size feature, an image background feature, or an image foreground feature.
5. The information acquisition method according to claim 1, characterized in that the method further comprises:
displaying the first target information on a shooting preview interface;
and in a case that a second input of the user on the first target information is received, updating the information content of the first target information in response to the second input.
6. The information acquisition method according to claim 1, wherein after extracting information from the first image based on the target information feature to obtain first target information, the method further comprises:
and storing the first target information as a preset type of target file.
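Claim 6's step of storing the first target information as a preset type of target file might look like the following sketch. The concrete conventions are assumptions, since the claim leaves the preset type open: text-only information is written as plain `.txt`, and anything involving images as a `.json` document, with `save_target_info` as a hypothetical name.

```python
import json
from pathlib import Path

def save_target_info(texts, images, out_dir="."):
    """Persist extracted target information as a file of a preset type.

    Assumed convention (not from the claim): text-only information is
    saved as plain .txt; anything involving images is saved as a .json
    document recording both the text and image parts.
    """
    out = Path(out_dir)
    if texts and not images:
        path = out / "target.txt"
        path.write_text("\n".join(texts), encoding="utf-8")
    else:
        path = out / "target.json"
        path.write_text(json.dumps({"texts": texts, "images": images}),
                        encoding="utf-8")
    return path
```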
7. The information acquisition method according to claim 2, wherein after extracting information from the first image based on the target information feature to obtain first target information, the method further comprises:
acquiring a second image;
extracting information from the second image based on the target information feature to obtain second target information;
wherein the second target information is partial information in the second image.
8. The information acquisition method according to claim 7, wherein after obtaining the second target information, the method further comprises:
under the condition that the first target information comprises first character information and the second target information comprises second character information, storing the first character information and the second character information according to the extraction sequence of the information to obtain a first target file;
under the condition that the first target information comprises a third image and the second target information comprises a fourth image, carrying out image synthesis on the third image and the fourth image to obtain a second target file;
under the condition that the first target information comprises third text information and a fifth image, storing the third text information and the fifth image in the same file to obtain a third target file;
and under the condition that the first target information comprises fourth text information and a sixth image, and the second target information comprises fifth text information and a seventh image, storing the fourth text information, the sixth image, the fifth text information and the seventh image in the same file to obtain a fourth target file, wherein a file display area of the fourth target file comprises a first text display area and a first image display area, the fourth text information and the fifth text information are displayed in the first text display area, and the sixth image and the seventh image are displayed in the first image display area.
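The four storage cases of claim 8 can be summarised in one small merging routine. This is a hypothetical sketch: target information is reduced to a `(texts, images)` tuple and a "file" to a plain dict with text and image display areas, since the claim fixes no file format.

```python
# Hypothetical sketch of the four storage cases in claim 8.
def merge_target_info(first, second):
    texts1, images1 = first
    texts2, images2 = second
    texts = texts1 + texts2      # text kept in extraction order (case 1)
    images = images1 + images2   # images collected together (case 2)
    if texts and images:
        # Cases 3 and 4: one file with separate text and image display areas.
        return {"text_area": texts, "image_area": images}
    if texts:
        return {"text_area": texts}
    return {"image_area": images}

# Case 4: both extractions yield text information and an image.
f = merge_target_info((["fourth text"], ["sixth.png"]),
                      (["fifth text"], ["seventh.png"]))
print(f)
```

With text and an image on both sides (case 4), the result is a single file whose text area holds the fourth and fifth text information in order and whose image area holds the sixth and seventh images.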
9. An information acquisition apparatus characterized by comprising:
the first acquisition module is used for acquiring a first image;
the first extraction module is used for extracting information from the first image based on a target information feature to obtain first target information;
wherein the first target information is partial information in the first image.
10. The information acquisition apparatus according to claim 9, characterized in that the apparatus further comprises:
the receiving module is used for receiving a first input of a user;
the determination module is used for determining the target information feature in response to the first input;
wherein the target information feature comprises at least one of: a character feature or an image feature; and the first target information comprises at least one of: character information or image information.
11. The information acquisition apparatus according to claim 10, wherein the first extraction module is specifically configured to:
under the condition that the target information features are the character features, extracting character information with the character features in the first image to obtain first target information;
and under the condition that the target information feature is the image feature, extracting the image information with the image feature in the first image to obtain the first target information.
12. The information acquisition apparatus according to claim 9, characterized in that the apparatus further comprises:
the display module is used for displaying the first target information on a shooting preview interface;
and the updating module is used for, in a case that a second input of the user on the first target information is received, updating the information content of the first target information in response to the second input.
13. The information acquisition apparatus according to claim 9, characterized in that the apparatus further comprises:
and the storage module is used for storing the first target information as a preset type of target file.
14. The information acquisition apparatus according to claim 10, characterized in that the apparatus further comprises:
the second acquisition module is used for acquiring a second image;
the second extraction module is used for extracting information from the second image based on the target information feature to obtain second target information;
wherein the second target information is partial information in the second image.
15. An electronic device comprising a processor, a memory, and a program or instructions stored on the memory and executable on the processor, the program or instructions when executed by the processor implementing the steps of the information acquisition method according to any one of claims 1 to 8.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202111034064.XA CN113835598A (en) | 2021-09-03 | 2021-09-03 | Information acquisition method and device and electronic equipment |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202111034064.XA CN113835598A (en) | 2021-09-03 | 2021-09-03 | Information acquisition method and device and electronic equipment |
Publications (1)
Publication Number | Publication Date |
---|---|
CN113835598A true CN113835598A (en) | 2021-12-24 |
Family
ID=78962082
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202111034064.XA Pending CN113835598A (en) | 2021-09-03 | 2021-09-03 | Information acquisition method and device and electronic equipment |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN113835598A (en) |
Citations (11)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20140267796A1 (en) * | 2013-03-14 | 2014-09-18 | Samsung Electronics Co., Ltd. | Application information processing method and apparatus of mobile terminal |
CN106170799A (en) * | 2014-01-27 | 2016-11-30 | Koninklijke Philips N.V. | Extracting information from an image and including the information in a clinical report |
CN109254712A (en) * | 2018-09-30 | 2019-01-22 | 联想(北京)有限公司 | Information processing method and electronic equipment |
US20190197308A1 (en) * | 2017-12-22 | 2019-06-27 | Google Llc | Graphical user interface created via inputs from an electronic document |
CN110312036A (en) * | 2019-06-27 | 2019-10-08 | Vivo Mobile Communication Co., Ltd. | Content sending and displaying method, and terminal |
CN111079503A (en) * | 2019-08-02 | 2020-04-28 | 广东小天才科技有限公司 | Character recognition method and electronic equipment |
CN111104927A (en) * | 2019-12-31 | 2020-05-05 | 维沃移动通信有限公司 | Target person information acquisition method and electronic equipment |
CN111353422A (en) * | 2020-02-27 | 2020-06-30 | 维沃移动通信有限公司 | Information extraction method and device and electronic equipment |
CN112000834A (en) * | 2020-08-26 | 2020-11-27 | 北京百度网讯科技有限公司 | Document processing method, device, system, electronic equipment and storage medium |
CN112446259A (en) * | 2019-09-02 | 2021-03-05 | 深圳中兴网信科技有限公司 | Image processing method, device, terminal and computer readable storage medium |
CN113194024A (en) * | 2021-03-22 | 2021-07-30 | 维沃移动通信(杭州)有限公司 | Information display method and device and electronic equipment |
Cited By (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN115543161A * | 2022-11-04 | 2022-12-30 | Guangzhou Baolun Electronics Co., Ltd. | Matting method and device suitable for whiteboard all-in-one machine |
CN115543161B * | 2022-11-04 | 2023-08-15 | Guangdong Baolun Electronics Co., Ltd. | Image matting method and device suitable for whiteboard integrated machine |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
US20120131520A1 (en) | Gesture-based Text Identification and Selection in Images | |
US8448061B1 (en) | User interfaces and methods to create electronic documents with forms implementing content input fields | |
CN114302009A (en) | Video processing method, video processing device, electronic equipment and medium | |
CN115454365A (en) | Picture processing method and device, electronic equipment and medium | |
CN113794831B (en) | Video shooting method, device, electronic equipment and medium | |
CN111638839A (en) | Screen capturing method and device and electronic equipment | |
CN111310747A (en) | Information processing method, information processing apparatus, and storage medium | |
US20240061990A1 (en) | Document Generation Method and Electronic Device and Non-transitory Readable Storage Medium | |
US7400785B2 (en) | Systems and methods for associating images | |
CN110245572A (en) | Region content identification method, device, computer equipment and storage medium | |
CN113835598A (en) | Information acquisition method and device and electronic equipment | |
CN111639474A (en) | Document style reconstruction method and device and electronic equipment | |
CN111724455A (en) | Image processing method and electronic device | |
CN113163256B (en) | Method and device for generating operation flow file based on video | |
CN114679546A (en) | Display method and device, electronic equipment and readable storage medium | |
CN113253904A (en) | Display method, display device and electronic equipment | |
CN113436297A (en) | Picture processing method and electronic equipment | |
CN113268961A (en) | Travel note generation method and device | |
CN113360684A (en) | Picture management method and device and electronic equipment | |
CN111796733A (en) | Image display method, image display device and electronic equipment | |
CN113805709B (en) | Information input method and device | |
CN115278378B (en) | Information display method, information display device, electronic apparatus, and storage medium | |
CN114519859A (en) | Text recognition method, text recognition device, electronic equipment and medium | |
CN114995698A (en) | Image processing method and device | |
CN117311884A (en) | Content display method, device, electronic equipment and readable storage medium |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | | |
SE01 | Entry into force of request for substantive examination | | |