CN109085982B - Content identification method and device and mobile terminal - Google Patents


Info

Publication number
CN109085982B
CN109085982B (Application CN201810619031.3A)
Authority
CN
China
Prior art keywords
frame
touch
identification
content
user interface
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201810619031.3A
Other languages
Chinese (zh)
Other versions
CN109085982A (en)
Inventor
揭骏仁
林建华
Current Assignee
Guangdong Oppo Mobile Telecommunications Corp Ltd
Original Assignee
Guangdong Oppo Mobile Telecommunications Corp Ltd
Priority date
Filing date
Publication date
Application filed by Guangdong Oppo Mobile Telecommunications Corp Ltd filed Critical Guangdong Oppo Mobile Telecommunications Corp Ltd
Priority to CN201810619031.3A priority Critical patent/CN109085982B/en
Publication of CN109085982A publication Critical patent/CN109085982A/en
Application granted granted Critical
Publication of CN109085982B publication Critical patent/CN109085982B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/048Interaction techniques based on graphical user interfaces [GUI]
    • G06F3/0487Interaction techniques based on graphical user interfaces [GUI] using specific features provided by the input device, e.g. functions controlled by the rotation of a mouse with dual sensing arrangements, or of the nature of the input device, e.g. tap gestures based on pressure sensed by a digitiser
    • G06F3/0488Interaction techniques based on graphical user interfaces [GUI] using specific features provided by the input device, e.g. functions controlled by the rotation of a mouse with dual sensing arrangements, or of the nature of the input device, e.g. tap gestures based on pressure sensed by a digitiser using a touch-screen or digitiser, e.g. input of commands through traced gestures
    • G06F3/04883Interaction techniques based on graphical user interfaces [GUI] using specific features provided by the input device, e.g. functions controlled by the rotation of a mouse with dual sensing arrangements, or of the nature of the input device, e.g. tap gestures based on pressure sensed by a digitiser using a touch-screen or digitiser, e.g. input of commands through traced gestures for inputting data by handwriting, e.g. gesture or text

Landscapes

  • Engineering & Computer Science (AREA)
  • General Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Human Computer Interaction (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • User Interface Of Digital Computer (AREA)

Abstract

The embodiments of the present application disclose a content identification method, a content identification device, and a mobile terminal, relating to the technical field of mobile terminals. The method comprises: when an identification result responding to an identification touch is displayed, displaying a frame selection key on the display screen corresponding to the identification result, where the identification touch is a touch operation for identifying content of a user interface; when a touch on the frame selection key is received, displaying an adjustable screenshot frame on the user interface; and identifying the content in the screenshot frame. With this method, content can be identified directly on the user interface, and the operation is simple and convenient.

Description

Content identification method and device and mobile terminal
Technical Field
The present application relates to the field of mobile terminal technologies, and in particular, to a content identification method and apparatus, and a mobile terminal.
Background
The display screen of a mobile terminal can display various contents. If a user wants to acquire detailed information about some displayed content, the corresponding content must be copied into a browser search box, which makes the operation process cumbersome.
Disclosure of Invention
In view of the above problems, the present application provides a content identification method, a content identification device, and a mobile terminal, which identify content on a user interface, simplify the identification process, and improve user experience.
In a first aspect, an embodiment of the present application provides a content identification method. The method comprises: when an identification result responding to an identification touch is displayed, displaying a frame selection key on the display screen corresponding to the identification result, where the identification touch is a touch operation for identifying content of a user interface; when a touch on the frame selection key is received, displaying an adjustable screenshot frame on the user interface; and identifying the content in the screenshot frame.
In a second aspect, an embodiment of the present application provides a content identification apparatus, comprising: a display module, configured to display a frame selection key on the display screen corresponding to an identification result when the identification result responding to an identification touch is displayed, where the identification touch is a touch operation for identifying content of a user interface; a frame selection module, configured to display an adjustable screenshot frame on the user interface when a touch on the frame selection key is received; and an identification module, configured to identify the content in the screenshot frame.
In a third aspect, an embodiment of the present application provides a mobile terminal, including a display screen, a memory and a processor, where the display screen and the memory are coupled to the processor, and the memory stores instructions, and when the instructions are executed by the processor, the processor performs the above-mentioned method.
In a fourth aspect, the present application provides a computer-readable storage medium having program code executable by a processor, the program code causing the processor to perform the above-mentioned method.
According to the content identification method, device, and mobile terminal provided herein, the frame selection key is displayed together with the identification result. When the user's touch on the frame selection key is received, the adjustable screenshot frame is displayed and the content within it is identified, so that content identification can be performed directly on the user interface with a simple and convenient operation.
Drawings
To illustrate the technical solutions in the embodiments of the present application more clearly, the drawings used in the description of the embodiments are briefly introduced below. Obviously, the drawings described below are only some embodiments of the present application; other drawings can be obtained from them by those skilled in the art without creative effort.
FIG. 1 shows a flow chart of a content identification method proposed by an embodiment of the present application;
FIG. 2 shows a first display schematic proposed by an embodiment of the present application;
FIG. 3 shows a second display schematic proposed by an embodiment of the present application;
FIG. 4 shows a flow chart of a content identification method proposed by another embodiment of the present application;
FIG. 5 shows a third display schematic proposed by an embodiment of the present application;
FIG. 6 shows a fourth display schematic proposed by an embodiment of the present application;
FIG. 7 shows a fifth display schematic proposed by an embodiment of the present application;
FIG. 8 shows a sixth display schematic proposed by an embodiment of the present application;
FIG. 9 shows a seventh display schematic proposed by an embodiment of the present application;
FIG. 10 is a functional block diagram of a content identification apparatus according to an embodiment of the present application;
FIG. 11 is a structural block diagram of a mobile terminal according to an embodiment of the present application;
FIG. 12 is a schematic structural diagram of a mobile terminal according to an embodiment of the present application;
FIG. 13 is a block diagram of a mobile terminal for performing a content identification method according to an embodiment of the present application.
Detailed Description
The technical solutions in the embodiments of the present application will be clearly and completely described below with reference to the drawings in the embodiments of the present application, and it is obvious that the described embodiments are only a part of the embodiments of the present application, and not all of the embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present application.
At present, when chatting, reading text, viewing pictures, or watching videos on a mobile terminal, a user is often interested in certain content and wants to search for more detailed information about it. The user then needs to copy or memorize the content of interest, open a browser, and paste or type it into the browser's search box to search for the details. This makes the operation process very tedious and time-consuming, and errors are easily introduced.
To simplify this search process, displayed content can be selected through technologies such as pressure sensing and then identified to obtain an identification result, improving the speed of information acquisition. However, through extensive research, the inventors found that when content is identified directly in response to the user's touch on the user interface, the identified content may not be what the user actually wants to identify, so the identification result deviates from the user's needs and the user experience is poor.
In view of the above technical problems, embodiments of the present application provide a content identification method, a content identification device, and a mobile terminal that display a frame selection key together with the identification result, so that content on the user interface can be re-framed through the frame selection key and identified again, yielding an identification result that fits the user's needs.
The content identification method, device, and mobile terminal provided by the embodiments of the present application will be described below through specific embodiments with reference to the accompanying drawings.
Referring to FIG. 1, an embodiment of the present application provides a content identification method for identifying all or part of the content in a user interface displayed on a display screen. In a specific embodiment, the method is applied to the content identification apparatus 300 shown in FIG. 10 and the corresponding mobile terminal 400 (FIGS. 11 and 12). The content identification method may include the following steps:
step S110: and when an identification result responding to identification touch is displayed, displaying a frame selection key on a display screen corresponding to the identification result, wherein the identification touch is touch operation for identifying the content of the user interface.
When a user wants to identify some content of the user interface to obtain more detailed information of the content, an identification touch can be performed on the user interface. The touch operation corresponding to the recognition touch is not limited in the embodiment of the present application, such as a long press with a single finger, a long press with two fingers, a long press with multiple fingers, a long press with a finger joint, a click with a single finger, a click with two fingers, a click with multiple fingers, a click with a finger joint, a large-area press with a single finger, a large-area press with two fingers, a large-area press with multiple fingers, a slide with a single finger, two fingers, or multiple fingers along a preset trajectory, and the like. If the touch operation is sliding according to a preset track, the sliding track can be a closed graph so as to identify the content in the closed graph.
And when receiving the identification touch control, the mobile terminal identifies the content corresponding to the identification touch control in the user interface and displays the identification result. The identification may include word segmentation and corresponding search of corresponding content, and the identification result may include one or more of word segmentation results, brief introduction and links of videos, books, people, and the like, a map of a place, a purchase channel of a commodity, schedule information, express delivery information, and the like. As shown in fig. 2, a specific recognition result display manner, the displayed recognition result may include word segmentation of corresponding content, and the user may select a word from the word segmentation result and then perform copying, full selection, translation, or search, etc.
Since the content identified in response to the recognition touch is not necessarily the content that the user wants to identify, the correspondingly obtained recognition result may not be the recognition result that the user wants to obtain. Therefore, when the display screen displays the identification result, a frame selection key may be displayed corresponding to the identification result, as shown in "click frame selection area" in fig. 2, the frame selection key is a frame selection entry, and the user may re-display the frame selection on the user interface through the frame selection key.
Step S120: when a touch on the frame selection key is received, display an adjustable screenshot frame on the user interface.
When the user triggers the displayed frame selection key, an adjustable screenshot frame is displayed on the user interface. Referring to FIG. 3, the screenshot frame K may be a rectangle as shown, or a closed figure of another shape, such as a circle, diamond, triangle, or other polygon. The user interface here is the one displayed on the display screen when the identification touch occurred.
Step S130: identify the content in the screenshot frame.
As shown in FIG. 3, the screenshot frame K frames part of the content of the user interface. The content in the screenshot frame represents the content that needs to be identified, and is therefore identified.
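As an illustrative, platform-agnostic sketch (not the patented implementation), the "content in the screenshot frame" can be modeled as the payloads of UI controls whose bounds fall entirely inside the frame rectangle. The function names `contains` and `content_in_frame` are hypothetical.

```python
def contains(frame, bounds):
    """True if a control's bounds lie entirely inside the frame rectangle.
    Rectangles are (left, top, right, bottom) tuples."""
    fl, ft, fr, fb = frame
    l, t, r, b = bounds
    return fl <= l and ft <= t and r <= fr and b <= fb

def content_in_frame(frame, controls):
    """Collect the text/picture payloads of controls fully inside the frame."""
    return [c["content"] for c in controls if contains(frame, c["bounds"])]
```

A real implementation would walk the view hierarchy instead of a flat list, but the selection criterion is the same.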
In this embodiment, when the content of the user interface is identified in response to the identification touch and an identification result is obtained, the frame selection key is displayed. When a touch on the frame selection key is received, content on the user interface is framed by the screenshot frame and the framed content is identified. Thus, if the content identified through the touch operation is not what the user actually wanted, so that the corresponding result is not the one the user wanted, the content to be identified can be re-selected on the user interface through the frame selection key.
In this embodiment, the adjustable screenshot frame can be adjusted by the user as needed, so that the content in the screenshot frame is the content the user wants to identify. Specifically, referring to FIG. 4, the method provided in this embodiment includes:
step S210: and receiving a recognition touch operation acting on the user interface.
The user can initiate a recognition touch operation for recognizing the display content on the user interface. As mentioned above, the user interface may be a chat interface, a web interface, a video interface, a display interface of various applications, and the like, and is not limited in the embodiment of the present application.
Step S220: and identifying the display content corresponding to the touch position of the identified touch operation.
In response to the identifying touch, identifying display content in the touched user interface.
As a particular embodiment, the identified display content may be all of the content in the user interface.
As a specific embodiment, the identified display content may be content corresponding to a touch position of the touch operation in the user interface. The content specifically corresponding to the touch position may be a text paragraph where the touch position is located, a picture where the touch position is located, and a control where the touch position is located. For example, as shown in fig. 5, the displayed interface is a touched user interface, a circle a represents a touch position, and a text paragraph where the touch position is located is used as the display content to be identified.
As a specific embodiment, the identified display content is the text displayed in the text control corresponding to the touch position. This specifically includes: determining the text control corresponding to the touch position, and acquiring the text in that control for identification. The text control corresponding to the touch position may be the text control closest to it.
That is, if the touch position falls on a text control, the touched text control is taken as the one to be identified. For example, in the chat interface shown in FIG. 5, circle A represents the touch position, and text control B, which carries a chat message, is the control touched at that position, so the chat message in text control B is identified.
If the touch position falls outside any text control, the text in the text control closest to the touch position can be identified. For example, in the chat interface shown in FIG. 6, circle A represents the touch position; since circle A does not touch a text control and text control B, carrying a chat message, is the closest text control, the chat message in text control B is identified.
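The closest-text-control rule described above can be sketched as follows. This is an illustrative Python model — the patent does not prescribe an implementation — and the names `rect_distance` and `pick_text_control` are hypothetical. A control containing the touch has distance 0, so the "touched control wins, otherwise nearest control wins" behavior falls out of a single minimum.

```python
def rect_distance(bounds, point):
    """Euclidean distance from a point to an axis-aligned control bounds
    (left, top, right, bottom); 0 if the point is inside the bounds."""
    x, y = point
    left, top, right, bottom = bounds
    dx = max(left - x, 0, x - right)
    dy = max(top - y, 0, y - bottom)
    return (dx * dx + dy * dy) ** 0.5

def pick_text_control(text_controls, touch):
    """Return the text control containing the touch, else the nearest one."""
    return min(text_controls, key=lambda c: rect_distance(c["bounds"], touch))
```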
The display content may be identified by existing methods such as word segmentation and semantic recognition, which are not limited here. Alternatively, the displayed content may be recognized via a background screenshot: after the content to be recognized is captured, it can be recognized through picture analysis, for example Optical Character Recognition (OCR).
Step S230: when an identification result responding to an identification touch is displayed, display a frame selection key on the display screen corresponding to the identification result, where the identification touch is a touch operation for identifying content of the user interface.
The frame selection key is displayed together with the identification result. The identification result and the frame selection key can be displayed on different adjacent cards: as shown in FIG. 2, card C1 displays the identification result and card C2 displays the frame selection key. A card is a carrier for displaying information and can be a single control or a combination of controls. In this embodiment, the information displayed by a card can be information corresponding to the identification result or the frame selection key. Different identification results of the same display content can be displayed in the same card or in different cards.
Alternatively, the identification result and the frame selection key may be displayed on the same card.
Step S240: when a touch on the frame selection key is received, display an adjustable screenshot frame on the user interface.
Step S250: identify the content in the screenshot frame.
It can be understood that, as shown in FIG. 2, the frame selection key may be a virtual key; when the user's touch on it is received, the terminal jumps to a display interface for frame selection on the touched user interface, where frame selection is implemented with the screenshot frame K.
As a specific embodiment, when the user interface displays the adjustable screenshot frame, as shown in FIG. 3, the interface displayed on the screen is the user interface corresponding to the identification touch, and the screenshot frame K is displayed on it.
As a specific embodiment, as shown in FIG. 7, when the adjustable screenshot frame is displayed, the touched user interface may be displayed at a reduced size, with the screenshot frame K displayed in the reduced user interface.
Optionally, while the screenshot frame is displayed, only the screenshot frame in the user interface may be operable; other positions may not respond to operations.
In this embodiment, the position where the screenshot frame first appears on the user interface in response to the touch on the frame selection key is not limited.
As one embodiment, the screenshot frame may be displayed at a preset position with a preset size. The preset size may be a fixed size or a size proportional to the user interface; the preset position may be a fixed position on the display screen or a position on the touched user interface.
As another embodiment, the display position of the screenshot frame may be determined from the touch position of the identification touch; specifically, the frame may be placed so that it encloses the touch position. Optionally, the screenshot frame may have a preset size, or be the smallest frame enclosing the touch area corresponding to the touch position.
In this embodiment, if the touch position falls on a control, the screenshot frame may enclose the touched control. Specifically, the control corresponding to the touch position of the identification touch may be determined, where "corresponding" means that the control's position in the user interface overlaps the touch position; an adjustable screenshot frame enclosing that control is then displayed on the user interface.
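The two placement rules just described — snap to the touched control, otherwise center a preset-size frame on the touch and keep it on screen — can be sketched as follows. This is a hypothetical model under assumed names (`initial_frame`); the preset size of 200×120 is an arbitrary illustration, not a value from the patent.

```python
def initial_frame(touch, controls, screen, preset=(200, 120)):
    """Place the screenshot frame: wrap the touched control if the touch
    overlaps one, otherwise center a preset-size frame on the touch,
    clamped to the screen. Rectangles are (left, top, right, bottom)."""
    x, y = touch
    sw, sh = screen
    for c in controls:
        l, t, r, b = c["bounds"]
        if l <= x <= r and t <= y <= b:   # touch position overlaps this control
            return (l, t, r, b)
    w, h = preset
    left = min(max(x - w // 2, 0), sw - w)  # keep the frame inside the screen
    top = min(max(y - h // 2, 0), sh - h)
    return (left, top, left + w, top + h)
```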
In this embodiment, the content framed by the screenshot frame can be identified. Identification may directly acquire the content in the frame: for example, if the frame contains a text control, the text in that control is obtained; if it contains a picture control, the picture is obtained directly; and so on. Alternatively, the framed content may be cut out as a picture: taking the edges of the screenshot frame as cutting edges, all content within the frame is captured to obtain a cropped picture, and the content in the picture is then recognized. The recognition process may first extract the content through image processing such as OCR; without limitation, any existing processing capable of extracting content such as text, pictures, or two-dimensional codes from an image can be used.
In one specific embodiment, when the screenshot frame is displayed in response to the touch on the frame selection key, all or part of the content in the frame is identified.
As a specific embodiment, when the screenshot frame is displayed in response to the touch on the frame selection key, one or more identification types are provided, where an identification type represents the kind of content to be identified, such as two-dimensional code, commodity, text, or picture. The mobile terminal can receive a target identification type selected by the user from the provided types and identify the content in the screenshot frame corresponding to that type. That is, the type selected by the user is taken as the target identification type, and the content in the frame belonging to that type is identified. FIG. 7 shows three selectable identification types: two-dimensional code, commodity, and text.
In this embodiment, different identification types may differ in what is identified: for example, text identification only parses the text in the screenshot frame, and picture identification only parses the picture in it. They may also differ in how identification is performed: text, picture, and two-dimensional code identification may be handled by a server performing the corresponding processing, while commodity identification may jump to a third-party shopping platform, such as Taobao, and transmit the content in the frame to that platform for identification. The display of results can differ as well: results for text, commodities, pictures, and the like can be displayed directly through cards as word segmentations, introductions, links, and so on, while commodity results can be displayed through the third-party shopping platform.
Specifically, the identification process may analyze the content in the screenshot frame, obtain the content corresponding to the target identification type, and identify it. For example, if the target type is two-dimensional code, the content in the frame is analyzed to locate the code, which is then decoded to obtain the information it contains. If the target type is text, operations such as word segmentation, parsing, and semantic search are performed on the text in the frame, and the result is fed back.
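The type-dependent identification above is essentially a dispatch table keyed by the target identification type. The sketch below is a hypothetical model (the names `recognize` and the handler keys are assumptions); real handlers would call an OCR service, a QR decoder, or a shopping platform.

```python
def recognize(frame_content, target_type, handlers, default="text"):
    """Dispatch the framed content to the handler for the chosen
    identification type. `handlers` maps a type name (e.g. 'text', 'qr',
    'goods') to a function; a default type is used when none is selected."""
    handler = handlers.get(target_type or default)
    if handler is None:
        return {"ok": False, "error": "unsupported identification type"}
    return {"ok": True, "result": handler(frame_content)}
```

The fallback to a default type mirrors the option, described below, of using one identification type as the default when the user selects none.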
Optionally, when the target identification type is text, the text content in the screenshot frame can be obtained by analyzing the frame; the text is then filtered to remove garbled characters, yielding valid text, which is analyzed and identified. In this embodiment, garbled characters may be any characters outside preset types: for instance, if the preset types are Chinese characters, English characters, and selected common punctuation marks, then all other characters and all punctuation outside the common set are treated as garbled.
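The whitelist filtering just described can be sketched with a character-class regex. This is an assumption-laden illustration: the exact whitelist (CJK ideographs, ASCII letters and digits, whitespace, a few punctuation marks) and the name `filter_garbled` are hypothetical, not taken from the patent.

```python
import re

# Keep CJK ideographs, ASCII letters/digits, whitespace, and a small
# whitelist of common Western and Chinese punctuation; anything else
# is treated as a garbled character and dropped.
VALID = re.compile(r"[\u4e00-\u9fffA-Za-z0-9\s.,!?;:'\"()\u3002\uff0c\uff1f\uff01-]")

def filter_garbled(text):
    """Drop characters outside the whitelist, then collapse leftover
    runs of whitespace into single spaces."""
    kept = "".join(ch for ch in text if VALID.match(ch))
    return re.sub(r"\s+", " ", kept).strip()
```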
Optionally, if the user does not select a target identification type, one type may serve as a default, and the content of the default type in the screenshot frame is identified. Alternatively, if no target type is selected, identification may be deferred until the user selects a type, after which the content corresponding to the selected type is identified.
In this embodiment, the screenshot frame is adjustable in both position and size. The mobile terminal can receive an adjustment to the size or position of the frame and identify the content in the adjusted frame.
For example, as shown in FIGS. 3 and 7, after entering the frame selection interface the user can resize the screenshot frame by pulling it in different directions; in FIG. 8, the frame K has been adjusted to be smaller than in FIG. 3. The user may also drag the frame to change its position by pressing its border or inner area; FIG. 9 shows the frame after its position has been adjusted relative to FIG. 8.
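The drag and resize gestures above reduce to rectangle arithmetic with clamping. The following is an illustrative sketch under assumed names (`clamp_frame`, `drag_frame`, `resize_frame`) and an assumed minimum frame size; it models only a bottom-right resize handle for brevity.

```python
def clamp_frame(frame, screen, min_size=40):
    """Clamp a frame to the screen and enforce a minimum width/height."""
    l, t, r, b = frame
    sw, sh = screen
    l, t = max(l, 0), max(t, 0)
    r, b = min(r, sw), min(b, sh)
    if r - l < min_size:
        r = l + min_size
    if b - t < min_size:
        b = t + min_size
    return (l, t, r, b)

def drag_frame(frame, dx, dy, screen):
    """Translate the whole frame by a drag delta, keeping it on screen."""
    l, t, r, b = frame
    w, h = r - l, b - t
    sw, sh = screen
    l = min(max(l + dx, 0), sw - w)
    t = min(max(t + dy, 0), sh - h)
    return (l, t, l + w, t + h)

def resize_frame(frame, handle_dx, handle_dy, screen, min_size=40):
    """Move the bottom-right handle by a delta, then clamp the result."""
    l, t, r, b = frame
    return clamp_frame((l, t, r + handle_dx, b + handle_dy), screen, min_size)
```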
In this embodiment, the shape of the screenshot frame may also be changed: upon receiving a shape change request from the user, the frame is changed to a circle, triangle, or any other polygon as requested. For example, a shape selection button may be provided with the frame K; when the button is pressed, selectable shapes are displayed, and if the user selects a triangle, the frame's shape is changed to a triangle.
As a specific implementation, when the screenshot frame is displayed in response to the touch on the frame selection key, the content in the frame is identified only if an adjustment to the frame is received; if no adjustment is received, no identification is performed.
In another embodiment, when the screenshot frame is displayed in response to the touch on the frame selection key, the content in the frame is identified immediately. If an adjustment to the frame is then received, the content in the adjusted frame is identified and the identification result is updated accordingly.
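The identify-immediately-then-update-on-adjustment behavior can be modeled as caching the last frame and re-running recognition only when the frame actually changes. The class name `FrameRecognizer` is hypothetical; this is a sketch of the control flow, not the patented implementation.

```python
class FrameRecognizer:
    """Run recognition when the screenshot frame is first shown, and
    re-run it only when the frame is adjusted, updating the cached result."""

    def __init__(self, recognize_fn):
        self.recognize_fn = recognize_fn  # callable: frame -> result
        self.last_frame = None
        self.result = None

    def on_frame(self, frame):
        if frame != self.last_frame:       # initial display or an adjustment
            self.last_frame = frame
            self.result = self.recognize_fn(frame)
        return self.result
```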
Optionally, in this embodiment, if identification of the content fails, a prompt message indicating the failure may be displayed.
In summary, in the content identification method provided by this embodiment, when a touch on the frame selection key is received, the frame selection interface is entered and a screenshot frame for framing content on the user interface is provided. When an adjustment to the size or position of the frame is received, the content in the adjusted frame is identified, so that the user can adjust the frame as needed, place the content to be identified within it, and obtain the desired identification result.
The embodiment of the present application further provides a content identification apparatus 300; please refer to fig. 10. The apparatus 300 includes: a display module 310, configured to display a frame selection key on the display screen alongside an identification result when the identification result responding to an identification touch is displayed, where the identification touch is a touch operation for identifying content of a user interface; a frame selection module 320, configured to display an adjustable screenshot frame on the user interface when a touch on the frame selection key is received; and an identification module 330, configured to identify content in the screenshot frame.
Optionally, the display module may be configured to display the identification result and the frame selection key on the same card, or to display them on different adjacent cards.
Optionally, in this embodiment of the present application, the apparatus may further include an operation receiving module, configured to receive the identification touch operation acting on the user interface; the identification module 330 is then configured to identify the display content corresponding to the touch position of the identification touch operation.
Optionally, the identification module 330 may include a position determination unit, configured to determine the text control corresponding to the touch position, and an identification unit, configured to acquire the text in the text control for identification.
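The position-determination logic implied here and in the claims (use the text control under the touch point if there is one, otherwise the nearest text control) might look like the following sketch. The rectangle representation and the Euclidean distance metric are assumptions for illustration:

```python
# Hypothetical sketch: pick the text control for a touch position as
# described above — the control containing the point if any, otherwise
# the nearest control by distance to its bounding rectangle.

def clamp(v, lo, hi):
    return max(lo, min(v, hi))

def distance_to_rect(x, y, rect):
    # rect = (left, top, right, bottom); distance is 0 if (x, y) is inside.
    left, top, right, bottom = rect
    dx = x - clamp(x, left, right)
    dy = y - clamp(y, top, bottom)
    return (dx * dx + dy * dy) ** 0.5

def control_for_touch(x, y, text_controls):
    # text_controls: list of (name, rect) pairs.
    # A control containing the point has distance 0, so it always wins.
    return min(text_controls, key=lambda c: distance_to_rect(x, y, c[1]))[0]
```

Because a containing rectangle yields distance zero, a single `min` over all controls covers both branches of the claim at once.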
Optionally, the apparatus may further include a type determination module, configured to receive a target identification type selected by the user from one or more identification types; the identification module is then configured to identify the content in the screenshot frame corresponding to the target identification type.
Optionally, in this embodiment of the application, an adjustment module may further be included, configured to receive an adjustment to the size or position of the screenshot frame; the identification module is configured to identify the content in the adjusted screenshot frame.
Optionally, the frame selection module may be configured to display the screenshot frame at a preset position and in a preset size.
Optionally, the apparatus may include a control determination module, configured to determine the control corresponding to the touch position of the identification touch; the frame selection module may be configured to display, on the user interface, an adjustable screenshot frame that frames the control.
Optionally, the identification module 330 includes a screenshot unit, configured to capture the content framed by the screenshot frame as a picture, and an identification unit, configured to identify the content in the picture.
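The screenshot-unit / identification-unit split described above amounts to a two-step pipeline: crop the framed region out of the screen image, then hand the crop to whichever recognizer was chosen. The 2-D-list "image" model and all names below are illustrative assumptions:

```python
# Illustrative two-step pipeline: capture the framed content as a
# picture, then identify the picture. Both steps are stand-ins.

def capture(screen, rect):
    # Crop the framed region out of a full-screen "image", modelled here
    # as a 2-D list of pixels; rect = (left, top, right, bottom).
    left, top, right, bottom = rect
    return [row[left:right] for row in screen[top:bottom]]

def identify(picture, recognizer):
    # Delegate to whichever recognizer the user chose
    # (QR code, text, commodity, ...).
    return recognizer(picture)

def recognize_frame(screen, rect, recognizer):
    return identify(capture(screen, rect), recognizer)
```

Passing the recognizer as a parameter also models the optional type-determination module: selecting a target identification type just selects which recognizer is plugged in.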
In an application scenario of this embodiment, when the user is chatting, reading text in a browser, or in another usage scene, the system may first attempt to recognize the text in the area selected by the user using its existing capabilities, such as acquiring the content of the corresponding control for recognition or performing OCR recognition, and display the result to the user. Meanwhile, a card is provided at the bottom of the screen; the card contains a frame selection key as the frame selection entry. After tapping the key, the user enters a manual screenshot mode, manually selects the area to be captured, and the captured picture is then identified. When the captured picture is identified, the user may choose two-dimensional code recognition, text recognition, article recognition, or another recognition type. Optionally, for text recognition, the parsed result may be preliminarily screened to filter out garbled codes and stray characters. If no valid text remains after filtering, the flow for "no content recognized" is entered; if valid text remains, it is passed on to text recognition, and the filtered text is recognized.
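The preliminary screening step just described (filter garbled characters, then either report that nothing was recognized or pass the surviving text on) could be sketched as follows. The character whitelist is an assumption for illustration; a real implementation would tune it per locale:

```python
import re

# Hypothetical pre-filter for OCR output as described above: strip
# characters that are likely garbling, then decide whether any valid
# text is left to pass on to text recognition.
# The whitelist (ASCII alphanumerics, CJK ideographs, whitespace,
# common punctuation) is an illustrative assumption.
VALID = re.compile(r"[0-9A-Za-z\u4e00-\u9fff\s.,!?;:'\"()\-]")

def prefilter(raw):
    return "".join(ch for ch in raw if VALID.match(ch))

def route(raw):
    text = prefilter(raw).strip()
    if not text:
        return ("no_content", None)        # "nothing recognized" flow
    return ("text_recognition", text)      # hand the valid text onward
```

For example, an OCR result consisting only of control characters and replacement characters would be routed to the "no content recognized" flow rather than displayed.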
Referring to fig. 11, based on the content identification method and apparatus above, the embodiment of the present application further provides a mobile terminal 400. As shown in fig. 11, the mobile terminal 400 includes a display screen 120, a memory 104, and a processor 102, where the display screen 120 and the memory 104 are coupled to the processor 102. The display screen 120 is used for displaying the user interface, the identification result, and the like; the memory 104 stores instructions which, when executed by the processor 102, cause the processor 102 to perform the method provided by the embodiments of the present application.
Specifically, as shown in fig. 12, the mobile terminal 400 may include an electronic body portion 10, where the electronic body portion 10 includes a housing 12 and a display screen 120 disposed on the housing 12. The housing 12 may be made of metal, such as steel or aluminum alloy. In this embodiment, the display screen 120 generally includes a display panel 111 and may also include circuitry for responding to touch operations performed on the display panel 111. The display panel 111 may be a liquid crystal display (LCD) panel, and in some embodiments, the display panel 111 is a touch screen 109.
Referring to fig. 13, in a practical application scenario, the mobile terminal 400 may be used as a smartphone, in which case the electronic body portion 10 generally further includes one or more processors 102 (only one is shown in the figure), a memory 104, an RF (Radio Frequency) module 106, an audio circuit 110, sensors 114, an input module 118, and a power module 122. It will be understood by those skilled in the art that the structure shown in fig. 13 is merely illustrative and does not limit the structure of the electronic body portion 10. For example, the electronic body portion 10 may include more or fewer components than shown in fig. 13, or have a configuration different from that shown in fig. 13.
Those skilled in the art will appreciate that all other components are peripheral devices with respect to the processor 102, and the processor 102 is coupled to these peripherals through a plurality of peripheral interfaces 124. The peripheral interface 124 may be implemented based on the following standards: Universal Asynchronous Receiver/Transmitter (UART), General Purpose Input/Output (GPIO), Serial Peripheral Interface (SPI), and Inter-Integrated Circuit (I2C), but the present application is not limited to these standards. In some examples, the peripheral interface 124 may comprise only a bus; in other examples, it may also include other elements, such as one or more controllers, for example a display controller for interfacing with the display panel 111 or a memory controller for interfacing with a memory. These controllers may also be separate from the peripheral interface 124 and integrated within the processor 102 or the corresponding peripheral.
The memory 104 may be used to store software programs and modules, and the processor 102 executes various functional applications and data processing by executing the software programs and modules stored in the memory 104. The memory 104 may include high speed random access memory, and may also include non-volatile memory, such as one or more magnetic storage devices, flash memory, or other non-volatile solid state memory. In some examples, the memory 104 may further include memory remotely located from the processor 102, which may be connected to the electronics body portion 10 or the display screen 120 via a network. Examples of such networks include, but are not limited to, the internet, intranets, local area networks, mobile communication networks, and combinations thereof.
The RF module 106 is configured to receive and transmit electromagnetic waves and to convert between electromagnetic waves and electrical signals, so as to communicate with a communication network or other devices. The RF module 106 may include various existing circuit elements for performing these functions, such as an antenna, a radio frequency transceiver, a digital signal processor, an encryption/decryption chip, a Subscriber Identity Module (SIM) card, memory, and so forth. The RF module 106 may communicate with various networks such as the internet, an intranet, or a wireless network, or communicate with other devices via a wireless network. The wireless network may comprise a cellular telephone network, a wireless local area network, or a metropolitan area network. The wireless network may use various communication standards, protocols, and technologies, including, but not limited to, Global System for Mobile Communications (GSM), Enhanced Data GSM Environment (EDGE), Wideband Code Division Multiple Access (W-CDMA), Code Division Multiple Access (CDMA), Time Division Multiple Access (TDMA), Wireless Fidelity (Wi-Fi) (e.g., IEEE 802.11a, IEEE 802.11b, IEEE 802.11g, and/or IEEE 802.11n), Voice over Internet Protocol (VoIP), Worldwide Interoperability for Microwave Access (WiMAX), any other suitable protocol for instant messaging, and even protocols that have not yet been developed.
The audio circuit 110, the speaker 101, and the microphones 103 and 105 collectively provide an audio interface between the user and the electronic body portion 10 or the display screen 120.
The sensors 114 are disposed in the electronic body portion 10 or in the display screen 120. Examples of the sensors 114 include, but are not limited to: an acceleration sensor 114F, a gyroscope 114G, a magnetometer 114H, and other sensors.
In this embodiment, the input module 118 may include the touch screen 109 disposed on the display screen 120. The touch screen 109 collects touch operations performed by the user on or near it (for example, operations made with a finger, a stylus, or any other suitable object or accessory), so that the user's touch gesture can be obtained and the corresponding connected device driven according to a preset program; the user can thus select a target area through a touch operation on the display screen. Optionally, the touch screen 109 may include a touch detection device and a touch controller. The touch detection device detects the position of the user's touch, detects the signal produced by the touch operation, and transmits the signal to the touch controller; the touch controller receives the touch information from the touch detection device, converts it into touch point coordinates, sends the coordinates to the processor 102, and can receive and execute commands sent by the processor 102. The touch detection function of the touch screen 109 may be implemented in various ways, such as resistive, capacitive, infrared, and surface acoustic wave. Besides the touch screen 109, in other variations, the input module 118 may include other input devices, such as keys 107. The keys 107 may include, for example, character keys for inputting characters and control keys for activating control functions. Examples of control keys include a "home" key and a power on/off key.
The display screen 120 is used to display information input by the user, information provided to the user, and various graphical user interfaces of the electronic body section 10, which may be composed of graphics, text, icons, numbers, video, and any combination thereof. In one example, the touch screen 109 may be disposed on the display panel 111 so as to be integral with the display panel 111.
The power module 122 is used to provide power supply to the processor 102 and other components. Specifically, the power module 122 may include a power management system, one or more power sources (e.g., batteries or ac power), a charging circuit, a power failure detection circuit, an inverter, a power status indicator light, and any other components associated with the generation, management, and distribution of power within the electronics body portion 10 or the display screen 120.
The mobile terminal 400 further comprises a locator 119, the locator 119 being configured to determine an actual location of the mobile terminal 400. In this embodiment, the locator 119 implements the positioning of the mobile terminal 400 by using a positioning service, which is understood to be a technology or a service for obtaining the position information (e.g., longitude and latitude coordinates) of the mobile terminal 400 by using a specific positioning technology and marking the position of the positioned object on an electronic map.
It should be understood that the above-described mobile terminal 400 is not limited to a smartphone terminal, but should refer to a computer device that can be used in mobility. Specifically, the mobile terminal 400 refers to a mobile computer device equipped with an intelligent operating system, and the mobile terminal 400 includes, but is not limited to, a smart phone, a smart watch, a tablet computer, and the like.
It should be noted that, in the present specification, the embodiments are all described in a progressive manner, each embodiment focuses on differences from other embodiments, and the same and similar parts among the embodiments may be referred to each other. For the device-like embodiment, since it is basically similar to the method embodiment, the description is simple, and for the relevant points, reference may be made to the partial description of the method embodiment. For any processing manner described in the method embodiment, all the processing manners may be implemented by corresponding processing modules in the apparatus embodiment, and details in the apparatus embodiment are not described again.
In the description herein, reference to the description of the term "one embodiment," "some embodiments," "an example," "a specific example," or "some examples," etc., means that a particular feature, structure, material, or characteristic described in connection with the embodiment or example is included in at least one embodiment or example of the application. In this specification, the schematic representations of the terms used above are not necessarily intended to refer to the same embodiment or example. Furthermore, the particular features, structures, materials, or characteristics described may be combined in any suitable manner in any one or more embodiments or examples. Furthermore, various embodiments or examples and features of different embodiments or examples described in this specification can be combined and combined by one skilled in the art without contradiction.
Furthermore, the terms "first" and "second" are used for descriptive purposes only and are not to be construed as indicating or implying relative importance or implicitly indicating the number of technical features indicated. Thus, a feature defined as "first" or "second" may explicitly or implicitly include at least one such feature. In the description of the present application, "plurality" means at least two, e.g., two, three, etc., unless specifically limited otherwise.
Any process or method descriptions in flow charts or otherwise described herein may be understood as representing modules, segments, or portions of code which include one or more executable instructions for implementing specific logical functions or steps of the process, and the scope of the preferred embodiments of the present application includes other implementations in which functions may be executed out of order from that shown or discussed, including substantially concurrently or in reverse order, depending on the functionality involved, as would be understood by those reasonably skilled in the art of the present application.
The logic and/or steps represented in the flowcharts or otherwise described herein, e.g., an ordered listing of executable instructions that can be considered to implement logical functions, can be embodied in any computer-readable medium for use by or in connection with an instruction execution system, apparatus, or device, such as a computer-based system, processor-containing system, or other system that can fetch the instructions from the instruction execution system, apparatus, or device and execute the instructions. For the purposes of this description, a "computer-readable medium" can be any means that can contain, store, communicate, propagate, or transport the program for use by or in connection with the instruction execution system, apparatus, or device. More specific examples (a non-exhaustive list) of the computer-readable medium would include the following: an electrical connection (mobile terminal) having one or more wires, a portable computer diskette (magnetic device), a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber device, and a portable compact disc read-only memory (CDROM). Additionally, the computer-readable medium could even be paper or another suitable medium upon which the program is printed, as the program can be electronically captured, via for instance optical scanning of the paper or other medium, then compiled, interpreted or otherwise processed in a suitable manner if necessary, and then stored in a computer memory.
It should be understood that portions of the present application may be implemented in hardware, software, firmware, or a combination thereof. In the above embodiments, the various steps or methods may be implemented in software or firmware stored in memory and executed by a suitable instruction execution system. For example, if implemented in hardware, as in another embodiment, any one or combination of the following techniques, which are known in the art, may be used: a discrete logic circuit having a logic gate circuit for implementing a logic function on a data signal, an application specific integrated circuit having an appropriate combinational logic gate circuit, a Programmable Gate Array (PGA), a Field Programmable Gate Array (FPGA), or the like.
It will be understood by those skilled in the art that all or part of the steps carried by the method for implementing the above embodiments may be implemented by hardware related to instructions of a program, which may be stored in a computer readable storage medium, and when the program is executed, the program includes one or a combination of the steps of the method embodiments. In addition, functional units in the embodiments of the present application may be integrated into one processing module, or each unit may exist alone physically, or two or more units are integrated into one module. The integrated module can be realized in a hardware mode, and can also be realized in a software functional module mode. The integrated module, if implemented in the form of a software functional module and sold or used as a stand-alone product, may also be stored in a computer readable storage medium.
The storage medium mentioned above may be a read-only memory, a magnetic or optical disk, etc. Although embodiments of the present application have been shown and described above, it is understood that the above embodiments are exemplary and should not be construed as limiting the present application, and that variations, modifications, substitutions and alterations may be made to the above embodiments by those of ordinary skill in the art within the scope of the present application.
Finally, it should be noted that: the above embodiments are only used to illustrate the technical solutions of the present application, and not to limit the same; although the present application has been described in detail with reference to the foregoing embodiments, it will be understood by those of ordinary skill in the art that: the technical solutions described in the foregoing embodiments may still be modified, or some technical features may be equivalently replaced; such modifications and substitutions do not necessarily depart from the spirit and scope of the corresponding technical solutions in the embodiments of the present application.

Claims (9)

1. A method for identifying content, the method comprising:
receiving an identification touch operation acting on a user interface;
if the touch position of the identification touch operation is on a text control, using the text control at the touch position as the text control corresponding to the touch position;
if the touch position is not on a text control, using the text control closest to the touch position as the text control corresponding to the touch position;
acquiring a text in the text control for recognition;
when an identification result responding to identification touch is displayed, displaying a frame selection key on a display screen corresponding to the identification result, wherein the identification touch is touch operation for identifying content of a user interface;
when a touch on the frame selection key is received, displaying an adjustable screenshot frame on the user interface, and displaying the user interface in a reduced mode;
identifying content within the screenshot frame if an adjustment to the screenshot frame is received,
if the adjustment of the screenshot frame is not received, the content in the screenshot frame is not identified;
the displaying an adjustable screenshot box on the user interface, comprising:
if the touch position is on the text control, determining the control where the touch position for identifying touch is located;
and displaying, on the user interface, an adjustable screenshot frame that frames the control at the touch position.
2. The method of claim 1, further comprising:
receiving a target identification type selected by a user from one or more identification types;
and identifying the content corresponding to the target identification type in the cutout frame.
3. The method of claim 2, wherein the object recognition type comprises:
two-dimensional codes, merchandise, or text.
4. The method of claim 1, further comprising:
receiving an adjustment to a size or position of the cutout frame;
and identifying the content in the adjusted screenshot frame.
5. The method of claim 1, wherein displaying a frame selection key on a display screen corresponding to an identification result when displaying the identification result in response to the identification touch comprises:
displaying the identification result and the frame selection key on the same card; or
and displaying the identification result and the frame selection key on different adjacent cards, respectively.
6. The method of claim 1, wherein the identifying the content in the truncated frame comprises:
capturing the content framed by the screenshot frame as a picture;
identifying content in the picture.
7. An apparatus for identifying content, the apparatus comprising:
the operation receiving module is used for receiving the identification touch operation acting on the user interface;
an identification module comprising a position determination unit and an identification unit, wherein the position determination unit is configured to, if the touch position of the identification touch operation is on a text control, use the text control at the touch position as the text control corresponding to the touch position; the position determination unit is further configured to, if the touch position is not on a text control, use the text control closest to the touch position as the text control corresponding to the touch position; and the identification unit is configured to acquire the text in the text control for identification;
the display module is used for displaying a frame selection key on a display screen corresponding to an identification result when the identification result responding to the identification touch is displayed, wherein the identification touch is touch operation for identifying the content of a user interface;
a frame selection module, configured to display an adjustable screenshot frame on the user interface and display the user interface in a reduced mode when a touch on the frame selection key is received;
the identification module is further used for identifying the content in the screenshot frame if the adjustment of the screenshot frame is received, and not identifying the content in the screenshot frame if the adjustment of the screenshot frame is not received;
the displaying an adjustable screenshot box on the user interface, comprising:
if the touch position is on the text control, determining the control where the touch position for identifying touch is located;
and displaying, on the user interface, an adjustable screenshot frame that frames the control at the touch position.
8. A mobile terminal comprising a display, a memory, and a processor, the display and the memory coupled to the processor, the memory storing instructions that, when executed by the processor, the processor performs the method of any of claims 1-6.
9. A computer-readable storage medium having program code executable by a processor, the program code causing the processor to perform the method of any of claims 1 to 6.
CN201810619031.3A 2018-06-08 2018-06-08 Content identification method and device and mobile terminal Active CN109085982B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201810619031.3A CN109085982B (en) 2018-06-08 2018-06-08 Content identification method and device and mobile terminal

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201810619031.3A CN109085982B (en) 2018-06-08 2018-06-08 Content identification method and device and mobile terminal

Publications (2)

Publication Number Publication Date
CN109085982A CN109085982A (en) 2018-12-25
CN109085982B true CN109085982B (en) 2020-12-08

Family

ID=64839612

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201810619031.3A Active CN109085982B (en) 2018-06-08 2018-06-08 Content identification method and device and mobile terminal

Country Status (1)

Country Link
CN (1) CN109085982B (en)

Families Citing this family (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110413363B (en) * 2019-07-26 2023-02-21 维沃移动通信有限公司 Screenshot method and terminal equipment
CN110764685B (en) * 2019-10-24 2023-04-18 上海掌门科技有限公司 Method and device for identifying two-dimensional code
CN113260970B (en) * 2019-11-28 2024-01-23 京东方科技集团股份有限公司 Picture identification user interface system, electronic equipment and interaction method
CN112596656A (en) * 2020-12-28 2021-04-02 北京小米移动软件有限公司 Content identification method, device and storage medium

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103455590A (en) * 2013-08-29 2013-12-18 百度在线网络技术(北京)有限公司 Method and device for retrieving in touch-screen device
CN106484266A (en) * 2016-10-18 2017-03-08 北京锤子数码科技有限公司 A kind of text handling method and device
CN106951893A (en) * 2017-05-08 2017-07-14 奇酷互联网络科技(深圳)有限公司 Text information acquisition methods, device and mobile terminal

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20120092269A1 (en) * 2010-10-15 2012-04-19 Hon Hai Precision Industry Co., Ltd. Computer-implemented method for manipulating onscreen data

Patent Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103455590A (en) * 2013-08-29 2013-12-18 百度在线网络技术(北京)有限公司 Method and device for retrieving in touch-screen device
CN106484266A (en) * 2016-10-18 2017-03-08 北京锤子数码科技有限公司 A kind of text handling method and device
CN106951893A (en) * 2017-05-08 2017-07-14 奇酷互联网络科技(深圳)有限公司 Text information acquisition methods, device and mobile terminal

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
"How to Use Mobile Baidu Image Recognition to Find Images and Information" (如何运用手机百度识图找图找资料); 霜糖落; https://jingyan.baidu.com/article/456c463b4f74590a58314428.html; 2015-07-01; pages 1-4 *
"How to Use the Baidu Image Recognition Feature: A Tutorial" (百度识图功能怎么用 百度识图使用教程); 怜幽小草; https://jingyan.baidu.com/article/eb9f7b6d4e8661869364e8a4.html; 2018-05-09; pages 1-2 *

Also Published As

Publication number Publication date
CN109085982A (en) 2018-12-25

Similar Documents

Publication Publication Date Title
US11460983B2 (en) Method of processing content and electronic device thereof
CN108958576B (en) Content identification method and device and mobile terminal
US11237703B2 (en) Method for user-operation mode selection and terminals
WO2019233212A1 (en) Text identification method and device, mobile terminal, and storage medium
CN109085982B (en) Content identification method and device and mobile terminal
AU2014201716B2 (en) Apparatus and method for providing additional information by using caller phone number
CN111464716B (en) Certificate scanning method, device, equipment and storage medium
CN109101498B (en) Translation method and device and mobile terminal
CN108932102B (en) Data processing method and device and mobile terminal
CN107766548B (en) Information display method and device, mobile terminal and readable storage medium
CN108388671B (en) Information sharing method and device, mobile terminal and computer readable medium
CN108512997B (en) Display method, display device, mobile terminal and storage medium
CN109032491B (en) Data processing method and device and mobile terminal
CN108803972B (en) Information display method, device, mobile terminal and storage medium
WO2019201109A1 (en) Word processing method and apparatus, and mobile terminal and storage medium
CN108803961B (en) Data processing method and device and mobile terminal
CN109032465B (en) Data processing method and device and mobile terminal
US10963121B2 (en) Information display method, apparatus and mobile terminal
CN109101163B (en) Long screen capture method and device and mobile terminal
CN109062648B (en) Information processing method and device, mobile terminal and storage medium
CN110221736B (en) Icon processing method and device, mobile terminal and storage medium
CN108958578B (en) File control method and device and electronic device
CN110362699B (en) Picture searching method and device, mobile terminal and computer readable medium
CN112286430B (en) Image processing method, apparatus, device and medium
CN111796736B (en) Application sharing method and device and electronic equipment

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant