CN108958576B - Content identification method and device and mobile terminal - Google Patents

Content identification method and device and mobile terminal

Info

Publication number
CN108958576B
Authority
CN
China
Prior art keywords
content
identification
user interface
touch
frame
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201810588338.1A
Other languages
Chinese (zh)
Other versions
CN108958576A (en)
Inventor
段丽霞
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Guangdong Oppo Mobile Telecommunications Corp Ltd
Original Assignee
Guangdong Oppo Mobile Telecommunications Corp Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Guangdong Oppo Mobile Telecommunications Corp Ltd
Priority to CN201810588338.1A
Publication of CN108958576A
Priority to PCT/CN2019/088874
Application granted
Publication of CN108958576B
Legal status: Active

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 3/00 Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F 3/01 Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F 3/048 Interaction techniques based on graphical user interfaces [GUI]
    • G06F 3/0481 Interaction techniques based on graphical user interfaces [GUI] based on specific properties of the displayed interaction object or a metaphor-based environment, e.g. interaction with desktop elements like windows or icons, or assisted by a cursor's changing behaviour or appearance
    • G06F 3/0487 Interaction techniques based on graphical user interfaces [GUI] using specific features provided by the input device, e.g. functions controlled by the rotation of a mouse with dual sensing arrangements, or of the nature of the input device, e.g. tap gestures based on pressure sensed by a digitiser
    • G06F 3/0488 Interaction techniques based on graphical user interfaces [GUI] using a touch-screen or digitiser, e.g. input of commands through traced gestures

Abstract

The embodiments of the present application disclose a content identification method, a content identification apparatus, and a mobile terminal, relating to the technical field of mobile terminals. The method includes: performing content identification on a user interface when a recognition touch on the user interface is received; if content identification of the user interface fails, displaying an adjustable screenshot frame on the user interface; and identifying the content within the screenshot frame. With this method, content can be identified directly on the user interface, and the operation is simple and convenient.

Description

Content identification method and device and mobile terminal
Technical Field
The present application relates to the field of mobile terminal technologies, and in particular, to a content identification method and apparatus, and a mobile terminal.
Background
The display screen of a mobile terminal can display various contents. If a user wants to obtain detailed information about some displayed content, the content must be copied into a browser search box, which makes the operation process cumbersome.
Disclosure of Invention
In view of the above problems, the present application provides a content identification method, a content identification apparatus, and a mobile terminal, which identify content of a user interface, simplify the identification process, and improve user experience.
In a first aspect, an embodiment of the present application provides a content identification method, the method including: performing content identification on a user interface when a recognition touch on the user interface is received; if content identification of the user interface fails, displaying an adjustable screenshot frame on the user interface; and identifying the content within the screenshot frame.
In a second aspect, an embodiment of the present application provides a content identification apparatus, including: a first identification module configured to perform content identification on a user interface when a recognition touch on the user interface is received; a frame-selection module configured to display an adjustable screenshot frame on the user interface if content identification of the user interface fails; and a second identification module configured to identify the content within the screenshot frame.
In a third aspect, an embodiment of the present application provides a mobile terminal, including a display screen, a memory, and a processor, where the display screen and the memory are coupled to the processor, and the memory stores instructions that, when executed by the processor, cause the processor to perform the above method.
In a fourth aspect, an embodiment of the present application provides a computer-readable storage medium having program code executable by a processor, the program code causing the processor to perform the above method.
According to the content identification method, apparatus, and mobile terminal provided herein, when content identification of the user interface fails in response to a recognition touch, an adjustable screenshot frame is displayed and the content within it is identified, so that content identification can be performed directly on the user interface with a simple, convenient operation.
Drawings
To describe the technical solutions in the embodiments of the present application more clearly, the drawings required for the description of the embodiments are briefly introduced below. Obviously, the drawings described below show only some embodiments of the present application, and those skilled in the art may derive other drawings from them without creative effort.
Fig. 1 shows a flowchart of a content identification method according to an embodiment of the present application;
Fig. 2 shows a first display schematic according to an embodiment of the present application;
Fig. 3 shows a second display schematic according to an embodiment of the present application;
Fig. 4 shows a third display schematic according to an embodiment of the present application;
Fig. 5 shows a flowchart of a content identification method according to another embodiment of the present application;
Fig. 6 shows a fourth display schematic according to an embodiment of the present application;
Fig. 7 shows a fifth display schematic according to an embodiment of the present application;
Fig. 8 shows a sixth display schematic according to an embodiment of the present application;
Fig. 9 shows a seventh display schematic according to an embodiment of the present application;
Fig. 10 shows a functional block diagram of a content identification apparatus according to an embodiment of the present application;
Fig. 11 shows a structural block diagram of a mobile terminal according to an embodiment of the present application;
Fig. 12 shows a schematic structural diagram of a mobile terminal according to an embodiment of the present application;
Fig. 13 shows a block diagram of a mobile terminal for performing a content identification method according to an embodiment of the present application.
Detailed Description
The technical solutions in the embodiments of the present application are described below clearly and completely with reference to the accompanying drawings. Obviously, the described embodiments are only some, not all, of the embodiments of the present application. All other embodiments obtained by those skilled in the art based on the embodiments given herein without creative effort shall fall within the protection scope of the present application.
At present, when chatting, reading text, viewing pictures, or watching videos on a mobile terminal, a user is often interested in certain content and wants to search for more detailed information about it. To do so, the user must first copy or memorize the content of interest, then open a browser and paste or type the content into the search box to search for details. This makes the operation process very tedious, time-consuming, and error-prone.
To simplify this search process, displayed content can be selected through technologies such as pressure sensing and then identified to obtain an identification result, which speeds up information acquisition. However, the inventors found through extensive research that when content is identified directly in response to a user touch on the user interface, the identification may fail, and the identification result the user expects may not be obtained.
In view of the above technical problems, embodiments of the present application provide a content identification method, a content identification apparatus, and a mobile terminal, in which an adjustable screenshot frame is displayed on the user interface when identification fails, so that content on the user interface can be re-framed and identified to obtain a result that meets the user's needs.
The content identification method, device and mobile terminal provided by the embodiments of the present application will be described with reference to the accompanying drawings and specific embodiments.
Referring to fig. 1, an embodiment of the present application provides a content identification method for identifying all or part of the content in a user interface displayed on a display screen. In a specific embodiment, the method is applied to the content identification apparatus 300 shown in fig. 10 and to the mobile terminal 400 (figs. 11 and 12) on which the apparatus runs. The content identification method may include the following steps:
step S110: and when receiving the identification touch control of the user interface, identifying the content of the user interface.
When a user wants to identify some content of the user interface to obtain more detailed information, the user can perform a recognition touch on the user interface. The touch operation corresponding to the recognition touch is not limited in this embodiment; examples include a long press, click, or large-area press with one finger, two fingers, multiple fingers, or a finger joint, or a slide with one or more fingers along a preset trajectory. If the touch operation is a slide along a preset trajectory, the sliding trajectory may be a closed figure, so that the content within the closed figure is identified.
When the recognition touch is received, the mobile terminal identifies the content in the user interface corresponding to the recognition touch. As one specific embodiment, the identified display content may be all of the content in the user interface.
As another specific embodiment, the identified display content may be the content corresponding to the touch position of the touch operation in the user interface: the text paragraph, picture, or control at the touch position. For example, in fig. 2 the displayed interface is the touched user interface, circle A represents the touch position, and the text paragraph containing circle A is taken as the display content to be identified.
As a specific implementation, the identified display content is the text displayed in the text control corresponding to the touch position. Specifically: determine the text control corresponding to the touch position of the recognition touch, and obtain the text in that text control for recognition. The text control corresponding to the touch position may be the text control closest to the touch position.
That is, if the touch position is on a text control, the touched text control is taken as the text control to be recognized. For example, in the chat interface shown in fig. 2, circle A represents the touch position and the text control B holding a chat message is the control touched at that position, so the chat message in text control B is identified.
If the touch position is not on a text control, the text in the text control closest to the touch position can be identified. For example, in the chat interface shown in fig. 3, circle A represents the touch position; since circle A does not touch any text control and the text control B holding a chat message is the control closest to the touch position, the chat message in text control B is identified.
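To make the touched-or-nearest rule above concrete, the following is a minimal Kotlin sketch assuming an Android-style view hierarchy. The function and helper names (findTextControlForTouch, screenBounds, distanceSquaredTo) are illustrative assumptions, not names from the patent.

```kotlin
import android.graphics.Rect
import android.view.View
import android.view.ViewGroup
import android.widget.TextView

// Return the text control (TextView) at the touch point; if the touch is not
// on any text control, fall back to the text control closest to the touch.
fun findTextControlForTouch(root: ViewGroup, x: Int, y: Int): TextView? {
    val textViews = mutableListOf<TextView>()
    collectTextViews(root, textViews)

    // Case 1: the touch position is on a text control.
    textViews.firstOrNull { it.screenBounds().contains(x, y) }?.let { return it }

    // Case 2: the touch is elsewhere; pick the nearest text control.
    return textViews.minByOrNull { it.screenBounds().distanceSquaredTo(x, y) }
}

private fun collectTextViews(view: View, out: MutableList<TextView>) {
    if (view is TextView) out += view
    if (view is ViewGroup) {
        for (i in 0 until view.childCount) collectTextViews(view.getChildAt(i), out)
    }
}

private fun View.screenBounds(): Rect {
    val loc = IntArray(2)
    getLocationOnScreen(loc)
    return Rect(loc[0], loc[1], loc[0] + width, loc[1] + height)
}

// Squared distance from a point to a rect's nearest edge (0 when inside).
private fun Rect.distanceSquaredTo(x: Int, y: Int): Int {
    val dx = maxOf(left - x, x - right, 0)
    val dy = maxOf(top - y, y - bottom, 0)
    return dx * dx + dy * dy
}
```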
The display content may be identified by existing identification methods, such as word segmentation and semantic recognition, which are not limited here.
In addition, the displayed content may be recognized through a background screenshot: after the content to be recognized is captured, it can be recognized through picture analysis, for example Optical Character Recognition (OCR). Specifically: determine the touch position of the recognition touch in the user interface; take a screenshot within a preset range of the touch position; and perform text recognition on the captured picture. The size of the preset range is not limited in this embodiment; it may be a rectangle of preset length and width, a circle of preset radius, or a range of another preset shape and size.
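As an illustration of the background-screenshot approach, the sketch below crops a preset rectangular range around the touch position out of a full-screen bitmap before it is handed to an OCR engine. The 600x200 default range and the function name are illustrative assumptions.

```kotlin
import android.graphics.Bitmap

// Crop a full-screen screenshot to a preset rectangular range centred on the
// touch position, clamped to the screen bounds, ready for text recognition.
fun cropAroundTouch(
    screenshot: Bitmap,
    touchX: Int,
    touchY: Int,
    rangeWidth: Int = 600,
    rangeHeight: Int = 200
): Bitmap {
    val left = (touchX - rangeWidth / 2).coerceIn(0, maxOf(0, screenshot.width - 1))
    val top = (touchY - rangeHeight / 2).coerceIn(0, maxOf(0, screenshot.height - 1))
    val width = minOf(rangeWidth, screenshot.width - left)
    val height = minOf(rangeHeight, screenshot.height - top)
    return Bitmap.createBitmap(screenshot, left, top, width, height)
}
```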
Step S120: if content identification of the user interface fails, display an adjustable screenshot frame on the user interface.
Content identification of the user interface may fail for various reasons: for example, an unstable network connection or an unconnected network may prevent identification or cause it to time out; no valid identification content may be obtained in response to the recognition touch; or the content to be identified may be inappropriate information, among other possibilities not exhaustively listed here.
If recognition fails and no recognition result is obtained, an adjustable screenshot frame can be displayed on the user interface. Referring to fig. 4, the screenshot frame K may be a rectangle as shown, or a closed figure of another shape, such as a circle, rhombus, triangle, or other polygon. The user interface shown on the display screen is the one touched during the recognition touch.
Step S130: identify the content within the screenshot frame.
As shown in fig. 4, the screenshot frame K frames content of the user interface. The content within the frame represents the content to be identified, and is therefore identified.
In this embodiment, when a recognition touch on the user interface is received and content identification of the user interface fails, an adjustable screenshot frame is displayed on the user interface; content of the user interface is framed by the screenshot frame, and the framed content is then identified. Thus, when identification fails, content can be re-selected on the user interface through the screenshot frame and identified again.
In this embodiment, the adjustable screenshot frame can be adjusted by the user as needed, so that the content within the frame is exactly the content the user wants to identify. Specifically, referring to fig. 5, the method provided in this embodiment includes:
step S210: and when receiving the identification touch control of the user interface, identifying the content of the user interface.
The user can initiate a recognition touch operation on the user interface to identify displayed content. As mentioned above, the user interface may be a chat interface, a web interface, a video interface, or the user interface of any application; this is not limited in the embodiments of the present application. When the user's touch is received, content identification is performed on the touched user interface.
Step S220: if content identification of the user interface fails, display an adjustable screenshot frame on the user interface.
If identification fails and no identification result is obtained for the touched user interface content, a screenshot frame is displayed on the user interface so that frame-selection identification can be performed.
As a specific embodiment, when the adjustable screenshot frame is displayed, as shown in fig. 4, the interface shown on the display screen is the user interface touched during the recognition touch, with the screenshot frame K displayed on it.
As another specific embodiment, as shown in fig. 6, when the adjustable screenshot frame is displayed, the touched user interface may be displayed at a reduced size, with the screenshot frame K displayed in the reduced user interface.
Optionally, while the screenshot frame is displayed, only the screenshot frame in the user interface may be operable; other positions may not respond to operations.
In this embodiment, the position at which the screenshot frame is first displayed after identification fails is not limited.
As one implementation, the screenshot frame may be displayed at a preset position with a preset size. The preset size may be a fixed size or a preset proportion of the user interface; the preset position may be a fixed position on the display screen or a position within the touched user interface.
As another implementation, the display position of the screenshot frame may be determined according to the touch position of the recognition touch. Specifically, the screenshot frame may frame the touch position. Optionally, the frame may have a preset size, or be the minimum screenshot frame that frames the touch area corresponding to the touch position.
In this implementation, if the touch position is on a control, the screenshot frame may frame the control touched at that position. Specifically, the control corresponding to the touch position of the recognition touch may be determined, where "corresponding" means that the control's position in the user interface overlaps the touch position; an adjustable screenshot frame that frames this control is then displayed on the user interface.
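A minimal Kotlin sketch of this initial placement follows, assuming an Android-style View; the padding, fallback size, and function name are illustrative assumptions rather than values from the patent.

```kotlin
import android.graphics.Rect
import android.view.View

// Initial placement of the adjustable screenshot frame: frame the touched
// control when there is one, otherwise fall back to a preset-size rectangle
// centred on the touch position.
fun initialScreenshotFrame(
    touchedControl: View?,
    touchX: Int,
    touchY: Int,
    screen: Rect,
    padding: Int = 16
): Rect {
    val frame = if (touchedControl != null) {
        val loc = IntArray(2)
        touchedControl.getLocationOnScreen(loc)
        Rect(
            loc[0] - padding,
            loc[1] - padding,
            loc[0] + touchedControl.width + padding,
            loc[1] + touchedControl.height + padding
        )
    } else {
        Rect(touchX - 300, touchY - 100, touchX + 300, touchY + 100)
    }
    frame.intersect(screen) // clamp the frame to the visible screen
    return frame
}
```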
Optionally, before the screenshot frame is displayed, a prompt may be shown telling the user that identification failed and asking whether to enter frame-selection identification. If the user selects yes, the adjustable screenshot frame is displayed on the user interface; if the user selects no, the identification process exits, identification of the user interface ends, and no identification result is obtained.
Step S230: identify the content within the screenshot frame.
In this embodiment, the content framed by the screenshot frame can be identified. Identification may directly obtain the content in the frame: for example, if the frame contains a text control, the text in it is obtained; if it contains a picture control, the picture in it is obtained directly; and so on. Alternatively, the framed content may be cut out as a picture: taking the edge of the screenshot frame as the cutting edge, all content within the frame is captured to obtain a picture, whose content is then identified. The recognition process may first extract the content of the picture through image processing, such as OCR (Optical Character Recognition); without limitation, it may be implemented by any existing processing method capable of extracting the various contents of an image, such as text, pictures, and two-dimensional codes.
As a specific implementation, the user interface displays the screenshot frame, and all or part of the content within the frame is identified.
As another specific implementation, when the screenshot frame is displayed, one or more identification-type options are provided, where an identification type represents a type of content to be identified, such as two-dimensional code, commodity, text, or picture. The mobile terminal can receive a target identification type selected by the user from the one or more identification types and identify the content in the screenshot frame corresponding to that type. That is, the identification type selected by the user is taken as the target identification type, and the content within the frame that belongs to this type is identified. Fig. 6 shows three selectable identification types: two-dimensional code, commodity, and text.
In this implementation, different identification types may differ in what is identified: for example, text identification parses only the text content within the screenshot frame to obtain its result, while picture identification parses only the pictures within the frame. They may also differ in how identification is performed: text, picture, and two-dimensional-code identification may be carried out by a corresponding identification server, while commodity identification may jump to a third-party shopping platform, such as Taobao, passing the content within the frame to that platform for identification. The display of results may differ as well: results for text, commodities, pictures, and so on may be displayed directly through cards in the form of word segmentation, introductions, or links, while commodity results may be displayed through the third-party shopping platform.
Specifically, the identification process may parse the content within the screenshot frame, obtain the content corresponding to the target identification type, and identify that content. For example, if the target identification type is two-dimensional code, the content within the frame is parsed to obtain the two-dimensional code, which is then decoded to obtain the information it contains. As another example, if the target identification type is text, the text within the frame undergoes word segmentation, parsing, semantic search, and similar operations, and the text identification result is fed back.
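The type-dependent dispatch described above can be sketched as follows; the enum and the three recognizer stubs are illustrative placeholders for whatever decoding library, OCR and search service, or third-party shopping platform actually performs each task.

```kotlin
import android.graphics.Bitmap

// Identification types offered alongside the screenshot frame; fig. 6 shows
// two-dimensional code, commodity, and text as the selectable options.
enum class RecognitionType { TWO_DIMENSIONAL_CODE, COMMODITY, TEXT }

// Dispatch the framed picture to a type-specific recognizer.
fun recognize(framed: Bitmap, type: RecognitionType): String = when (type) {
    RecognitionType.TWO_DIMENSIONAL_CODE -> decodeTwoDimensionalCode(framed)
    RecognitionType.TEXT -> recognizeText(framed)
    RecognitionType.COMMODITY -> recognizeCommodity(framed)
}

private fun decodeTwoDimensionalCode(picture: Bitmap): String =
    TODO("decode with a barcode library and return the contained information")

private fun recognizeText(picture: Bitmap): String =
    TODO("OCR, then word segmentation / parsing / semantic search")

private fun recognizeCommodity(picture: Bitmap): String =
    TODO("hand the picture to a third-party shopping platform")
```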
Optionally, when the target identification type is text, the text content within the screenshot frame can be obtained by parsing the frame; the text is then filtered to remove garbled characters, yielding valid text, which is parsed and identified. In this implementation, garbled characters may be any characters outside a preset set: for example, if the preset set consists of Chinese characters, English characters, and selected common punctuation marks, then all other characters and all punctuation outside the common set are treated as garbled.
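A minimal sketch of such a garbled-character filter follows; the whitelist (Chinese characters, English letters, digits, and a few common punctuation marks) is one illustrative choice of the preset character set.

```kotlin
// Screen recognized text for valid characters: keep Chinese characters,
// English letters, digits, whitespace, and a small set of common punctuation;
// everything else is treated as garbled and dropped.
private val VALID_CHAR = Regex("[\\u4e00-\\u9fa5A-Za-z0-9\\s,.!?;:，。！？；：]")

// Returns the filtered text, or null when nothing valid survives, in which
// case the flow proceeds as if no content were identified.
fun filterGarbledText(raw: String): String? {
    val valid = raw.filter { VALID_CHAR.matches(it.toString()) }.trim()
    return if (valid.isEmpty()) null else valid
}
```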
In this implementation, optionally, if the user does not select a target identification type, one identification type may serve as a default, and the content of the default type within the screenshot frame is identified. Alternatively, if no target identification type is selected, no identification is performed until the user selects one, after which the content corresponding to the selected type is identified.
In this embodiment, the screenshot frame is adjustable, where adjustment includes adjustment of position and adjustment of size. The mobile terminal can receive an adjustment to the size or position of the frame and identify the content in the adjusted frame.
For example, as shown in figs. 4 and 6, after entering the frame-selection interface in which the screenshot frame is displayed on the user interface, the user can adjust the frame's size by pulling it in different directions; in fig. 7 the screenshot frame K has been adjusted smaller than in fig. 4. The user can also drag the frame by pressing its border or interior to change its position; fig. 8 shows the frame after a position adjustment relative to fig. 7.
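The two adjustments can be sketched as plain rectangle operations, as below; the clamping and the minimum-width guard are illustrative safeguards, not requirements from the patent.

```kotlin
import android.graphics.Rect

// Drag the whole frame by (dx, dy), keeping it on screen (the position
// adjustment of figs. 7 to 8).
fun moveFrame(frame: Rect, dx: Int, dy: Int, screen: Rect): Rect {
    val moved = Rect(frame).apply { offset(dx, dy) }
    if (moved.left < screen.left) moved.offset(screen.left - moved.left, 0)
    if (moved.top < screen.top) moved.offset(0, screen.top - moved.top)
    if (moved.right > screen.right) moved.offset(screen.right - moved.right, 0)
    if (moved.bottom > screen.bottom) moved.offset(0, screen.bottom - moved.bottom)
    return moved
}

// Pull the right edge by dx to resize (the size adjustment of fig. 7).
fun pullRightEdge(frame: Rect, dx: Int, minWidth: Int = 100): Rect =
    Rect(frame).apply { right = maxOf(left + minWidth, right + dx) }
```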
In this embodiment, the shape of the screenshot frame may also be changed: when a request from the user to change the frame's shape is received, the shape is changed to a circle, triangle, or any other polygon according to the request. For example, a shape-selection button may be provided for the screenshot frame K; when the button is pressed, selectable shapes are displayed, and if the user selects a triangle, the current frame's shape is changed to a triangle.
As one specific implementation, when the screenshot frame is displayed in response to a failed recognition touch, the frame's content is not identified until an adjustment to the frame has been received; once the adjustment is received, the content in the adjusted frame is identified to obtain an identification result.
In another specific implementation, when the screenshot frame is displayed in response to the failed recognition touch, the content within the frame is identified immediately and an identification result is obtained; if an adjustment to the frame is then received, the content in the adjusted frame is identified and the identification result is updated accordingly. That is, as shown in fig. 5, this implementation may further include, after step S230, step S240: receive an adjustment to the size or position of the screenshot frame; and step S250: perform screenshot identification on the adjusted frame.
In this embodiment, identification within the screenshot frame may include word segmentation and a corresponding search for related content. The identification result may include one or more of: word segmentation results; introductions and links for movies, television shows, books, or people; a map of a place; purchase channels for a commodity; schedule information; express delivery information; and so on. The result is not limited in this embodiment and may be any explanatory information about the displayed content. Fig. 9 shows one way of displaying an identification result: the displayed result may include word segmentation of the corresponding content, from which the user can select a word and then copy, select all, translate, or search.
Optionally, in this embodiment, the identification result may be displayed through a card; in fig. 9, card C displays the identification result. A card is a carrier for displaying information and may be a single control or a combination of several controls. The information a card displays may correspond to the identification result. Different identification results for the same displayed content may be shown in the same card or in different cards.
Optionally, in this embodiment, if the content within the screenshot frame is not successfully identified, a prompt indicating that identification failed may be displayed.
To sum up, in the content identification method provided by this embodiment, content identification is performed on the user interface when a recognition touch is received. If identification fails, a screenshot frame for framing the user interface is provided. When an adjustment to the frame's size or position is received, the content in the adjusted frame is identified, so that the user can adjust the frame as needed, frame the content to be identified, and obtain the desired identification result.
An embodiment of the present application further provides a content identification apparatus 300. Referring to fig. 10, the apparatus 300 includes: a first identification module 310 configured to perform content identification on a user interface when a recognition touch on the user interface is received; a frame-selection module 320 configured to display an adjustable screenshot frame on the user interface if content identification of the user interface fails; and a second identification module 330 configured to identify the content within the screenshot frame.
Optionally, the first identification module 310 may include a control determining unit configured to determine the text control corresponding to the touch position of the recognition touch, and an identification unit configured to obtain the text in the text control for recognition.
Optionally, the first identification module may include a position determining unit configured to determine the touch position of the recognition touch; a picture obtaining unit configured to take a screenshot within a preset range of the touch position in the user interface; and a recognition unit configured to perform text recognition on the captured picture.
Optionally, the apparatus may further include a type determining module configured to receive a target identification type selected by the user from one or more identification types; the second identification module is then configured to identify the content within the screenshot frame corresponding to the target identification type.
Optionally, the apparatus may further include an adjusting module configured to receive an adjustment to the size or position of the screenshot frame; the second identification module is further configured to perform screenshot identification on the adjusted frame.
Optionally, the frame-selection module 320 may be further configured to display the screenshot frame at a preset position with a preset size, or to determine the control corresponding to the touch position of the recognition touch and display on the user interface an adjustable screenshot frame that frames the control.
In summary, in this embodiment, when a user is viewing text in a chat or a browser and identification of the text in the area the user selected fails, a manual screenshot frame is displayed so that the user can manually select the area to capture; the captured picture is then identified, and the user may choose two-dimensional-code identification, text identification, or commodity identification for it. For text identification, the parsed result can be preliminarily screened to filter out garbled characters: if no valid text remains after filtering, the flow proceeds as if no content were identified; if valid text remains, the flow proceeds to text identification.
Referring again to fig. 11, based on the above content identification method and apparatus, an embodiment of the present application further provides a mobile terminal 400. As shown in fig. 11, the mobile terminal 400 includes a display screen 120, a memory 104, and a processor 102, with the display screen 120 and the memory 104 coupled to the processor 102. The display screen 120 is used for displaying content, parsed identification results, and the like; the memory 104 stores instructions that, when executed by the processor 102, cause the processor 102 to perform the method provided by the embodiments of the present application.
Specifically, as shown in fig. 12, the mobile terminal 400 may include an electronic body portion 10, which includes a housing 12 and a display screen 120 disposed on the housing 12. The housing 12 may be made of metal, such as steel or an aluminum alloy. In this embodiment, the display screen 120 generally includes a display panel 111 and may also include circuitry for responding to touch operations performed on the display panel 111. The display panel 111 may be a Liquid Crystal Display (LCD) panel, and in some embodiments the display panel 111 is a touch screen 109.
Referring to fig. 13, in an actual application scenario, the mobile terminal 400 may serve as a smartphone, in which case the electronic body portion 10 generally further includes one or more processors 102 (only one is shown), a memory 104, an RF (Radio Frequency) module 106, an audio circuit 110, a sensor 114, an input module 118, and a power module 122. Those skilled in the art will understand that the structure shown in fig. 13 is merely illustrative and does not limit the structure of the electronic body portion 10; for example, the electronic body portion 10 may include more or fewer components than shown in fig. 13, or have a configuration different from that shown in fig. 13.
Those skilled in the art will appreciate that, relative to the processor 102, all other components are peripheral devices, and the processor 102 is coupled to them through a plurality of peripheral interfaces 124. The peripheral interface 124 may be implemented based on standards such as Universal Asynchronous Receiver/Transmitter (UART), General Purpose Input/Output (GPIO), Serial Peripheral Interface (SPI), and Inter-Integrated Circuit (I2C), but is not limited to these standards. In some examples, the peripheral interface 124 may comprise only a bus; in other examples it may also include other elements, such as one or more controllers, for example a display controller for interfacing with the display panel 111 or a memory controller for interfacing with a memory. These controllers may also be separate from the peripheral interface 124 and integrated within the processor 102 or the corresponding peripheral.
The memory 104 may be used to store software programs and modules, and the processor 102 executes various functional applications and data processing by running the software programs and modules stored in the memory 104. The memory 104 may include high-speed random access memory and may also include non-volatile memory, such as one or more magnetic storage devices, flash memory, or other non-volatile solid-state memory. In some examples, the memory 104 may further include memory remotely located from the processor 102, which may be connected to the electronic body portion 10 or the display screen 120 via a network. Examples of such networks include, but are not limited to, the internet, intranets, local area networks, mobile communication networks, and combinations thereof.
The RF module 106 is configured to receive and transmit electromagnetic waves and to convert between electromagnetic waves and electrical signals, so as to communicate with a communication network or other devices. The RF module 106 may include various existing circuit elements for performing these functions, such as an antenna, a radio-frequency transceiver, a digital signal processor, an encryption/decryption chip, a Subscriber Identity Module (SIM) card, and memory. The RF module 106 may communicate with various networks, such as the internet, an intranet, or a wireless network, or with other devices over a wireless network. The wireless network may be a cellular telephone network, a wireless local area network, or a metropolitan area network, and may use various communication standards, protocols, and technologies, including but not limited to Global System for Mobile Communication (GSM), Enhanced Data GSM Environment (EDGE), Wideband Code Division Multiple Access (W-CDMA), Code Division Multiple Access (CDMA), Time Division Multiple Access (TDMA), Wireless Fidelity (WiFi) (e.g., IEEE 802.11a, IEEE 802.11b, IEEE 802.11g, and/or IEEE 802.11n), Voice over Internet Protocol (VoIP), Worldwide Interoperability for Microwave Access (WiMAX), any other suitable protocol for instant messaging, and even protocols that have not yet been developed.
The audio circuit 110, the speaker 101, and the microphones 103 and 105 together provide an audio interface between a user and the electronic body portion 10 or the display screen 120.
The sensor 114 is disposed in the electronic body portion 10 or in the display screen 120. Examples of the sensor 114 include, but are not limited to, an acceleration sensor 114F, a gyroscope 114G, a magnetometer 114H, and other sensors.
In this embodiment, the input module 118 may include the touch screen 109 disposed on the display screen 120. The touch screen 109 can collect touch operations performed by the user on or near it (for example, operations performed with a finger, stylus, or any other suitable object or accessory), so that the user's touch gesture is obtained and the corresponding connected device is driven according to a preset program; the user may thus select the target area through a touch operation on the display screen. Optionally, the touch screen 109 may include a touch detection device and a touch controller. The touch detection device detects the direction of the user's touch, detects the signal produced by the touch operation, and transmits the signal to the touch controller; the touch controller receives the touch information from the touch detection device, converts it into touch-point coordinates, and sends them to the processor 102, and it can also receive and execute commands sent by the processor 102. The touch detection function of the touch screen 109 may be implemented in various types, such as resistive, capacitive, infrared, and surface acoustic wave. In addition to the touch screen 109, in other variations the input module 118 may include other input devices, such as keys 107. The keys 107 may include, for example, character keys for inputting characters and control keys for activating control functions; examples of control keys include a "return to home screen" key and a power on/off key.
The display screen 120 is used to display information input by the user, information provided to the user, and the various graphical user interfaces of the electronic body portion 10, which may be composed of graphics, text, icons, numbers, video, and any combination thereof. In one example, the touch screen 109 may be disposed on the display panel 111 so as to form a whole with the display panel 111.
The power module 122 is used to supply power to the processor 102 and other components. Specifically, the power module 122 may include a power management system, one or more power sources (e.g., a battery or AC power), a charging circuit, a power-failure detection circuit, an inverter, a power status indicator, and any other components related to the generation, management, and distribution of power within the electronic body portion 10 or the display screen 120.
The mobile terminal 400 further includes a locator 119 configured to determine the actual location of the mobile terminal 400. In this embodiment, the locator 119 implements positioning through a positioning service, which is understood as a technology or service that obtains the position information of the mobile terminal 400 (e.g., longitude and latitude coordinates) using a specific positioning technology and marks the position of the positioned object on an electronic map.
It should be understood that the mobile terminal 400 described above is not limited to a smartphone; the term refers to any computer device that can be used while mobile. Specifically, the mobile terminal 400 refers to a mobile computer device equipped with an intelligent operating system, including but not limited to a smartphone, a smart watch, a tablet computer, and the like.
It should be noted that, in the present specification, the embodiments are all described in a progressive manner, each embodiment focuses on differences from other embodiments, and the same and similar parts among the embodiments may be referred to each other. For the device-like embodiment, since it is basically similar to the method embodiment, the description is simple, and for the relevant points, reference may be made to the partial description of the method embodiment. For any processing manner described in the method embodiment, all the processing manners may be implemented by corresponding processing modules in the apparatus embodiment, and details in the apparatus embodiment are not described again.
In the description herein, reference to the description of the term "one embodiment," "some embodiments," "an example," "a specific example," or "some examples," etc., means that a particular feature, structure, material, or characteristic described in connection with the embodiment or example is included in at least one embodiment or example of the application. In this specification, the schematic representations of the terms used above are not necessarily intended to refer to the same embodiment or example. Furthermore, the particular features, structures, materials, or characteristics described may be combined in any suitable manner in any one or more embodiments or examples. Furthermore, various embodiments or examples and features of different embodiments or examples described in this specification can be combined and combined by one skilled in the art without contradiction.
Furthermore, the terms "first" and "second" are used for descriptive purposes only and are not to be construed as indicating or implying relative importance or implicitly indicating the number of technical features indicated. Thus, a feature defined as "first" or "second" may explicitly or implicitly include at least one such feature. In the description of the present application, "plurality" means at least two, e.g., two or three, unless specifically limited otherwise.
Any process or method descriptions in flow charts or otherwise described herein may be understood as representing modules, segments, or portions of code which include one or more executable instructions for implementing specific logical functions or steps of the process, and the scope of the preferred embodiments of the present application includes other implementations in which functions may be executed out of order from that shown or discussed, including substantially concurrently or in reverse order, depending on the functionality involved, as would be understood by those reasonably skilled in the art of the present application.
The logic and/or steps represented in the flowcharts or otherwise described herein, e.g., an ordered listing of executable instructions that can be considered to implement logical functions, can be embodied in any computer-readable medium for use by or in connection with an instruction execution system, apparatus, or device, such as a computer-based system, processor-containing system, or other system that can fetch the instructions from the instruction execution system, apparatus, or device and execute the instructions. For the purposes of this description, a "computer-readable medium" can be any means that can contain, store, communicate, propagate, or transport the program for use by or in connection with the instruction execution system, apparatus, or device. More specific examples (a non-exhaustive list) of the computer-readable medium would include the following: an electrical connection (mobile terminal) having one or more wires, a portable computer diskette (magnetic device), a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber device, and a portable compact disc read-only memory (CDROM). Additionally, the computer-readable medium could even be paper or another suitable medium upon which the program is printed, as the program can be electronically captured, via for instance optical scanning of the paper or other medium, then compiled, interpreted or otherwise processed in a suitable manner if necessary, and then stored in a computer memory.
It should be understood that portions of the present application may be implemented in hardware, software, firmware, or a combination thereof. In the above embodiments, the various steps or methods may be implemented in software or firmware stored in memory and executed by a suitable instruction execution system. For example, if implemented in hardware, as in another embodiment, any one or combination of the following techniques, which are known in the art, may be used: a discrete logic circuit having a logic gate circuit for implementing a logic function on a data signal, an application specific integrated circuit having an appropriate combinational logic gate circuit, a Programmable Gate Array (PGA), a Field Programmable Gate Array (FPGA), or the like.
It will be understood by those skilled in the art that all or part of the steps carried by the method for implementing the above embodiments may be implemented by hardware related to instructions of a program, which may be stored in a computer readable storage medium, and when the program is executed, the program includes one or a combination of the steps of the method embodiments. In addition, functional units in the embodiments of the present application may be integrated into one processing module, or each unit may exist alone physically, or two or more units are integrated into one module. The integrated module can be realized in a hardware mode, and can also be realized in a software functional module mode. The integrated module, if implemented in the form of a software functional module and sold or used as a stand-alone product, may also be stored in a computer readable storage medium.
The storage medium mentioned above may be a read-only memory, a magnetic or optical disk, etc. Although embodiments of the present application have been shown and described above, it is understood that the above embodiments are exemplary and should not be construed as limiting the present application, and that variations, modifications, substitutions and alterations may be made to the above embodiments by those of ordinary skill in the art within the scope of the present application.
Finally, it should be noted that: the above embodiments are only used to illustrate the technical solutions of the present application, and not to limit the same; although the present application has been described in detail with reference to the foregoing embodiments, it will be understood by those of ordinary skill in the art that: the technical solutions described in the foregoing embodiments may still be modified, or some technical features may be equivalently replaced; such modifications and substitutions do not necessarily depart from the spirit and scope of the corresponding technical solutions in the embodiments of the present application.

Claims (10)

1. A method for identifying content, the method comprising:
when a recognition touch on a user interface is received, if the touch position of the recognition touch operation is on a text control, taking the text control at the touch position as the text control corresponding to the touch position;
if the touch position is not on a text control, taking the text control closest to the touch position as the text control corresponding to the touch position;
obtaining the text in the text control for recognition;
if content identification of the user interface fails, determining the control corresponding to the touch position of the recognition touch;
displaying on the user interface an adjustable screenshot frame that frames the control, and displaying the user interface at a reduced size;
if an adjustment to the screenshot frame is received, identifying the content in the screenshot frame and displaying an identification result;
if no adjustment to the screenshot frame is received, not identifying the content in the screenshot frame.
2. The method of claim 1, wherein identifying the content of the user interface comprises:
determining the text control corresponding to the touch position of the recognition touch; and
obtaining the text in the text control for recognition.
3. The method of claim 1, wherein identifying the content of the user interface comprises:
determining the touch position of the recognition touch;
taking a screenshot within a preset range of the touch position in the user interface; and
performing text recognition on the picture obtained by the screenshot.
4. The method of claim 1, further comprising:
receiving a target identification type selected by a user from one or more identification types;
and identifying the content in the screenshot frame corresponding to the target identification type.
5. The method of claim 4, wherein the identification type comprises:
a two-dimensional code, a commodity, or text.
6. The method of claim 1, further comprising:
receiving an adjustment to the size or position of the screenshot frame; and
performing screenshot identification on the adjusted screenshot frame.
7. The method of claim 1, wherein displaying an adjustable screenshot frame on the user interface comprises:
displaying the screenshot frame at a preset position with a preset size.
8. An apparatus for identifying content, the apparatus comprising:
a first identification module, configured to: when a recognition touch on a user interface is received, if the touch position of the recognition touch operation is on a text control, take the text control at the touch position as the text control corresponding to the touch position; if the touch position is not on a text control, take the text control closest to the touch position as the text control corresponding to the touch position; and obtain the text in the text control for recognition;
a frame-selection module, configured to: if content identification of the user interface fails, determine the control corresponding to the touch position of the recognition touch, display on the user interface an adjustable screenshot frame that frames the control, and display the user interface at a reduced size; and
a second identification module, configured to: if an adjustment to the screenshot frame is received, identify the content in the screenshot frame and display an identification result; and if no adjustment to the screenshot frame is received, not identify the content in the screenshot frame.
9. A mobile terminal comprising a display screen, a memory, and a processor, the display screen and the memory being coupled to the processor, the memory storing instructions that, when executed by the processor, cause the processor to perform the method of any one of claims 1 to 7.
10. A computer-readable storage medium having program code executable by a processor, the program code causing the processor to perform the method of any one of claims 1 to 7.
CN201810588338.1A 2018-06-08 2018-06-08 Content identification method and device and mobile terminal Active CN108958576B (en)

Priority Applications (2)

Application Number Priority Date Filing Date Title
CN201810588338.1A CN108958576B (en) 2018-06-08 2018-06-08 Content identification method and device and mobile terminal
PCT/CN2019/088874 WO2019233318A1 (en) 2018-06-08 2019-05-28 Content identification method and device, and mobile terminal

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201810588338.1A CN108958576B (en) 2018-06-08 2018-06-08 Content identification method and device and mobile terminal

Publications (2)

Publication Number Publication Date
CN108958576A CN108958576A (en) 2018-12-07
CN108958576B 2021-02-02

Family

ID=64494007

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201810588338.1A Active CN108958576B (en) 2018-06-08 2018-06-08 Content identification method and device and mobile terminal

Country Status (2)

Country Link
CN (1) CN108958576B (en)
WO (1) WO2019233318A1 (en)

Families Citing this family (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108958576B (en) * 2018-06-08 2021-02-02 Oppo广东移动通信有限公司 Content identification method and device and mobile terminal
CN109933275A (en) * 2019-02-12 2019-06-25 努比亚技术有限公司 A kind of knowledge screen method, terminal and computer readable storage medium
CN110647640B (en) * 2019-09-30 2023-01-10 京东方科技集团股份有限公司 Computer system, method for operating a computing device and system for operating a computing device
CN111310482A (en) * 2020-01-20 2020-06-19 北京无限光场科技有限公司 Real-time translation method, device, terminal and storage medium
CN112596656A (en) * 2020-12-28 2021-04-02 北京小米移动软件有限公司 Content identification method, device and storage medium

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106325688A (en) * 2016-08-17 2017-01-11 北京锤子数码科技有限公司 Text processing method and device
CN106484266A (en) * 2016-10-18 2017-03-08 北京锤子数码科技有限公司 A kind of text handling method and device
CN107358226A (en) * 2017-06-23 2017-11-17 联想(北京)有限公司 The recognition methods of electronic equipment and electronic equipment

Family Cites Families (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP6320982B2 (en) * 2014-11-26 2018-05-09 ネイバー コーポレーションNAVER Corporation Translated sentence editor providing apparatus and translated sentence editor providing method
CN105005551A (en) * 2015-06-29 2015-10-28 东南(福建)汽车工业有限公司 Method for implementing rapid acquisition of picture characters in document revision
CN106020694B (en) * 2016-05-24 2023-01-31 北京京东尚科信息技术有限公司 Electronic equipment, and method and device for dynamically adjusting selected area
CN107632773A (en) * 2017-10-17 2018-01-26 北京百度网讯科技有限公司 For obtaining the method and device of information
CN107797750A (en) * 2017-10-27 2018-03-13 珠海市魅族科技有限公司 A kind of screen content identifying processing method, apparatus, terminal and medium
CN108958576B (en) * 2018-06-08 2021-02-02 Oppo广东移动通信有限公司 Content identification method and device and mobile terminal

Patent Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106325688A (en) * 2016-08-17 2017-01-11 北京锤子数码科技有限公司 Text processing method and device
CN106484266A (en) * 2016-10-18 2017-03-08 北京锤子数码科技有限公司 A kind of text handling method and device
CN107358226A (en) * 2017-06-23 2017-11-17 联想(北京)有限公司 The recognition methods of electronic equipment and electronic equipment

Also Published As

Publication number Publication date
WO2019233318A1 (en) 2019-12-12
CN108958576A (en) 2018-12-07

Similar Documents

Publication Publication Date Title
CN108958576B (en) Content identification method and device and mobile terminal
US11460983B2 (en) Method of processing content and electronic device thereof
WO2019233212A1 (en) Text identification method and device, mobile terminal, and storage medium
US11237703B2 (en) Method for user-operation mode selection and terminals
AU2014201716B2 (en) Apparatus and method for providing additional information by using caller phone number
CN109085982B (en) Content identification method and device and mobile terminal
CN108932102B (en) Data processing method and device and mobile terminal
CN109101498B (en) Translation method and device and mobile terminal
CN111464716B (en) Certificate scanning method, device, equipment and storage medium
CN107766548B (en) Information display method and device, mobile terminal and readable storage medium
CN108197264B (en) Webpage acceleration display method and device, mobile terminal and storage medium
CN109032491B (en) Data processing method and device and mobile terminal
CN108512997B (en) Display method, display device, mobile terminal and storage medium
US9977661B2 (en) Method and system for generating a user interface
CN108803972B (en) Information display method, device, mobile terminal and storage medium
WO2019201109A1 (en) Word processing method and apparatus, and mobile terminal and storage medium
US10963121B2 (en) Information display method, apparatus and mobile terminal
CN108803961B (en) Data processing method and device and mobile terminal
CN109032465B (en) Data processing method and device and mobile terminal
CN109101163B (en) Long screen capture method and device and mobile terminal
CN109062648B (en) Information processing method and device, mobile terminal and storage medium
CN110221736B (en) Icon processing method and device, mobile terminal and storage medium
CN108958578B (en) File control method and device and electronic device
CN110362699B (en) Picture searching method and device, mobile terminal and computer readable medium
CN112286430B (en) Image processing method, apparatus, device and medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant