CN110795007B - Method and device for acquiring screenshot information - Google Patents


Info

Publication number
CN110795007B
Authority
CN
China
Prior art keywords
information
image
identification code
electronic identification
necessary information
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201910859179.9A
Other languages
Chinese (zh)
Other versions
CN110795007A (en)
Inventor
张昆
杨骅
刘彪
吴益明
Current Assignee
Shenzhen Liandi Information Accessibility Co ltd
Original Assignee
Shenzhen Liandi Information Accessibility Co ltd
Priority date
Filing date
Publication date
Application filed by Shenzhen Liandi Information Accessibility Co ltd
Priority to CN201910859179.9A
Publication of CN110795007A
Priority to PCT/CN2020/096793
Application granted
Publication of CN110795007B
Legal status: Active
Anticipated expiration

Classifications

    • G — PHYSICS
    • G06 — COMPUTING; CALCULATING OR COUNTING
    • G06F — ELECTRIC DIGITAL DATA PROCESSING
    • G06F 3/00 — Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F 3/01 — Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F 3/048 — Interaction techniques based on graphical user interfaces [GUI]
    • G06F 3/0484 — Interaction techniques based on graphical user interfaces [GUI] for the control of specific functions or operations, e.g. selecting or manipulating an object, an image or a displayed text element, setting a parameter value or selecting a range
    • G06F 3/04845 — Interaction techniques based on graphical user interfaces [GUI] for image manipulation, e.g. dragging, rotation, expansion or change of colour
    • G — PHYSICS
    • G06 — COMPUTING; CALCULATING OR COUNTING
    • G06F — ELECTRIC DIGITAL DATA PROCESSING
    • G06F 16/00 — Information retrieval; Database structures therefor; File system structures therefor
    • G06F 16/90 — Details of database functions independent of the retrieved data types
    • G06F 16/95 — Retrieval from the web
    • G06F 16/958 — Organisation or management of web site content, e.g. publishing, maintaining pages or automatic linking
    • G — PHYSICS
    • G06 — COMPUTING; CALCULATING OR COUNTING
    • G06F — ELECTRIC DIGITAL DATA PROCESSING
    • G06F 3/00 — Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F 3/01 — Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F 3/048 — Interaction techniques based on graphical user interfaces [GUI]
    • G06F 3/0487 — Interaction techniques based on graphical user interfaces [GUI] using specific features provided by the input device, e.g. functions controlled by the rotation of a mouse with dual sensing arrangements, or of the nature of the input device, e.g. tap gestures based on pressure sensed by a digitiser
    • G06F 3/0488 — Interaction techniques based on graphical user interfaces [GUI] using a touch-screen or digitiser, e.g. input of commands through traced gestures
    • G06F 3/04883 — Interaction techniques based on graphical user interfaces [GUI] using a touch-screen or digitiser for inputting data by handwriting, e.g. gesture or text
    • G — PHYSICS
    • G06 — COMPUTING; CALCULATING OR COUNTING
    • G06F — ELECTRIC DIGITAL DATA PROCESSING
    • G06F 9/00 — Arrangements for program control, e.g. control units
    • G06F 9/06 — Arrangements for program control using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F 9/44 — Arrangements for executing specific programs
    • G06F 9/451 — Execution arrangements for user interfaces
    • G — PHYSICS
    • G06 — COMPUTING; CALCULATING OR COUNTING
    • G06K — GRAPHICAL DATA READING; PRESENTATION OF DATA; RECORD CARRIERS; HANDLING RECORD CARRIERS
    • G06K 19/00 — Record carriers for use with machines and with at least a part designed to carry digital markings
    • G06K 19/06 — Record carriers characterised by the kind of the digital marking, e.g. shape, nature, code
    • G06K 19/06009 — Record carriers with optically detectable marking
    • G06K 19/06046 — Constructional details
    • G — PHYSICS
    • G06 — COMPUTING; CALCULATING OR COUNTING
    • G06T — IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 11/00 — 2D [Two Dimensional] image generation
    • G06T 11/60 — Editing figures and text; Combining figures or text
    • G — PHYSICS
    • G06 — COMPUTING; CALCULATING OR COUNTING
    • G06V — IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 30/00 — Character recognition; Recognising digital ink; Document-oriented image-based pattern recognition
    • G06V 30/10 — Character recognition

Abstract

The present application is applicable to the technical field of information processing, and provides a method and a device for acquiring screenshot information. The method comprises the following steps: acquiring a first screenshot instruction, and capturing a corresponding first image according to the first screenshot instruction; acquiring a second synthesis instruction, and synthesizing the first image and an electronic identification code into a second image according to the second synthesis instruction, wherein the electronic identification code comprises a one-dimensional code, a two-dimensional code, or a bar code and is used for storing necessary information of the first image, the necessary information comprising content information and the structural information of that content; and scanning the electronic identification code to obtain the necessary information, and converting the necessary information into identifiable text information. With this method, an image synthesized with an electronic identification code carrying the image information is output directly after the image is captured, so no further image processing is needed: the image-processing steps are simplified, and the efficiency of converting image information into identifiable text information is improved.

Description

Method and device for acquiring screenshot information
Technical Field
The application belongs to the technical field of information processing, and particularly relates to a method and device for acquiring screenshot information and a computer readable storage medium.
Background
Screenshot capture is a functional operation frequently used in work and daily life, yet most current technologies simply complete the screenshot at the software or system level. If the captured image needs further processing afterwards, such as machine learning or searching for information in the image, OCR technology must be used to extract the text information from the image; the processing flow is cumbersome, the error rate is high, and the information conversion efficiency is low. Moreover, visually impaired users are essentially unable to process such image information using screen-reading software.
Disclosure of Invention
In view of this, the embodiments of the present application provide a method and apparatus for obtaining screenshot information, which can solve the technical problem of low information conversion efficiency.
A first aspect of an embodiment of the present application provides a method for obtaining screenshot information, including:
acquiring a first screenshot instruction, and capturing a corresponding first image according to the first screenshot instruction;
acquiring a second synthesis instruction, and synthesizing the first image and an electronic identification code into a second image according to the second synthesis instruction, wherein the electronic identification code comprises a one-dimensional code, a two-dimensional code, or a bar code and is used for storing necessary information of the first image, the necessary information comprising content information and the structural information of that content;
and scanning the electronic identification code to obtain the necessary information, and converting the necessary information into identifiable text information.
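The three steps above — capture, encode-and-composite, scan-and-convert — can be sketched as follows. This is a minimal illustration, not the patented implementation: the field names and sample data are hypothetical, JSON models the "necessary information" (content plus the structure of that content), and a base64 string stands in for the payload of a real one-dimensional, two-dimensional, or bar code, which would be generated and scanned with a barcode library and composited onto the screenshot image.

```python
import base64
import json

# Hypothetical "necessary information" of a captured first image:
# the text content plus the structural information of that content.
necessary_info = {
    "content": ["Weather for Shenzhen", "Sunny, 28°C"],
    "structure": [
        {"type": "heading", "index": 0},
        {"type": "body", "index": 1},
    ],
}

def encode_identification_code(info: dict) -> str:
    """Serialize the necessary information into a code payload.

    A real implementation would render this payload as a one-dimensional,
    two-dimensional, or bar code and composite it onto the screenshot to
    form the second image; base64 stands in for that rendering here.
    """
    return base64.b64encode(json.dumps(info).encode("utf-8")).decode("ascii")

def scan_identification_code(payload: str) -> str:
    """Decode the payload back into identifiable text information."""
    info = json.loads(base64.b64decode(payload).decode("utf-8"))
    # Reassemble the text in the order given by the structural information.
    return "\n".join(info["content"][item["index"]] for item in info["structure"])

payload = encode_identification_code(necessary_info)
print(scan_identification_code(payload))
```

Because the structural information travels with the content, a screen reader or downstream program can reconstruct the reading order without re-running OCR on the pixels, which is the efficiency gain the method claims.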
A second aspect of an embodiment of the present application provides an apparatus for obtaining screenshot information, including:
a first acquisition unit, configured to acquire a first screenshot instruction, capture the corresponding first image according to the first screenshot instruction, and store necessary information of the first image into an electronic identification code; and further configured to acquire a second synthesis instruction and synthesize the first image and the electronic identification code into a second image according to the second synthesis instruction, wherein the electronic identification code comprises a one-dimensional code, a two-dimensional code, or a bar code, and the necessary information comprises content information and the structural information of that content;
and a scanning unit, configured to scan the electronic identification code to obtain the necessary information and convert the necessary information into identifiable text information.
A third aspect of the embodiments of the present application provides a terminal device comprising a memory, a processor and a computer program stored in the memory and executable on the processor, the processor implementing the steps of the method of the first aspect when executing the computer program.
A fourth aspect of the embodiments of the present application provides a computer readable storage medium storing a computer program which, when executed by a processor, implements the steps of the method of the first aspect described above.
Compared with the prior art, the embodiments of the present application have the following beneficial effects. In the present application, user equipment acquires a first screenshot instruction and captures a corresponding first image according to it; the user equipment then acquires a second synthesis instruction, synthesizes the first image and the electronic identification code into a second image according to that instruction, scans the electronic identification code to obtain the necessary information, and converts the necessary information into identifiable text information. In this way, an image synthesized with an electronic identification code carrying the image information is output directly after the image is captured; no further image processing is needed, the image-processing steps are simplified, and the efficiency of converting image information into identifiable text information is improved.
Drawings
To illustrate the technical solutions of the embodiments of the present application more clearly, the drawings required for the embodiments or the description of the prior art are briefly introduced below. The drawings described here show only some embodiments of the present application; a person skilled in the art can obtain other drawings from them without creative effort.
FIG. 1 is a block diagram of part of the structure of a mobile phone according to an embodiment of the present application;
FIG. 2 is a schematic diagram of the software structure of the mobile phone 100 according to an embodiment of the present application;
FIG. 3 is a schematic structural diagram of a terminal device for obtaining screenshot information according to an embodiment of the present application;
FIG. 4 is a schematic flow chart of a method for obtaining screenshot information provided by the present application;
FIG. 5 is a schematic flow chart of a necessary-information storage mode in a method for obtaining screenshot information provided by the present application;
FIG. 6 is a schematic diagram of the corresponding code of a web page in another method for obtaining screenshot information provided by the present application;
FIG. 7 is a schematic diagram of a rendered page in another method for obtaining screenshot information provided by the present application;
FIG. 8 is a schematic diagram of recognition results in another method for obtaining screenshot information provided by the present application;
FIG. 9 is a schematic flow chart of another method for obtaining screenshot information provided by the present application;
FIG. 10 is a schematic flow chart of another method for obtaining screenshot information provided by the present application;
FIG. 11 is a schematic diagram of an apparatus for obtaining screenshot information according to an embodiment of the present application.
Detailed Description
In the following description, for purposes of explanation and not limitation, specific details are set forth, such as particular system configurations, techniques, etc. in order to provide a thorough understanding of the embodiments of the present application. It will be apparent, however, to one skilled in the art that the present application may be practiced in other embodiments that depart from these specific details. In other instances, detailed descriptions of well-known systems, devices, circuits, and methods are omitted so as not to obscure the description of the present application with unnecessary detail.
It should be understood that the terms "comprises" and/or "comprising," when used in this specification and the appended claims, specify the presence of stated features, integers, steps, operations, elements, and/or components, but do not preclude the presence or addition of one or more other features, integers, steps, operations, elements, components, and/or groups thereof.
It should also be understood that the term "and/or" as used in this specification and the appended claims refers to any and all possible combinations of one or more of the associated listed items, and includes such combinations.
As used in this specification and the appended claims, the term "if" may be interpreted, depending on the context, as "when", "once", "in response to determining", or "in response to detecting". Similarly, the phrase "if it is determined" or "if [a described condition or event] is detected" may be interpreted, depending on the context, to mean "upon determining", "in response to determining", "upon detecting [the described condition or event]", or "in response to detecting [the described condition or event]".
In addition, in the description of the present application and the appended claims, the terms "first," "second," "third," and the like are used merely to distinguish between descriptions and are not to be construed as indicating or implying relative importance.
Reference in the specification to "one embodiment" or "some embodiments" or the like means that a particular feature, structure, or characteristic described in connection with the embodiment is included in one or more embodiments of the application. Thus, appearances of the phrases "in one embodiment," "in some embodiments," "in other embodiments," and the like in the specification are not necessarily all referring to the same embodiment, but mean "one or more but not all embodiments" unless expressly specified otherwise. The terms "comprising," "including," "having," and variations thereof mean "including but not limited to," unless expressly specified otherwise.
The method for acquiring screenshot information provided by the embodiments of the present application can be applied to terminal devices such as mobile phones, tablet computers, wearable devices, vehicle-mounted devices, augmented reality (AR)/virtual reality (VR) devices, notebook computers, ultra-mobile personal computers (UMPC), netbooks, and personal digital assistants (PDA); the specific type of terminal device is not limited in the embodiments of the present application.
For example, the terminal device may be a Station (ST) in a WLAN, a cellular telephone, a cordless telephone, a Session Initiation Protocol (SIP) telephone, a wireless local loop (WLL) station, a personal digital assistant (PDA) device, a handheld device with wireless communication capabilities, a computing device or other processing device connected to a wireless modem, an in-vehicle device, an Internet-of-Vehicles terminal, a computer, a laptop computer, a handheld communication device, a handheld computing device, a satellite radio, a wireless modem card, a television set-top box (STB), customer premises equipment (CPE), and/or another device for communicating over a wireless system or a next-generation communication system, such as a mobile terminal in a 5G network or a mobile terminal in a future evolved public land mobile network (PLMN), etc.
By way of example and not limitation, when the terminal device is a wearable device, the wearable device may be any item of daily wear to which wearable technology has been applied through intelligent design, such as glasses, gloves, watches, clothing, and shoes. A wearable device is a portable device worn directly on the body or integrated into the user's clothing or accessories; it is not merely a hardware device, but realizes powerful functions through software support, data interaction, and cloud interaction. In a broad sense, wearable smart devices include full-featured, large-sized devices that can realize all or part of their functions without relying on a smartphone, such as smart watches or smart glasses, as well as devices that focus only on a particular class of application functions and must be used together with other devices such as a smartphone, for example various smart bracelets and smart jewelry for vital-sign monitoring.
Taking a mobile phone as an example of the terminal device, fig. 1 is a block diagram of part of the structure of a mobile phone according to an embodiment of the present application. Referring to fig. 1, the mobile phone includes: radio frequency (RF) circuit 110, memory 120, input unit 130, display unit 140, sensor 150, audio circuit 160, wireless fidelity (WiFi) module 170, processor 180, and power supply 190. Those skilled in the art will appreciate that the handset structure shown in fig. 1 does not limit the handset; the handset may include more or fewer components than shown, combine certain components, or arrange the components differently.
The following describes the components of the mobile phone in detail with reference to fig. 1:
the RF circuit 110 may be used for receiving and transmitting signals during the process of receiving and transmitting information or communication, specifically, after receiving downlink information of the base station, the downlink information is processed by the processor 180; in addition, the data of the design uplink is sent to the base station. Typically, RF circuitry includes, but is not limited to, antennas, at least one amplifier, transceivers, couplers, low noise amplifiers (Low Noise Amplifier, LNAs), diplexers, and the like. In addition, RF circuit 110 may also communicate with networks and other devices via wireless communications. The wireless communications may use any communication standard or protocol including, but not limited to, global system for mobile communications (Global System of Mobile communication, GSM), general packet radio service (General Packet Radio Service, GPRS), code division multiple access (Code Division Multiple Access, CDMA), wideband code division multiple access (Wideband Code Division Multiple Access, WCDMA), long term evolution (Long Term Evolution, LTE)), email, short message service (Short Messaging Service, SMS), and the like.
The memory 120 may be used to store software programs and modules, and the processor 180 performs various functional applications and data processing of the cellular phone by running the software programs and modules stored in the memory 120. The memory 120 may mainly include a storage program area and a storage data area, wherein the storage program area may store an operating system, an application program (such as a sound playing function, an image playing function, etc.) required for at least one function, and the like; the storage data area may store data (such as audio data, phonebook, etc.) created according to the use of the handset, etc. In addition, memory 120 may include high-speed random access memory, and may also include non-volatile memory, such as at least one magnetic disk storage device, flash memory device, or other volatile solid-state storage device.
The input unit 130 may be used to receive input numeric or character information and to generate key signal inputs related to user settings and function control of the mobile phone 100. In particular, the input unit 130 may include a touch panel 131 and other input devices 132. The touch panel 131, also referred to as a touch screen, may collect touch operations thereon or thereabout by a user (e.g., operations of the user on the touch panel 131 or thereabout by using any suitable object or accessory such as a finger, a stylus, etc.), and drive the corresponding connection device according to a predetermined program. Alternatively, the touch panel 131 may include two parts of a touch detection device and a touch controller. The touch detection device detects the touch azimuth of a user, detects a signal brought by touch operation and transmits the signal to the touch controller; the touch controller receives touch information from the touch detection device and converts it into touch point coordinates, which are then sent to the processor 180, and can receive commands from the processor 180 and execute them. In addition, the touch panel 131 may be implemented in various types such as resistive, capacitive, infrared, and surface acoustic wave. The input unit 130 may include other input devices 132 in addition to the touch panel 131. In particular, other input devices 132 may include, but are not limited to, one or more of a physical keyboard, function keys (e.g., volume control keys, switch keys, etc.), a trackball, mouse, joystick, etc.
The display unit 140 may be used to display information input by the user, information provided to the user, and the various menus of the mobile phone. The display unit 140 may include a display panel 141, which may optionally be configured as a liquid crystal display (LCD), an organic light-emitting diode (OLED) display, or the like. Further, the touch panel 131 may cover the display panel 141. When the touch panel 131 detects a touch operation on or near it, the operation is transferred to the processor 180 to determine the type of the touch event, and the processor 180 then provides a corresponding visual output on the display panel 141 according to that type. Although in fig. 1 the touch panel 131 and the display panel 141 are two independent components implementing the input and output functions of the mobile phone, in some embodiments the touch panel 131 and the display panel 141 may be integrated to implement both functions.
The handset 100 may also include at least one sensor 150, such as a light sensor, a motion sensor, and other sensors. Specifically, the light sensor may include an ambient light sensor that may adjust the brightness of the display panel 141 according to the brightness of ambient light, and a proximity sensor that may turn off the display panel 141 and/or the backlight when the mobile phone moves to the ear. As one of the motion sensors, the accelerometer sensor can detect the acceleration in all directions (generally three axes), and can detect the gravity and direction when stationary, and can be used for applications of recognizing the gesture of a mobile phone (such as horizontal and vertical screen switching, related games, magnetometer gesture calibration), vibration recognition related functions (such as pedometer and knocking), and the like; other sensors such as gyroscopes, barometers, hygrometers, thermometers, infrared sensors, etc. that may also be configured with the handset are not described in detail herein.
The audio circuit 160, speaker 161, and microphone 162 may provide an audio interface between the user and the handset. The audio circuit 160 may transmit an electrical signal converted from received audio data to the speaker 161, which converts it into a sound signal for output; conversely, the microphone 162 converts a collected sound signal into an electrical signal, which the audio circuit 160 receives and converts into audio data. The audio data is then processed by the processor 180 and either sent via the RF circuit 110 to, for example, another mobile phone, or output to the memory 120 for further processing.
WiFi is a short-range wireless transmission technology. Through the WiFi module 170, the mobile phone can help the user send and receive e-mails, browse web pages, access streaming media, and so on, providing wireless broadband Internet access. Although fig. 1 shows the WiFi module 170, it is understood that it is not a necessary component of the handset 100 and may be omitted as needed without changing the essence of the invention.
The processor 180 is a control center of the mobile phone, connects various parts of the entire mobile phone using various interfaces and lines, and performs various functions and processes data of the mobile phone by running or executing software programs and/or modules stored in the memory 120 and calling data stored in the memory 120, thereby performing overall monitoring of the mobile phone. Optionally, the processor 180 may include one or more processing units; preferably, the processor 180 may integrate an application processor that primarily handles operating systems, user interfaces, applications, etc., with a modem processor that primarily handles wireless communications. It will be appreciated that the modem processor described above may not be integrated into the processor 180.
The handset 100 further includes a power supply 190 (e.g., a battery) for powering the various components, which may preferably be logically connected to the processor 180 via a power management system so as to provide for managing charging, discharging, and power consumption by the power management system.
Although not shown, the handset 100 may also include a camera. Alternatively, the position of the camera on the mobile phone 100 may be front or rear, which is not limited in the embodiment of the present application.
Alternatively, the mobile phone 100 may include a single camera, a dual camera, or a triple camera, which is not limited in the embodiments of the present application.
For example, the cell phone 100 may include three cameras, one of which is a main camera, one of which is a wide angle camera, and one of which is a tele camera.
Alternatively, when the mobile phone 100 includes a plurality of cameras, the cameras may be all front-mounted, all rear-mounted, or partly front-mounted and partly rear-mounted, which is not limited in the embodiments of the present application.
In addition, although not shown, the mobile phone 100 may further include a bluetooth module, etc., which will not be described herein.
Fig. 2 is a schematic diagram of the software structure of the mobile phone 100 according to an embodiment of the present application. Taking an Android operating system as an example, in some embodiments the Android system is divided into four layers: an application layer, an application framework layer (FWK), a system layer, and a hardware abstraction layer, with the layers communicating through software interfaces.
As shown in fig. 2, the application layer may be a series of application packages, where the application packages may include applications such as short messages, calendars, cameras, video, navigation, gallery, phone calls, etc.
The application framework layer provides an application programming interface (API) and a programming framework for the applications of the application layer. The application framework layer may include some predefined functions, such as functions for receiving events sent by the application layer.
As shown in fig. 2, the application framework layer may include a window manager, a resource manager, a notification manager, and the like.
The window manager is used for managing window programs. The window manager can acquire the size of the display screen, determine whether there is a status bar, lock the screen, capture the screen, and so on.
The content provider is used to store and retrieve data and make that data accessible to applications. The data may include video, images, audio, calls made and received, browsing history and bookmarks, phonebooks, and the like.
The resource manager provides various resources for the application program, such as localization strings, icons, pictures, layout files, video files, and the like.
The notification manager allows an application to display notification information in the status bar. It can be used to convey notification-type messages that disappear automatically after a short stay without requiring user interaction; for example, the notification manager is used to announce that a download is complete, to give message alerts, and so on. The notification manager may also present notifications in the system top status bar in the form of a chart or scroll-bar text, such as notifications from applications running in the background, or notifications that appear on the screen as a dialog window. For example, text information may be prompted in the status bar, a prompt tone may sound, the electronic device may vibrate, or an indicator light may blink.
The application framework layer may further include:
a view system including visual controls, such as controls for displaying text and controls for displaying pictures. The view system may be used to build applications. A display interface may be composed of one or more views. For example, a display interface including a short-message notification icon may include a view displaying text and a view displaying a picture.
The telephony manager is used to provide the communication functions of the mobile phone 100, such as management of call status (including connected, hung up, and the like).
The system layer may include a plurality of functional modules. For example: sensor service module, physical state identification module, three-dimensional graphics processing library (such as OpenGL ES), etc.
The sensor service module is used for monitoring sensor data uploaded by various sensors of the hardware layer and determining the physical state of the mobile phone 100;
the physical state recognition module is used for analyzing and recognizing gestures, faces and the like of the user;
the three-dimensional graphic processing library is used for realizing three-dimensional graphic drawing, image rendering, synthesis, layer processing and the like.
The system layer may further include:
the surface manager is used to manage the display subsystem and provides a fusion of 2D and 3D layers for multiple applications.
The media library supports playback and recording of a variety of commonly used audio and video formats, as well as still image files, and the like. The media library may support a variety of audio and video encoding formats, such as MPEG4, H.264, MP3, AAC, AMR, JPG, and PNG.
The hardware abstraction layer is a layer between hardware and software. The hardware abstraction layer may include display drivers, camera drivers, sensor drivers, etc. for driving the relevant hardware of the hardware layer, such as a display screen, camera, sensor, etc.
The following embodiments may be implemented on the mobile phone 100 having the above hardware/software architecture. The following embodiments take the mobile phone 100 as an example to describe the method for obtaining screenshot information provided in the embodiments of the present application.
Fig. 3 is a schematic structural diagram of a terminal device for obtaining screenshot information according to an embodiment of the present application. As shown in fig. 3, a terminal device 3 of this embodiment that acquires screenshot information includes: at least one processor 30 (only one shown in fig. 3), a memory 31, and a computer program 32 stored in the memory 31 and executable on the at least one processor 30, the processor 30 executing the computer program 32 to perform the steps of any of the various method embodiments described above for obtaining screenshot information.
The terminal device 3 for obtaining the screenshot information may be a computing device such as a desktop computer, a notebook computer, a palm computer, or a cloud server. The terminal device for obtaining the screenshot information may include, but is not limited to, the processor 30 and the memory 31. It will be appreciated by those skilled in the art that fig. 3 is merely an example of the terminal device 3 for obtaining screenshot information and does not constitute a limitation on the terminal device 3; the device may include more or fewer components than illustrated, or combine certain components, or use different components. For example, it may also include input/output devices, network access devices, and so on.
The processor 30 may be a central processing unit (Central Processing Unit, CPU); the processor 30 may also be another general-purpose processor, a digital signal processor (Digital Signal Processor, DSP), an application-specific integrated circuit (Application Specific Integrated Circuit, ASIC), a field-programmable gate array (Field-Programmable Gate Array, FPGA) or other programmable logic device, a discrete gate or transistor logic device, a discrete hardware component, or the like. A general-purpose processor may be a microprocessor, or the processor may be any conventional processor.
The memory 31 may in some embodiments be an internal storage unit of the terminal device 3 for obtaining the screenshot information, for example a hard disk or a memory of the terminal device 3 for obtaining the screenshot information. The memory 31 may also be an external storage device of the terminal device 3 for obtaining the screenshot information, for example, a plug-in hard disk, a Smart Media Card (SMC), a Secure Digital (SD) Card, a Flash Card (Flash Card) or the like, which are provided on the terminal device 3 for obtaining the screenshot information. Further, the memory 31 may also include both an internal storage unit and an external storage device of the terminal device 3 for acquiring the screenshot information. The memory 31 is used for storing an operating system, application programs, boot loader (BootLoader), data, other programs etc., such as program codes of the computer program etc. The memory 31 may also be used for temporarily storing data that has been output or is to be output.
In the screenshot function of current information accessibility technology, after the screenshot action is completed, the image information is further processed and then transmitted to the user by voice playback, so the processing flow is complex and the information conversion efficiency is low. To solve this technical problem, the present application proposes a method for obtaining screenshot information. Referring to fig. 4, fig. 4 shows a schematic flowchart of a method for obtaining screenshot information provided in the present application, which, by way of example and not limitation, may be applied to the mobile phone 100 described above.
S401, acquiring a first screenshot instruction, and intercepting a corresponding first image according to the first screenshot instruction.
First, a first screenshot instruction triggered by a user is acquired, wherein the first screenshot instruction is generated by the user executing a preset operation on equipment. Different preset operations correspond to different instructions, and the different instructions trigger the device to respond differently.
The first screenshot instruction may be triggered in the following ways: monitoring a trigger event generated by pressing or rotating a key of the device according to a preset operation condition; and/or monitoring a trigger event generated by clicking an operation control on the screen of the device; and/or monitoring a trigger event generated by inputting a preset touch-screen gesture on the screen; and/or monitoring a trigger event generated by driving the device to move along a preset motion track; and/or monitoring a trigger event generated by inputting a preset voice signal. These trigger modes are suitable for both visually impaired and non-visually impaired users. Visually impaired users include all users with visual disorders, such as blindness, myopia, and hyperopia.
As an embodiment of the present invention, the preset operation condition may be that a time for which the key is pressed exceeds a preset threshold. As another embodiment of the present invention, the preset operating condition may be that a key is pressed in a specified manner, such as double-click or triple-click.
The operation control includes a movable button floating on the display interface of the device, or a fixed button anchored on the display interface; or, the operation control is a hidden button collapsed into a menu key on the display interface.
The preset touch-screen gestures include user-defined touch-screen gestures, for example: gestures sliding in any direction, including swiping left or right; gestures drawing regular geometric figures, including circles or triangles; and gestures drawing irregular shapes, including continuous curves.
The preset motion track includes rotating the device by a preset angle or moving it along a preset route, for example, rotating the device body by 90°, or moving it along a parallel straight line. When the user drives the device to move, after the device is detected to start moving, position feature points on the movement route are sampled at preset time intervals for as long as the movement continues; these points are fitted against the position feature points of the preset motion track, and when the fitting degree between the two exceeds a preset threshold, a trigger event is generated.
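The sampling-and-fitting step above can be sketched as follows. This is a minimal illustration, not the claimed implementation: the fit degree here is assumed to be the fraction of sampled points lying within a tolerance of the corresponding preset-track points, and all names (`fit_degree`, `should_trigger`, `tol`) are hypothetical.

```python
import math

def fit_degree(track, preset, tol=1.0):
    # Fraction of sampled position feature points lying within `tol`
    # of the corresponding preset-track point (a simple similarity score).
    n = min(len(track), len(preset))
    hits = sum(
        1 for (x1, y1), (x2, y2) in zip(track[:n], preset[:n])
        if math.hypot(x1 - x2, y1 - y2) <= tol
    )
    return hits / n if n else 0.0

def should_trigger(track, preset, threshold=0.8):
    # Generate a trigger event when the fitting degree exceeds the threshold.
    return fit_degree(track, preset) > threshold
```

A real device would sample `track` from motion sensors at the preset time interval; any distance-based similarity measure could replace the point-wise tolerance check.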
For the preset voice signal, the user can input voice information such as "screenshot" or "capture the screen" to generate a trigger event.
After the user triggers the first screenshot instruction in the above manner, the device acquires the first screenshot instruction, and intercepts the interface of the current device according to the first screenshot instruction to form a first image. The first image may include, but is not limited to, a screenshot image of a web page interface, an app main interface, a desktop, and the like.
S402, acquiring a second synthesis instruction, storing necessary information of the first image into an electronic identification code, and synthesizing the first image and the electronic identification code into a second image according to the second synthesis instruction, wherein the electronic identification code comprises a one-dimensional code, a two-dimensional code and a bar code, and the necessary information comprises content information and structural information of the content.
The second synthesis instruction may be triggered automatically by the device or triggered by the user. The electronic identification code includes, but is not limited to, a one-dimensional code, a two-dimensional code, a bar code, and the like, and can store data. The necessary information includes, but is not limited to, content information, structure information of the content, image source information, and the like, where the structure information of the content refers to the position of the corresponding control in the first image, and the position or size of paragraphs, separators, line breaks, titles, names, and the like. The content information and the structure information of the content may be of text type or in other representations.
The device synthesizes the electronic identification code into the first image according to the second synthesis instruction; the electronic identification code can be synthesized at any position of the first image or at a position preset by the user. The position of the two-dimensional code can be chosen according to the actual application scene, which on one hand makes it convenient for non-visually-impaired users to browse the picture, and on the other hand meets the daily needs of visually impaired users. After the second image is obtained, it is stored in the album of the device for convenient subsequent viewing.
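The compositing step can be sketched in a dependency-free way by treating both the screenshot and the generated code as 2-D pixel arrays. This is an illustrative stand-in (the function name `composite_code` and the default bottom-right placement are assumptions); a real implementation would use the platform's bitmap APIs and a code-generation library.

```python
def composite_code(image, code, pos=None):
    """Paste a 2-D electronic identification code bitmap into an image
    (both given as 2-D lists of pixel values), producing the second image.
    `pos` is the top-left corner; the default is the bottom-right corner,
    standing in for a user-preset position."""
    h, w = len(image), len(image[0])
    ch, cw = len(code), len(code[0])
    if pos is None:
        pos = (h - ch, w - cw)
    r0, c0 = pos
    out = [row[:] for row in image]      # keep the first image intact
    for r in range(ch):
        for c in range(cw):
            out[r0 + r][c0 + c] = code[r][c]
    return out
```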
As an embodiment of the present application, if the target users of the device are only visually impaired persons, the two-dimensional code may also be used directly as the second image. Because visually impaired persons have no need to browse the first image, using the two-dimensional code directly as the second image simplifies the image processing steps.
As another embodiment of the application, the two-dimensional code can be directly used as the second image, and the two-dimensional code not only stores the necessary information but also stores the first image, so that the use of visually impaired users and non-visually impaired users can be simultaneously satisfied.
Specifically, storing the necessary information of the first image in the electronic identification code includes the following steps. Referring to fig. 5, fig. 5 shows a schematic flowchart of a way of storing the necessary information in a method for obtaining screenshot information provided in the present application, which, by way of example and not limitation, may be applied to the mobile phone 100.
S501, acquiring codes corresponding to the first image.
The first image is a screenshot of a page displayed on the device screen, and the page is generated from underlying code. The code of the current page can be obtained for the next operation. For example, referring to fig. 6, fig. 6 shows a schematic diagram of the code corresponding to a web page in another method for obtaining screenshot information provided in the present application.
And S502, rendering the page corresponding to the first image according to the code.
After the code shown in fig. 6 is acquired, it is rendered by a browser or user agent. Rendering means that the browser or user agent parses the html source code and creates a DOM tree, parses the CSS code, computes the final style data and builds a CSSOM tree; the DOM tree and the CSSOM tree together form a render tree, and the page is drawn on the screen according to the render tree. The page shown in fig. 7 is obtained by rendering the code shown in fig. 6. Referring to fig. 7, fig. 7 shows a schematic diagram of a rendered page in another method for obtaining screenshot information provided in the present application.
S503, identifying the necessary information in the page, and storing the necessary information into the electronic identification code.
The necessary information in the page is extracted by a browser or user agent. Referring to fig. 8, fig. 8 shows a schematic diagram of a recognition result in another method for obtaining screenshot information provided in the present application. The type of image structure shown in the figures is for illustration only and is not intended to be limiting in any way.
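As a simplified stand-in for the browser/user-agent extraction described above, the sketch below pulls content text plus basic structure information (tag kind and document order) out of the html source using Python's standard-library parser. The class and function names, the kept tag set, and the `{"kind", "order"}` structure fields are all illustrative assumptions, not the patented mechanism.

```python
from html.parser import HTMLParser

class NecessaryInfoParser(HTMLParser):
    """Collect (text, structure) pairs from the page source."""
    KEEP = {"h1", "h2", "h3", "p", "a", "title"}

    def __init__(self):
        super().__init__()
        self._stack = []
        self.elements = []          # list of (text, {"kind": tag, "order": n})

    def handle_starttag(self, tag, attrs):
        self._stack.append(tag)

    def handle_endtag(self, tag):
        if self._stack and self._stack[-1] == tag:
            self._stack.pop()

    def handle_data(self, data):
        text = data.strip()
        tag = self._stack[-1] if self._stack else None
        if text and tag in self.KEEP:
            self.elements.append((text, {"kind": tag, "order": len(self.elements)}))

def extract_necessary_info(html):
    parser = NecessaryInfoParser()
    parser.feed(html)
    return parser.elements
```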
Specifically, the storing the necessary information in the electronic identification code includes:
and storing the necessary information into the electronic identification code according to the mapping relation between the text information and the structural information of the content.
Since different text information may be included in different image structures, the user can acquire more detailed image information by establishing a mapping relationship between the structure information of the content and the corresponding text information.
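One plausible way to preserve this mapping inside the electronic identification code is to serialize text and structure together before encoding, for example as JSON. The payload shape and function names below are assumptions for illustration; any format a code scanner can round-trip would do.

```python
import json

def pack_necessary_info(elements):
    """Serialize content text together with its structure information
    (control position, paragraph, heading kind, etc.) so the mapping
    between text and structure survives the trip through the code."""
    payload = [{"text": text, "structure": structure} for text, structure in elements]
    return json.dumps(payload, ensure_ascii=False)

def unpack_necessary_info(data):
    # Inverse of pack_necessary_info: restore the (text, structure) pairs.
    return [(e["text"], e["structure"]) for e in json.loads(data)]
```

The string returned by `pack_necessary_info` would then be handed to a one-dimensional code, two-dimensional code, or bar-code generator.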
S403, scanning the electronic identification code to obtain the necessary information, and converting the necessary information into identifiable text information.
The party executing the scan of the electronic identification code may be the device itself, or the user may scan the electronic identification code with an external scanning apparatus. The necessary information is obtained by scanning; since the content information and the structure information of the content in the necessary information have a corresponding relationship, the necessary information must be further converted into identifiable text information. Text information suits the way non-visually-impaired users acquire information; if the user is visually impaired, the text information needs to be delivered by voice playback or other human-computer interaction, which can be determined according to the specific application scene.
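The final conversion step can be sketched as flattening the decoded (text, structure) pairs into a linear, readable string suitable for display or for handing to a voice playback engine. The bracketed-kind layout and the `order` field are illustrative assumptions, not a format defined by the source.

```python
def to_readable_text(elements):
    """Convert decoded necessary information into identifiable text.
    `elements` is a list of (text, structure) pairs as decoded from
    the electronic identification code; output lines follow document order."""
    lines = []
    for text, structure in sorted(elements, key=lambda e: e[1].get("order", 0)):
        kind = structure.get("kind", "text")
        lines.append(f"[{kind}] {text}")
    return "\n".join(lines)
```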
In this embodiment, the user equipment acquires a first screenshot instruction and intercepts a corresponding first image according to the first screenshot instruction; the user equipment then acquires a second synthesis instruction, synthesizes the first image and the electronic identification code into a second image according to the second synthesis instruction, scans the electronic identification code to obtain the necessary information, and converts the necessary information into identifiable text information. In this way, an image synthesized with an electronic identification code carrying the image information is output directly after the screenshot, no further image processing is needed, the image processing steps are simplified, and the efficiency of converting image information into identifiable text information is improved.
Optionally, on the basis of the embodiment shown in fig. 4, after the corresponding first image is captured, the method further includes the following steps. Referring to fig. 9, fig. 9 shows a schematic flowchart of another method for obtaining screenshot information provided in the present application, which, by way of example and not limitation, may be applied to the mobile phone 100. In this embodiment, S901, S903, and S904 are the same as S401 to S403 in the previous embodiment; refer to the descriptions of S401 to S403 in the previous embodiment, which are not repeated here.
S901, acquiring a first screenshot instruction, and intercepting a corresponding first image according to the first screenshot instruction.
S902, when the second synthesis instruction is not acquired, taking the first image as the output result.
After the first image is acquired, the device determines whether the second synthesis instruction is acquired. When the second synthesis instruction is acquired, step S903 is executed; when the second synthesis instruction is not acquired, the first image is used as the output result.
S903, acquiring a second synthesis instruction, storing necessary information of the first image into an electronic identification code, and synthesizing the first image and the electronic identification code into a second image according to the second synthesis instruction, wherein the electronic identification code comprises a one-dimensional code, a two-dimensional code and a bar code, and the necessary information comprises content information and structural information of the content.
S904, scanning the electronic identification code to obtain the necessary information, and converting the necessary information into identifiable text information.
In this embodiment, the device determines whether the second synthesis instruction is acquired and executes different subsequent steps accordingly.
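The branch described in S901 to S904 can be condensed into a small dispatch sketch. The function name and the callable `encode` parameter (standing in for the whole store-and-synthesize pipeline) are illustrative assumptions.

```python
def handle_screenshot(first_image, second_instruction, encode):
    """Without a second synthesis instruction the first image is the
    output result; with one, the necessary information is stored in the
    electronic identification code and composited into a second image
    (both steps abstracted here as `encode`)."""
    if second_instruction is None:
        return first_image
    return encode(first_image)
```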
Optionally, on the basis of the embodiment shown in fig. 4, after the corresponding first image is captured, the method further includes the following steps. Referring to fig. 10, fig. 10 shows a schematic flowchart of another method for obtaining screenshot information provided in the present application, which, by way of example and not limitation, may be applied to the mobile phone 100. In this embodiment, S1001, S1004, and S1005 are the same as S401 to S403 in the previous embodiment; refer to the descriptions of S401 to S403 in the previous embodiment, which are not repeated here.
S1001, acquiring a first screenshot instruction, and intercepting a corresponding first image according to the first screenshot instruction.
S1002, a second synthesis instruction is acquired.
S1003, performing binarization processing or optical character recognition on the first image to obtain the necessary information, and storing the text information into the electronic identification code.
In most images, the grayscale difference between the text and the background is large, so the text information in the first image can be extracted through binarization processing. Binarization processing means converting a grayscale image with 256 brightness levels, by selecting an appropriate threshold, into a binary image that still reflects the overall and local characteristics of the original image.
The optical character recognition (Optical Character Recognition, OCR) refers to a process in which an electronic device determines its shape by detecting dark and light patterns and then translates the shape into computer text using a character recognition method.
The first image is processed in the above manner to obtain the corresponding necessary information.
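The binarization step above can be sketched with a fixed threshold; the function name and default threshold of 128 are assumptions for illustration (practical systems often pick the threshold adaptively, e.g. with Otsu's method, and would hand the binary image to an OCR engine afterwards).

```python
def binarize(gray, threshold=128):
    """Threshold a grayscale image (2-D list of values 0-255) into a
    binary image so high-contrast text separates from the background."""
    return [[255 if px >= threshold else 0 for px in row] for px_row in [None] or [] for row in []] if False else \
           [[255 if px >= threshold else 0 for px in row] for row in gray]
```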
It will be understood that, unlike the embodiment shown in fig. 5, which extracts the necessary information from the code corresponding to the first image, this embodiment extracts the necessary information by processing the first image directly.
S1004, storing necessary information of the first image into an electronic identification code, and synthesizing the first image and the electronic identification code into a second image according to the second synthesis instruction, wherein the electronic identification code comprises a one-dimensional code, a two-dimensional code and a bar code, the electronic identification code is used for storing the necessary information of the first image, and the necessary information comprises text information and structural information of content.
S1005, scanning the electronic identification code to obtain the necessary information, and converting the necessary information into identifiable text information.
In this embodiment, the necessary information is obtained by performing binarization processing or optical character recognition on the first image, and the text information is stored in the electronic identification code. The method simplifies the steps of image processing and improves the conversion efficiency of image information.
Referring to fig. 11, fig. 11 shows a schematic diagram of an apparatus for obtaining screenshot information provided in an embodiment of the present application. The apparatus for obtaining screenshot information includes:
a first obtaining unit 111, configured to obtain a first screenshot instruction, and intercept a corresponding first image according to the first screenshot instruction; acquiring a second synthesis instruction, storing necessary information of the first image into an electronic identification code, and synthesizing the first image and the electronic identification code into a second image according to the second synthesis instruction, wherein the electronic identification code comprises a one-dimensional code, a two-dimensional code and a bar code, and the necessary information comprises content information and structural information of the content;
and a scanning unit 112, configured to scan the electronic identification code to obtain the necessary information, and convert the necessary information into identifiable text information.
With the apparatus for obtaining screenshot information, the user equipment acquires a first screenshot instruction and intercepts a corresponding first image according to the first screenshot instruction; the user equipment then acquires a second synthesis instruction, synthesizes the first image and the electronic identification code into a second image according to the second synthesis instruction, scans the electronic identification code to obtain the necessary information, and converts the necessary information into identifiable text information. In this way, an image synthesized with an electronic identification code carrying the image information is output directly after the screenshot, no further image processing is needed, the image processing steps are simplified, and the efficiency of converting image information into identifiable text information is improved.
The device further comprises:
the second acquisition unit is used for acquiring codes corresponding to the first image;
a rendering unit, configured to render a page corresponding to the first image according to the code;
and the identification unit is used for identifying the necessary information in the page and storing the necessary information into the electronic identification code.
And the computing unit is used for carrying out binarization processing or optical character recognition on the first image to obtain the necessary information, and storing the text information into the electronic identification code.
It should be understood that the sequence numbers of the steps in the foregoing embodiments do not imply an order of execution; the execution order of each process should be determined by its function and internal logic, and should not limit the implementation process of the embodiments of the present application in any way.
It should be noted that, because the content of information interaction and execution process between the above devices/units is based on the same concept as the method embodiment of the present application, specific functions and technical effects thereof may be referred to in the method embodiment section, and will not be described herein again.
It will be apparent to those skilled in the art that, for convenience and brevity of description, only the above division of functional units and modules is illustrated by example; in practical applications, the above functions may be allocated to different functional units and modules as needed, i.e., the internal structure of the apparatus may be divided into different functional units or modules to perform all or part of the functions described above. The functional units and modules in the embodiments may be integrated in one processing unit, or each unit may exist physically alone, or two or more units may be integrated in one unit; the integrated units may be implemented in the form of hardware or in the form of software functional units. In addition, the specific names of the functional units and modules are only for convenience of distinguishing them from each other and are not used to limit the protection scope of the present application. For the specific working process of the units and modules in the above system, reference may be made to the corresponding process in the foregoing method embodiments, which is not repeated here.
The embodiment of the application also provides a network device, which comprises: at least one processor, a memory, and a computer program stored in the memory and executable on the at least one processor, which when executed by the processor performs the steps of any of the various method embodiments described above.
Embodiments of the present application also provide a computer readable storage medium storing a computer program which, when executed by a processor, implements steps that may implement the various method embodiments described above.
Embodiments of the present application provide a computer program product which, when run on a mobile terminal, causes the mobile terminal to perform steps that may be performed in the various method embodiments described above.
The integrated units, if implemented in the form of software functional units and sold or used as stand-alone products, may be stored in a computer readable storage medium. Based on this understanding, the present application implements all or part of the flow of the methods of the above embodiments by instructing relevant hardware through a computer program; the computer program may be stored in a computer readable storage medium and, when executed by a processor, may implement the steps of each of the method embodiments described above. The computer program includes computer program code, which may be in source code form, object code form, an executable file, some intermediate form, etc. The computer readable medium may include at least: any entity or device capable of carrying the computer program code to the photographing device/terminal device, a recording medium, a computer memory, a read-only memory (ROM, Read-Only Memory), a random access memory (RAM, Random Access Memory), an electrical carrier signal, a telecommunications signal, and a software distribution medium, such as a USB flash drive, a removable hard disk, a magnetic disk, or an optical disk. In some jurisdictions, according to legislation and patent practice, computer readable media may not include electrical carrier signals and telecommunications signals.
In the foregoing embodiments, each embodiment is described with its own emphasis; for parts not described or detailed in one embodiment, reference may be made to the related descriptions of other embodiments.
Those of ordinary skill in the art will appreciate that the various illustrative elements and algorithm steps described in connection with the embodiments disclosed herein may be implemented as electronic hardware, or combinations of computer software and electronic hardware. Whether such functionality is implemented as hardware or software depends upon the particular application and design constraints imposed on the solution. Skilled artisans may implement the described functionality in varying ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the present application.
In the embodiments provided in the present application, it should be understood that the disclosed apparatus/network device and method may be implemented in other manners. For example, the apparatus/network device embodiments described above are merely illustrative, e.g., the division of the modules or units is merely a logical functional division, and there may be additional divisions in actual implementation, e.g., multiple units or components may be combined or integrated into another system, or some features may be omitted, or not performed. Alternatively, the coupling or direct coupling or communication connection shown or discussed may be an indirect coupling or communication connection via interfaces, devices or units, which may be in electrical, mechanical or other forms.
The units described as separate units may or may not be physically separate, and units shown as units may or may not be physical units, may be located in one place, or may be distributed on a plurality of network units. Some or all of the units may be selected according to actual needs to achieve the purpose of the solution of this embodiment.
The above embodiments are only for illustrating the technical solution of the present application, and are not limiting; although the present application has been described in detail with reference to the foregoing embodiments, it should be understood by those of ordinary skill in the art that: the technical scheme described in the foregoing embodiments can be modified or some technical features thereof can be replaced by equivalents; such modifications and substitutions do not depart from the spirit and scope of the technical solutions of the embodiments of the present application, and are intended to be included in the scope of the present application.

Claims (6)

1. A method for obtaining screenshot information, comprising: acquiring a first screenshot instruction, and intercepting a corresponding first image according to the first screenshot instruction; acquiring a second synthesis instruction, storing necessary information of the first image into an electronic identification code, and synthesizing the first image and the electronic identification code into a second image according to the second synthesis instruction, wherein the electronic identification code comprises a one-dimensional code, a two-dimensional code and a bar code, and the necessary information comprises content information and structural information of the content; scanning the electronic identification code to obtain the necessary information, and converting the necessary information into identifiable text information;
The storing the necessary information of the first image into an electronic identification code includes: acquiring codes corresponding to the first image; rendering a page corresponding to the first image according to the code; identifying the necessary information in the page and storing the necessary information into the electronic identification code; the rendering means that a browser or a user agent analyzes html source codes, creates a DOM tree, analyzes CSS codes, calculates final style data and builds a CSSOM tree, wherein the DOM tree and the CSSOM tree form a rendering tree, and a page is drawn on a screen according to the rendering tree;
and wherein the storing the necessary information into the electronic identification code comprises: storing the necessary information into the electronic identification code according to a mapping relation between the text information and the structural information of the content.
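The mapping relation recited in claim 1 — text information paired with the structural role of that text — can be illustrated outside the claim language. The following is a minimal sketch, not part of the patent; it assumes JSON as the serialization format and uses hypothetical helper names (`build_payload`, `parse_payload`). A real implementation would pass the resulting string to a QR/barcode encoder to produce the electronic identification code:

```python
import json

def build_payload(content: str, structure: dict) -> str:
    """Serialize the screenshot's text content plus its structural roles
    into the string payload that a 2D-code encoder would embed."""
    necessary_info = {
        "content": content,      # the visible text of the screenshot
        "structure": structure,  # mapping: text span -> structural role
    }
    # Compact JSON keeps the payload within a 2D code's data capacity.
    return json.dumps(necessary_info, ensure_ascii=False, separators=(",", ":"))

def parse_payload(payload: str) -> dict:
    """Inverse step, performed after scanning the identification code."""
    return json.loads(payload)

payload = build_payload(
    "Order #42 shipped",
    {"Order #42 shipped": {"role": "heading", "level": 1}},
)
info = parse_payload(payload)
```

Because the structural roles travel with the text, a scanner can recover not just what the screenshot said but how it was organized, which a flat bitmap cannot convey.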
2. The method of claim 1, further comprising, after acquiring the first screenshot instruction and capturing the corresponding first image according to the first screenshot instruction: when the second synthesis instruction is not acquired, taking the first image as the output result.
3. The method of claim 1, further comprising, after acquiring the second synthesis instruction: performing binarization processing or optical character recognition on the first image to obtain the necessary information, and storing the text information into the electronic identification code.
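The binarization processing mentioned in claim 3 can be sketched as a global-threshold pass over a grayscale image. This is an illustrative sketch only: the fixed threshold of 128 is an assumption, and practical OCR pipelines typically pick the threshold adaptively (e.g. Otsu's method):

```python
def binarize(gray: list[list[int]], threshold: int = 128) -> list[list[int]]:
    """Global-threshold binarization: pixels at or above the threshold
    become white (1), the rest black (0), simplifying later OCR."""
    return [[1 if px >= threshold else 0 for px in row] for row in gray]

# A tiny 2x3 grayscale image (values 0-255).
image = [
    [30, 200, 210],
    [25,  40, 250],
]
binary = binarize(image)  # -> [[0, 1, 1], [0, 0, 1]]
```

Reducing the image to two levels strips lighting and compression noise before character shapes are matched, which is why binarization commonly precedes OCR.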
4. An apparatus for obtaining screenshot information, comprising:
an acquisition unit, configured to acquire a first screenshot instruction and capture a corresponding first image according to the first screenshot instruction;
a synthesizing unit, configured to acquire a second synthesis instruction, store necessary information of the first image into an electronic identification code, and synthesize the first image and the electronic identification code into a second image according to the second synthesis instruction, wherein the electronic identification code comprises a one-dimensional code, a two-dimensional code and a bar code, and the necessary information comprises content information and structural information of the content; and
a scanning unit, configured to scan the electronic identification code to obtain the necessary information and convert the necessary information into identifiable text information;
the storing the necessary information of the first image into an electronic identification code includes: acquiring codes corresponding to the first image; rendering a page corresponding to the first image according to the code; identifying the necessary information in the page and storing the necessary information into the electronic identification code; the rendering means that a browser or a user agent analyzes html source codes, creates a DOM tree, analyzes CSS codes, calculates final style data and builds a CSSOM tree, wherein the DOM tree and the CSSOM tree form a rendering tree, and a page is drawn on a screen according to the rendering tree;
and wherein the storing the necessary information into the electronic identification code comprises: storing the necessary information into the electronic identification code according to a mapping relation between the text information and the structural information of the content.
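The scanning unit's conversion of the necessary information into "identifiable text information" can also be sketched. The payload format and the function name `to_identifiable_text` below are assumptions for illustration, not the patented implementation; the idea is that structural roles recovered from the code are rendered as text a screen reader could announce, consistent with the accessibility aim of the application:

```python
def to_identifiable_text(necessary_info: dict) -> str:
    """Turn scanned necessary information (content plus structural roles)
    into announcement-ready text, one line per text span."""
    lines = []
    for span, meta in necessary_info["structure"].items():
        role = meta.get("role", "text")
        if role == "heading":
            lines.append(f"Heading level {meta.get('level', 1)}: {span}")
        elif role == "link":
            lines.append(f"Link: {span}")
        else:
            lines.append(span)  # plain text spans pass through unchanged
    return "\n".join(lines)

scanned = {
    "content": "Contact us",
    "structure": {"Contact us": {"role": "link"}},
}
print(to_identifiable_text(scanned))  # prints "Link: Contact us"
```

A sighted user sees only the screenshot pixels; a user scanning the embedded code additionally learns that "Contact us" was a link rather than decorative text.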
5. A server comprising a baseboard management controller, a memory, a processor, and a computer program stored in the memory and executable on the processor, wherein the processor implements the steps of the method according to any one of claims 1 to 3 when executing the computer program.
6. A computer-readable storage medium storing a computer program, wherein the computer program, when executed by a processor, implements the steps of the method according to any one of claims 1 to 3.
CN201910859179.9A 2019-09-11 2019-09-11 Method and device for acquiring screenshot information Active CN110795007B (en)

Priority Applications (2)

Application Number Priority Date Filing Date Title
CN201910859179.9A CN110795007B (en) 2019-09-11 2019-09-11 Method and device for acquiring screenshot information
PCT/CN2020/096793 WO2021047230A1 (en) 2019-09-11 2020-06-18 Method and apparatus for obtaining screenshot information

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201910859179.9A CN110795007B (en) 2019-09-11 2019-09-11 Method and device for acquiring screenshot information

Publications (2)

Publication Number Publication Date
CN110795007A (en) 2020-02-14
CN110795007B (en) 2023-12-26

Family

ID=69427114

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910859179.9A Active CN110795007B (en) 2019-09-11 2019-09-11 Method and device for acquiring screenshot information

Country Status (2)

Country Link
CN (1) CN110795007B (en)
WO (1) WO2021047230A1 (en)

Families Citing this family (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110795007B (en) * 2019-09-11 2023-12-26 深圳市联谛信息无障碍有限责任公司 Method and device for acquiring screenshot information
CN115003399A (en) * 2021-01-28 2022-09-02 深圳市迪迪金科技有限公司 Tape programmer, electric toy and control method thereof
CN115033318B (en) * 2021-11-22 2023-04-14 荣耀终端有限公司 Character recognition method for image, electronic device and storage medium
CN115550299A (en) * 2022-10-09 2022-12-30 深圳依时货拉拉科技有限公司 Image information communication method, electronic device, and storage medium

Citations (8)

Publication number Priority date Publication date Assignee Title
CN104298744A (en) * 2014-10-10 2015-01-21 广州三星通信技术研究有限公司 Method and device for sharing and obtaining webpage content
CN104851117A (en) * 2014-02-13 2015-08-19 腾讯科技(深圳)有限公司 Method for fusing image with two-dimensional barcode and device thereof
CN106127837A (en) * 2015-05-07 2016-11-16 顶漫画股份有限公司 The multi-language support system of network caricature
CN107659416A (en) * 2017-03-27 2018-02-02 广州视源电子科技股份有限公司 Method, apparatus, conference terminal and the storage medium that a kind of minutes are shared
CN108259973A (en) * 2017-12-20 2018-07-06 青岛海信电器股份有限公司 The display methods of the graphic user interface of smart television and television image sectional drawing
WO2018129051A1 (en) * 2017-01-04 2018-07-12 Advanced Functional Fabrics Of America Uniquely identifiable articles of fabric and social networks employing them
WO2019023884A1 (en) * 2017-07-31 2019-02-07 深圳传音通讯有限公司 Smart terminal-based merchant information sharing method and merchant information sharing system
WO2019119800A1 (en) * 2017-12-20 2019-06-27 聚好看科技股份有限公司 Method for processing television screenshot, smart television, and storage medium

Family Cites Families (7)

Publication number Priority date Publication date Assignee Title
CN1396538A (en) * 2002-08-07 2003-02-12 深圳矽感科技有限公司 Method and system for electronizing character and chart information on ordinary carrier
KR100719776B1 (en) * 2005-02-25 2007-05-18 에이디정보통신 주식회사 Portable cord recognition voice output device
JP2007150788A (en) * 2005-11-29 2007-06-14 Canon Inc Facsimile machine and its control method
CN104715498A (en) * 2014-12-30 2015-06-17 上海孩子国科教设备有限公司 Linked data storing method and system
CN108205674B (en) * 2017-12-22 2022-04-15 广州爱美互动网络科技有限公司 Social APP content identification method, electronic device, storage medium and system
CN108964915A (en) * 2018-05-07 2018-12-07 浙江大学 A kind of printed matter non-intrusive interaction method based on two dimensional code auxiliary
CN110795007B (en) * 2019-09-11 2023-12-26 深圳市联谛信息无障碍有限责任公司 Method and device for acquiring screenshot information

Also Published As

Publication number Publication date
CN110795007A (en) 2020-02-14
WO2021047230A1 (en) 2021-03-18

Similar Documents

Publication Publication Date Title
CN108496150B (en) Screen capture and reading method and terminal
CN110795007B (en) Method and device for acquiring screenshot information
US10031656B1 (en) Zoom-region indicator for zooming in an electronic interface
CN108156508B (en) Barrage information processing method and device, mobile terminal, server and system
KR20120000850A (en) Mobile terminal and operation method thereof
CN109857297B (en) Information processing method and terminal equipment
CN107707762A (en) A kind of method for operating application program and mobile terminal
CN110888705B (en) Method for processing unread message corner marks and electronic equipment
CN113552986A (en) Multi-window screen capturing method and device and terminal equipment
KR20150087024A (en) Mobile terminal and method for controlling the same
CN111656347B (en) Project display method and terminal
WO2019076377A1 (en) Image viewing method and mobile terminal
CN109917988B (en) Selected content display method, device, terminal and computer readable storage medium
CN112837057A (en) Method for preventing payment code from being stolen, terminal equipment and computer readable storage medium
KR101672215B1 (en) Mobile terminal and operation method thereof
CN112835493A (en) Screen capture display method and device and terminal equipment
CN113031838B (en) Screen recording method and device and electronic equipment
KR20120062427A (en) Mobile terminal and operation method thereof
CN113409041B (en) Electronic card selection method, device, terminal and storage medium
CN109918580A (en) A kind of searching method and terminal device
CN111061574B (en) Object sharing method and electronic device
CN115841181B (en) Residual oil distribution prediction method, device, equipment and storage medium
CN111325316B (en) Training data generation method and device
CN116304355B (en) Object-based information recommendation method and device, electronic equipment and storage medium
CN111126996B (en) Image display method and terminal equipment

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant