CN108932448B - Electronic screen-based click-to-read code identification method, terminal and click-to-read pen - Google Patents

Info

Publication number
CN108932448B
Authority
CN
China
Prior art keywords
code
screen
click
pen
screen shot
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Expired - Fee Related
Application number
CN201710376209.1A
Other languages
Chinese (zh)
Other versions
CN108932448A (en)
Inventor
陈政安
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shenzhen Jiuzhou Media Technology Co ltd
Original Assignee
Shenzhen Jiuzhou Media Technology Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shenzhen Jiuzhou Media Technology Co ltd filed Critical Shenzhen Jiuzhou Media Technology Co ltd
Priority to CN201710376209.1A priority Critical patent/CN108932448B/en
Publication of CN108932448A publication Critical patent/CN108932448A/en
Application granted granted Critical
Publication of CN108932448B publication Critical patent/CN108932448B/en

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06K GRAPHICAL DATA READING; PRESENTATION OF DATA; RECORD CARRIERS; HANDLING RECORD CARRIERS
    • G06K7/00 Methods or arrangements for sensing record carriers, e.g. for reading patterns
    • G06K7/10 Methods or arrangements for sensing record carriers, e.g. for reading patterns, by electromagnetic radiation, e.g. optical sensing; by corpuscular radiation
    • G06K7/14 Methods or arrangements for sensing record carriers, e.g. for reading patterns, by electromagnetic radiation using light without selection of wavelength, e.g. sensing reflected white light
    • G06K7/1404 Methods for optical code recognition
    • G PHYSICS
    • G09 EDUCATION; CRYPTOGRAPHY; DISPLAY; ADVERTISING; SEALS
    • G09B EDUCATIONAL OR DEMONSTRATION APPLIANCES; APPLIANCES FOR TEACHING, OR COMMUNICATING WITH, THE BLIND, DEAF OR MUTE; MODELS; PLANETARIA; GLOBES; MAPS; DIAGRAMS
    • G09B5/00 Electrically-operated educational appliances
    • G09B5/04 Electrically-operated educational appliances with audible presentation of the material to be studied

Abstract

The embodiment of the invention provides an electronic-screen-based click-to-read code identification method, a terminal and a click-to-read pen, and relates to the technical field of electronics. The method comprises the following steps: controlling a screen of the terminal to display, at a preset brightness and gray level, a picture paved with click-to-read codes; receiving a click operation of the click-to-read pen on the screen; and, according to the click operation, controlling the terminal to refresh the code-paved picture displayed on the screen at a preset frequency, so that the click-to-read pen reads and identifies the click-to-read codes in the picture. The embodiment of the invention can expand the range of click-to-read codes identifiable by a single click operation of the pen, increase the identification length of the click-to-read code, and expand its code value range.

Description

Electronic screen-based click-to-read code identification method, terminal and click-to-read pen
Technical Field
The invention belongs to the technical field of electronics, and particularly relates to a touch and talk code identification method based on an electronic screen, a terminal and a touch and talk pen.
Background
With the continuous improvement of people's living standards, the touch and talk pen has become a popular electronic product for assisting learning. The touch and talk pen products currently on the market are attractive in appearance and diverse in style, and students of all ages can find a product suited to their own learning.
However, the touch and talk pens currently on the market basically read, through an optical sensor, an OID code printed on a paper medium, then decode the read OID code and play the decoded information as voice. The OID code is usually a 2-byte code, so at most 65536 IDs can be encoded. This point-reading identification mode has the following defects:
the OID codes clicked and identified by the touch and talk pen can only be printed on a paper medium, and one click corresponds to only one OID code, so the identification range of the touch and talk codes is small. In addition, since the identification length of a point reading code is generally two bytes, the code value range is only a little over 60,000, so the application scenarios of the point reading code are relatively limited.
Disclosure of Invention
In view of this, embodiments of the present invention provide an electronic-screen-based click-to-read code identification method, a terminal and a touch and talk pen, so as to solve the problems in the prior art that the OID code recognized by a touch and talk pen must be printed on a paper medium, that only one OID code can be recognized per click, that the identification range of the touch and talk code is therefore small, and that the application scenarios of the touch and talk code are relatively limited.
In a first aspect, an embodiment of the present invention provides a method for identifying a point-to-read code based on an electronic screen, including:
controlling a screen of the terminal to display a picture paved with point reading codes at preset brightness and gray scale;
receiving click operation of a touch and talk pen on the screen;
and according to the clicking operation, controlling the terminal to refresh the picture paved with the point reading codes displayed on the screen at a preset frequency, so that the point reading pen reads and identifies the point reading codes in the picture.
In a second aspect, an embodiment of the present invention provides a method for identifying a click-to-read code based on an electronic screen, including:
when the terminal refreshes the picture paved with the point reading code displayed on the screen at a preset frequency, controlling a camera component on the point reading pen to take a picture of the point reading code region on the screen according to the preset frequency to obtain a plurality of screen shot pictures;
controlling the touch and talk pen to cache the plurality of screen shot pictures at a set cache frequency according to the preset frequency;
acquiring click-to-read codes in the multiple screen shot pictures, and splicing the click-to-read codes in the multiple screen shot pictures according to the cache sequence of the multiple screen shot pictures to generate a multiple-picture group code;
and identifying and outputting the voice information corresponding to the multi-image group code according to the corresponding relation between the multi-image group code and the pre-stored code segment and the voice information.
In a third aspect, an embodiment of the present invention provides a terminal, where the terminal includes:
the screen display control unit is used for controlling a screen of the terminal to display a picture paved with point reading codes at preset brightness and gray scale;
the click operation receiving unit is used for receiving click operation of the click-to-read pen on the screen;
and the refreshing frequency control unit is used for controlling the terminal to refresh the picture paved with the point reading codes displayed on the screen at a preset frequency according to the clicking operation, so that the point reading pen reads and identifies the point reading codes in the picture.
In a fourth aspect, an embodiment of the present invention provides a touch and talk pen, where the touch and talk pen includes:
the shooting unit is used for controlling a camera shooting assembly on the point reading pen to shoot a point reading code area on a screen according to a preset frequency when the terminal refreshes, at the preset frequency, a picture paved with the point reading code displayed on the screen, so as to obtain a plurality of screen shot pictures;
the cache unit is used for controlling the point reading pen to cache the plurality of screen shot pictures at a set cache frequency according to the preset frequency;
the code combining unit is used for acquiring the click-to-read codes in the multiple screen shot pictures, and splicing the click-to-read codes in the multiple screen shot pictures according to the cache sequence of the multiple screen shot pictures to generate a multi-picture group code;
and the recognition unit is used for recognizing and outputting the voice information corresponding to the multi-image group code according to the corresponding relation between the multi-image group code and the pre-stored code segment and the voice information.
In a fifth aspect, an embodiment of the present invention provides a terminal, which includes a memory, a processor, and a computer program stored in the memory and executable on the processor, wherein the processor implements the steps of the method according to the first aspect when executing the computer program.
In a sixth aspect, the present invention provides a computer-readable storage medium, which stores a computer program, wherein the computer program, when executed by a processor, implements the steps of the method according to the first aspect.
In a seventh aspect, an embodiment of the present invention provides a touch and talk pen, which includes a memory, a processor, and a computer program stored in the memory and executable on the processor, wherein the processor implements the steps of the method according to the second aspect when executing the computer program.
In an eighth aspect, the present invention provides a computer-readable storage medium, which stores a computer program, wherein the computer program, when executed by a processor, implements the steps of the method according to the second aspect.
Compared with the prior art, the embodiment of the invention has the following beneficial effects:
the method comprises the steps that a picture paved with point reading codes is displayed at preset brightness and gray scale through a screen of a control terminal; receiving click operation of a touch and talk pen on the screen; and according to the click operation, the terminal is controlled to refresh the picture laid with the point reading codes displayed on the screen at a preset frequency, so that the point reading pen reads and identifies the point reading codes in the picture, the range of the point reading codes which can be identified by one click operation of the point reading pen can be expanded, the identification length of the point reading codes is increased, and the code value range of the point reading codes is expanded.
Drawings
In order to more clearly illustrate the technical solutions in the embodiments of the present invention, the drawings needed to be used in the embodiments or the prior art descriptions will be briefly described below, and it is obvious that the drawings in the following description are only some embodiments of the present invention, and it is obvious for those skilled in the art to obtain other drawings based on these drawings without inventive exercise.
Fig. 1 is a schematic flow chart of a click-to-read code identification method based on an electronic screen according to an embodiment of the present invention;
fig. 2 is a schematic flow chart of a click-to-read code identification method based on an electronic screen according to an embodiment of the present invention;
FIG. 3 is a schematic flow chart of a click-to-read code recognition method based on an electronic screen according to an embodiment of the present invention;
fig. 4 is a schematic block diagram of a terminal according to an embodiment of the present invention;
FIG. 5 is a schematic block diagram of a touch and talk pen according to an embodiment of the present invention;
FIG. 6 is a schematic block diagram of a touch and talk pen according to an embodiment of the present invention;
fig. 7 is a schematic block diagram of a terminal according to an embodiment of the present invention;
fig. 8 is a schematic block diagram of a touch and talk pen according to an embodiment of the present invention.
Detailed Description
In the following description, for purposes of explanation and not limitation, specific details are set forth, such as particular system structures, techniques, etc. in order to provide a thorough understanding of the embodiments of the invention. It will be apparent, however, to one skilled in the art that the present invention may be practiced in other embodiments that depart from these specific details. In other instances, detailed descriptions of well-known systems, devices, circuits, and methods are omitted so as not to obscure the description of the present invention with unnecessary detail.
In order to explain the technical means of the present invention, the following description will be given by way of specific examples.
Fig. 1 is a schematic flowchart of a click-to-read code identification method based on an electronic screen according to an embodiment of the present invention, where an execution subject of the method is a terminal according to an embodiment of the present invention. Referring to fig. 1, the method for identifying a point-to-read code based on an electronic screen according to this embodiment may include the following steps:
and S101, controlling a screen of the terminal to display a picture paved with a point reading code at preset brightness and gray scale.
In this embodiment, the terminal includes, but is not limited to, an electronic device with a screen display function, such as a mobile phone and a tablet computer. The terminal is internally stored with the picture paved with the point reading code, and the brightness value and the gray value of the picture paved with the point reading code can be rendered while the display brightness and the gray value of a screen of the terminal are controlled.
Because the point reading pen cannot identify the point reading code in the picture when the picture contrast is too low, and noise interferes with the point reading code captured by the pen when the contrast is too high, this embodiment controls the screen of the terminal to display the picture paved with the point reading code at a preset brightness and gray level. In this way the picture is displayed with an appropriate contrast, ensuring the accuracy and sensitivity of point reading code identification.
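For illustration only, the brightness and gray-level rendering described above could be approximated as a simple brightness/contrast adjustment applied to the code-paved picture before it is displayed. The sketch below is not taken from the disclosure; the function name and the concrete values are hypothetical placeholders for the "preset" brightness and gray level.

```python
import numpy as np

def render_code_picture(code_picture, brightness=40, contrast=0.6):
    """Apply a preset brightness offset and contrast scale to the
    code-paved picture before it is shown on the screen.

    code_picture: grayscale image as a numpy uint8 array
    brightness:   additive offset (illustrative value only)
    contrast:     multiplicative scale < 1 keeps the contrast moderate
    """
    rendered = code_picture.astype(np.float32) * contrast + brightness
    return np.clip(rendered, 0, 255).astype(np.uint8)
```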
And step S102, receiving the click operation of the touch and talk pen on the screen.
In this embodiment, the pen point of the touch and talk pen is provided with an optical recognition sensor, which can capture an image of a touch and talk code area of the electronic screen by means of close-range contact, wherein the touch and talk code area is a contact area of the touch and talk pen and the electronic screen.
In this embodiment, a pressure sensor is disposed below the electronic screen of the terminal, and the terminal can determine whether a click operation of the touch and talk pen has been received by detecting whether the pressure value of a preset display area on the electronic screen is greater than a preset threshold. Specifically: if the pressure sensor detects that the pressure value of the preset display area on the electronic screen is greater than the preset threshold, the click operation of the touch and talk pen has been received; otherwise, it has not. The preset display area is the display area of the screen on which the point reading picture is laid.
Preferably, in other implementation examples, the terminal may further determine, through the pressure sensor, whether the pressure value of the preset display area on the electronic screen is greater than the preset threshold and whether the pressure stays above the threshold for longer than a preset time, so as to decide whether a click operation of the touch and talk pen has been received. Specifically: if the pressure sensor detects that the pressure value of the preset display area is greater than the preset threshold and the duration of that pressure exceeds the preset time, the electronic screen has received the click operation of the touch and talk pen; otherwise, it has not. This avoids misjudging an accidental touch of the electronic screen by the user as a click operation of the touch and talk pen.
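A minimal sketch of the press-plus-duration check described above is given below, assuming a pressure sensor that can be polled and a helper that reports whether the touch point lies inside the preset display area. The callables, threshold and duration values are hypothetical and are not part of the disclosure.

```python
import time

# Hypothetical values; the text only specifies "a preset threshold"
# and "a preset time", not concrete numbers.
PRESSURE_THRESHOLD = 0.5   # normalized pressure reading
MIN_PRESS_DURATION = 0.05  # seconds the pressure must stay above threshold

def wait_for_pen_click(read_pressure, in_code_area, poll_interval=0.005):
    """Return True once a sustained press is detected inside the
    display area that shows the code-paved picture.

    read_pressure(): callable returning the current pressure value
    in_code_area():  callable returning True if the touch point lies
                     inside the preset display area
    """
    press_start = None
    while True:
        pressed = read_pressure() > PRESSURE_THRESHOLD and in_code_area()
        if pressed:
            if press_start is None:
                press_start = time.monotonic()
            elif time.monotonic() - press_start >= MIN_PRESS_DURATION:
                return True          # sustained press: treat as a pen click
        else:
            press_start = None       # released or moved off the area: reset
        time.sleep(poll_interval)
```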
And step S103, according to the clicking operation, controlling the terminal to refresh the picture laid with the point reading codes displayed on the screen at a preset frequency, so that the point reading pen reads and identifies the point reading codes in the picture.
Preferably, in this embodiment, the preset frequency corresponds to a refresh interval of 10 ms, that is, the terminal is controlled to refresh the picture displayed on the screen every 10 ms. Since the point reading pen generally stays on the screen for at least 100 ms during one click operation, the pen can read 10 different point reading codes per click, which extends the identification range of the point reading codes, increases their identification length and extends their code value range.
It should be noted that the 10 ms refresh interval is only a preferred example of the present invention and does not limit it. In other implementation examples other interval values may be used, and after the interval is set the user may modify it according to personal usage habits, as long as the touch and talk pen can read a plurality of different touch and talk codes during one click operation.
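The refresh behaviour of step S103 could be sketched as follows, assuming the terminal can push pre-rendered code-paved frames to the screen. The 10 ms interval and 100 ms dwell follow the preferred example above; the function names and frame source are hypothetical.

```python
import itertools
import time

REFRESH_INTERVAL = 0.010  # 10 ms per refresh, the preferred value in the text

def refresh_code_frames(display_frame, code_frames, dwell_time=0.100):
    """Cycle through code-paved frames at the preset interval for the
    duration of one pen click (dwell_time, typically >= 100 ms).

    display_frame(frame): callable that pushes one frame to the screen
    code_frames:          ordered sequence of code-paved pictures
    """
    deadline = time.monotonic() + dwell_time
    for frame in itertools.cycle(code_frames):
        display_frame(frame)
        if time.monotonic() >= deadline:
            break
        time.sleep(REFRESH_INTERVAL)

# With a 10 ms interval and a 100 ms dwell, one click exposes roughly
# 100 / 10 = 10 distinct frames, so about 10 different codes can be read.
```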
As can be seen from the above, in the point-read code identification method based on the electronic screen provided in this embodiment, since the picture laid with the point-read code is displayed at the preset brightness and gray level by the screen of the control terminal; receiving click operation of a touch and talk pen on the screen; and according to the click operation, the terminal is controlled to refresh the picture laid with the point reading codes displayed on the screen at a preset frequency, so that the point reading pen reads and identifies the point reading codes in the picture, the range of the point reading codes which can be identified by one click operation of the point reading pen can be expanded, the identification length of the point reading codes is increased, and the code value range of the point reading codes is expanded.
Fig. 2 is a schematic flowchart of a touch and talk code recognition method based on an electronic screen according to an embodiment of the present invention, where the execution subject of the method is a touch and talk pen according to an embodiment of the present invention. Referring to fig. 2, the method for identifying a point-to-read code based on an electronic screen according to this embodiment may include the following steps:
step S201, when the terminal refreshes the picture laid with the point reading code displayed on the screen at a preset frequency, controlling a camera component on the point reading pen to shoot the point reading code area on the screen according to the preset frequency, and acquiring a plurality of screen shot pictures.
In this embodiment, the pen point of the touch and talk pen is provided with an optical recognition sensor, which can capture an image of a touch and talk code area of the electronic screen by means of close-range contact, wherein the touch and talk code area is a contact area of the touch and talk pen and the electronic screen.
In this embodiment, when the terminal refreshes the picture paved with the click-to-read code displayed on the screen at the preset frequency, the click-to-read pen controls the camera component arranged on the pen head to photograph, at the same frequency as the screen refresh, the click-to-read code region picture displayed on the screen, so as to obtain a plurality of screen shot pictures corresponding to the successive code region pictures shown on the screen. The code region picture is the picture displayed in the contact area between the touch and talk pen and the screen.
The preset frequency may be set by the user according to personal usage habits and the shooting speed of the camera component on the touch and talk pen, so that the camera component can complete the shooting of the click-to-read code area on the screen at the shooting rate corresponding to the preset frequency. Preferably, in this embodiment, the preset frequency corresponds to an interval of 10 ms per shot.
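A sketch of the pen-side capture in step S201, assuming a camera component that can be triggered from software; the capture interval mirrors the terminal's preferred 10 ms refresh interval, and all names are illustrative only.

```python
import time

CAPTURE_INTERVAL = 0.010  # matches the terminal's 10 ms refresh interval

def capture_screen_shots(camera_capture, n_frames):
    """Photograph the code region once per screen refresh and return
    the shots in capture order.

    camera_capture(): callable returning one image of the code region
    n_frames:         number of refreshes to follow (e.g. 10 for a 100 ms click)
    """
    shots = []
    for _ in range(n_frames):
        shots.append(camera_capture())
        time.sleep(CAPTURE_INTERVAL)
    return shots
```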
Step S202, controlling the touch and talk pen to cache the plurality of screen shot pictures at a set caching frequency according to the preset frequency.
In this embodiment, the touch-and-talk pen may preset a caching frequency for the pictures according to the refresh frequency at which the terminal controls the screen, and then cache the pictures at that frequency after obtaining the plurality of screen shot pictures. For example, in a specific application, if the refresh frequency of the screen allows the click-to-read pen to obtain 5 screen shot pictures per click operation, the caching frequency may be set to 5 caches per second, so that the click-to-read codes in all the pictures cached within one caching period form a complete code segment.
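The caching behaviour of step S202 could look roughly like the following sketch, which groups shots into caching periods of five pictures per click, following the example above. The class and its interface are assumptions for illustration, not the disclosed implementation.

```python
from collections import deque

class ShotCache:
    """Buffer screen shots so that each caching period holds exactly the
    shots produced by one click (here 5, following the example above)."""

    def __init__(self, shots_per_period=5):
        self.shots_per_period = shots_per_period
        self._current = []
        self.periods = deque()   # completed caching periods, oldest first

    def add(self, shot):
        self._current.append(shot)
        if len(self._current) == self.shots_per_period:
            # The shots of one period together form a complete code segment.
            self.periods.append(list(self._current))
            self._current.clear()

    def pop_period(self):
        return self.periods.popleft() if self.periods else None
```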
Step S203, acquiring the click-to-read codes in the multiple screen shot pictures, and splicing the click-to-read codes in the multiple screen shot pictures according to the cache sequence of the multiple screen shot pictures to generate a multi-picture group code.
In this embodiment, after the plurality of screen shot pictures cached in a given caching period are obtained, the click-to-read codes in these pictures can be extracted separately and then spliced according to the caching order of the pictures. Since the screen shot pictures are cached in the order of their shooting times, and each picture is marked with its shooting time point, splicing the click-to-read codes according to the caching order is equivalent to splicing them according to the shooting time order of the pictures.
In this embodiment, the multi-image group code is the code segment obtained by splicing the click-to-read codes in the multiple screen shot pictures.
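A sketch of the splicing in step S203, assuming a per-shot decoder (hypothetical here) that returns the 2-byte code found in one screen shot; the shots are ordered by their recorded shooting time, which by construction matches the caching order.

```python
def build_group_code(shots, decode_single_code, timestamp_of):
    """Decode the code in each cached shot and concatenate the results
    in capture-time order to form the multi-image group code.

    decode_single_code(shot): hypothetical decoder returning the 2-byte
                              code found in one shot, as bytes
    timestamp_of(shot):       shooting time point recorded with the shot
    """
    ordered = sorted(shots, key=timestamp_of)   # caching follows capture time
    return b"".join(decode_single_code(s) for s in ordered)

# For example, ten concatenated 2-byte codes yield a 20-byte group code,
# i.e. a code space of 65536**10 values instead of the 65536 values of a
# single 2-byte OID code.
```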
Step S204, identifying and outputting the voice information corresponding to the multi-image group code according to the corresponding relation between the multi-image group code and the pre-stored code segment and the voice information.
In this embodiment, the reading pen stores in advance a correspondence between code segments and voice information. After acquiring the multi-image group code, the reading pen queries this correspondence to identify the voice information corresponding to the multi-image group code, and then plays the identified voice information to the user through the voice device on the reading pen.
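Step S204 then reduces to a lookup in the pre-stored correspondence, which could be sketched as follows; the mapping structure and playback callable are assumptions for illustration.

```python
def play_group_code(group_code, code_to_audio, play_audio):
    """Look up the multi-image group code in the pre-stored
    code-segment-to-voice mapping and play the matching clip.

    code_to_audio: dict mapping code segments (bytes) to audio payloads
    play_audio(a): callable that plays one audio payload on the pen
    """
    audio = code_to_audio.get(group_code)
    if audio is None:
        return False          # unknown code segment: nothing to play
    play_audio(audio)
    return True
```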
As can be seen from the above, in the method for identifying a point-read code based on an electronic screen provided by this embodiment, when the terminal refreshes the picture laid with the point-read code displayed on the screen at the preset frequency, the camera component on the point-read pen is controlled to take a picture of the point-read code region on the screen according to the preset frequency, so as to obtain a plurality of screen shot pictures; controlling the touch and talk pen to cache the plurality of screen shot pictures at a set cache frequency according to the preset frequency; acquiring click-to-read codes in the multiple screen shot pictures, and splicing the click-to-read codes in the multiple screen shot pictures according to the cache sequence of the multiple screen shot pictures to generate a multiple-picture group code; and identifying and outputting the voice information corresponding to the multi-image group code according to the corresponding relation between the multi-image group code and the pre-stored code segment and the voice information, so that the reading range which can be identified by one-time clicking operation of the reading pen can be expanded, the identification length of the reading code is increased, and the code value range of the reading code is expanded.
Fig. 3 is a schematic flowchart of a touch and talk code recognition method based on an electronic screen according to an embodiment of the present invention, where the execution subject of the method is a touch and talk pen according to an embodiment of the present invention. Referring to fig. 3, compared with the previous embodiment, the method for identifying a click-to-read code based on an electronic screen according to this embodiment further includes, before acquiring the click-to-read codes in the multiple screen shot pictures:
step S303, carrying out infrared interference filtering processing on the plurality of screen shot pictures;
step S304, carrying out gray level processing on the plurality of screen shot pictures after the infrared interference filtering processing, and obtaining the gray level pictures of the plurality of screen shot pictures.
Preferably, in this embodiment, a high-pass filter or a preset noise filtering algorithm may be used to perform the infrared interference filtering processing on the plurality of screen shot pictures. Because of the infrared emission of the terminal's electronic screen, the pictures captured by the camera component of the touch and talk pen may contain noise. Therefore, in this embodiment the infrared interference filtering processing is applied to the plurality of screen shot pictures before the click-to-read codes are acquired from them, so that the infrared noise on the pictures is filtered out and the click-to-read codes can be extracted more easily.
In addition, in this embodiment, the gray level processing performed on the plurality of screen shot pictures after the infrared interference filtering generates a plurality of gray level pictures corresponding to the screen shot pictures, and the subsequent click-to-read code identification is performed on these gray level pictures, which improves the accuracy and sensitivity of click-to-read code identification.
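A rough preprocessing sketch for steps S303 and S304, using OpenCV as a stand-in: the text leaves the infrared filtering algorithm open (a high-pass filter or another noise filter), so the median filter below is only one plausible choice, not the disclosed method.

```python
import cv2

def preprocess_shots(shots):
    """Denoise each screen shot and convert it to grayscale before the
    click-to-read codes are extracted.

    shots: list of BGR images as numpy uint8 arrays (as returned by cv2)
    """
    gray_shots = []
    for shot in shots:
        # Stand-in for the infrared-interference filtering step: a median
        # filter suppresses the speckle-like noise picked up from the screen.
        denoised = cv2.medianBlur(shot, 3)
        gray = cv2.cvtColor(denoised, cv2.COLOR_BGR2GRAY)
        gray_shots.append(gray)
    return gray_shots
```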
It should be noted that, in this embodiment, the implementations of steps S301 to S302 and steps S305 to S306 are the same as those of steps S201 to S202 and steps S203 to S204, respectively, in the embodiment shown in fig. 2, and are therefore not described again here.
Therefore, it can be seen that the electronic-screen-based click-to-read code identification method provided by the embodiment can also expand the click-to-read code range that can be identified by one click operation of the click-to-read pen, increase the identification length of the click-to-read code, and expand the code value range of the click-to-read code; compared with the previous embodiment, in the embodiment, the infrared noise filtering processing and the gray level processing are performed on the picture before the point reading code is acquired, so that the accuracy and the sensitivity of point reading code identification can be further improved.
Fig. 4 is a schematic block diagram of a terminal according to an embodiment of the present invention. For convenience of explanation, only the portions related to the present embodiment are shown.
Referring to fig. 4, the present embodiment provides a terminal 4, including:
a screen display control unit 41, configured to control a screen of the terminal to display a picture with a dot reading code laid thereon at preset brightness and gray;
a click operation receiving unit 42, configured to receive a click operation of the touch and talk pen on the screen;
and a refresh frequency control unit 43, configured to control the terminal to refresh the picture laid with the point reading code displayed on the screen at a preset frequency according to the click operation, so that the point reading pen reads and identifies the point reading code in the picture.
It should be noted that, since each unit in the terminal provided in the embodiment of the present invention is based on the same concept as the method embodiment shown in fig. 1 of the present invention, the technical effect brought by the unit is the same as the method embodiment shown in fig. 1 of the present invention, and specific contents may refer to the description in the method embodiment shown in fig. 1 of the present invention, and are not described again here.
Therefore, it can be seen that the terminal provided by the embodiment of the invention can also expand the range of the point-reading codes which can be identified by one-time clicking operation of the point-reading pen, increase the identification length of the point-reading codes, and expand the code value range of the point-reading codes.
Fig. 5 is a schematic block diagram of a touch and talk pen according to an embodiment of the present invention. For convenience of explanation, only the portions related to the present embodiment are shown.
Referring to fig. 5, the present embodiment provides a touch and talk pen 5, which includes:
the shooting unit 51 is used for controlling a camera component on the point reading pen to shoot a point reading code area on the screen according to a preset frequency when the terminal refreshes the picture laid with the point reading code displayed on the screen at the preset frequency, so as to obtain a plurality of screen shot pictures;
the cache unit 52 is configured to control the point-reading pen to cache the multiple screen shot pictures at a set cache frequency according to the preset frequency;
a code combining unit 53, configured to obtain the click-to-read codes in the multiple screen shot pictures, and splice the click-to-read codes in the multiple screen shot pictures according to the cache sequence of the multiple screen shot pictures to generate a multi-picture group code;
and the identifying unit 54 is configured to identify and output the voice information corresponding to the multi-image group code according to the corresponding relationship between the multi-image group code and the pre-stored code segment and the voice information.
Optionally, referring to fig. 6, the touch and talk pen 5 further includes:
a filtering unit 55, configured to perform infrared interference filtering processing on the multiple screen shot pictures;
and the gray processing unit 56 is configured to perform gray processing on the multiple screen shot pictures after the infrared interference filtering processing, and acquire gray pictures of the multiple screen shot pictures.
It should be noted that, since each unit in the touch and talk pen provided in the embodiment of the present invention is based on the same concept as the method embodiments shown in fig. 2 to fig. 3 of the present invention, the technical effect brought by each unit is the same as that of the method embodiments shown in fig. 2 to fig. 3 of the present invention; for specific contents, reference may be made to the description in the method embodiments shown in fig. 2 to fig. 3 of the present invention, which is not repeated here.
Therefore, it can be seen that the click-to-read pen provided by the embodiment of the present invention can also extend the range of the click-to-read code that can be recognized by the click-to-read pen through one click operation, increase the recognition length of the click-to-read code, and extend the code value range of the click-to-read code.
It should be understood that, the sequence numbers of the steps in the foregoing embodiments do not imply an execution sequence, and the execution sequence of each process should be determined by its function and inherent logic, and should not constitute any limitation to the implementation process of the embodiments of the present invention.
Fig. 7 is a schematic diagram of a terminal according to an embodiment of the present invention. As shown in fig. 7, the terminal 7 of this embodiment includes: a processor 70, a memory 71 and a computer program 72 stored in said memory 71 and executable on said processor 70. The processor 70, when executing the computer program 72, implements the steps in the method embodiment of fig. 1 described above, such as the steps 101 to 103 of fig. 1. Alternatively, the processor 70, when executing the computer program 72, implements the functions of the modules/units in the embodiment shown in fig. 4, such as the functions of the units 41 to 43 shown in fig. 4.
Illustratively, the computer program 72 may be partitioned into one or more modules/units that are stored in the memory 71 and executed by the processor 70 to implement the present invention. The one or more modules/units may be a series of computer program instruction segments capable of performing specific functions, which are used to describe the execution of the computer program 72 in the terminal 7. For example, the computer program 72 may be divided into a screen display control unit, a click operation receiving unit, and a refresh frequency control unit, and the specific functions of each unit are as follows:
the screen display control unit is used for controlling a screen of the terminal to display a picture paved with point reading codes at preset brightness and gray scale;
the click operation receiving unit is used for receiving click operation of the click-to-read pen on the screen;
and the refreshing frequency control unit is used for controlling the terminal to refresh the picture paved with the point reading codes displayed on the screen at a preset frequency according to the clicking operation, so that the point reading pen reads and identifies the point reading codes in the picture.
The terminal can be a desktop computer, a notebook, a palm computer, a cloud server and other computing equipment. The terminal device may include, but is not limited to, a processor 70, a memory 71. It will be appreciated by those skilled in the art that fig. 7 is only an example of a terminal 7 and does not constitute a limitation of the terminal 7, and that it may comprise more or less components than those shown, or some components may be combined, or different components, for example the terminal may further comprise input output devices, network access devices, buses, etc.
In another embodiment of the invention, a computer-readable storage medium is provided, storing a computer program which, when executed by a processor, implements: the implementation manner described in the embodiment of the method shown in fig. 1 of the present invention may also implement the implementation manner of the terminal described in the embodiment shown in fig. 4 of the present invention, and is not described herein again.
Fig. 8 is a schematic diagram of a touch and talk pen according to an embodiment of the present invention. As shown in fig. 8, the touch and talk pen 8 of this embodiment includes: a processor 80, a memory 81 and a computer program 82 stored in said memory 81 and executable on said processor 80. The processor 80, when executing the computer program 82, implements the steps in the method embodiments of fig. 2 or fig. 3 described above, such as the steps 201 to 204 shown in fig. 2. Alternatively, the processor 80, when executing the computer program 82, implements the functions of the modules/units in the embodiments shown in fig. 5 or fig. 6, such as the functions of the units 51 to 54 shown in fig. 5.
Illustratively, the computer program 82 may be partitioned into one or more modules/units that are stored in the memory 81 and executed by the processor 80 to implement the present invention. The one or more modules/units may be a series of computer program instruction segments capable of performing specific functions, which are used to describe the execution process of the computer program 82 in the stylus 8. For example, the computer program 82 may be divided into a shooting unit, a buffer unit, a group code unit, and an identification unit, and each unit has the following specific functions:
the terminal comprises a shooting unit and a control unit, wherein the shooting unit is used for controlling a camera shooting assembly on a point reading pen to shoot a point reading code area on a screen according to a preset frequency when the terminal refreshes a picture paved with the point reading code displayed on the screen at the preset frequency so as to obtain a plurality of screen shot pictures;
the cache unit is used for controlling the point reading pen to cache the plurality of screen shot pictures at a set cache frequency according to the preset frequency;
the code combining unit is used for acquiring the click-to-read codes in the multiple screen shot pictures, and splicing the click-to-read codes in the multiple screen shot pictures according to the cache sequence of the multiple screen shot pictures to generate a multi-picture group code;
and the recognition unit is used for recognizing and outputting the voice information corresponding to the multi-image group code according to the corresponding relation between the multi-image group code and the pre-stored code segment and the voice information.
The point-and-read pen may include, but is not limited to, a processor 80 and a memory 81. It will be appreciated by those skilled in the art that fig. 8 is merely an example of the point-and-read pen 8 and does not constitute a limitation of it; the pen may include more or fewer components than those shown, or some components may be combined, or different components may be used. For example, the point-and-read pen may also include input/output devices, network access devices, buses, etc.
In another embodiment of the invention, a computer-readable storage medium is provided, storing a computer program which, when executed by a processor, implements the implementation manner described in the method embodiment shown in fig. 2 or fig. 3 of the present invention; it may also implement the implementation manner of the touch and talk pen described in the embodiment shown in fig. 5 or fig. 6 of the present invention, which is not described herein again.
The Processor 70 or 80 may be a Central Processing Unit (CPU), another general purpose processor, a Digital Signal Processor (DSP), an Application Specific Integrated Circuit (ASIC), a Field-Programmable Gate Array (FPGA) or other programmable logic device, discrete gate or transistor logic, discrete hardware components, etc. A general purpose processor may be a microprocessor, or the processor may be any conventional processor or the like.
The storage 71 or 81 may be an internal storage unit of the terminal 7 or the stylus 8, such as a hard disk or a memory of the terminal 7 or the stylus 8. The memory 71 or 81 may also be an external storage device of the terminal 7 or the touch-and-talk pen 8, such as a plug-in hard disk, a Smart Media Card (SMC), a Secure Digital (SD) Card, a Flash memory Card (Flash Card), and the like, which are equipped on the terminal 7 or the touch-and-talk pen 8. Further, the memory 71 or 81 may also include both an internal storage unit of the terminal 7 or the stylus pen 8 and an external storage device. The memory 71 or 81 is used for storing the computer program and other programs and data required by the terminal 7 or the stylus 8. The memory 71 or 81 may also be used to temporarily store data that has been output or is to be output.
It will be apparent to those skilled in the art that, for convenience and brevity of description, only the above-mentioned division of the functional units and modules is illustrated, and in practical applications, the above-mentioned function distribution may be performed by different functional units and modules according to needs, that is, the internal structure of the apparatus is divided into different functional units or modules to perform all or part of the above-mentioned functions. Each functional unit and module in the embodiments may be integrated in one processing unit, or each unit may exist alone physically, or two or more units are integrated in one unit, and the integrated unit may be implemented in a form of hardware, or in a form of software functional unit. In addition, specific names of the functional units and modules are only for convenience of distinguishing from each other, and are not used for limiting the protection scope of the present application. The specific working processes of the units and modules in the system may refer to the corresponding processes in the foregoing method embodiments, and are not described herein again.
In the above embodiments, the descriptions of the respective embodiments have respective emphasis, and reference may be made to the related descriptions of other embodiments for parts that are not described or illustrated in a certain embodiment.
Those of ordinary skill in the art will appreciate that the various illustrative elements and algorithm steps described in connection with the embodiments disclosed herein may be implemented as electronic hardware or combinations of computer software and electronic hardware. Whether such functionality is implemented as hardware or software depends upon the particular application and design constraints imposed on the implementation. Skilled artisans may implement the described functionality in varying ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the present invention.
In the embodiments provided in the present invention, it should be understood that the disclosed apparatus/terminal device and method may be implemented in other ways. For example, the above-described embodiments of the apparatus/terminal device are merely illustrative, and for example, the division of the modules or units is only one logical division, and there may be other divisions when actually implemented, for example, a plurality of units or components may be combined or integrated into another system, or some features may be omitted, or not executed. In addition, the shown or discussed mutual coupling or direct coupling or communication connection may be an indirect coupling or communication connection through some interfaces, devices or units, and may be in an electrical, mechanical or other form.
The units described as separate parts may or may not be physically separate, and parts displayed as units may or may not be physical units, may be located in one place, or may be distributed on a plurality of network units. Some or all of the units can be selected according to actual needs to achieve the purpose of the solution of the embodiment.
In addition, functional units in the embodiments of the present invention may be integrated into one processing unit, or each unit may exist alone physically, or two or more units are integrated into one unit. The integrated unit can be realized in a form of hardware, and can also be realized in a form of a software functional unit.
The integrated modules/units, if implemented in the form of software functional units and sold or used as separate products, may be stored in a computer readable storage medium. Based on such understanding, all or part of the flow of the method according to the embodiments of the present invention may also be implemented by a computer program, which may be stored in a computer-readable storage medium; when the computer program is executed by a processor, the steps of the method embodiments may be implemented. The computer program comprises computer program code, which may be in the form of source code, object code, an executable file or some intermediate form, etc. The computer-readable medium may include: any entity or device capable of carrying the computer program code, a recording medium, a USB flash disk, a removable hard disk, a magnetic disk, an optical disk, a computer memory, a Read-Only Memory (ROM), a Random Access Memory (RAM), electrical carrier signals, telecommunications signals, a software distribution medium, and the like. It should be noted that the content contained in the computer readable medium may be appropriately increased or decreased as required by legislation and patent practice in the relevant jurisdictions; for example, in some jurisdictions, computer readable media do not include electrical carrier signals and telecommunications signals, in accordance with legislation and patent practice.
The above-mentioned embodiments are only used for illustrating the technical solutions of the present invention, and not for limiting the same; although the present invention has been described in detail with reference to the foregoing embodiments, it will be understood by those of ordinary skill in the art that: the technical solutions described in the foregoing embodiments may still be modified, or some technical features may be equivalently replaced; such modifications and substitutions do not substantially depart from the spirit and scope of the embodiments of the present invention, and are intended to be included within the scope of the present invention.

Claims (8)

1. A point reading code identification method based on an electronic screen is characterized by comprising the following steps:
when the terminal refreshes a picture paved with point reading codes displayed on a screen at a preset frequency, controlling a camera component on a point reading pen to take pictures of the point reading code area on the screen according to the preset frequency, and acquiring a plurality of screen shot pictures;
controlling the touch and talk pen to cache the plurality of screen shot pictures at a set cache frequency according to the preset frequency;
acquiring click-to-read codes in the multiple screen shot pictures, and splicing the click-to-read codes in the multiple screen shot pictures according to the cache sequence of the multiple screen shot pictures to generate a multiple-picture group code;
and identifying and outputting the voice information corresponding to the multi-image group code according to the corresponding relation between the multi-image group code and the pre-stored code segment and the voice information.
2. The electronic-screen-based click-to-read code identification method according to claim 1, wherein before acquiring the click-to-read codes in the multiple screen shot pictures and splicing the click-to-read codes in the multiple screen shot pictures according to the caching order of the multiple screen shot pictures to generate the multi-image group code, the method further comprises:
carrying out infrared interference filtering processing on the plurality of screen shot pictures;
and carrying out gray level processing on the plurality of screen shot pictures after the infrared interference filtering processing to obtain gray level pictures of the plurality of screen shot pictures.
3. A point-and-read pen, comprising:
a shooting unit, used for controlling a camera shooting component on the point reading pen to shoot a point reading code area on a screen according to a preset frequency when the terminal refreshes, at the preset frequency, a picture paved with point reading codes displayed on the screen, so as to acquire a plurality of screen shot pictures;
the cache unit is used for controlling the point reading pen to cache the plurality of screen shot pictures at a set cache frequency according to the preset frequency;
the code combining unit is used for acquiring the click-to-read codes in the multiple screen shot pictures, and splicing the click-to-read codes in the multiple screen shot pictures according to the cache sequence of the multiple screen shot pictures to generate a multi-picture group code;
and the recognition unit is used for recognizing and outputting the voice information corresponding to the multi-image group code according to the corresponding relation between the multi-image group code and the pre-stored code segment and the voice information.
4. The point-and-read pen of claim 3, further comprising:
the filtering unit is used for carrying out infrared interference filtering processing on the plurality of screen shot pictures;
and the gray processing unit is used for carrying out gray processing on the plurality of screen shot pictures after the infrared interference filtering processing to obtain the gray pictures of the plurality of screen shot pictures.
5. A terminal comprising a memory, a processor and a computer program stored in the memory and executable on the processor, characterized in that the steps of the method according to claim 1 or 2 are implemented when the processor executes the computer program.
6. A computer-readable storage medium, in which a computer program is stored which, when being executed by a processor, carries out the steps of the method according to claim 1 or 2.
7. A stylus comprising a memory, a processor and a computer program stored in the memory and executable on the processor, wherein the processor implements the steps of the method of claim 1 or 2 when executing the computer program.
8. A computer-readable storage medium, in which a computer program is stored which, when being executed by a processor, carries out the steps of the method according to claim 1 or 2.
CN201710376209.1A 2017-05-24 2017-05-24 Electronic screen-based click-to-read code identification method, terminal and click-to-read pen Expired - Fee Related CN108932448B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201710376209.1A CN108932448B (en) 2017-05-24 2017-05-24 Electronic screen-based click-to-read code identification method, terminal and click-to-read pen

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201710376209.1A CN108932448B (en) 2017-05-24 2017-05-24 Electronic screen-based click-to-read code identification method, terminal and click-to-read pen

Publications (2)

Publication Number Publication Date
CN108932448A CN108932448A (en) 2018-12-04
CN108932448B 2021-06-01

Family

ID=64449915

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201710376209.1A Expired - Fee Related CN108932448B (en) 2017-05-24 2017-05-24 Electronic screen-based click-to-read code identification method, terminal and click-to-read pen

Country Status (1)

Country Link
CN (1) CN108932448B (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111027341A (en) * 2019-12-28 2020-04-17 安徽硕威智能科技有限公司 Interaction method, device and system based on OID two-dimensional password identification and storage medium thereof

Family Cites Families (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
TWI395202B (en) * 2009-11-24 2013-05-01 Kuo Ping Yang Method and computer program product of putting identification codes in a document
TW201351293A (en) * 2012-02-17 2013-12-16 Ebru Keni Entertainment and educational optical reader system
CN102981618A (en) * 2012-11-16 2013-03-20 Tcl集团股份有限公司 Display terminal and touch control method and touch control system
TWI489352B (en) * 2013-08-13 2015-06-21 Wistron Corp Optical touch positioning method, system and optical touch positioner
CN105446628B (en) * 2015-12-31 2018-08-07 北京奇禄管理咨询有限公司 A kind of reading method
CN105931500B (en) * 2016-04-28 2020-01-07 汎达科技(深圳)有限公司 Image equipment control method based on touch and talk pen and touch and talk pen system
CN106372701B (en) * 2016-08-30 2019-06-07 西安小光子网络科技有限公司 A kind of coding of optical label and recognition methods

Also Published As

Publication number Publication date
CN108932448A (en) 2018-12-04

Legal Events

Code Title
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant
CF01 Termination of patent right due to non-payment of annual fee

Granted publication date: 20210601