CN115909345A - Touch and talk pen information interaction method and system


Info

Publication number: CN115909345A
Application number: CN202310226537.9A
Authority: CN (China)
Prior art keywords: image, user, determining, information, reference image
Other languages: Chinese (zh)
Other versions: CN115909345B (granted publication)
Inventors: 颜榅辉, 陈许忠
Current Assignee: Shenzhen City Cultural Beyond Technology Co., Ltd.
Original Assignee: Shenzhen City Cultural Beyond Technology Co., Ltd.
Filing date: 2023-03-10
Publication dates: CN115909345A on 2023-04-04; CN115909345B on 2023-05-30
Legal status: Granted; Active

Classifications

    • Y: General tagging of new technological developments; general tagging of cross-sectional technologies spanning over several sections of the IPC; technical subjects covered by former USPC cross-reference art collections [XRACs] and digests
    • Y02: Technologies or applications for mitigation or adaptation against climate change
    • Y02P: Climate change mitigation technologies in the production or processing of goods
    • Y02P 90/00: Enabling technologies with a potential contribution to greenhouse gas [GHG] emissions mitigation
    • Y02P 90/30: Computing systems specially adapted for manufacturing

Abstract

The invention relates to the field of intelligent identification equipment, and in particular discloses a touch and talk pen information interaction method and system. The method comprises: receiving activation information input by a user, opening a physical information monitoring port, and acquiring the user's handheld data through that port; performing data analysis on the handheld data and determining the user's intelligence rating from the analysis result; acquiring image segments together with their equipment parameters from preset image acquisition equipment, and determining a reference image from those segments; and generating identification information based on the reference image and converting its output according to the intelligence rating. By pre-recognizing the object to be detected against a preset reference image, the method spreads the recognition work over a longer time span, reduces the load of on-the-spot recognition, and improves recognition speed; it also generates different interaction information according to the user's identity, optimizing the user experience.

Description

Touch and talk pen information interaction method and system
Technical Field
The invention relates to the field of intelligent identification equipment, in particular to a method and a system for information interaction of a touch and talk pen.
Background
The touch and talk pen (also called a point-reading pen) enriches children's experience by engaging them in targeted games and activities that continuously stimulate the senses of touch, vision and hearing, raising their interest and supporting cognitive development. The pen is small, light and portable, and can be used anytime and anywhere: it reads printed content aloud, adding sound to otherwise silent text, enriching book content, making reading and learning more interesting, and fully combining education with entertainment.
With advances in image recognition algorithms, existing touch and talk pens can recognize most content, but the recognition process consumes substantial computing resources and recognition efficiency remains low. How to improve recognition speed and optimize the user experience on top of mature touch and talk pen recognition technology is therefore the technical problem that the present invention aims to solve.
Disclosure of Invention
The invention aims to provide a touch and talk pen information interaction method and system to solve the problems described in the background art.
In order to achieve the purpose, the invention provides the following technical scheme:
a method for information interaction of a touch and talk pen, the method comprising:
receiving activation information input by a user, starting a physical information monitoring port, and acquiring handheld data of the user based on the physical information monitoring port;
carrying out data analysis on the handheld data, and determining the intelligence rating of the user according to the data analysis result;
acquiring an image segment containing equipment parameters according to preset image acquisition equipment, and determining a reference image according to the image segment containing the equipment parameters;
and generating identification information based on the reference image, and performing output conversion on the identification information according to the intelligence rating.
As a further scheme of the invention: the step of receiving activation information input by a user, opening a physical information monitoring port, and acquiring handheld data of the user based on the physical information monitoring port comprises the following steps:
receiving an activation request containing an activation time input by the user, and opening the physical information monitoring port when the activation time reaches a preset time condition;
acquiring the user's force application area and force application load based on the physical information monitoring port;
converting the force application area and its force application load into a grip map according to a preset conversion rule;
aggregating the grip maps over time to obtain the handheld data;
wherein the color value of each pixel in the grip map is related to the force application load.
As a further scheme of the invention: the step of performing data analysis on the handheld data and determining the intelligence rating of the user according to the data analysis result comprises the following steps:
reading the grip map, inputting it into a trained recognition model, and determining the physiological age of the user;
when the physiological age reaches a preset age line, acquiring a sound signal of the user based on preset audio acquisition equipment;
recognizing the sound signal, and correcting the physiological age according to the recognition result;
and determining the intelligence rating of the user according to the corrected physiological age.
As a further scheme of the invention: the step of acquiring an image segment containing equipment parameters according to preset image acquisition equipment and determining a reference image according to the image segment containing the equipment parameters comprises:
acquiring image segments according to the preset image acquisition equipment, and recording the equipment parameters at the moment of acquisition; the equipment parameters comprise the distance and the angle between the acquisition point and the object to be detected;
arranging the image segments according to the equipment parameters, and traversing a preset reference image library with the segments in sequence to obtain a shrinkage library; the shrinkage library is a subset of the reference image library;
and determining a reference image according to the quantity features in the shrinkage library.
As a further scheme of the invention: the step of determining a reference image according to the quantity features in the shrinkage library comprises:
recording the number of images in the shrinkage library in real time;
determining a quantity curve from the number of images and the number of image segments, the number of images being the dependent variable and the number of image segments the independent variable;
querying the independent-variable points at which the quantity value equals one, and determining a reference image according to the distribution characteristics of those points;
and obtaining a derivative of the quantity curve, and determining a core segment from the derivative.
As a further scheme of the invention: the step of generating identification information based on the reference image and performing output conversion on the identification information according to the intelligence rating comprises:
reading the related information of the reference image; the related information is a table containing position items and content items;
performing comparison-based identification on the core segment according to the related information;
receiving image information acquired by a preset pen point acquisition port, and identifying the image information according to the comparison-based identification result to obtain identification information;
and determining a conversion rule for the identification information according to the intelligence rating, executing the conversion process, and outputting the converted identification information.
The technical scheme of the invention also provides a touch and talk pen information interaction system, which comprises:
the handheld data acquisition module is used for receiving activation information input by a user, starting a physical information monitoring port and acquiring handheld data of the user based on the physical information monitoring port;
the intelligence rating module is used for carrying out data analysis on the handheld data and determining the intelligence rating of the user according to the data analysis result;
the reference image determining module is used for acquiring an image segment containing equipment parameters according to preset image acquisition equipment and determining a reference image according to the image segment containing the equipment parameters;
and the output conversion module is used for generating identification information based on the reference image and performing output conversion on the identification information according to the intelligence rating.
As a further scheme of the invention: the handheld data acquisition module comprises:
the port opening unit is used for receiving an activation request containing an activation time input by a user, and opening the physical information monitoring port when the activation time reaches a preset time condition;
the load monitoring unit is used for acquiring the user's force application area and force application load based on the physical information monitoring port;
the load conversion unit is used for converting the force application area and its force application load into a grip map according to a preset conversion rule;
the data statistics unit is used for aggregating the grip maps over time to obtain the handheld data;
wherein the color value of each pixel in the grip map is related to the force application load.
As a further scheme of the invention: the intelligence rating module comprises:
the physiological age determining unit is used for reading the grip map, inputting it into a trained recognition model, and determining the physiological age of the user;
the sound acquisition unit is used for acquiring a sound signal of the user based on a preset audio acquisition device when the physiological age reaches a preset age line;
the physiological age correcting unit is used for recognizing the sound signal and correcting the physiological age according to the recognition result;
and the rating execution unit is used for determining the intelligence rating of the user according to the corrected physiological age.
As a further scheme of the invention: the reference image determination module includes:
the image acquisition unit is used for acquiring image fragments according to preset image acquisition equipment and recording equipment parameters at the acquisition moment; the equipment parameters comprise the distance and the angle between the acquisition point and the object to be detected;
the image arrangement unit is used for arranging the image segments according to the equipment parameters and traversing a preset reference image library with the segments in sequence to obtain a shrinkage library; the shrinkage library is a subset of the reference image library;
and the quantity analysis unit is used for determining a reference image according to the quantity characteristics in the shrinkage library.
Compared with the prior art, the invention has the following beneficial effects: the identity of the user is judged through a mechanical sensor; the information on the object to be detected is acquired through the image acquisition equipment and the corresponding reference image is looked up; the object to be detected is pre-recognized based on the reference image, which spreads the recognition work over a longer time span, reduces the load of on-the-spot recognition, and improves recognition speed; and different interaction information is generated according to the identity of the user, optimizing the user experience.
Drawings
In order to more clearly illustrate the technical solutions in the embodiments of the present invention, the drawings used in the description of the embodiments or the prior art will be briefly described below, and it is obvious that the drawings in the following description are only some embodiments of the present invention.
Fig. 1 is a flow chart of a touch and talk pen information interaction method.
Fig. 2 is a first sub-flow block diagram of a touch and talk pen information interaction method.
Fig. 3 is a second sub-flow block diagram of the information interaction method of the touch and talk pen.
Fig. 4 is a third sub-flow block diagram of the information interaction method of the touch and talk pen.
Fig. 5 is a fourth sub-flow block diagram of the information interaction method of the touch and talk pen.
Fig. 6 is a block diagram of the structure of the information interaction system of the touch and talk pen.
Detailed Description
In order to make the technical problems, technical solutions and advantageous effects of the present invention more clearly understood, the present invention is further described in detail below with reference to the accompanying drawings and embodiments. It should be understood that the specific embodiments described herein are merely illustrative of the invention and are not intended to limit the invention.
Fig. 1 is a flow chart of a touch and talk pen information interaction method, in an embodiment of the present invention, the touch and talk pen information interaction method includes:
step S100: receiving activation information input by a user, starting a physical information monitoring port, and acquiring handheld data of the user based on the physical information monitoring port;
the activation information is a signal indicating that the user is using the stylus; the physical information monitoring port is used for monitoring physical information, the physical information related to the technical scheme of the invention is mainly a mechanical parameter, and a hardware parameter of the physical information is a preset sensor.
Step S200: performing data analysis on the handheld data, and determining the intelligence rating of the user according to the data analysis result;
Analyzing the acquired handheld data makes it possible to determine the identity of the user. The degree of intellectual development differs between age groups, and adults certainly differ from minors, so the interaction information can be adjusted according to the rating result, improving the user's satisfaction.
Step S300: acquiring an image segment containing equipment parameters according to preset image acquisition equipment, and determining a reference image according to the image segment containing the equipment parameters;
The touch and talk pen is provided with image acquisition equipment that captures an image of the object to be detected and recognizes it to obtain the information the object carries. The image acquisition equipment has blind spots, and each captured image is usually only a subset of the whole image, so it is called an image segment; with an image segment as a reference, the reference image corresponding to the object to be detected can be looked up. The precondition of this process is that the producer of the touch and talk pen has registered the object to be detected in advance.
Step S400: generating identification information based on the reference image, and performing output conversion on the identification information according to the intelligence rating;
After the reference image is determined, the specific character recognition process is assisted by the reference image, and the character recognition task can be completed with far fewer computing resources.
It is worth mentioning that the intelligence rating is a flat classification: the different levels imply no ranking and serve only to distinguish users. For example, level one may be minors, level two the elderly, level three adult men and level four adult women; different levels correspond to different output conversion rules, and the function of the rating is classification.
Fig. 2 is a first sub-flow block diagram of the touch and talk pen information interaction method, where the step of receiving activation information input by a user, opening a physical information monitoring port, and acquiring handheld data of the user based on the physical information monitoring port includes:
step S101: receiving an activation request containing activation time input by a user, and starting a physical information monitoring port when the activation time reaches a preset time condition;
the above-mentioned contents provide a "false touch prevention" function, when the activation time is extremely short, the activation request is not taken as a trigger condition, only when the activation time reaches a certain degree, it is regarded that the user has sent the activation request, and at this time, the physical information monitoring port is opened.
Step S102: acquiring the user's force application area and force application load based on the physical information monitoring port;
The user's force application area and force application load can be obtained by means of the mechanical sensor, and the pressure at each point can then be calculated.
Step S103: converting the force application area and its force application load into a grip map according to a preset conversion rule; wherein the color value of each pixel in the grip map is related to the force application load;
The side surface of the pen is unrolled along a preset reference line, and the force application area and pressure are marked on the unrolled surface to obtain the grip map, as sketched below.
Step S104: aggregating the grip maps over time to obtain the handheld data;
Each moment corresponds to one grip map, and all the grip maps are collected together as the handheld data.
Fig. 3 is a second sub-flow block diagram of the touch and talk pen information interaction method, where the step of performing data analysis on the handheld data and determining the intelligence rating of the user according to the data analysis result includes:
step S201: reading the holding diagram, inputting the holding diagram into a trained recognition model, and determining the physiological age of the user;
the grip graphs of different users are different, samples are collected in advance, an identification model can be trained, when a new grip graph is obtained, the grip graph can be identified according to the identification model, and the physiological age of the user is further determined; it is noted that the physiological age does not represent the actual age, it is related to the area of grip and pressure, and it is contemplated that the physiological age of the identified male person is normally greater than the physiological age of the female person during the performance of the method.
Step S202: when the physiological age reaches a preset age line, acquiring a sound signal of the user based on preset audio acquisition equipment;
When the physiological age is sufficiently large, the audio acquisition equipment is activated; it collects a sound signal, which is used to further assess the user.
Step S203: recognizing the sound signal, and correcting the physiological age according to the recognition result;
Different people develop at different speeds, and combining the sound signal improves the accuracy of the physiological age.
Step S204: determining the intelligence rating of the user according to the corrected physiological age;
The user can be classified by the corrected physiological age; this classification is the intelligence rating.
Fig. 4 is a third sub-flow block diagram of a touch and talk pen information interaction method, where the step of acquiring an image segment containing device parameters according to a preset image acquisition device and determining a reference image according to the image segment containing device parameters includes:
step S301: acquiring an image fragment according to preset image acquisition equipment, and recording equipment parameters at the acquisition moment; the equipment parameters comprise the distance and the angle between the acquisition point and the object to be detected;
step S302: arranging the image segments according to the equipment parameters, and traversing a preset reference image library sequentially according to the image segments to obtain a shrinkage limit library; the contraction library is a subset of the reference image library;
the method comprises the steps that image fragments are obtained according to preset image acquisition equipment, and the position of the whole image where the obtained image fragments belong can be determined by inquiring equipment parameters at the obtaining moment, so that a reference image library is screened, the target range can be continuously reduced, and the target range is represented by a reduction limit library.
Step S303: determining a reference image according to the quantity features in the shrinkage library;
When the number of images in the shrinkage library stabilizes at one, the remaining image in the shrinkage library is the reference image.
Specifically, the step of determining the reference image according to the quantity features in the shrinkage library includes:
recording the number of images in the shrinkage library in real time;
determining a quantity curve from the number of images and the number of image segments, the number of images being the dependent variable and the number of image segments the independent variable;
To better reflect the quantity relation, the image count and the segment count are converted into a curve for processing.
querying the independent-variable points at which the quantity value equals one, and determining a reference image according to the distribution characteristics of those points;
acquiring a derivative of the quantity curve, and determining a core segment according to the derivative;
the process of determining whether the number of images is one based on the number curve is not complicated, and the important point is whether the number of images is one continuously or not, and the distribution characteristics are determined by the distribution characteristics, namely, the independent variable positions corresponding to the number of points which is one.
On the basis, the derivative is carried out on the numerical curve, and the point of numerical mutation can be inquired, wherein the numerical mutation is generally that the reduction limit range is suddenly reduced, and at the moment, the used image segment is the core segment.
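This analysis can be sketched as follows, reusing the counts sequence produced by the narrowing step above; the stabilization window length is an assumption, and the steepest drop in the discrete derivative stands in for the point of sudden change.

    import numpy as np

    def analyse_quantity_curve(counts, segments, stable_window=3):
        """counts[i] is the shrinkage-library size after segment i."""
        counts = np.asarray(counts)
        if counts.size < 2:
            raise ValueError("need at least two points on the quantity curve")
        deriv = np.diff(counts)               # discrete derivative of the curve
        core_idx = int(np.argmin(deriv)) + 1  # segment at the steepest drop
        core_segment = segments[core_idx]
        # The reference image is confirmed once the count has stayed at one.
        stable_at_one = counts.size >= stable_window and \
            bool((counts[-stable_window:] == 1).all())
        return core_segment, stable_at_one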
Fig. 5 is a fourth sub-flow block diagram of the information interaction method of the touch and talk pen, where the step of generating identification information based on the reference image and performing output conversion on the identification information according to the intelligence rating includes:
step S401: reading related information of a reference image; the related information is a table containing a position item and a content item;
when the reference image is recorded, the position item and the content item are recorded, which represents which contents correspond to different positions.
Step S402: performing comparison-based identification on the core segment according to the related information;
On the premise that the related information is known, the core segment can be identified by comparison; comparison-based identification only needs an existing recognition algorithm, and its efficiency is extremely high.
Step S403: receiving image information acquired by a preset pen point acquisition port, and identifying the image information according to the comparison-based identification result to obtain identification information;
The pen point acquisition port combined with a recognition algorithm is the existing recognition approach; it consumes large computing resources and has low recognition efficiency. The core segment is, with high probability, the region the user will want recognized, so comparison-based identification of the core segment acts as preprocessing: when the image information acquired by the pen point acquisition port falls within the core segment, the identification information can be obtained without running recognition again.
Step S404: determining a conversion rule for the identification information according to the intelligence rating, executing the conversion process, and outputting the converted identification information;
A conversion mode for the identification information is chosen according to the intelligence rating, yielding identification information better suited to the user, and the converted identification information is then output.
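Rating-dependent conversion can be sketched as a dispatch table; the concrete rules below (slower speech for minors, higher volume for the elderly) are illustrative assumptions, not rules stated in this disclosure.

    CONVERSION_RULES = {
        1: lambda text: {"text": text, "speech_rate": 0.8, "vocabulary": "simple"},
        2: lambda text: {"text": text, "speech_rate": 0.9, "volume": "high"},
        3: lambda text: {"text": text, "speech_rate": 1.0},
    }

    def convert_output(identification_info, rating):
        """Apply the rating's conversion rule before the pen voices the result."""
        rule = CONVERSION_RULES.get(rating, CONVERSION_RULES[3])
        return rule(identification_info)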
Fig. 6 is a block diagram of the structure of the touch and talk pen information interaction system. As a preferred embodiment of the technical solution of the present invention, a touch and talk pen information interaction system is provided, where the system 10 includes:
the handheld data acquisition module 11 is configured to receive activation information input by a user, open a physical information monitoring port, and acquire handheld data of the user based on the physical information monitoring port;
the intelligence rating module 12 is used for performing data analysis on the handheld data and determining the intelligence rating of the user according to the data analysis result;
the reference image determining module 13 is configured to acquire an image segment containing equipment parameters according to preset image acquisition equipment, and determine a reference image according to the image segment containing the equipment parameters;
and the output conversion module 14 is used for generating identification information based on the reference image and performing output conversion on the identification information according to the intelligence rating.
The handheld data acquisition module 11 includes:
the port opening unit is used for receiving an activation request containing an activation time input by a user, and opening the physical information monitoring port when the activation time reaches a preset time condition;
the load monitoring unit is used for acquiring the user's force application area and force application load based on the physical information monitoring port;
the load conversion unit is used for converting the force application area and its force application load into a grip map according to a preset conversion rule;
the data statistics unit is used for aggregating the grip maps over time to obtain the handheld data;
wherein the color value of each pixel in the grip map is related to the force application load.
The intelligence rating module 12 comprises:
the physiological age determining unit is used for reading the grip map, inputting it into a trained recognition model, and determining the physiological age of the user;
the sound acquisition unit is used for acquiring a sound signal of a user based on a preset audio acquisition device when the physiological age reaches a preset age line;
a physiological age correcting unit for recognizing the sound signal and correcting the physiological age according to the recognition result;
and the rating execution unit is used for determining the intelligence rating of the user according to the corrected physiological age.
The reference image determination module 13 includes:
the image acquisition unit is used for acquiring image segments according to preset image acquisition equipment and recording the equipment parameters at the moment of acquisition; the equipment parameters comprise the distance and the angle between the acquisition point and the object to be detected;
the image arrangement unit is used for arranging the image segments according to the equipment parameters and traversing a preset reference image library with the segments in sequence to obtain a shrinkage library; the shrinkage library is a subset of the reference image library;
and the quantity analysis unit is used for determining a reference image according to the quantity characteristics in the shrinkage library.
The functions that can be realized by the touch and talk pen information interaction method are all completed by computer equipment comprising one or more processors and one or more memories. At least one piece of program code is stored in the one or more memories, and the program code is loaded and executed by the one or more processors to realize the functions of the touch and talk pen information interaction method.
The processor fetches instructions from the memory and decodes them one by one, then completes the corresponding operations according to the instruction requirements and generates a series of control commands, so that all parts of the computer act automatically, continuously and in coordination as an organic whole, realizing program input, data input, computation and output of results; the arithmetic and logic operations produced in this process are completed by the arithmetic unit. The memory includes a Read-Only Memory (ROM) for storing the computer program, and a protection device is arranged outside the memory.
Illustratively, a computer program can be partitioned into one or more modules, which are stored in memory and executed by a processor to implement the present invention. One or more of the modules may be a series of computer program instruction segments capable of performing certain functions, which are used to describe the execution of the computer program in the terminal device.
Those skilled in the art will appreciate that the above description of the device is merely exemplary and does not limit the terminal device, which may include more or fewer components than described, combine certain components, or use different components; for example, it may also include input and output devices, network access devices, buses, etc.
The processor may be a Central Processing Unit (CPU), another general-purpose processor, a Digital Signal Processor (DSP), an Application Specific Integrated Circuit (ASIC), a Field-Programmable Gate Array (FPGA) or other programmable logic device, a discrete gate or transistor logic device, a discrete hardware component, etc. A general-purpose processor may be a microprocessor, or the processor may be any conventional processor; it is the control center of the terminal device and connects the various parts of the entire user terminal using various interfaces and lines.
The memory may be used to store computer programs and/or modules, and the processor implements the various functions of the terminal device by running or executing the computer programs and/or modules stored in the memory and calling the data stored in the memory. The memory mainly comprises a program storage area and a data storage area: the program storage area may store the operating system and the application programs required by at least one function (such as an information acquisition template display function, a product information publishing function, etc.); the data storage area may store data created according to the use of the system (such as product information acquisition templates corresponding to different product types, product information to be published by different product providers, etc.). In addition, the memory may include high-speed random access memory, and may also include non-volatile memory such as a hard disk, a plug-in hard disk, a Smart Media Card (SMC), a Secure Digital (SD) card, a flash memory card, at least one magnetic disk storage device, a flash memory device, or another solid-state storage device.
If the integrated modules/units of the terminal device are implemented in the form of software functional units and sold or used as independent products, they may be stored in a computer-readable storage medium. Based on this understanding, all or part of the modules/units in the system of the above embodiment may be implemented by a computer program, which may be stored in a computer-readable storage medium and, when executed by a processor, implements the functions of the system embodiments. The computer program comprises computer program code, which may be in source code form, object code form, an executable file, some intermediate form, etc. The computer-readable medium may include: any entity or device capable of carrying the computer program code, a recording medium, a USB flash disk, a removable hard disk, a magnetic disk, an optical disk, a computer memory, a Read-Only Memory (ROM), a Random Access Memory (RAM), an electrical carrier signal, a telecommunications signal, a software distribution medium, etc.
It should be noted that, in this document, the term "comprises"/"comprising" or any other variation thereof is intended to cover a non-exclusive inclusion, so that a process, method, article or apparatus including a series of elements includes not only those elements but also other elements not explicitly listed, or elements inherent to such a process, method, article or apparatus. Without further limitation, an element defined by the phrase "comprising a/an..." does not exclude the presence of other identical elements in the process, method, article or apparatus that comprises the element.
The above description is only a preferred embodiment of the present invention, and not intended to limit the scope of the present invention, and all modifications of equivalent structures and equivalent processes, which are made by using the contents of the present specification and the accompanying drawings, or directly or indirectly applied to other related technical fields, are included in the scope of the present invention.

Claims (10)

1. A touch and talk pen information interaction method, characterized by comprising the following steps:
receiving activation information input by a user, starting a physical information monitoring port, and acquiring handheld data of the user based on the physical information monitoring port;
carrying out data analysis on the handheld data, and determining the intelligence rating of the user according to the data analysis result;
acquiring an image segment containing equipment parameters according to preset image acquisition equipment, and determining a reference image according to the image segment containing the equipment parameters;
and generating identification information based on the reference image, and performing output conversion on the identification information according to the intelligence rating.
2. The touch and talk pen information interaction method according to claim 1, wherein the step of receiving activation information input by a user, opening a physical information monitoring port, and acquiring handheld data of the user based on the physical information monitoring port comprises:
receiving an activation request containing an activation time input by the user, and opening the physical information monitoring port when the activation time reaches a preset time condition;
acquiring the user's force application area and force application load based on the physical information monitoring port;
converting the force application area and its force application load into a grip map according to a preset conversion rule;
aggregating the grip maps over time to obtain the handheld data;
wherein the color value of each pixel in the grip map is related to the force application load.
3. The touch and talk pen information interaction method according to claim 2, wherein the step of performing data analysis on the handheld data and determining the intelligence rating of the user according to the data analysis result comprises:
reading the grip map, inputting it into a trained recognition model, and determining the physiological age of the user;
when the physiological age reaches a preset age line, acquiring a sound signal of the user based on preset audio acquisition equipment;
recognizing the sound signal, and correcting the physiological age according to the recognition result;
and determining the intelligence rating of the user according to the corrected physiological age.
4. The touch and talk pen information interaction method according to claim 1, wherein the step of acquiring an image segment containing equipment parameters according to preset image acquisition equipment and determining a reference image according to the image segment containing the equipment parameters comprises:
acquiring image segments according to the preset image acquisition equipment, and recording the equipment parameters at the moment of acquisition; the equipment parameters comprise the distance and the angle between the acquisition point and the object to be detected;
arranging the image segments according to the equipment parameters, and traversing a preset reference image library with the segments in sequence to obtain a shrinkage library; the shrinkage library is a subset of the reference image library;
and determining a reference image according to the quantity features in the shrinkage library.
5. The touch and talk pen information interaction method according to claim 4, wherein the step of determining the reference image according to the quantity features in the shrinkage library comprises:
recording the number of images in the shrinkage library in real time;
determining a quantity curve from the number of images and the number of image segments, the number of images being the dependent variable and the number of image segments the independent variable;
querying the independent-variable points at which the quantity value equals one, and determining a reference image according to the distribution characteristics of those points;
and obtaining a derivative of the quantity curve, and determining a core segment from the derivative.
6. The touch and talk pen information interaction method according to claim 5, wherein the step of generating identification information based on the reference image and performing output conversion on the identification information according to the intelligence rating comprises:
reading the related information of the reference image; the related information is a table containing position items and content items;
performing comparison-based identification on the core segment according to the related information;
receiving image information acquired by a preset pen point acquisition port, and identifying the image information according to the comparison-based identification result to obtain identification information;
and determining a conversion rule for the identification information according to the intelligence rating, executing the conversion process, and outputting the converted identification information.
7. A touch and talk pen information interaction system, which is characterized in that the system comprises:
the handheld data acquisition module is used for receiving activation information input by a user, starting a physical information monitoring port and acquiring handheld data of the user based on the physical information monitoring port;
the intelligence rating module is used for carrying out data analysis on the handheld data and determining the intelligence rating of the user according to the data analysis result;
the reference image determining module is used for acquiring an image segment containing equipment parameters according to preset image acquisition equipment and determining a reference image according to the image segment containing the equipment parameters;
and the output conversion module is used for generating identification information based on the reference image and performing output conversion on the identification information according to the intelligence rating.
8. The touch and talk pen information interaction system according to claim 7, wherein the handheld data acquisition module comprises:
the port opening unit is used for receiving an activation request containing an activation time input by a user, and opening the physical information monitoring port when the activation time reaches a preset time condition;
the load monitoring unit is used for acquiring the user's force application area and force application load based on the physical information monitoring port;
the load conversion unit is used for converting the force application area and its force application load into a grip map according to a preset conversion rule;
the data statistics unit is used for aggregating the grip maps over time to obtain the handheld data;
wherein the color value of each pixel in the grip map is related to the force application load.
9. The touch and talk pen information interaction system according to claim 8, wherein the intelligence rating module comprises:
the physiological age determining unit is used for reading the grip map, inputting it into a trained recognition model, and determining the physiological age of the user;
the sound acquisition unit is used for acquiring a sound signal of the user based on a preset audio acquisition device when the physiological age reaches a preset age line;
the physiological age correcting unit is used for recognizing the sound signal and correcting the physiological age according to the recognition result;
and the rating execution unit is used for determining the intelligence rating of the user according to the corrected physiological age.
10. The touch and talk pen information interaction system according to claim 7, wherein the reference image determination module comprises:
the image acquisition unit is used for acquiring image segments according to preset image acquisition equipment and recording the equipment parameters at the moment of acquisition; the equipment parameters comprise the distance and the angle between the acquisition point and the object to be detected;
the image arrangement unit is used for arranging the image segments according to the equipment parameters and traversing a preset reference image library with the segments in sequence to obtain a shrinkage library; the shrinkage library is a subset of the reference image library;
and the quantity analysis unit is used for determining a reference image according to the quantity features in the shrinkage library.
CN202310226537.9A 2023-03-10 2023-03-10 Touch and talk pen information interaction method and system Active CN115909345B (en)

Priority Applications (1)

Application Number: CN202310226537.9A (granted as CN115909345B)
Priority Date: 2023-03-10
Filing Date: 2023-03-10
Title: Touch and talk pen information interaction method and system


Publications (2)

CN115909345A, published 2023-04-04
CN115909345B (granted), published 2023-05-30

Family

ID=85742826

Family Applications (1)

Application Number: CN202310226537.9A (Active; granted as CN115909345B)
Priority Date: 2023-03-10
Filing Date: 2023-03-10
Title: Touch and talk pen information interaction method and system

Country Status (1)

Country: CN (CN115909345B)

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101874738A (en) * 2009-12-23 2010-11-03 中国科学院自动化研究所 Method for biophysical analysis and identification of human body based on pressure accumulated footprint image
CN110853424A (en) * 2019-10-12 2020-02-28 深圳瑞诺信息科技有限公司 Voice learning method, device and system with visual recognition
CN112437189A (en) * 2019-08-26 2021-03-02 北京小米移动软件有限公司 Identity recognition method, device and medium
CN114120324A (en) * 2021-11-25 2022-03-01 长沙师范学院 Intelligent object identification method and system based on big data analysis in click-to-read scene
CN114356068A (en) * 2020-09-28 2022-04-15 北京搜狗智能科技有限公司 Data processing method and device and electronic equipment
CN115034720A (en) * 2022-06-28 2022-09-09 广东省农业科学院果树研究所 Method and system for judging preservation quality state in fruit storage and transportation process


Also Published As

CN115909345B, published 2023-05-30

Similar Documents

Publication Publication Date Title
CN113111154B (en) Similarity evaluation method, answer search method, device, equipment and medium
CN109240582A (en) Point reading control method and intelligent device
CN109871450A (en) Based on the multi-modal exchange method and system for drawing this reading
CN108596180A (en) Parameter identification, the training method of parameter identification model and device in image
CN110795714A (en) Identity authentication method and device, computer equipment and storage medium
CN113094478B (en) Expression reply method, device, equipment and storage medium
CN112016346A (en) Gesture recognition method, device and system and information processing method
CN114332514B (en) Font evaluation method and system
CN113011412A (en) Character recognition method, device, equipment and storage medium based on stroke order and OCR (optical character recognition)
CN109388935A (en) Document verification method and device, electronic equipment and readable storage medium storing program for executing
CN115830627A (en) Information storage method and device, electronic equipment and computer readable storage medium
CN113673528B (en) Text processing method, text processing device, electronic equipment and readable storage medium
CN107992872B (en) Method for carrying out text recognition on picture and mobile terminal
CN114171031A (en) Voiceprint recognition model training method based on multi-task learning and confrontation training
CN110263346B (en) Semantic analysis method based on small sample learning, electronic equipment and storage medium
CN115909345A (en) Touch and talk pen information interaction method and system
CN111428569A (en) Visual identification method and device for picture book or teaching material based on artificial intelligence
CN116311316A (en) Medical record classification method, system, terminal and storage medium
CN115937660A (en) Verification code identification method and device
CN115512181A (en) Method and device for training area generation network and readable storage medium
CN114612919A (en) Bill information processing system, method and device
CN116486789A (en) Speech recognition model generation method, speech recognition method, device and equipment
CN110795716A (en) Identity authentication method based on CNN, user equipment, storage medium and device
CN112712450A (en) Real-time interaction method, device, equipment and storage medium based on cloud classroom
CN111476195A (en) Face detection method, face detection device, robot and computer-readable storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant