CN107066919B - Method, apparatus and computer-readable storage medium for solving pen tip position

Publication number
CN107066919B
Authority
CN
China
Prior art keywords
image
coordinates
point
nth
pen
Prior art date
Legal status
Active
Application number
CN201611188735.7A
Other languages
Chinese (zh)
Other versions
CN107066919A (en)
Inventor
陈刚
梁桥
姚锦辉
肖云龙
谭伟
Current Assignee
Netease Hangzhou Network Co Ltd
Original Assignee
Netease Hangzhou Network Co Ltd
Priority date
Filing date
Publication date
Application filed by Netease Hangzhou Network Co Ltd filed Critical Netease Hangzhou Network Co Ltd
Priority to CN201611188735.7A
Publication of CN107066919A
Application granted
Publication of CN107066919B

Classifications

    • G — PHYSICS
    • G06 — COMPUTING; CALCULATING OR COUNTING
    • G06V — IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V30/00 — Character recognition; Recognising digital ink; Document-oriented image-based pattern recognition
    • G06V30/10 — Character recognition
    • G06V30/32 — Digital ink

Abstract

Embodiments of the invention provide a method for solving the position of a pen point, comprising the following steps: acquiring a first image shot when a user clicks a first point on a paper surface through the pen point, wherein the paper surface coordinate of the first point is known; acquiring a first transformation function according to the image information on the first image; and calculating the image coordinate of the first point according to the paper surface coordinate of the first point and the first transformation function, and taking the image coordinate of the first point as the pen point image coordinate. The method makes it convenient to solve the pen point position of the smart pen and markedly reduces the amount of calculation. Embodiments of the invention also provide a device for solving the pen point position.

Description

Method, apparatus and computer-readable storage medium for solving pen tip position
Technical Field
Embodiments of the invention relate to the field of image processing, and in particular to a method and device for solving a pen point position.
Background
This section is intended to provide a background or context to the embodiments of the invention that are recited in the claims. The description herein is not admitted to be prior art by inclusion in this section.
A smart pen is an electronic pen with a built-in camera that continuously photographs code dots on the paper while writing, so that the handwriting can be calculated. When using this type of smart pen, the user can write on paper printed with code dots according to traditional writing habits; the user's handwriting and the writing process are all recorded and can be synchronized immediately to other devices for use by different types of equipment.
Several smart pens are currently available. In their writing process, the image coordinate of the pen point of the smart pen is the key to the operation of the whole pen. Owing to hardware limitations of the smart pen, the pen point cannot appear in the images captured during writing, so solving the image coordinate of the pen point is the basis on which the smart pen can restore handwriting. Calculating and correcting the image coordinate of the pen point, or calculating the pen point trajectory, is generally called solving the pen point position of the smart pen. Current solutions for the pen point position usually adopt operations such as recursive calculation and angle-rotation transformation correction, which are difficult for the user to operate and computationally heavy.
Disclosure of Invention
However, owing to the existing technology and its mode of operation, existing pen point position solving methods are difficult to operate and computationally complex.
In the prior art, therefore, the operation is difficult and the calculation is complex, which makes the process cumbersome.
For this reason, an improved method for solving the pen tip position is highly desirable, one that is simple to operate and requires only a small amount of calculation.
In this context, embodiments of the present invention are intended to provide a method of solving for a pen tip position.
In a first aspect of embodiments of the present invention, there is provided a method for solving a pen tip position, including: acquiring a first image shot when a user clicks a first point in a paper surface through a pen point, wherein the paper surface coordinate of the first point is known; acquiring a first transformation function according to the image information on the first image; and calculating the image coordinate of the first point according to the paper surface coordinate of the first point and a first transformation function, and acquiring the pen point image coordinate according to the image coordinate of the first point.
In an embodiment of the first aspect of the present invention, obtaining the first transformation function from the image information on the first image comprises: carrying out image analysis on the first image to obtain image coordinates and paper surface coordinates of at least 4 points in the image; and calculating a first transformation function for transforming the image coordinates to the paper coordinates according to the image coordinates and the paper coordinates of the at least 4 points.
In an embodiment of the first aspect of the present invention, obtaining the first transformation function comprises obtaining a 3 × 3 first perspective transformation matrix; calculating the image coordinate of the first point from the paper surface coordinate of the first point and the first transformation function comprises: inverting the perspective transformation matrix to obtain the perspective transformation inverse matrix; and the image coordinate of the first point is equal to the product of the first perspective transformation inverse matrix, the paper surface coordinate of the first point, and a first scale coefficient, wherein the first scale coefficient is the reciprocal of the homogeneous term of the result obtained by multiplying the first perspective transformation inverse matrix by the paper surface coordinate of the first point.
In an embodiment of the first aspect of the present invention, the method further includes: and correcting the pen point image coordinates by clicking other preset points with known paper surface coordinates on the paper surface.
In an embodiment of the first aspect of the present invention, correcting the pen point image coordinates specifically includes: step one: when a user clicks an Nth point on the paper surface with the pen point, acquiring the captured Nth image, wherein the paper surface coordinate of the Nth point is known; step two: acquiring an Nth transformation function according to the image information on the Nth image; step three: calculating the image coordinate of the Nth point according to the paper surface coordinate of the Nth point and the Nth transformation function, wherein N is an integer greater than or equal to 2; when the position of the pen point moves, repeating step one to step three to calculate M image coordinates of the pen point position, wherein M is a positive integer; and correcting the pen point image coordinates according to the image coordinate of the first point and the M image coordinates.
In an embodiment of the first aspect of the present invention, the obtaining the nth transform function according to the image information on the nth image includes: carrying out image analysis on the Nth image to obtain image coordinates and paper surface coordinates of at least 4 points in the image; and calculating an Nth transformation function for transforming the image coordinates to the paper coordinates according to the image coordinates and the paper coordinates of the at least 4 points.
In an embodiment of the first aspect of the present invention, obtaining the Nth transformation function comprises obtaining a 3 × 3 Nth perspective transformation matrix; calculating the image coordinate of the Nth point according to the paper surface coordinate of the Nth point and the Nth transformation function comprises: inverting the Nth perspective transformation matrix to obtain an Nth perspective transformation inverse matrix; and the image coordinate of the Nth point is equal to the product of the Nth perspective transformation inverse matrix, the paper surface coordinate of the Nth point, and an Nth proportionality coefficient, wherein the Nth proportionality coefficient is the reciprocal of the homogeneous term of the result obtained by multiplying the Nth perspective transformation inverse matrix by the paper surface coordinate of the Nth point.
In one embodiment of the first aspect of the present invention, correcting pen tip image coordinates in dependence on the image coordinates of the first point and the M image coordinates comprises: deleting abnormal values in the M image coordinates and the image coordinates of the first point to obtain X image coordinates, and taking the average value of the X image coordinates as the pen point image coordinates; or taking the image coordinate of the first point and the average value of the M image coordinates as the pen point image coordinate.
In an embodiment of the first aspect of the present invention, any of the methods above further comprises: acquiring a plurality of measured images when a user writes; acquiring a plurality of actual measurement transformation functions corresponding to the plurality of actual measurement images according to the image information of the plurality of actual measurement images; and acquiring a plurality of pen point paper coordinates according to the plurality of actually measured transformation functions and the pen point image coordinates, and acquiring the handwriting of the user according to the plurality of pen point paper coordinates.
In an embodiment of the first aspect of the present invention, the obtaining the plurality of measured transformation functions corresponding to the plurality of measured images according to the image information of the plurality of measured images includes: analyzing each image information of a plurality of measured images to obtain image coordinates and paper surface coordinates of at least 4 points in each image; and calculating a plurality of actual measurement transformation functions corresponding to the plurality of actual measurement images transformed from the image coordinates to the paper coordinates according to the image coordinates and the paper coordinates of the at least 4 points.
In one embodiment of the first aspect of the present invention, obtaining the plurality of measured transformation functions corresponding to the plurality of measured images comprises obtaining the 3 × 3 perspective transformation matrix corresponding to each measured image; obtaining the plurality of pen point paper surface coordinates according to the plurality of measured transformation functions and the pen point image coordinates comprises: the pen point paper surface coordinate is equal to the pen point image coordinate multiplied by the 3 × 3 perspective transformation matrix corresponding to the measured image and by the corresponding proportionality coefficient, wherein the proportionality coefficient is the reciprocal of the homogeneous term of the result obtained by multiplying the pen point image coordinate by the corresponding perspective transformation matrix.
In a second aspect of the embodiments of the present invention, there is provided a pen tip position solving apparatus including an acquisition unit, a processing unit and a calculating unit, wherein the acquisition unit is used for acquiring a first image shot when a user clicks a first point on the paper surface through the pen point, the paper surface coordinate of the first point being known; the processing unit is used for acquiring a first transformation function according to the image information on the first image; and the calculating unit is used for calculating the image coordinate of the first point according to the paper surface coordinate of the first point and the first transformation function and acquiring the pen point image coordinate according to the image coordinate of the first point.
In an embodiment of the second aspect of the present invention, the processing unit is specifically configured to perform image analysis on the first image to obtain image coordinates and paper surface coordinates of at least 4 points in the image; and calculating a first transformation function for transforming the image coordinates to the paper coordinates according to the image coordinates and the paper coordinates of the at least 4 points.
In an embodiment of the second aspect of the present invention, the processing unit is specifically configured to obtain a 3 × 3 first perspective transformation matrix; the calculating unit is specifically configured to invert the perspective transformation matrix to obtain the perspective transformation inverse matrix; and the image coordinate of the first point is equal to the product of the first perspective transformation inverse matrix, the paper surface coordinate of the first point, and a first scale coefficient, wherein the first scale coefficient is the reciprocal of the homogeneous term of the result obtained by multiplying the first perspective transformation inverse matrix by the paper surface coordinate of the first point.
In an embodiment of the second aspect of the present invention, the apparatus further includes: and the correction unit is used for correcting the pen point image coordinates by clicking other preset points with known paper surface coordinates on the paper surface.
In an embodiment of the second aspect of the present invention, the correcting unit corrects the pen point image coordinates, which specifically includes: step one: when a user clicks an Nth point on the paper surface with the pen point, acquiring the captured Nth image, wherein the paper surface coordinate of the Nth point is known; step two: acquiring an Nth transformation function according to the image information on the Nth image; step three: calculating the image coordinate of the Nth point according to the paper surface coordinate of the Nth point and the Nth transformation function, wherein N is an integer greater than or equal to 2; when the position of the pen point moves, repeating step one to step three to calculate M image coordinates of the pen point position, wherein M is a positive integer; and correcting the pen point image coordinates according to the image coordinate of the first point and the M image coordinates.
In an embodiment of the second aspect of the present invention, obtaining the Nth transformation function according to the image information on the Nth image comprises: carrying out image analysis on the Nth image to obtain image coordinates and paper surface coordinates of at least 4 points in the image; and calculating an Nth transformation function for transforming the image coordinates to the paper surface coordinates according to the image coordinates and the paper surface coordinates of the at least 4 points.
In an embodiment of the second aspect of the invention, obtaining the Nth transformation function comprises obtaining a 3 × 3 Nth perspective transformation matrix; calculating the image coordinate of the Nth point according to the paper surface coordinate of the Nth point and the Nth transformation function comprises: inverting the Nth perspective transformation matrix to obtain an Nth perspective transformation inverse matrix; and the image coordinate of the Nth point is equal to the product of the Nth perspective transformation inverse matrix, the paper surface coordinate of the Nth point, and an Nth proportionality coefficient, wherein the Nth proportionality coefficient is the reciprocal of the homogeneous term of the result obtained by multiplying the Nth perspective transformation inverse matrix by the paper surface coordinate of the Nth point.
In one embodiment of the second aspect of the present invention, correcting the pen tip image coordinates in dependence on the image coordinates of the first point and the M image coordinates comprises: deleting abnormal values in the M image coordinates and the image coordinates of the first point to obtain X image coordinates, and taking the average value of the X image coordinates as the pen point image coordinates; or taking the image coordinate of the first point and the average value of the M image coordinates as the pen point image coordinate.
In an embodiment of the second aspect of the present invention, any one of the above apparatus further comprises: the handwriting unit is used for acquiring a plurality of measured images when a user writes; acquiring a plurality of actual measurement transformation functions corresponding to the plurality of actual measurement images according to the image information of the plurality of actual measurement images; and acquiring a plurality of pen point paper coordinates according to the plurality of actually measured transformation functions and the pen point image coordinates, and acquiring the handwriting of the user according to the plurality of pen point paper coordinates.
In an embodiment of the second aspect of the present invention, the obtaining the plurality of measured transformation functions corresponding to the plurality of measured images according to the image information of the plurality of measured images includes: analyzing each image information of a plurality of measured images to obtain image coordinates and paper surface coordinates of at least 4 points in each image; and calculating a plurality of actual measurement transformation functions corresponding to the plurality of actual measurement images transformed from the image coordinates to the paper coordinates according to the image coordinates and the paper coordinates of the at least 4 points.
In one embodiment of the second aspect of the present invention, obtaining the plurality of measured transformation functions corresponding to the plurality of measured images comprises obtaining the 3 × 3 perspective transformation matrix corresponding to each measured image; obtaining the plurality of pen point paper surface coordinates according to the plurality of measured transformation functions and the pen point image coordinates comprises: the pen point paper surface coordinate is equal to the pen point image coordinate multiplied by the 3 × 3 perspective transformation matrix corresponding to the measured image and by the corresponding proportionality coefficient, wherein the proportionality coefficient is the reciprocal of the homogeneous term of the result obtained by multiplying the pen point image coordinate by the corresponding perspective transformation matrix.
In a third aspect of embodiments of the present invention, there is provided a computer-readable storage medium having stored thereon a computer program which, when executed by a processor, implements any of the pen tip position solving methods provided in the first aspect described above.
According to the embodiments of the invention, the pen point image coordinates can be calculated after the user clicks only one point, so the operation is simple; and the calculation requires only a perspective transformation and an inverse perspective transformation, so the amount of computation is small. The method therefore has the advantages of simple operation and a small calculation amount.
Drawings
The above and other objects, features and advantages of exemplary embodiments of the present invention will become readily apparent from the following detailed description read in conjunction with the accompanying drawings. Several embodiments of the invention are illustrated by way of example, and not by way of limitation, in the figures of the accompanying drawings and in which:
fig. 1 schematically shows the structure of a smart pen and a photographing diagram;
FIG. 2A schematically illustrates a flow chart of a method of solving for a location of a pen tip, in accordance with one embodiment of the present invention;
FIG. 2B schematically illustrates a first image schematic according to one embodiment of the invention;
FIG. 2C schematically illustrates a perspective transformed image schematic according to an embodiment of the present invention;
FIG. 3 schematically illustrates a flow chart of a method of solving for a location of a pen tip, in accordance with another embodiment of the invention;
FIG. 4 is a schematic diagram showing a structure of a pen tip position solving apparatus according to a further embodiment of the present invention;
FIG. 5 schematically illustrates a hardware architecture diagram of a smart pen device according to yet another embodiment of the present invention;
FIG. 6 is a schematic diagram illustrating the structure of a computer-readable storage medium according to yet another embodiment of the present invention;
in the drawings, the same or corresponding reference numerals indicate the same or corresponding parts.
Detailed Description
The principles and spirit of the present invention will be described with reference to a number of exemplary embodiments. It is understood that these embodiments are given solely for the purpose of enabling those skilled in the art to better understand and to practice the invention, and are not intended to limit the scope of the invention in any way. Rather, these embodiments are provided so that this disclosure will be thorough and complete, and will fully convey the scope of the disclosure to those skilled in the art.
As will be appreciated by one skilled in the art, embodiments of the present invention may be embodied as a system, apparatus, device, method, or computer program product. Accordingly, the present disclosure may be embodied in the form of: entirely hardware, entirely software (including firmware, resident software, micro-code, etc.), or a combination of hardware and software.
According to an embodiment of the invention, a method and apparatus are presented.
In this context, it is to be understood that the terminology involved is intended to be in the nature of words of description. Moreover, any number of elements in the drawings are by way of example and not by way of limitation, and any nomenclature is used solely for differentiation and not by way of limitation.
The principles and spirit of the present invention are explained in detail below with reference to several representative embodiments of the invention.
Summary of The Invention
The inventor has found that the hardware structure of each smart pen is fixed: the relative positions of the camera and the pen point of a given smart pen are fixed, so the position of the pen point in the images shot by that camera is also fixed. However, the camera and pen point positions differ from pen to pen, so the image coordinate of the pen point still needs to be calculated for each pen. The present method solves the pen point position of the smart pen based on this characteristic, namely that the image coordinates of the pen point of a single smart pen are consistent across the multiple pictures shot by its camera.
Having described the general principles of the invention, various non-limiting embodiments of the invention are described in detail below.
Application scene overview
Referring to fig. 1, fig. 1 is a schematic view of the structure and the shooting of a smart pen. As shown in fig. 1, the smart pen includes a camera 1 and a pen point 2; when the pen point 2 moves, the camera 1 takes a picture, as shown by the shaded portion in fig. 1.
Exemplary method
In the following, a method according to an exemplary embodiment of the invention is described with reference to the figures, in connection with the application scenario above. It should be noted that the above application scenario is merely illustrated for the convenience of understanding the spirit and principles of the present invention, and the embodiments of the present invention are not limited in this respect. Rather, embodiments of the present invention may be applied to any applicable scenario.
Referring to fig. 2A, fig. 2A schematically illustrates a method according to an exemplary embodiment of the present invention. The method is performed by the smart pen shown in fig. 1 and includes the following steps:
step S201, the smart pen acquires a first image shot when a user clicks a first point in the paper surface through a pen point, wherein the paper surface coordinate of the first point is known.
The first image in step S201 is shown in fig. 2B. For convenience of description, the paper surface coordinates of the first point are denoted (X, Y). The small squares (the code dot pattern) in the photo are deformed differently because of the angle of the lens, and the image coordinate of the pen point, which corresponds to (x0, y0), is not within the field of view of the first image at this time.
Step S202, the smart pen obtains a first transformation function according to the image information on the first image.
The first transformation function in step S202 may include a 3 × 3 first perspective transformation matrix, and the manner of obtaining the 3 × 3 first perspective transformation matrix and the first scale coefficient may specifically be:
The paper surface coordinates and image coordinates of at least 4 points are obtained by analyzing the first image shown in fig. 2B. The paper surface may be code paper, but it may also be paper of other kinds, such as coordinate paper; it is not limited to a traditional paper plane and may be any plane, such as plastic or a display screen. The 4 points may be arbitrarily selected points whose connecting lines form a quadrilateral on the image plane, which is convenient for calculation. The image coordinates of these 4 points can be obtained directly by analyzing or measuring the first image. In one embodiment, the paper surface coordinates of the 4 points may be obtained by first extracting the coding pattern from the captured image and then calculating the paper surface coordinates of the corresponding points according to the preset rule of the coding pattern. The first perspective transformation function H0 is then calculated from the paper surface coordinates and image coordinates of the at least 4 points. H0 is a 3 × 3 matrix and can be obtained by substituting the paper surface coordinates and image coordinates of the at least 4 points directly into the perspective transformation equations; a specific method may refer to the getPerspectiveTransform function of common perspective transformation algorithms. The mathematical expression of the perspective transformation may specifically be:
t·Ppaper=H0×Pimage;
where t is the first scale coefficient (it may be computed as the reciprocal of the homogeneous term of the product of the first perspective transformation inverse matrix and the paper surface coordinate of the first point); H0 is the first perspective transformation function; Pimage is the 3 × 1 homogeneous image coordinate on the image plane, whose homogeneous (third) term is 1; and Ppaper is the 3 × 1 homogeneous paper surface coordinate on the paper plane, whose homogeneous (third) term is 1. The purpose of t is to make the homogeneous term of Ppaper equal to 1: multiplying the image coordinate Pimage of any point on the image plane by H0 gives t · Ppaper, and dividing the result by t so that its homogeneous term is 1 yields the corresponding paper surface coordinate Ppaper of that point on the paper plane, whose first two terms are the two-dimensional paper surface coordinate.
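As an illustration of how H0 can be obtained and applied in practice, the following Python sketch is offered (illustrative only, not part of the original disclosure; it assumes OpenCV's cv2.getPerspectiveTransform, which maps the first set of points onto the second, and the helper names estimate_h0 and image_to_paper are hypothetical). It estimates H0 from four decoded code-point correspondences and applies the forward mapping with the homogeneous normalization just described.

```python
import cv2
import numpy as np

def estimate_h0(image_pts, paper_pts):
    """Estimate the 3x3 perspective matrix H0 so that t * Ppaper = H0 @ Pimage.

    image_pts / paper_pts: (4, 2) arrays holding the image coordinates and the
    paper surface coordinates of the same 4 code points decoded from the frame.
    """
    image_pts = np.asarray(image_pts, dtype=np.float32)
    paper_pts = np.asarray(paper_pts, dtype=np.float32)
    # cv2.getPerspectiveTransform maps the first point set onto the second.
    return cv2.getPerspectiveTransform(image_pts, paper_pts)

def image_to_paper(h, image_xy):
    """Map an image-plane point to its two-dimensional paper surface coordinate."""
    p_image = np.array([image_xy[0], image_xy[1], 1.0])  # homogeneous image coordinate
    result = h @ p_image                                 # equals t * Ppaper
    return result[:2] / result[2]                        # divide by t so the homogeneous term is 1
```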
Fig. 2C shows a drawing of the paper coordinates obtained by subjecting fig. 2B to perspective transformation, in which pen tip coordinates (X, Y) are the paper coordinates of the pen tip.
In another embodiment, the transformation function may also be calculated from the image coordinates and paper surface coordinates of a plurality of points on the image according to the relationship Ppaper = H × Pimage, where H is the transformation function.
Step S203, the smart pen calculates the image coordinate of the first point according to the paper surface coordinate (X, Y) of the first point and the first transformation function, and obtains the pen point image coordinate according to the image coordinate of the first point.
The implementation method of step S203 may specifically be: the perspective transformation function H0 is inverted to obtain the inverse perspective transformation function H0⁻¹, and the image coordinates (x0, y0) of the first point may be obtained as H0⁻¹ × (X, Y) × t.
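A minimal sketch of step S203 under the same assumptions as the previous snippet: H0 is inverted with NumPy, the known paper surface coordinate (X, Y) of the clicked first point is lifted to a homogeneous vector, and the first scale coefficient t is recovered as the reciprocal of the homogeneous term.

```python
import numpy as np

def paper_to_image(h, paper_xy):
    """Solve the image coordinate of a point whose paper surface coordinate is known.

    Implements Pimage = H^-1 * Ppaper * t, where t is the reciprocal of the
    homogeneous term of H^-1 * Ppaper, so the returned pair is (x0, y0).
    """
    h_inv = np.linalg.inv(h)                             # perspective transformation inverse matrix
    p_paper = np.array([paper_xy[0], paper_xy[1], 1.0])  # homogeneous paper coordinate
    result = h_inv @ p_paper
    t = 1.0 / result[2]                                  # first scale coefficient
    return result[:2] * t                                # pen point image coordinate (x0, y0)

# Usage: pen_tip_image_xy = paper_to_image(h0, (X, Y)) with h0 from the sketch above.
```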
The technique shown in fig. 2A can calculate the pen point image coordinates after the user clicks only one point, so the operation is simple; and the calculation involves only a perspective transformation and an inverse perspective transformation, so the amount of computation is small. The technique therefore has the advantages of simple operation and a small calculation amount.
Referring to fig. 3, fig. 3 illustrates another exemplary method according to the present invention, which is performed by the smart pen shown in fig. 1 and which further corrects the pen point image coordinates by clicking other preset points on the paper whose paper surface coordinates are known. As shown in fig. 3, the method shown in fig. 2A may further include the following steps for correcting the pen point image coordinates:
step S301, when a user clicks an Nth point in a paper surface through a pen point, the intelligent pen obtains a shot Nth image, and the paper surface coordinate of the Nth point is known.
Step S302, the intelligent pen obtains an Nth transformation function according to the image information on the Nth image.
The manner of obtaining the nth transformation function in step S302 may refer to the method in step S202, which is not described herein again.
Step S303, the smart pen calculates the image coordinate of the Nth point according to the paper surface coordinate of the Nth point and the Nth transformation function, wherein N is an integer greater than or equal to 2.
The specific way in which the smart pen in step S303 calculates the image coordinate of the Nth point according to the paper surface coordinate of the Nth point and the Nth transformation function may refer to the implementation of step S203, and is not repeated here.
Step S304, when the position of the pen point moves, steps S301 to S303 are repeated to calculate M image coordinates of the pen point position, wherein M is a positive integer.
Step S305, the smart pen corrects the pen point image coordinates according to the image coordinate of the first point and the M image coordinates.
The implementation of step S305 may specifically be: the average value of the image coordinate of the first point and the M image coordinates is taken as the updated pen point image coordinate. Alternatively, step S305 may also specifically be: abnormal values among the M image coordinates and the image coordinate of the first point are deleted to obtain X image coordinates, and the average value of the X image coordinates is taken as the updated pen point image coordinate. An abnormal value may be determined in various ways, for example, by comparing each of the M image coordinates and the image coordinate of the first point with their average value to obtain a deviation; if the deviation exceeds a deviation threshold, the coordinate is treated as an abnormal value.
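A hedged sketch of the correction in step S305 (the deviation_threshold value is an assumed tuning parameter, not a value given in the text): the image coordinate of the first point and the M repeated-click coordinates are averaged after discarding coordinates whose deviation from the mean exceeds the threshold.

```python
import numpy as np

def correct_pen_tip(candidates, deviation_threshold=3.0):
    """Average M + 1 candidate pen point image coordinates after outlier removal.

    candidates: (M + 1, 2) array with the image coordinate of the first point
    followed by the M image coordinates from the repeated clicks.
    deviation_threshold: assumed pixel threshold used to flag abnormal values.
    """
    candidates = np.asarray(candidates, dtype=np.float64)
    mean = candidates.mean(axis=0)
    deviation = np.linalg.norm(candidates - mean, axis=1)  # distance of each candidate from the average
    kept = candidates[deviation <= deviation_threshold]    # the X remaining coordinates
    if kept.size == 0:                                      # every candidate was flagged: fall back to plain averaging
        kept = candidates
    return kept.mean(axis=0)                                # updated pen point image coordinate
```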
Optionally, after step S305, the method may further include: acquiring a plurality of measured images when a user writes; acquiring a plurality of actual measurement transformation functions corresponding to the plurality of actual measurement images according to the image information of the plurality of actual measurement images; and acquiring a plurality of pen point paper coordinates according to the plurality of actually measured transformation functions and the pen point image coordinates, and acquiring the handwriting of the user according to the plurality of pen point paper coordinates.
Optionally, the obtaining, according to the image information of the measured images, a plurality of measured transformation functions corresponding to the measured images includes: analyzing each image information of a plurality of measured images to obtain image coordinates and paper surface coordinates of at least 4 points in each image; and calculating a plurality of actual measurement transformation functions corresponding to the plurality of actual measurement images transformed from the image coordinates to the paper coordinates according to the image coordinates and the paper coordinates of the at least 4 points.
Optionally, obtaining the plurality of measured transformation functions corresponding to the plurality of measured images comprises obtaining the 3 × 3 perspective transformation matrix corresponding to each measured image; and obtaining the plurality of pen point paper surface coordinates according to the plurality of measured transformation functions and the pen point image coordinates comprises: the pen point paper surface coordinate is equal to the pen point image coordinate multiplied by the 3 × 3 perspective transformation matrix corresponding to the measured image and by the corresponding proportionality coefficient, where the proportionality coefficient is the reciprocal of the homogeneous term of the result obtained by multiplying the pen point image coordinate by the corresponding perspective transformation matrix.
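Combining the earlier sketches, the handwriting step can be illustrated as follows (hypothetical structure that reuses estimate_h0 and image_to_paper defined above): each measured image yields its own perspective matrix, and the fixed pen point image coordinate is mapped through it to a paper surface coordinate; the sequence of these coordinates forms the handwriting trace.

```python
def recover_handwriting(measured_frames, pen_tip_image_xy):
    """Map the fixed pen point image coordinate to paper coordinates, frame by frame.

    measured_frames: iterable of (image_pts, paper_pts) correspondences decoded
    from each measured image captured while the user writes.
    pen_tip_image_xy: the (corrected) pen point image coordinate.
    Returns the list of pen point paper surface coordinates, i.e. the handwriting trace.
    """
    trace = []
    for image_pts, paper_pts in measured_frames:
        h = estimate_h0(image_pts, paper_pts)              # measured 3x3 perspective matrix for this frame
        trace.append(image_to_paper(h, pen_tip_image_xy))  # division by the homogeneous term happens inside
    return trace
```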
The technique shown in fig. 3 likewise calculates the pen point image coordinates from a small number of user clicks, so the operation is simple; and the perspective transformation and inverse perspective transformation it performs require little computation, so the technique has the advantages of simple operation and a small calculation amount.
Exemplary device
Having described the method of the exemplary embodiments of the present invention, an apparatus for solving the pen tip position according to an exemplary embodiment of the present invention is described next with reference to fig. 4. For the technical terms, specific implementations and technical effects of the apparatus shown in fig. 4, reference may be made to the description of the embodiments shown in fig. 2 or fig. 3. This smart pen, shown in fig. 4, includes:
an acquisition unit 401, configured to acquire a first image captured when a user clicks a first point in a paper surface through a pen tip, where paper surface coordinates of the first point are known;
a processing unit 402, configured to obtain a first transformation function according to image information on the first image;
a calculating unit 403, configured to calculate image coordinates of the first point according to the paper coordinates of the first point and a first transformation function, and obtain pen point image coordinates according to the image coordinates of the first point.
Optionally, the processing unit 402 is specifically configured to perform image analysis on the first image to obtain image coordinates and paper surface coordinates of at least 4 points in the image; and calculating a first transformation function for transforming the image coordinates to the paper coordinates according to the image coordinates and the paper coordinates of the at least 4 points.
Optionally, the processing unit 402 is specifically configured to obtain a first perspective transformation matrix of 3 × 3;
specifically, the calculating unit 403 is configured to invert the perspective transformation matrix to obtain the perspective transformation inverse matrix; and the image coordinate of the first point is equal to the product of the first perspective transformation inverse matrix, the paper surface coordinate of the first point, and a first scale coefficient, wherein the first scale coefficient is the reciprocal of the homogeneous term of the result obtained by multiplying the first perspective transformation inverse matrix by the paper surface coordinate of the first point.
Optionally, wherein the apparatus further comprises:
the correction unit 404 is configured to correct the pen tip image coordinates, wherein correcting the pen tip image coordinates specifically includes:
step one: when a user clicks an Nth point on the paper surface with the pen point, acquiring the captured Nth image, wherein the paper surface coordinate of the Nth point is known;
step two: acquiring an Nth transformation function according to the image information on the Nth image;
step three: calculating the image coordinate of the Nth point according to the paper surface coordinate of the Nth point and the Nth transformation function, wherein N is an integer greater than or equal to 2;
when the position of the pen point moves, repeating the first step to the third step to calculate M image coordinates of the position of the pen point;
and correcting pen point image coordinates according to the first point and the M image coordinates.
Optionally, obtaining the Nth transformation function according to the image information on the Nth image includes: carrying out image analysis on the Nth image to obtain image coordinates and paper surface coordinates of at least 4 points in the image; and calculating an Nth transformation function for transforming the image coordinates to the paper surface coordinates according to the image coordinates and the paper surface coordinates of the at least 4 points.
Optionally, obtaining the Nth transformation function comprises obtaining a 3 × 3 Nth perspective transformation matrix; calculating the image coordinate of the Nth point according to the paper surface coordinate of the Nth point and the Nth transformation function comprises: inverting the Nth perspective transformation matrix to obtain an Nth perspective transformation inverse matrix; and the image coordinate of the Nth point is equal to the product of the Nth perspective transformation inverse matrix, the paper surface coordinate of the Nth point, and an Nth proportionality coefficient, wherein the Nth proportionality coefficient is the reciprocal of the homogeneous term of the result obtained by multiplying the Nth perspective transformation inverse matrix by the paper surface coordinate of the Nth point.
Optionally, correcting the pen point image coordinates according to the image coordinate of the first point and the M image coordinates includes: deleting abnormal values in the M image coordinates and the image coordinate of the first point to obtain X image coordinates, and taking the average value of the X image coordinates as the pen point image coordinate; or taking the average value of the image coordinate of the first point and the M image coordinates as the pen point image coordinate.
Optionally, wherein the apparatus further comprises:
a handwriting unit 405, configured to obtain a plurality of measured images when a user writes; acquiring a plurality of actual measurement transformation functions corresponding to the plurality of actual measurement images according to the image information of the plurality of actual measurement images; and acquiring a plurality of pen point paper coordinates according to the plurality of actually measured transformation functions and the pen point image coordinates, and acquiring the handwriting of the user according to the plurality of pen point paper coordinates.
Optionally, the obtaining, according to the image information of the measured images, a plurality of measured transformation functions corresponding to the measured images includes: analyzing each image information of a plurality of measured images to obtain image coordinates and paper surface coordinates of at least 4 points in each image; and calculating a plurality of actual measurement transformation functions corresponding to the plurality of actual measurement images transformed from the image coordinates to the paper coordinates according to the image coordinates and the paper coordinates of the at least 4 points.
Optionally, obtaining the plurality of measured transformation functions corresponding to the plurality of measured images includes obtaining the 3 × 3 perspective transformation matrix corresponding to each measured image; obtaining the plurality of pen point paper surface coordinates according to the plurality of measured transformation functions and the pen point image coordinates comprises: the pen point paper surface coordinate is equal to the pen point image coordinate multiplied by the 3 × 3 perspective transformation matrix corresponding to the measured image and by the corresponding proportionality coefficient, wherein the proportionality coefficient is the reciprocal of the homogeneous term of the result obtained by multiplying the pen point image coordinate by the corresponding perspective transformation matrix.
Referring to fig. 5, fig. 5 is a diagram of a smart pen device provided by the present invention, including: a processor 501, a memory 502, an external interface 505, a bus 504, and a camera 506. The external interface 505 is used to interact with external devices to send and receive data. The number of processors 501 in the smart pen device 50 may be one or more. In some embodiments of the present application, the processor 501, the memory 502, and the external interface 505 may be connected by a bus or other means. The memory 502 is used for storing the program code 5024, and the processor 501 is used for calling the program code 5024 stored in the memory 502 to realize the functions of the smart pen in fig. 2 or fig. 3. With regard to the meaning and examples of the terms involved in the present embodiment, reference may be made to the embodiments corresponding to fig. 2 or fig. 3, which are not described in detail here. It should be noted that the processor 501 may be a single processing element or may be a collective term for multiple processing elements. For example, the processing element may be a Central Processing Unit (CPU), an application-specific integrated circuit (ASIC), or one or more integrated circuits configured to implement the embodiments of the present application, such as one or more Digital Signal Processors (DSP) or one or more field-programmable gate arrays (FPGA).
The memory 502 may be a storage device or a collection of storage elements, and is used for storing the executable program code or the parameters, data, etc. required for running the application program. The memory 502 may include a random-access memory 5021 (RAM) and a non-volatile memory (non-volatile memory), such as a disk memory or a flash memory (flash), as well as a cache memory 5022 or a read-only memory (ROM).
The bus 504 may be an Industry Standard Architecture (ISA) bus, a Peripheral Component Interconnect (PCI) bus, an Extended ISA (EISA) bus, or the like. The bus may be divided into an address bus, a data bus, a control bus, etc. For ease of illustration, only one line is shown in FIG. 5, but this does not mean there is only one bus or one type of bus.
Referring to fig. 6, fig. 6 provides a computer-readable storage medium 60 on which a computer program is stored, which when executed by a processor implements the method according to the embodiment of fig. 2 or fig. 3.
It should be noted that although several means or sub-means of the device are mentioned in the above detailed description, this division is merely exemplary and not mandatory. Indeed, according to embodiments of the invention, the features and functions of two or more of the devices described above may be embodied in one device; conversely, the features and functions of one device described above may be further divided so as to be embodied by a plurality of devices.
Moreover, while the operations of the method of the invention are depicted in the drawings in a particular order, this does not require or imply that the operations must be performed in this particular order, or that all of the illustrated operations must be performed, to achieve desirable results. Additionally or alternatively, certain steps may be omitted, multiple steps combined into one step execution, and/or one step broken down into multiple step executions.
While the spirit and principles of the invention have been described with reference to several particular embodiments, it is to be understood that the invention is not limited to the disclosed embodiments, and the division into aspects does not mean that features in these aspects cannot be combined to advantage; the division is made for convenience of description only. The invention is intended to cover various modifications and equivalent arrangements included within the spirit and scope of the appended claims.

Claims (19)

1. A method of solving for a location of a stylus, comprising:
acquiring a first image shot when a user clicks a first point in a paper surface through a pen point, wherein the paper surface coordinate of the first point is known, and the first point clicked by the pen point is not in the shot first image;
obtaining a first transformation function from image information on the first image, comprising:
carrying out image analysis on the first image to obtain image coordinates and paper surface coordinates of at least 4 points in the image;
calculating a first transformation function for transforming the image coordinates to the paper coordinates according to the image coordinates and the paper coordinates of the at least 4 points, wherein the obtaining of the first transformation function comprises obtaining a first perspective transformation matrix of 3 x 3; calculating the image coordinate of the first point according to the paper surface coordinate of the first point and a first transformation function, and acquiring the pen point image coordinate according to the image coordinate of the first point, wherein the method comprises the following steps:
inverting the perspective transformation matrix to obtain a perspective transformation inverse matrix;
the image coordinate of the first point is equal to the product of the first perspective transformation inverse matrix, the paper surface coordinate of the first point, and a first scale coefficient, wherein the first scale coefficient is the reciprocal of a homogeneous term of a result obtained by multiplying the first perspective transformation inverse matrix by the paper surface coordinate of the first point; and solving the paper surface coordinate of the pen point according to the image coordinate of the pen point and the first transformation function.
2. The method of claim 1, further comprising: and correcting the pen point image coordinates by clicking other preset points with known paper surface coordinates on the paper surface.
3. The method of claim 2, wherein correcting the pen tip image coordinates specifically comprises:
the method comprises the following steps: when a user clicks an Nth point in a paper surface through a pen point, acquiring a shot Nth image, wherein the paper surface coordinate of the Nth point is known;
step two: acquiring an Nth transformation function according to the image information on the Nth image;
step three: calculating the image coordinate of the Nth point according to the paper surface coordinate of the Nth point and an Nth transformation function, wherein N is an integer greater than or equal to 2;
when the position of the pen point moves, repeating the first step to the third step to calculate M image coordinates of the position of the pen point, wherein M is a positive integer;
and correcting the pen point image coordinates according to the image coordinates of the first point and the M image coordinates.
4. The method of claim 3, wherein obtaining an Nth transformation function from image information on the Nth image comprises:
carrying out image analysis on the Nth image to obtain image coordinates and paper surface coordinates of at least 4 points in the image;
and calculating an Nth transformation function for transforming the image coordinates to the paper coordinates according to the image coordinates and the paper coordinates of the at least 4 points.
5. The method of claim 4, wherein obtaining an Nth transformation function comprises obtaining an Nth perspective transformation matrix of 3 x 3;
calculating the image coordinate of the nth point according to the paper surface coordinate of the nth point and the nth transformation function comprises:
inverting the Nth perspective transformation matrix to obtain an Nth perspective transformation inverse matrix;
and the image coordinate of the Nth point is equal to the product of the Nth perspective transformation inverse matrix and the paper surface coordinate of the Nth point and an Nth proportionality coefficient, and the Nth proportionality coefficient is the reciprocal of a homogeneous item of a result obtained by multiplying the Nth perspective transformation inverse matrix and the paper surface coordinate of the Nth point.
6. The method of claim 3, wherein correcting pen tip image coordinates as a function of image coordinates of a first point and the M image coordinates comprises:
deleting abnormal values in the M image coordinates and the image coordinates of the first point to obtain X image coordinates, and taking the average value of the X image coordinates as the pen point image coordinates;
or taking the image coordinate of the first point and the average value of the M image coordinates as the pen point image coordinate.
7. The method of any of claims 1-6, wherein the method further comprises:
acquiring a plurality of measured images when a user writes;
acquiring a plurality of actual measurement transformation functions corresponding to the plurality of actual measurement images according to the image information of the plurality of actual measurement images;
and acquiring a plurality of pen point paper coordinates according to the plurality of actually measured transformation functions and the pen point image coordinates, and acquiring the handwriting of the user according to the plurality of pen point paper coordinates.
8. The method of claim 7, wherein obtaining a plurality of measured transformation functions corresponding to the plurality of measured images from image information of the plurality of measured images comprises:
analyzing each image information of a plurality of measured images to obtain image coordinates and paper surface coordinates of at least 4 points in each image;
and calculating a plurality of actual measurement transformation functions corresponding to the plurality of actual measurement images transformed from the image coordinates to the paper coordinates according to the image coordinates and the paper coordinates of the at least 4 points.
9. The method of claim 7, wherein obtaining a plurality of measured transformation functions corresponding to the plurality of measured images comprises obtaining a perspective transformation matrix of 3 x 3 corresponding to the plurality of measured images;
obtaining a plurality of pen point paper coordinates according to the plurality of measured transformation functions and the pen point image coordinates comprises:
and the pen point paper surface coordinates are equal to the pen point image coordinates multiplied by the 3 x 3 perspective transformation matrix corresponding to the measured image and multiplied by a corresponding proportionality coefficient, and the proportionality coefficient is the reciprocal of a homogeneous term of a result obtained by multiplying the pen point image coordinates by the corresponding perspective transformation matrix.
10. A pen tip position solving apparatus comprising:
the device comprises an acquisition unit, a display unit and a control unit, wherein the acquisition unit is used for acquiring a first image shot when a user clicks a first point in a paper surface through a pen point, the paper surface coordinate of the first point is known, and the first point clicked by the pen point is not in the shot first image;
a processing unit, configured to obtain a first transformation function according to image information on the first image, including:
the processing unit is specifically used for carrying out image analysis on the first image to obtain image coordinates and paper surface coordinates of at least 4 points in the image;
calculating a first transformation function for transforming the image coordinates to the paper coordinates according to the image coordinates and the paper coordinates of the at least 4 points, wherein the processing unit is specifically used for acquiring a first perspective transformation matrix of 3 x 3;
a calculating unit, configured to calculate image coordinates of the first point according to the paper coordinates of the first point and a first transformation function, and obtain pen point image coordinates according to the image coordinates of the first point, where the calculating unit includes:
the calculating unit is configured to invert the perspective transformation matrix to obtain the perspective transformation inverse matrix; the image coordinate of the first point is equal to the product of the first perspective transformation inverse matrix, the paper surface coordinate of the first point, and a first scale coefficient, wherein the first scale coefficient is the reciprocal of a homogeneous term of a result obtained by multiplying the first perspective transformation inverse matrix by the paper surface coordinate of the first point;
and the handwriting unit is used for solving the paper surface coordinate of the pen point according to the image coordinate of the pen point and the first transformation function.
11. The apparatus of claim 10, wherein the apparatus further comprises:
and the correction unit is used for correcting the pen point image coordinates by clicking other preset points with known paper surface coordinates on the paper surface.
12. The apparatus according to claim 11, wherein the correction unit corrects the pen-tip image coordinates, and specifically comprises:
the method comprises the following steps: when a user clicks an Nth point in a paper surface through a pen point, acquiring a shot Nth image, wherein the paper surface coordinate of the Nth point is known;
step two: acquiring an Nth transformation function according to the image information on the Nth image;
step three: calculating the image coordinate of the Nth point according to the paper surface coordinate of the Nth point and an Nth transformation function, wherein N is an integer greater than or equal to 2;
when the position of the pen point moves, repeating the first step to the third step to calculate M image coordinates of the position of the pen point, wherein M is a positive integer;
and correcting pen point image coordinates according to the first point and the M image coordinates.
13. The apparatus of claim 12, wherein obtaining an nth transform function from image information on the nth image comprises:
carrying out image analysis on the Nth image to obtain image coordinates and paper surface coordinates of at least 4 points in the image;
and calculating an Nth transformation function for transforming the image coordinates to the paper coordinates according to the image coordinates and the paper coordinates of the at least 4 points.
14. The apparatus of claim 13, wherein obtaining an Nth transformation function comprises obtaining a 3 x 3 Nth perspective transformation matrix;
calculating the image coordinates of the Nth point according to the paper surface coordinates of the Nth point and the Nth transformation function comprises:
inverting the Nth perspective transformation matrix to obtain an Nth inverse perspective transformation matrix;
and the image coordinates of the Nth point are equal to the product of the Nth inverse perspective transformation matrix, the paper surface coordinates of the Nth point and an Nth scale coefficient, wherein the Nth scale coefficient is the reciprocal of the homogeneous term of the result obtained by multiplying the Nth inverse perspective transformation matrix by the paper surface coordinates of the Nth point.
15. The device of claim 13, wherein correcting pen tip image coordinates as a function of image coordinates of a first point and the M image coordinates comprises:
deleting outliers from the M image coordinates and the image coordinates of the first point to obtain X image coordinates, and taking the average of the X image coordinates as the pen tip image coordinates;
or taking the average of the image coordinates of the first point and the M image coordinates as the pen tip image coordinates.
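A possible realization of the correction in claim 15; the claim does not fix an outlier test, so the per-axis z-score rule below is only an assumption of this sketch:

```python
import numpy as np

def correct_pen_tip(first_xy, measured_xys, z_thresh=2.0):
    """Drop abnormal values from the first point's image coordinates plus the
    M re-measured ones, then average the X coordinates that remain."""
    pts = np.vstack([first_xy, measured_xys]).astype(float)   # M + 1 rows
    mean, std = pts.mean(axis=0), pts.std(axis=0) + 1e-9      # avoid /0
    keep = np.all(np.abs(pts - mean) <= z_thresh * std, axis=1)
    return pts[keep].mean(axis=0)   # average of the X retained coordinates
```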
16. The apparatus according to any one of claims 10-15, wherein the handwriting unit is further configured to: obtain a plurality of measured images shot while the user is writing; obtain a plurality of measured transformation functions corresponding to the plurality of measured images according to the image information of the plurality of measured images; obtain a plurality of pen tip paper surface coordinates according to the plurality of measured transformation functions and the pen tip image coordinates; and obtain the handwriting of the user according to the plurality of pen tip paper surface coordinates.
17. The apparatus of claim 16, wherein obtaining a plurality of measured transformation functions corresponding to the plurality of measured images from image information of the plurality of measured images comprises:
performing image analysis on each of the plurality of measured images to obtain the image coordinates and paper surface coordinates of at least 4 points in each image;
and calculating, according to the image coordinates and paper surface coordinates of the at least 4 points, the plurality of measured transformation functions, corresponding to the plurality of measured images, that transform image coordinates into paper surface coordinates.
18. The apparatus of claim 16, wherein obtaining a plurality of measured transformation functions for the plurality of measured images comprises obtaining a 3 x 3 perspective transformation matrix for each of the plurality of measured images;
obtaining a plurality of pen tip paper surface coordinates according to the plurality of measured transformation functions and the pen tip image coordinates comprises:
the pen tip paper surface coordinates are equal to the pen tip image coordinates multiplied by the 3 x 3 perspective transformation matrix corresponding to each measured image and by the corresponding scale coefficient;
and the scale coefficient is the reciprocal of the homogeneous term of the result obtained by multiplying the pen tip image coordinates by the corresponding perspective transformation matrix.
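Finally, a sketch of the handwriting recovery of claims 16-18, again using the hypothetical decode_dots helper and the functions defined earlier; each measured frame contributes one paper surface coordinate of the pen tip:

```python
def recover_handwriting(frames, pen_image_xy):
    """For each measured image shot while writing, estimate its perspective
    matrix from the decoded dots and map the fixed pen tip image coordinate
    to paper surface coordinates; the sequence traces the user's handwriting."""
    trace = []
    for frame in frames:
        image_pts, paper_pts = decode_dots(frame)  # hypothetical dot decoder
        H = estimate_perspective_matrix(image_pts, paper_pts)
        trace.append(image_to_paper(H, pen_image_xy))
    return trace
```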
19. A computer-readable storage medium, on which a computer program is stored, which when executed by a processor implements the pen tip position solution method of any one of claims 1-9.
CN201611188735.7A 2016-12-21 2016-12-21 Method, apparatus and computer-readable storage medium for solving pen tip position Active CN107066919B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201611188735.7A CN107066919B (en) 2016-12-21 2016-12-21 Method, apparatus and computer-readable storage medium for solving pen tip position

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201611188735.7A CN107066919B (en) 2016-12-21 2016-12-21 Method, apparatus and computer-readable storage medium for solving pen tip position

Publications (2)

Publication Number Publication Date
CN107066919A CN107066919A (en) 2017-08-18
CN107066919B true CN107066919B (en) 2020-09-29

Family

ID=59619225

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201611188735.7A Active CN107066919B (en) 2016-12-21 2016-12-21 Method, apparatus and computer-readable storage medium for solving pen tip position

Country Status (1)

Country Link
CN (1) CN107066919B (en)

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN1881235A (en) * 2005-06-15 2006-12-20 富士施乐株式会社 Electronic document management system, image forming device, method of managing electronic document, and program
CN102135821A (en) * 2011-03-08 2011-07-27 中国科学技术大学 Handwriting pen and graphic restoration system
JP2014006579A (en) * 2012-06-21 2014-01-16 Dainippon Printing Co Ltd Electronic pen system and program
CN104656880A (en) * 2013-11-21 2015-05-27 深圳先进技术研究院 Writing system and method based on smart glasses

Family Cites Families (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US7203384B2 (en) * 2003-02-24 2007-04-10 Electronic Scripting Products, Inc. Implement for optically inferring information from a planar jotting surface
US20050111735A1 (en) * 2003-11-21 2005-05-26 International Business Machines Corporation Video based handwriting recognition system and method
JP4647515B2 (en) * 2006-02-20 2011-03-09 株式会社リコー Coordinate detection device, writing instrument, and coordinate input system
CN101093543B (en) * 2007-06-13 2010-05-26 中兴通讯股份有限公司 Method for correcting image in 2D code of quick response matrix
CN101799996B (en) * 2010-03-11 2013-04-10 南昌航空大学 Click-reading method of click-reading machine based on video image
CN101847209B (en) * 2010-06-01 2012-06-06 福建新大陆电脑股份有限公司 Character image correction method
CN202472687U (en) * 2011-11-04 2012-10-03 刘建生 Multifunctional digital pen
CN103605974B (en) * 2013-11-15 2017-10-17 刘建生 Coordinate location method, multimedia and handwriting data acquisition methods based on Quick Response Code

Also Published As

Publication number Publication date
CN107066919A (en) 2017-08-18

Similar Documents

Publication Publication Date Title
US11842438B2 (en) Method and terminal device for determining occluded area of virtual object
CN108961303B (en) Image processing method and device, electronic equipment and computer readable medium
EP3713220A1 (en) Video image processing method and apparatus, and terminal
CN107610146B (en) Image scene segmentation method and device, electronic equipment and computer storage medium
JP6201379B2 (en) Position calculation system, position calculation program, and position calculation method
CN111127422A (en) Image annotation method, device, system and host
CN107464266B (en) Bearing calibration, device, equipment and the storage medium of camera calibration parameter
US20220319050A1 (en) Calibration method and apparatus, processor, electronic device, and storage medium
CN112967381B (en) Three-dimensional reconstruction method, apparatus and medium
US10586099B2 (en) Information processing apparatus for tracking processing
CN111583280B (en) Image processing method, device, equipment and computer readable storage medium
CN113516697B (en) Image registration method, device, electronic equipment and computer readable storage medium
CN113298870B (en) Object posture tracking method and device, terminal equipment and storage medium
CN108053464B (en) Particle special effect processing method and device
CN110297677B (en) Drawing method, drawing device, drawing equipment and storage medium
CN113034582A (en) Pose optimization device and method, electronic device and computer readable storage medium
CN108876704B (en) Method and device for deforming human face image and computer storage medium
CN107066919B (en) Method, apparatus and computer-readable storage medium for solving pen tip position
CN110956131A (en) Single-target tracking method, device and system
CN107622498B (en) Image crossing processing method and device based on scene segmentation and computing equipment
CN109426775B (en) Method, device and equipment for detecting reticulate patterns in face image
CN116012242A (en) Camera distortion correction effect evaluation method, device, medium and equipment
CN110197228B (en) Image correction method and device
JP2022064506A (en) Image processing device, image processing method, and program
CN111429399A (en) Straight line detection method and device

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant