CN113393480B - Method for projecting notes in real time based on book positions - Google Patents

Method for projecting notes in real time based on book positions

Info

Publication number
CN113393480B
CN113393480B
Authority
CN
China
Prior art keywords
point
book
coordinates
projector
camera
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202110645802.8A
Other languages
Chinese (zh)
Other versions
CN113393480A (en)
Inventor
杨俊曦
陈安
刘丞
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
South China University of Technology SCUT
Original Assignee
South China University of Technology SCUT
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by South China University of Technology SCUT filed Critical South China University of Technology SCUT
Priority to CN202110645802.8A priority Critical patent/CN113393480B/en
Publication of CN113393480A publication Critical patent/CN113393480A/en
Application granted granted Critical
Publication of CN113393480B publication Critical patent/CN113393480B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/10Segmentation; Edge detection
    • G06T7/13Edge detection
    • GPHYSICS
    • G03PHOTOGRAPHY; CINEMATOGRAPHY; ANALOGOUS TECHNIQUES USING WAVES OTHER THAN OPTICAL WAVES; ELECTROGRAPHY; HOLOGRAPHY
    • G03BAPPARATUS OR ARRANGEMENTS FOR TAKING PHOTOGRAPHS OR FOR PROJECTING OR VIEWING THEM; APPARATUS OR ARRANGEMENTS EMPLOYING ANALOGOUS TECHNIQUES USING WAVES OTHER THAN OPTICAL WAVES; ACCESSORIES THEREFOR
    • G03B29/00Combinations of cameras, projectors or photographic printing apparatus with non-photographic non-optical apparatus, e.g. clocks or weapons; Cameras having the shape of other objects
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T3/00Geometric image transformations in the plane of the image
    • G06T3/40Scaling of whole images or parts thereof, e.g. expanding or contracting
    • G06T3/4007Scaling of whole images or parts thereof, e.g. expanding or contracting based on interpolation, e.g. bilinear interpolation
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/10Segmentation; Edge detection
    • G06T7/181Segmentation; Edge detection involving edge growing; involving edge linking
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/80Analysis of captured images to determine intrinsic or extrinsic camera parameters, i.e. camera calibration

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Image Processing (AREA)

Abstract

The invention discloses a method for projecting notes in real time based on book positions. It employs a recognition system with a camera, a projector and an image-processing recognition module, and comprises the following steps: step one, acquiring the relationship between projector and camera coordinates; step two, reading the book coordinates through functions in the CV2 library; step three, judging whether the book outline has been extracted; if so, proceeding to the next step, otherwise returning to step two; and step four, carrying out the actual projection according to the projection coordinates. The invention can project notes in real time to assist students in learning. Because the book outline is extracted by edge detection, inaccurate image recognition caused by ambient light is largely avoided, and the method is more universal than color-based recognition. The book edges are completed from the known book shape, preventing inaccurate recognition caused by hand occlusion.

Description

Method for projecting notes in real time based on book positions
Technical Field
The invention relates to the technical field of projection display, in particular to a method for projecting notes in real time based on book positions.
Background
At present, online education receives more and more attention, but traditional online education only lets students acquire knowledge one-way from a screen. In such teaching scenes the learning effect is often poor, and students may become distracted or only listen without taking notes. A new mode can be adopted for online education: by projecting notes directly onto the book, students copy the notes by hand while reading them on the book, which deepens the learning impression. This involves how to locate the book, how to convert the book position captured by the camera into projection coordinates, and by what transformation the note pictures are finally projected onto the book. Meanwhile, to improve the robustness of the technique, interference of hand motion with book localization must also be considered. Solving these problems is an important precondition for realizing novel interactive teaching equipment.
In an existing patent, information is acquired through a camera, and after computation the projector outputs specific data to form an interaction. However, that patent does not address how to project accurately onto the position acquired by the camera, and does not explicitly handle the positional relationship between the camera and the projector; it therefore differs from the present application. Through perspective transformation, the present method can adapt to in-plane stretching and deformation of the book and still project to essentially the correct position, but if the book deforms out of plane in the three-dimensional direction, it is difficult to project the notes to the correct position.
Disclosure of Invention
To achieve this technical purpose, the invention involves image localization, coordinate conversion and projection, and provides the functions of reading book coordinates, processing the coordinates, removing hand interference, converting the coordinates into projection coordinates, and projecting the notes.
The invention is realized by at least one of the following technical schemes.
A method for projecting notes in real time based on book positions adopts an identification system with a camera, a projector and an image processing identification module, and comprises the following steps:
step one, acquiring the relationship between the projector and camera coordinates by using the projection relationship;
step two, reading book coordinates through functions in the CV2 library;
step three, extracting the book outline, judging whether the book outline is extracted or not, if so, carrying out the next step, otherwise, returning to the step two;
and step four, carrying out actual projection according to the projection coordinate and the perspective transformation matrix.
Preferably, the obtaining of the relationship between the coordinates of the projector and the camera includes the following steps:
(1) Placing two markers on a base, wherein the two markers are respectively marked as a first marker and a second marker;
(2) Acquiring an image with a first marker and a second marker through a camera;
(3) Making a black picture that just fills the projector screen according to the projector's projection size, marking a red point on it at position coordinates (α0, β0), and sending the picture to the projector for projection; the black picture is named src;
(4) Marking the first marker in the image acquired by the camera and obtaining its pixel coordinates (x11, y11); if the red point of the black picture src does not fall on the surface of the first marker, generating a new red point at a shifted position, sending the black picture with the new red point coordinates to the projector for projection, and repeating step (4) until the red point falls on the surface of the first marker; at that moment, reading the pixel coordinates of the red point in the black picture src and recording them as (x21, y21);
(5) Acquiring the pixel coordinates (x12, y12) of the second marker from the camera picture, repeating steps (3) to (4) with the projector projecting the picture until the red point falls on the second marker, obtaining the pixel coordinates of the red point in the black picture src, and recording them as (x22, y22);
(6) The expansion coefficients k1 and k2 between the image read by the camera and the projected image in the X and Y directions, and the offsets b1 and b2 between the camera optical-axis center point and the projector projection-axis center point in the X and Y directions, are obtained from the following equations, denoted formula one:
x11=k1*x21+b1
x12=k1*x22+b1
y11=k2*y21+b2
y12=k2*y22+b2
The coordinates of a pixel point shot by the camera are recorded as (x1, y1) and the corresponding coordinates projected by the projector as (x2, y2); the relationship between the two points is recorded as formula two:
x1=k1*x2+b1
y1=k2*y2+b2
The relationship between the projector and camera coordinates is thus obtained through steps (1) to (6) and applied to the subsequent coordinate transformation.
Preferably, reading book coordinates comprises the steps of:
21) Placing the book on the base, and adjusting the field of view of the camera so that the field of view is entirely the base;
22) Placing a book above the base within the visual field of the camera, and actually measuring the width w and height h of the book;
23) Reading a book picture through the camera, and compressing the obtained book picture;
24) Converting the compressed image into a grayscale image;
25) Gaussian-blurring the grayscale image;
26) Performing edge extraction on the Gaussian-blurred image;
27) Using the approxPolyDP function in the CV2 library, performing polygon fitting on the edges, which include hands and books; after fitting, curves become straight-line polygons, yielding a set M of the coordinates of each polygon vertex.
Preferably, the step of judging whether the book outline is extracted comprises the following steps:
31) Recording the corner coordinates of the upper left and upper right of the book as the upper-left point and upper-right point. These are obtained from the three points with the largest absolute y coordinate in the set M obtained in step 27): the distance between each pair of these points is calculated, and the two points whose distance best matches the book width w are defined as P1 (x1, y1) and P2 (x2, y2);
32) Comparing x1 and x2: if x1 > x2, P1 is the upper-right point and P2 is the upper-left point; otherwise P1 is the upper-left point and P2 is the upper-right point;
33) Traversing the remaining points in the set M and taking as the corner point at the bottom right of the page a point with the following features:
feature a: the difference between its x coordinate and the x coordinate of the upper-right point is smaller than an allowable range;
feature b: its distance from the upper-right point is within the range of the page width w;
According to the above method, there are two cases: a point P3 is obtained and defined as the lower-right point; or no point meets the conditions and the lower-right point is defined as null;
The lower-left point is sought by the same method, and again two situations occur: a point P4 is obtained and defined as the lower-left point; or no point meets the conditions and the lower-left point is defined as null. In total there are four cases:
case one: only the lower-left point is found, and the lower-right point is null;
case two: only the lower-right point is found, and the lower-left point is null;
case three: both the lower-left and lower-right points are found;
case four: neither the lower-left nor the lower-right point is found; both are null.
34) In case one, where only the lower-left point is found and the lower-right point is null, the lower-left point P4 (x4, y4) is translated to obtain (x4+w, y4), which is recorded as the lower-right point;
in case two, where only the lower-right point is found and the lower-left point is null, the lower-right point P3 (x3, y3) is translated to obtain (x3-w, y3), which is recorded as the lower-left point;
in case three, the lower-left and lower-right points are obtained directly;
in case four, no book is recognized in the image;
35) In cases one to three, the coordinates of the upper-left, upper-right, lower-left and lower-right points are obtained from the image or by calculation, and the book is considered recognized; in case four, the book is considered not recognized;
36) If the book is recognized, step four is performed; otherwise, the method returns to step two.
Preferably, carrying out the actual projection according to the projection coordinates and the perspective transformation matrix comprises the following steps:
(41) According to the requirements of formula two, taking the upper-left point obtained in step three as the coordinates of a pixel point shot by the camera, and substituting it into formula two to obtain the actual coordinates projected by the projector, recorded as (x1, y1);
(42) Taking the coordinates of the upper-right, lower-left and lower-right points obtained in step three as coordinates of pixel points shot by the camera, and substituting them into formula two to obtain the coordinates projected by the projector, recorded as (x2, y2), (x3, y3) and (x4, y4);
(43) Cutting a note picture containing the note into a picture with the same width-to-height ratio as the book;
(44) Using getPerspectiveTransform in the CV2 library, inputting the four points (0,0), (0,h), (w,0), (w,h) of the note picture and the four points obtained in steps (41) and (42) into the getPerspectiveTransform function, and calculating the coordinate mapping matrix M between the note picture and the projection picture;
(45) According to the coordinate mapping matrix M, converting the note picture with warpPerspective in the CV2 library into a picture src2 with the same size as the actual book and the same actual position relative to the projector;
(46) Sending the picture src2 obtained in step (45) to the projector for projection.
Preferably, the CV2 image processing library is used to convert the compressed image into a gray-scale image.
Preferably, edge extraction is performed on the Gaussian-blurred image by using the Canny operator, whose thresholds are set as an upper threshold and a lower threshold.
Preferably, the upper threshold is equal to three times the lower threshold.
Preferably, the image size is compressed by using a bilinear interpolation method.
Preferably, the base is made of black material.
Compared with the prior art, the invention has the following beneficial effects:
(1) The method covers the whole process from calibrating the camera and projector to realizing all functions, and is fully feasible to implement. In addition, starting from the underlying principles, the method makes approximations that do not affect the result, simplifying both the implementation steps and the conditions required for implementation, and it can project notes in real time to assist students in learning;
(2) The book outline is extracted through edge detection, so that the inaccuracy of image recognition caused by ambient light can be well avoided, and the universality is wider compared with color recognition;
(3) The edges of the books are supplemented through the shapes of the books, so that inaccurate identification caused by hand shielding is prevented;
(4) The projection position correction is only related to the positions of the camera and the projector, and the correction can be performed only once after the projection position correction is fixed, so that the use is convenient and the universality is considered;
(5) The problem of projecting notes from the computer side onto a physical object is solved.
Drawings
Fig. 1 is a flowchart of a method for projecting notes in real time based on book positions according to an embodiment of the present invention.
Detailed Description
The practice of the present invention will be further illustrated, but is not limited, by the following figures and examples. The purpose of the drawings is to supplement the description of the written part of the specification with figures so that a person can intuitively and visually understand each and every technical feature and the whole technical solution of the present invention, but it should not be construed as limiting the scope of the present invention.
In the description of the present invention, it should be understood that the orientation or positional relationship referred to in the description of the orientation, such as the upper, lower, front, rear, left, right, etc., is based on the orientation or positional relationship shown in the drawings, and is only for convenience of description and simplification of description, and thus, it should not be construed as limiting the present invention.
In the description of the present invention, "several" means one or more and "a plurality" means two or more; terms such as "greater than", "less than" and "exceeding" are understood as excluding the stated number, while "above", "below" and "within" are understood as including the stated number. If "first" and "second" are used, they serve only to distinguish technical features and are not to be understood as indicating or implying relative importance, the number of the indicated technical features, or the precedence of the indicated technical features.
As shown in fig. 1, a method for projecting notes in real time based on book positions in this example employs an identification system having a camera, a projector, and an image processing and identification module, and includes the following steps:
Step one: obtaining the relationship between the projector and camera coordinates, which specifically comprises the following steps:
(1) Placing two small white chess pieces with a diameter of about 5 mm on a base, respectively marked as a first chess piece 1 and a second chess piece 2. The two chess pieces must not lie on the same horizontal line, and they are spaced about 30 cm apart in the transverse direction and 21 cm in the longitudinal direction, so that the parameter values obtained in the subsequent correction step are more accurate;
(2) Acquiring an image with a first chess piece 1 and a second chess piece 2 through a camera;
(3) Making a black picture that just fills the projector screen according to the projector's projection size, with a red dot at an arbitrary position (α0, β0), and sending the picture to the projector; the picture is named src.
(4) The first chess piece 1 is marked in the image acquired by the camera and its pixel coordinates (x11, y11) are obtained. If the red point of the picture src is not on the surface of chess piece 1, for example if the red point is to the left of chess piece 1, then the next red point is generated shifted to the right; a black picture with the new red point coordinates is regenerated and sent to the projector for projection. This step is repeated until the red point falls on the surface of the first chess piece, at which moment the pixel coordinates of the red point in the picture src are read and recorded as (x21, y21);
(5) Similarly, repeating steps (3) and (4): the pixel coordinates (x12, y12) of the second chess piece 2 are acquired from the camera picture, the picture src is projected by the projector and the position of the red point in src is changed until the red point falls on the second chess piece 2; the pixel coordinates of the red point in the picture src are then read and recorded as (x22, y22);
(6) The expansion coefficients k1 and k2 between the image read by the camera and the projected image in the X and Y directions, and the offsets b1 and b2 between the camera optical-axis center point and the projector projection-axis center point in the X and Y directions, are obtained from the following equations, denoted formula one:
x11=k1*x21+b1
x12=k1*x22+b1
y11=k2*y21+b2
y12=k2*y22+b2
(7) The correction step is ended;
(8) The relationship between the projector and camera coordinates is determined by the imaging process and by the relative position of the camera and projector; once the positions of the projector and camera are fixed, the relationship between the two coordinate systems can in principle be calculated from the various parameters. However, computing this relationship from the actual positions of the projector and camera is too complicated, so the transformation between the two sets of coordinates is instead obtained by practical correction. From the principles of camera imaging and projector projection, the image taken by the camera is centered on the camera's optical axis and the projected note is centered on the projector's projection axis, so the offsets between the two sets of points in the X and Y directions are denoted b1 and b2. When the projector and camera are fixed, the sides of the imaging rectangle and of the projection rectangle are respectively parallel, so the rotational offset between the two sets of points can be neglected. Finally, since imaging and projection are two different processes, the projection size and image size are usually inconsistent, and the read image and the projected image are stretched relative to each other in the X and Y directions; the stretch factors are denoted k1 and k2 respectively. With the relationship established on this principle, the relationship between shooting and projection is obtained: the coordinates of a pixel point shot by the camera are recorded as (x1, y1), the coordinates projected by the projector (i.e., the actual coordinates) as (x2, y2), and the relationship between the two points, recorded as formula two, is:
x1=k1*x2+b1
y1=k2*y2+b2
Therefore, once k1, k2, b1 and b2 have been obtained by performing steps (1) to (7), the relationship between the projector and camera coordinates is known and is applied in the subsequent coordinate transformations.
As an example, the parameters k1, k2, b1 and b2 are obtained using a fixed device comprising a projector, a camera and a base; according to the positional relationship between the projector and the camera, the method of step one yields the relationship parameters k1=-61/26, b1=18760/13, k2=-192/85 and b2=76471/85 between the camera and the projector.
The position of a white chess piece is marked in the camera, giving pixel coordinates (100, 10) in the camera picture. Substituting into formula two, X = k1*100 + b1 and Y = k2*10 + b2, gives X = 1208 and Y = 877, so the red point projected by the projector has coordinates (1208, 877).
The red point is then found to coincide with the white chess piece. In this way, the pixel coordinates of any position within the camera image can be converted into projection coordinates; an image with a red point at those projection coordinates is generated and sent to the projector, and the projector projects the red point onto the actual position.
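As a minimal sketch of this calibration step, the four linear equations of formula one can be solved directly for k1, b1, k2 and b2 from the two chess-piece correspondences. The helper names below are assumptions rather than anything given in the patent, and cam_to_proj follows the mapping direction used in the worked example above (projector coordinate = k * camera coordinate + offset):

    def solve_calibration(cam1, cam2, proj1, proj2):
        """Solve formula one from the camera pixel coordinates of both
        markers and the red-dot coordinates in the projected picture src."""
        x11, y11 = cam1
        x12, y12 = cam2
        x21, y21 = proj1
        x22, y22 = proj2
        k1 = (x11 - x12) / (x21 - x22)   # subtract x11=k1*x21+b1 and x12=k1*x22+b1
        b1 = x11 - k1 * x21
        k2 = (y11 - y12) / (y21 - y22)
        b2 = y11 - k2 * y21
        return k1, k2, b1, b2

    def cam_to_proj(point, k1, k2, b1, b2):
        """Map a camera pixel to projector coordinates, as in the worked example."""
        x, y = point
        return round(k1 * x + b1), round(k2 * y + b2)

    # Reproducing the worked example with the parameters found above:
    k1, k2, b1, b2 = -61/26, -192/85, 18760/13, 76471/85
    print(cam_to_proj((100, 10), k1, k2, b1, b2))   # -> (1208, 877)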
Step two: reading book coordinates;
(21) A black base is selected, and the visual field of the camera is adjusted so that the field of view is entirely the black base;
(22) A book is placed above the base within the visual field of the camera, and its width w and height h are obtained by actual measurement. The base in steps (21) and (22) differs in color from the book: the strong color contrast between the black base and the book allows the outer contour of the book to be recognized. In addition, having only the base in the field of view isolates interference from the surrounding environment and improves robustness to the environment.
(23) A book picture is read through the camera, and its image size is compressed by bilinear interpolation in order to improve the calculation speed.
(24) The resulting image is converted to grayscale using the CV2 image processing library.
(25) The grayscale image is Gaussian-blurred. After repeated experiments, a larger convolution kernel was chosen for the Gaussian blur; in this example a 9×9 kernel is used. A large kernel is feasible here because only the edges of the book are extracted and the page content is not processed further. Its advantage is that it preserves the image edges well while eliminating camera noise that would otherwise make the edges inaccurate.
(26) Edge extraction is performed on the Gaussian-blurred image with the Canny operator. The Canny operator requires an upper and a lower threshold that determine its sensitivity to edges. In this example, manual experiments showed the extracted edges to be best with a lower threshold of 26; the upper threshold is set equal to three times the lower threshold.
(27) Using the approxPolyDP function in the CV2 library, polygon fitting is applied to the edges, which include hands and books, with a fitting coefficient of 0.01 × peri (peri being the perimeter of the edge). After fitting, curves become straight-line polygons, yielding a set M of the coordinates of each polygon vertex.
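A minimal sketch of steps (23) to (27) with the CV2 (OpenCV) Python library is given below; the kernel size, thresholds and fitting coefficient follow the values above, while the function name, the compression factor and the use of findContours to turn the edge map into point sequences are assumptions:

    import cv2

    def extract_polygons(frame, scale=0.5):
        """Steps (23)-(27): compress, grayscale, blur, Canny, polygon fit."""
        small = cv2.resize(frame, None, fx=scale, fy=scale,
                           interpolation=cv2.INTER_LINEAR)        # bilinear compression
        gray = cv2.cvtColor(small, cv2.COLOR_BGR2GRAY)            # step (24)
        blurred = cv2.GaussianBlur(gray, (9, 9), 0)               # step (25): 9x9 kernel
        edges = cv2.Canny(blurred, 26, 78)                        # step (26): lower 26, upper 3*26
        contours, _ = cv2.findContours(edges, cv2.RETR_EXTERNAL,
                                       cv2.CHAIN_APPROX_SIMPLE)
        polygons = []
        for c in contours:
            peri = cv2.arcLength(c, True)
            approx = cv2.approxPolyDP(c, 0.01 * peri, True)       # step (27)
            polygons.append(approx.reshape(-1, 2))                # vertex set M
        return polygons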
(28) The coordinates of the upper-left and upper-right corners of the book are recorded as the upper-left point and upper-right point. They are obtained from the three points with the largest y coordinate value in the set M obtained in step (27): the distance between each pair of these points is calculated, and the two points whose distance best matches the book width w are defined as P1 (x1, y1) and P2 (x2, y2).
(29) The two points P1 and P2 obtained above are compared by their x values: if x1 is larger than x2, P1 is the upper-right point and P2 is the upper-left point; otherwise P1 is the upper-left point and P2 is the upper-right point.
(210) The remaining points in M are traversed, and a point with the following features is taken as the corner point at the bottom right of the page: the difference of its x value from that of the upper-right point is less than an allowable range (a length of 15 pixels is used); and its distance from the upper-right point is within the known width of the page.
According to the above method, there are two cases: a point P3 is obtained and defined as the lower-right point; or no point meets the conditions and the lower-right point is defined as null.
The lower-left point is sought by the same method, and again two situations occur: a point P4 is obtained and defined as the lower-left point; or no such point is found and the lower-left point is defined as null. In total there are four cases:
case one: only the lower-left point is found, and the lower-right point is null;
case two: only the lower-right point is found, and the lower-left point is null;
case three: both the lower-left and lower-right points are found;
case four: neither the lower-left nor the lower-right point is found; both are null.
(211) In case one, where only the lower-left point is found and the lower-right point is null, the lower-left point P4 (x4, y4) is translated to obtain (x4+w, y4), which is recorded as the lower-right point;
in case two, where only the lower-right point is found and the lower-left point is null, the lower-right point P3 (x3, y3) is translated to obtain (x3-w, y3), which is recorded as the lower-left point;
in case three, the lower-left and lower-right points are obtained directly;
in case four, no book is recognized in the image;
(212) In cases one to three, the coordinates of the upper-left, upper-right, lower-left and lower-right points can be obtained from the image or by calculation, and the book is considered recognized; in case four, the book is considered not recognized. If the book is recognized, step three is performed; otherwise, the method returns to step (23) of step two.
As an example, the coordinates of the four points of the book were extracted in a laboratory environment; the above method successfully tracks the exact position of the book after it is moved, and recognition of the book is not affected when a hand is placed on it. A sketch of the corner-completion logic follows.
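The corner-completion logic of steps (210) to (212) can be sketched as follows; the function and parameter names are assumptions, and feature b is implemented as written above, comparing the corner distance against the page width w:

    def complete_lower_corners(upper_left, upper_right, rest, w, tol=15):
        """Steps (210)-(211): find or reconstruct the two lower corners."""
        def find_below(anchor):
            ax, ay = anchor
            for x, y in rest:
                dist = ((x - ax) ** 2 + (y - ay) ** 2) ** 0.5
                if abs(x - ax) < tol and abs(dist - w) < tol:  # features a and b
                    return (x, y)
            return None                                        # corner is null

        lower_right = find_below(upper_right)
        lower_left = find_below(upper_left)
        if lower_left is None and lower_right is None:
            return None                                        # case four: no book recognized
        if lower_right is None:                                # case one: translate by +w
            lower_right = (lower_left[0] + w, lower_left[1])
        if lower_left is None:                                 # case two: translate by -w
            lower_left = (lower_right[0] - w, lower_right[1])
        return lower_left, lower_right                         # case three passes through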
Step three, carrying out actual projection according to the projection coordinates;
(31) According to the requirements of formula two, the upper-left point obtained in step two is taken as the coordinates of a pixel point shot by the camera and substituted into formula two to obtain the coordinates projected by the projector (i.e., the actual coordinates), recorded as (x1, y1).
(32) Similarly, the coordinates of the upper-right, lower-left and lower-right points obtained in step two are taken as coordinates of pixel points shot by the camera and substituted into formula two to obtain the coordinates projected by the projector (i.e., actual coordinates), recorded as (x2, y2), (x3, y3) and (x4, y4).
(33) A picture with a written note is taken, the width-to-height ratio of the note picture being the same as that of the book, w to h.
(34) A perspective transformation matrix is obtained using the getPerspectiveTransform function in the CV2 library: the four points (0,0), (0,h), (w,0), (w,h) of the note picture and the four points such as (x1, y1) obtained in steps (31) and (32) are passed to the getPerspectiveTransform function, and the coordinate mapping matrix M between the note picture and the projection picture is calculated.
(35) Using the coordinate mapping matrix M from the previous step, the note picture is converted with the warpPerspective function in the CV2 library into a picture src2 that has the same size as the actual book and the same actual position relative to the projector.
(36) The converted picture src2 is sent to the projector for projection, completing the whole method of projecting notes in real time based on the book position.
As an example, a blank note picture with the same width and height as the book (w=440, h=220) is defined, and a note is written on the picture with drawing software. Through step two, the camera obtains the coordinates of the upper-left, upper-right, lower-left and lower-right points of the book as (100, 100), (100, 200), (300, 100) and (300, 200) respectively. Substituting these four points into formula two gives (1208, 1172), (1208, 2281), (3612, 1172) and (3612, 2281) respectively. Then (0,0), (0,220), (440,0), (440,220) and (1208, 1172), (1208, 2281), (3612, 1172), (3612, 2281) are substituted into the getPerspectiveTransform function to calculate the projected picture, which is given to the projector for projection; the position of the note is then found to coincide with the position of the book. Notes drawn in the drawing software are then continuously read and projected through these steps, so notes can be projected in real time based on the book position.
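A minimal sketch of this projection step with OpenCV follows; the note file name and the projector canvas size are assumptions (the canvas must simply be large enough to contain the mapped corners):

    import cv2
    import numpy as np

    w, h = 440, 220                                  # note picture size matching the book
    note = cv2.imread("note.png")                    # hypothetical note drawn in software
    note = cv2.resize(note, (w, h))

    # Note-picture corners and their projector-coordinate targets, taken
    # from the worked example above.
    src_pts = np.float32([[0, 0], [0, h], [w, 0], [w, h]])
    dst_pts = np.float32([[1208, 1172], [1208, 2281], [3612, 1172], [3612, 2281]])

    M = cv2.getPerspectiveTransform(src_pts, dst_pts)    # coordinate mapping matrix M
    canvas = (4096, 2400)                                # assumed projector canvas size
    src2 = cv2.warpPerspective(note, M, canvas)          # picture sent to the projector
    cv2.imwrite("src2.png", src2)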
Example two:
For reading the book coordinates in step two, the following method can be adopted to extract the book outline:
1) A picture containing the pages of a complete book is taken with a camera.
2) 5 to 7 points with representative colors in the page range of the book are extracted from the picture by Photoshop, the color RGB values of the points are read, and the RGB range of the page color of the book is determined according to the RGB values.
3) According to the RGB range from the previous step, the corresponding color range is extracted from the image acquired by the camera using the inRange function in the CV2 library, yielding a binarized image.
4) Gaussian blur is applied to the binarized picture.
5) Step (26) is then performed, and the subsequent steps are the same.
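A sketch of this color-based variant is given below; the bounds are placeholders standing in for the RGB range sampled in Photoshop, not values from the patent, and note that OpenCV stores channels in BGR order:

    import cv2
    import numpy as np

    frame = cv2.imread("capture.png")            # hypothetical camera capture

    # Placeholder page-color bounds in BGR order; in practice these come
    # from the 5 to 7 points sampled in Photoshop.
    lower = np.array([180, 180, 180])
    upper = np.array([255, 255, 255])

    mask = cv2.inRange(frame, lower, upper)      # binarized page region
    blurred = cv2.GaussianBlur(mask, (9, 9), 0)  # then continue with step (26): Canny, etc.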
Example three:
for reading the book coordinates in the second step, the following method can be adopted for calculating the coordinates of four points of the book:
1) The polygon obtained in step (27) is examined: if the number of its corner points equals 4 (i.e., the extracted polygon is a quadrangle), the book outline is considered extracted, and the coordinates of the four corner points are recorded as (x1, y1), (x2, y2), (x3, y3) and (x4, y4).
2) The points (x1, y1), (x2, y2), (x3, y3) and (x4, y4) obtained above are substituted into step (32), and the subsequent steps are the same.
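This quadrangle check can be sketched in a few lines on top of the extract_polygons helper assumed earlier:

    def quad_corners(polygon):
        """Example three: accept the fitted polygon only if it is a
        quadrangle, returning its four corners for step (32)."""
        if len(polygon) == 4:
            return [tuple(p) for p in polygon]
        return None   # outline not extracted; fall back to the main method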
Example four:
For acquiring the relationship between the projector and camera coordinates in step one, the following changes are made:
(1) A square sheet of hard paper with a side length of 10 cm is placed on the base; hereinafter it is referred to as the paper sheet, and it serves as an accurate reference for the subsequent calibration.
(2) And acquiring an image with a square through a camera.
(3) A black picture that just fills the projector screen is generated according to the projector's projection size, with a red square at an arbitrary position; it is referred to below as the red square. The center point of the red square is located at (α0, β0); camera distortion is taken into account, so its width and height are not necessarily equal and are recorded as W0 and H0. The picture is sent to the projector for projection and is named src.
(4) The center point of the paper sheet is marked in the image acquired by the camera and its pixel coordinates (x11, y11) are obtained; in this image the paper sheet has width W and height H.
(5) If the center point of the red square in the picture src does not coincide with the center point of the paper sheet, for example if the projected red square's center is to the left of the paper sheet's center, then the coordinates of the red square's center in the next regenerated black picture are shifted to the right and the picture is projected. This step is repeated until the two center points coincide, at which moment the pixel coordinates of the red square's center in the picture src are read and recorded as (x21, y21);
(6) After the center points of the red square and the paper sheet coincide, the width of the red square in the generated black picture is adjusted and projected; this step is repeated until the red square and the paper sheet have the same width, at which moment the pixel width of the red square in the picture src is read and recorded as W1;
(7) Likewise, the height of the red square in the generated black picture is adjusted and projected; this step is repeated until the red square and the paper sheet have the same height, at which moment the pixel height of the red square in the picture src is read and recorded as H1;
(8) The expansion coefficients k1 and k2 between the image read by the camera and the projected image in the X and Y directions, and the offsets b1 and b2 between the camera optical-axis center point and the projector projection-axis center point in the X and Y directions, are obtained from the following equations, denoted formula one:
b1=x11-x21
b2=y11-y21
k1=W/W1
k2=H/H1
(9) k1, k2, b1 and b2 are thus obtained, and the calibration step is finished.
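A sketch of this square-based calibration follows, under the same assumptions about helper naming as before:

    def calibrate_with_square(cam_center, proj_center, W, H, W1, H1):
        """Example four: offsets from the matched center points, scale
        factors from the matched width and height of the red square."""
        x11, y11 = cam_center    # paper-sheet center in the camera image
        x21, y21 = proj_center   # red-square center in the projected picture src
        b1 = x11 - x21
        b2 = y11 - y21
        k1 = W / W1              # paper width in camera pixels / square width in src
        k2 = H / H1
        return k1, k2, b1, b2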
The preferred embodiments of the invention disclosed above are intended to be illustrative only. The preferred embodiments are not intended to be exhaustive or to limit the invention to the precise embodiments disclosed. Obviously, many modifications and variations are possible in light of the above teaching. The embodiments were chosen and described in order to best explain the principles of the invention and the practical application, to thereby enable others skilled in the art to best utilize the invention. The invention is limited only by the claims and their full scope and equivalents.

Claims (9)

1. A method for projecting notes in real time based on book positions adopts an identification system with a camera, a projector and an image processing identification module, and is characterized by comprising the following steps:
step one, acquiring the relationship between the coordinates of a projector and the coordinates of a camera by using a projection relationship, comprising the following steps:
(1) Placing two markers on a base, wherein the two markers are respectively marked as a first marker and a second marker;
(2) Acquiring an image with a first marker and a second marker through a camera;
(3) Making a black picture that just fills the projector screen according to the projector's projection size, marking a red point on it at position coordinates (α0, β0), and sending the picture to the projector for projection; the black picture is named src;
(4) Marking the first marker in the image acquired by the camera and obtaining its pixel coordinates (x11, y11); if the red point of the black picture src does not fall on the surface of the first marker, generating a new red point at a shifted position, sending the black picture with the new red point coordinates to the projector for projection, and repeating step (4) until the red point falls on the surface of the first marker; at that moment, reading the pixel coordinates of the red point in the black picture src and recording them as (x21, y21);
(5) Acquiring the pixel coordinates (x12, y12) of the second marker from the camera picture, repeating steps (3) to (4) with the projector projecting the picture until the red point falls on the second marker, obtaining the pixel coordinates of the red point in the black picture src, and recording them as (x22, y22);
(6) The expansion coefficients k1 and k2 between the image read by the camera and the projected image in the X and Y directions, and the offsets b1 and b2 between the camera optical-axis center point and the projector projection-axis center point in the X and Y directions, are obtained from the following equations, denoted formula one:
x11=k1*x21+b1
x12=k1*x22+b1
y11=k2*y21+b2
y12=k2*y22+b2
The coordinates of a pixel point shot by the camera are recorded as (x1, y1) and the corresponding coordinates projected by the projector as (x2, y2); the relationship between the two points is recorded as formula two:
x1=k1*x2+b1
y1=k2*y2+b2
obtaining the relationship between the projector and camera coordinates through steps (1) to (6), and applying it to the subsequent coordinate transformation;
step two, reading book coordinates through functions in the CV2 library;
step three, extracting the book outline, judging whether the book outline is extracted or not, if so, carrying out the next step, and otherwise, returning to the step two;
and step four, carrying out actual projection according to the projection coordinate and the perspective transformation matrix.
2. The method of projecting notes in real time based on book position of claim 1, wherein reading book coordinates, comprises the steps of:
21) Placing the book on the base, and adjusting the field of view of the camera so that the field of view is entirely the base;
22) Placing a book above the base within the visual field of the camera, and actually measuring the width w and height h of the book;
23) Reading the book picture through the camera, and compressing the obtained book picture;
24) Converting the compressed image into a grayscale image;
25) Gaussian-blurring the grayscale image;
26) Performing edge extraction on the Gaussian-blurred image;
27) Using the approxPolyDP function in the CV2 library, performing polygon fitting on the edges, which include hands and books; after fitting, curves become straight-line polygons, yielding a set M of the coordinates of each polygon vertex.
3. The method of claim 2, wherein determining whether to extract the book outline comprises:
31) Recording the corner coordinates of the upper left and upper right of the book as the upper-left point and upper-right point. These are obtained from the three points with the largest absolute y coordinate in the set M obtained in step 27): the distance between each pair of these points is calculated, and the two points whose distance best matches the book width w are defined as P1 (x1, y1) and P2 (x2, y2);
32) Comparing x1 and x2: if x1 > x2, P1 is the upper-right point and P2 is the upper-left point; otherwise P1 is the upper-left point and P2 is the upper-right point;
33) Traversing the remaining points in the set M and taking as the corner point at the bottom right of the page a point with the following features:
feature a: the difference between its x coordinate and the x coordinate of the upper-right point is smaller than an allowable range;
feature b: its distance from the upper-right point is within the range of the page width w;
According to the above method, there are two cases: a point P3 is obtained and defined as the lower-right point; or no point meeting the conditions of steps 31) to 33) is obtained, and the lower-right point is defined as null;
The lower-left point is sought by the same method, and again two situations occur: a point P4 is obtained and defined as the lower-left point; or no such point is obtained and the lower-left point is defined as null. In total there are only four cases:
case one: only the lower-left point is found, and the lower-right point is null;
case two: only the lower-right point is found, and the lower-left point is null;
case three: both the lower-left and lower-right points are found;
case four: neither the lower-left nor the lower-right point is found; both are null;
34) In case one, where only the lower-left point is found and the lower-right point is null, the lower-left point P4 (x4, y4) is translated to obtain (x4+w, y4), which is recorded as the lower-right point;
in case two, where only the lower-right point is found and the lower-left point is null, the lower-right point P3 (x3, y3) is translated to obtain (x3-w, y3), which is recorded as the lower-left point;
in case three, the lower-left and lower-right points are obtained directly;
in case four, no book is recognized in the image;
35) In cases one to three, the coordinates of the upper-left, upper-right, lower-left and lower-right points are obtained from the image or by calculation, and the book is considered recognized; in case four, the book is considered not recognized;
36) If the book is recognized, step four is performed; otherwise, the method returns to step two.
4. The method of projecting notes in real time based on book position of claim 3, wherein carrying out the actual projection according to the projection coordinates and the perspective transformation matrix comprises the steps of:
(41) According to the requirements of formula two, taking the upper-left point obtained in step three as the coordinates of a pixel point shot by the camera, and substituting it into formula two to obtain the actual coordinates projected by the projector, recorded as (x1, y1);
(42) Taking the coordinates of the upper-right, lower-left and lower-right points obtained in step three as coordinates of pixel points shot by the camera, and substituting them into formula two to obtain the coordinates projected by the projector, recorded as (x2, y2), (x3, y3) and (x4, y4);
(43) Cutting a note picture containing the note into a picture with the same width-to-height ratio as the book;
(44) Using getPerspectiveTransform in the CV2 library, putting the four points (0,0), (0,h), (w,0), (w,h) of the note picture and the four points obtained in steps (41) and (42) into the getPerspectiveTransform function, and calculating the coordinate mapping matrix M between the note picture and the projection picture;
(45) According to the coordinate mapping matrix M, converting the note picture with warpPerspective in the CV2 library into a picture src2 with the same size as the actual book and the same actual position relative to the projector;
(46) Sending the picture src2 obtained in step (45) to the projector for projection.
5. The method for projecting notes in real time based on book positions as claimed in claim 4, wherein the compressed image is converted into a gray scale image using a CV2 image processing library.
6. The method of claim 5, wherein edge extraction is performed on the Gaussian-blurred image by using the Canny operator, whose thresholds are set as an upper threshold and a lower threshold.
7. The method of claim 6, wherein the upper threshold is equal to three times the lower threshold.
8. The method of claim 7, wherein the image size is compressed by bilinear interpolation.
9. The method of claim 7, wherein the base is a black material base.
CN202110645802.8A 2021-06-09 2021-06-09 Method for projecting notes in real time based on book positions Active CN113393480B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110645802.8A CN113393480B (en) 2021-06-09 2021-06-09 Method for projecting notes in real time based on book positions

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202110645802.8A CN113393480B (en) 2021-06-09 2021-06-09 Method for projecting notes in real time based on book positions

Publications (2)

Publication Number Publication Date
CN113393480A CN113393480A (en) 2021-09-14
CN113393480B true CN113393480B (en) 2023-01-06

Family

ID=77620149

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110645802.8A Active CN113393480B (en) 2021-06-09 2021-06-09 Method for projecting notes in real time based on book positions

Country Status (1)

Country Link
CN (1) CN113393480B (en)

Citations (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111625151A (en) * 2020-06-02 2020-09-04 吕嘉昳 Method and system for accurately identifying contact position in deformation projection based on touch method

Family Cites Families (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104575120B (en) * 2015-01-09 2017-02-22 代四广 Display system for aided teaching
CN108874187A (en) * 2018-06-06 2018-11-23 哈尔滨工业大学 A kind of projector Notes System
CN109241244A (en) * 2018-08-31 2019-01-18 广东小天才科技有限公司 A kind of exchange method, intelligent apparatus and system for assisting user to solve the problems, such as
CN109254663B (en) * 2018-09-07 2021-04-09 许昌特博特科技有限公司 Using method of auxiliary reading robot for books of children
CN109493288B (en) * 2018-10-23 2021-12-07 安徽慧视金瞳科技有限公司 Light spot self-adaptive mapping method for interactive classroom teaching system
CN110781734B (en) * 2019-09-18 2023-04-07 长安大学 Child cognitive game system based on paper-pen interaction
CN112614190B (en) * 2020-12-14 2023-06-06 北京淳中科技股份有限公司 Method and device for projecting mapping

Patent Citations (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111625151A (en) * 2020-06-02 2020-09-04 吕嘉昳 Method and system for accurately identifying contact position in deformation projection based on touch method

Also Published As

Publication number Publication date
CN113393480A (en) 2021-09-14

Similar Documents

Publication Publication Date Title
US10198661B2 (en) System for determining alignment of a user-marked document and method thereof
US7965904B2 (en) Position and orientation measuring apparatus and position and orientation measuring method, mixed-reality system, and computer program
CN111243032A (en) Full-automatic checkerboard angular point detection method
CN112132907B (en) Camera calibration method and device, electronic equipment and storage medium
CN111401266B (en) Method, equipment, computer equipment and readable storage medium for positioning picture corner points
JP6188052B2 (en) Information system and server
CN113688846B (en) Object size recognition method, readable storage medium, and object size recognition system
CN114283434B (en) Answer sheet identification method based on machine vision
CN115170525A (en) Image difference detection method and device
CN113393480B (en) Method for projecting notes in real time based on book positions
US11544875B2 (en) Image processing apparatus, image processing method, and storage medium
CN116110069A (en) Answer sheet identification method and device based on coding mark points and relevant medium thereof
CN116125489A (en) Indoor object three-dimensional detection method, computer equipment and storage medium
KR101766787B1 (en) Image correction method using deep-learning analysis bassed on gpu-unit
CN104933430A (en) Interactive image processing method and interactive image processing system for mobile terminal
CN115586796A (en) Vision-based unmanned aerial vehicle landing position processing method, device and equipment
CN114550176A (en) Examination paper correcting method based on deep learning
CN114241486A (en) Method for improving accuracy rate of identifying student information of test paper
CN110443847B (en) Automatic vending machine holder positioning detection method based on camera
JPH07146937A (en) Pattern matching method
KR101957925B1 (en) Braille trainning apparatus and braille translation method using it
CN113112546B (en) Space target detection identification and pose tracking method based on three-X combined marker
JP2005227929A (en) Processing method for photography image of object, image display system, program and recording medium
Raveendran Effective auto grading with webcam
US20210390325A1 (en) System for determining alignment of a user-marked document and method thereof

Legal Events

Code Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant