WO2022193559A1 - Projection correction method and apparatus, storage medium, and electronic device - Google Patents

Projection correction method and apparatus, storage medium, and electronic device

Info

Publication number
WO2022193559A1
Authority
WO
WIPO (PCT)
Prior art keywords
image
projector
target feature
feature point
coordinates
Prior art date
Application number
PCT/CN2021/115160
Other languages
French (fr)
Chinese (zh)
Inventor
孙世攀
张聪
胡震宇
Original Assignee
深圳市火乐科技发展有限公司
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 深圳市火乐科技发展有限公司 filed Critical 深圳市火乐科技发展有限公司
Publication of WO2022193559A1 publication Critical patent/WO2022193559A1/en

Classifications

    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N9/00Details of colour television systems
    • H04N9/12Picture reproducers
    • H04N9/31Projection devices for colour picture display, e.g. using electronic spatial light modulators [ESLM]

Definitions

  • the present disclosure relates to the field of projection technology, and in particular, to a projection correction method, an apparatus, a storage medium, and an electronic device.
  • a projector is a device that projects an image onto a wall or projection screen by optical projection.
  • the projector needs to be facing the wall to ensure that the projected picture is a normal rectangle. Once the projector is placed improperly, the projected picture will be distorted.
  • binocular correction requires two cameras, or a distance sensor working together with a camera; for a projector, the additional camera or distance sensor increases the hardware cost and introduces more calibration parameters.
  • the present disclosure provides a projection correction method, device, storage medium and electronic device.
  • the present disclosure provides a projection correction method, the method comprising:
  • For each target feature point, determining depth information of the target feature point in the shooting space of the camera according to a mapping relationship pre-calibrated for the target feature point and the camera coordinates of the target feature point on the captured image, so as to obtain three-dimensional coordinates of the target feature point in the projection space of the projector, wherein the mapping relationship is the association between the depth information of the target feature point calibrated at different depths and the offset of its camera coordinates;
  • the projector is controlled to project according to the two-dimensional vertex coordinates of the corrected original image.
  • the present disclosure provides a projection correction device, the device comprising:
  • a response module configured to control the projector to project a preset image to the projection plane in response to the received correction instruction
  • a photographing module configured to photograph the preset image projected by the projector through the camera of the projector to obtain a photographed image
  • a target feature point determination module configured to identify the target feature point of the preset image in the captured image
  • a three-dimensional coordinate determination module, configured to, for each of the target feature points, determine the depth information of the target feature point in the shooting space of the camera according to the mapping relationship pre-calibrated for the target feature point and the camera coordinates of the target feature point on the captured image, so as to obtain the three-dimensional coordinates of the target feature point in the projection space of the projector, wherein the mapping relationship is the association between the depth information of the target feature point calibrated at different depths and the offset of its camera coordinates;
  • a normal vector module configured to determine the normal vector of the projection plane relative to the projector according to the three-dimensional coordinates of each of the target feature points
  • a correction module configured to correct the two-dimensional vertex coordinates of the original image of the projector according to the normal vector of the projection plane and the current pose information of the projector, so as to obtain the two-dimensional vertex coordinates of the corrected original image;
  • a projection module configured to control the projector to project according to the two-dimensional vertex coordinates of the corrected original image.
  • the present disclosure provides a computer-readable storage medium on which a computer program is stored, and when the program is executed by a processing device, implements the steps of the method provided in the first aspect.
  • the present disclosure provides an electronic device, comprising:
  • a processor configured to execute the computer program in the memory, to implement the steps of the method provided in the first aspect above.
  • In response to the received correction instruction, a preset image projected by the projector is captured by a camera to obtain a captured image; target feature points are determined in the captured image, and for each target feature point, the depth information of the target feature point in the shooting space of the camera is determined according to the mapping relationship pre-calibrated for that feature point and its camera coordinates on the captured image, so as to obtain the three-dimensional coordinates of the target feature point;
  • the normal vector of the projection plane is then determined from the three-dimensional coordinates of all target feature points, and the two-dimensional vertex coordinates of the original image of the projector are corrected according to the normal vector and the current pose information of the projector;
  • in this way, projection keystone correction can be achieved with a single camera, which not only reduces the number of required devices but also allows the depth information of the target feature points to be calculated quickly through the pre-calibrated mapping relationship, reducing the complexity of computing the three-dimensional coordinates of the target feature points and further improving correction efficiency.
  • FIG. 1 is a flowchart of a projection correction method according to an exemplary embodiment
  • FIG. 2 is a schematic flowchart of identifying target feature points according to an exemplary embodiment
  • FIG. 3 is a schematic diagram of mathematical modeling of a mapping relationship shown according to an exemplary embodiment
  • FIG. 4 is another flowchart of a projection correction method according to an exemplary embodiment
  • FIG. 5 is a schematic diagram showing the principle of calculating three-dimensional imaging vertex coordinates of a standard image according to an exemplary embodiment
  • FIG. 6 is a schematic diagram of the principle of vector decomposition according to an exemplary embodiment
  • FIG. 7 is a block diagram of a projection correction apparatus according to an exemplary embodiment
  • Fig. 8 is a block diagram of an electronic device according to an exemplary embodiment.
  • Fig. 1 is a flowchart of a projection correction method according to an exemplary embodiment.
  • the projection correction method can be applied to electronic equipment such as projectors. As shown in FIG. 1 , the projection correction method includes the following steps:
  • the preset image refers to an image projected onto the projection plane, and generally, the preset image in this embodiment may adopt a checkerboard image.
  • the projection plane refers to an area, such as a wall or a curtain, used to display the output image of the projector. It should be understood that when the projector is perpendicular to the wall or curtain, the preset image appears as a standard rectangular image and the checkerboard image is not distorted; when the projector is not perpendicular to the wall or curtain, the projected image is no longer rectangular and the checkerboard image is distorted.
  • the correction instruction may be triggered automatically or manually. When triggered automatically, the projector issues the correction instruction itself; when it is not triggered automatically, the user can press a button on a controller that communicates with the projector, and the button triggers the controller to send the correction instruction to the projector.
  • the button can be a virtual button or a physical button, which is not limited in this embodiment.
  • the preset image projected by the projector is captured by the camera of the projector to obtain a captured image.
  • the preset image is photographed by the camera to obtain the captured image, so that the wall or curtain onto which the projector projects can be modeled from the captured image to obtain three-dimensional information of the wall or curtain.
  • the target feature points are feature points set on the preset image and used for modeling the wall or the curtain; the form and quantity of the feature points can be set according to the actual situation.
  • the target feature points in the preset image refer to intersections between black and white grids in the preset image.
  • the mapping relationship is the association between the depth information of the target feature point calibrated at different depths and the offset of its camera coordinates.
  • given the camera coordinates of the target feature point, its depth can therefore be calculated from the mapping relationship.
  • the depth information refers to the depth of the target feature point of the preset image projected on the projection plane relative to the camera.
  • for example, the preset images projected by the projector are photographed at depths of 1.2 m and 1.9 m respectively, and the camera coordinates of the target feature points at 1.2 m and at 1.9 m are determined, so that the relationship between the depth information of the same target feature point and its camera coordinates can be calculated.
  • the three-dimensional coordinates of the target feature point in space can then be determined according to the camera coordinates and the depth information.
  • the depth information is the Z-axis coordinate in the three-dimensional coordinates of the target feature point.
  • the X-axis and Y-axis coordinates of the three-dimensional coordinates of the target feature point are obtained by scaling the camera coordinates of the target feature point on the captured image in combination with the depth information of the target feature point.
  • the specific principle is pinhole imaging: the preset image is projected onto the projection plane, and the X-axis and Y-axis coordinates of a target feature point displayed on the projection plane are obtained by converting the X-axis and Y-axis camera coordinates of that feature point using its depth information.
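  • For illustration, a minimal sketch of this pinhole back-projection is given below in Python; the intrinsic parameters fx, fy, cx, cy are assumed quantities from a separate camera calibration and are not named in the original text.
```python
import numpy as np

def backproject_point(cam_xy, depth, fx, fy, cx, cy):
    """Recover a 3D point from its pixel coordinates and its depth (pinhole model).

    cam_xy         : (u, v) camera coordinates of the target feature point on the captured image
    depth          : Z-axis distance of the point from the camera
    fx, fy, cx, cy : camera intrinsics (focal lengths and principal point), assumed known
    """
    u, v = cam_xy
    x = (u - cx) / fx * depth  # scale the normalized X coordinate by the depth
    y = (v - cy) / fy * depth  # scale the normalized Y coordinate by the depth
    return np.array([x, y, depth])
```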
  • S105 Determine the normal vector of the projection plane relative to the projector according to the three-dimensional coordinates of each of the target feature points.
  • the obtained three-dimensional coordinates of all target feature points can be fitted, so that the projection plane is modeled from the three-dimensional coordinates of the target feature points to obtain a fitted plane; the fitted plane reflects the three-dimensional information of the projection plane, from which the normal vector of the projection plane relative to the projector is obtained.
  • the fitting may be fitting using the least squares method.
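  • As an illustrative sketch (not the patent's exact numerical procedure), a least-squares plane fit and the resulting unit normal can be computed as follows, here using a singular-value decomposition of the centered points:
```python
import numpy as np

def fit_plane_normal(points_3d):
    """Fit a plane to the 3D target feature points and return its unit normal.

    points_3d : (N, 3) array of three-dimensional target feature point coordinates.
    Uses a total-least-squares fit: the singular vector associated with the
    smallest singular value of the centered points is the plane normal.
    """
    pts = np.asarray(points_3d, dtype=float)
    centroid = pts.mean(axis=0)
    _, _, vt = np.linalg.svd(pts - centroid)
    normal = vt[-1]
    return normal / np.linalg.norm(normal)
```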
  • S106 correct the two-dimensional vertex coordinates of the original image of the projector according to the normal vector of the projection plane and the current pose information of the projector to obtain the two-dimensional vertex coordinates of the corrected original image.
  • the normal vector of the projection plane refers to a vector perpendicular to the projection plane
  • the current pose information of the projector refers to the attitude in which the projector is currently placed
  • the current pose information can be obtained through an attitude sensor (IMU).
  • the original image refers to the original output image of the projector.
  • the original image is a rectangular image, such as an image with a width of w and a height of h.
  • the original rectangular image projected on the projection plane will appear as an irregular quadrilateral, such as a convex quadrilateral.
  • the two-dimensional vertex coordinates refer to the four vertex coordinates of the original image. Correcting the two-dimensional vertex coordinates of the original image may be in the form of digital adjustment, which only changes the vertex coordinates of the original image output by the projector without changing the lens structure of the projector.
  • the projector is controlled to project according to the two-dimensional vertex coordinates of the corrected original image, so that the original image of the projector appears as a rectangle on the projection plane.
  • in this way, projection keystone correction can be realized with a single camera, which not only reduces the number of required devices, but also allows the depth information of the target feature points to be calculated quickly through the pre-calibrated mapping relationship, reducing the complexity of computing the three-dimensional coordinates of the target feature points and further improving correction efficiency.
  • the preset image is a checkerboard image
  • during actual shooting, the camera is affected by environmental factors such as exposure, so that the black-and-white boundaries of the checkerboard in the captured image may be unclear, and the accuracy of the detected feature points depends on the image quality.
  • the feature points determined by a conventional corner detection algorithm alone may therefore be inaccurate, which in turn makes the parameters later derived from those feature points inaccurate and degrades the accuracy of the correction. Therefore, the initial feature points detected by the corner detection algorithm are optimized, and the following describes in detail how the target feature points of the preset image are identified in S103.
  • FIG. 2 is a schematic flowchart of identifying target feature points according to an exemplary embodiment. As shown in FIG. 2, in an achievable embodiment, in S103, identifying the target feature points of the preset image, including:
  • a corner detection algorithm is first used to determine initial feature points in the preset image; the initial feature points within the same preset area range are then fitted to straight lines in the vertical direction and in the horizontal direction respectively, and the intersection between any fitted straight line in the vertical direction and any fitted straight line in the horizontal direction is taken as a target feature point.
  • the same preset area range is a range that can be determined in advance.
  • the straight line fitting can be realized by the least square method.
  • each black-white intersection of the projected image is formed by the crossing of two straight lines, and the numbers of horizontal lines and vertical lines are known.
  • in the ideal case, the numbers of straight lines in the horizontal direction and in the vertical direction obtained by fitting are the same as these known numbers, and the number of feature points is likewise the same.
  • the corner detection algorithm may be, for example, the findChessboardCorners algorithm of OpenCV.
  • the initial feature points detected by the corner point detection algorithm are optimized, the accuracy of the target feature points of the preset image is ensured, and the correction accuracy is improved.
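  • A minimal sketch of this refinement is shown below in Python with OpenCV; the checkerboard pattern size is an assumed example value, and cv2.fitLine is used here as one possible least-squares line fit.
```python
import cv2
import numpy as np

def refine_checkerboard_points(gray, pattern_size=(9, 6)):
    """Detect checkerboard corners, fit one line per row and per column, and
    return the line intersections as the refined target feature points.

    pattern_size : assumed (cols, rows) of inner corners of the projected checkerboard.
    """
    ok, corners = cv2.findChessboardCorners(gray, pattern_size)
    if not ok:
        return None
    cols, rows = pattern_size
    pts = corners.reshape(rows, cols, 2)

    # Fit each row (horizontal) and column (vertical) of initial corners to a
    # straight line, expressed homogeneously as (a, b, c) with a*x + b*y + c = 0.
    def fit_line(p):
        vx, vy, x0, y0 = cv2.fitLine(p.astype(np.float32), cv2.DIST_L2, 0, 0.01, 0.01).ravel()
        return np.array([vy, -vx, vx * y0 - vy * x0])

    h_lines = [fit_line(pts[r, :, :]) for r in range(rows)]
    v_lines = [fit_line(pts[:, c, :]) for c in range(cols)]

    # The refined feature point is the intersection of each horizontal line with
    # each vertical line (cross product of the homogeneous line vectors).
    refined = np.zeros((rows, cols, 2))
    for r, hl in enumerate(h_lines):
        for c, vl in enumerate(v_lines):
            p = np.cross(hl, vl)
            refined[r, c] = p[:2] / p[2]
    return refined
```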
  • determining the depth information of the target feature point in the shooting space of the camera according to the mapping relationship pre-calibrated for the target feature point and the camera coordinates of the target feature point on the captured image includes:
  • for each of the target feature points, calculating the depth information of the target feature point in the shooting space of the camera according to the mapping relationship pre-calibrated for the target feature point and the camera coordinates of the target feature point on the captured image, wherein the mapping relationship is:
  • h is the depth information of the target feature point
  • p 1 is the first preset calibration parameter of the target feature point
  • p 2 is the second preset calibration parameter of the target feature point
  • X is the camera coordinate of the target feature point , wherein the first preset calibration parameter and the second preset calibration parameter are both constants.
  • the camera coordinates are substituted into the above calculation formula, and then the depth information of the target feature point can be calculated.
  • p 1 and p 2 are constants.
  • X can be the abscissa or the ordinate of the camera coordinates; the abscissa and the ordinate are treated symmetrically in the calculation, so the above formula applies to either the abscissa or the ordinate of the camera coordinates.
  • the depth information of the target feature point can be quickly calculated, thereby calculating the three-dimensional coordinates of the target feature point in space.
  • the first preset calibration parameter and the second preset calibration parameter of the target feature point are obtained in the following manner:
  • when the projector is at a first distance from the projection plane, a preset image is projected onto the projection plane, and the preset image is photographed by the camera of the projector to obtain a first image;
  • when the projector is at a second distance from the projection plane, a preset image is projected onto the projection plane, and the preset image is photographed by the camera of the projector to obtain a second image;
  • first camera coordinates and first depth information of the target feature point are obtained from the first image, and second camera coordinates and second depth information are obtained from the second image;
  • the first preset calibration parameter and the second preset calibration parameter are obtained by substituting the first camera coordinates and the first depth information, and the second camera coordinates and the second depth information, into the mapping relationship, wherein the mapping relationship is:
  • h is the depth coordinate of the target feature point
  • X is the camera coordinate
  • p 1 is the first preset calibration parameter
  • p 2 is the second preset calibration parameter.
  • the process of calibrating the mapping relationship is actually projecting preset images at two different depths and using the camera to capture the preset images at those depths. For example, the projection light of the projector is set to be perpendicular to the projection plane (wall or curtain), the projector then projects preset images onto the projection plane at distances of 1.2 m and 1.9 m respectively, and the camera is used to shoot them to obtain the first image and the second image. Then, the camera coordinates and depth information of the target feature point in the first image are calculated, where the first distance is equivalent to the depth information of the target feature point in the first image.
  • the calculation process of the depth information of the target feature points of the second image and the camera coordinates is the same as that of the first image, and details are not repeated here. It is worth noting that the camera coordinate is a two-dimensional coordinate.
  • the values of the first preset calibration parameter and the second preset calibration parameter can be obtained by calculation.
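  • As an illustrative sketch: assuming the mapping takes the inverse-proportional form h = p1/(X + p2) (the published formula is not reproduced in this text, so this form is an assumption), the two calibration shots determine p1 and p2 as follows:
```python
def calibrate_mapping(x1, h1, x2, h2):
    """Solve the two calibration parameters of one target feature point from the
    two calibration shots (camera coordinate x1 at depth h1, x2 at depth h2).

    Assumed form of the mapping relationship: h = p1 / (X + p2).
    """
    p2 = (h2 * x2 - h1 * x1) / (h1 - h2)
    p1 = h1 * (x1 + p2)
    return p1, p2

def depth_from_mapping(x, p1, p2):
    """Apply the calibrated mapping to a camera coordinate X to get the depth h."""
    return p1 / (x + p2)
```
  • For example, with the 1.2 m and 1.9 m calibration depths mentioned above, calibrate_mapping(x_at_1_2m, 1.2, x_at_1_9m, 1.9) yields the per-point parameters, and depth_from_mapping then converts any later camera coordinate into a depth.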
  • FIG. 3 is a schematic diagram of mathematical modeling of a mapping relationship according to an exemplary embodiment.
  • point H is the position of a certain target feature point at a first distance
  • point I is the target feature point at a second distance.
  • K point is the point where the target feature point at the second distance position is projected to the camera normalization plane
  • J point is the point where the target feature point at the first distance position is projected to the camera normalization plane
  • point L is an auxiliary point used to describe the depth of point H from the normalized plane of the camera
  • point Q is the point where the target feature point is projected on the normalized plane of the camera
  • point D is the origin of the camera
  • point A is the origin of the projector
  • point M is an auxiliary point used to describe the depth of point H from the camera origin plane
  • the N point is the position of the target feature point in the original image.
  • JQ/DN = HL/HM
  • h and X can be obtained from the above formula, where h is the depth information of the target feature point, p 1 is the first preset calibration parameter of the target feature point, and p 2 is the second preset calibration parameter of the target feature point parameter, X is the abscissa of the target feature point. Therefore, the values of the first preset calibration parameter and the second preset calibration parameter can be obtained by substituting the first camera coordinates and the first depth information, and the second camera coordinates and the second depth information into the above calculation formula.
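  • For intuition only, an illustrative similar-triangle reconstruction is sketched below; the baseline b and the offset x_p are assumed symbols not used in the original text, and the exact proportion depends on the construction of FIG. 3:
```latex
\frac{JQ}{DN} = \frac{HL}{HM}
\;\Longrightarrow\;
X = x_p + \frac{b}{h}
\;\Longrightarrow\;
h = \frac{p_1}{X + p_2},
\qquad p_1 \equiv b,\quad p_2 \equiv -x_p .
```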
  • FIG. 4 is another flowchart of a projection correction method according to an exemplary embodiment.
  • in S106, the two-dimensional vertex coordinates of the original image of the projector are corrected according to the normal vector of the projection plane and the current pose information of the projector to obtain the two-dimensional vertex coordinates of the corrected original image, which includes:
  • S1061, according to the normal vector of the projection plane and the current pose information of the projector, calculate the offset information of the projector, wherein the offset information includes a yaw angle, a pitch angle and a roll angle;
  • S1062, based on the offset information, calculate the two-dimensional vertex coordinates of the projected image projected by the projector onto the projection plane;
  • S1063, based on the two-dimensional vertex coordinates of the projected image and the two-dimensional vertex coordinates of the original image of the projector, establish a homography matrix relationship between the projected image and the original image;
  • S1064, select a target rectangle from the projected image of the projector, and determine the two-dimensional vertex coordinates of the target rectangle;
  • S1065, obtain the two-dimensional vertex coordinates of the corrected original image based on the two-dimensional vertex coordinates of the target rectangle and the homography matrix relationship.
  • the offset information of the projector is calculated according to the normal vector of the projection plane and the current pose information of the projector.
  • the roll angle can be obtained by an inertial sensor (Inertial Measurement Unit, IMU) set on the projector, that is, the IMU obtains the current pose information of the projector, and then calculates the roll angle according to the current pose information.
  • IMU Inertial Measurement Unit
  • the calculation method belongs to the prior art and is not described in detail here.
  • the yaw angle and the pitch angle can be calculated from the normal vector of the projection plane. Specifically, the normal vector of the projection plane is projected onto the plane in which the yaw angle lies, and the angle of that projection is the yaw angle; for example, if the XOZ plane is the plane in which the yaw angle lies, the angle between the projection and the X axis is calculated. Similarly, the normal vector of the projection plane is projected onto the plane in which the pitch angle lies, and the angle of that projection is the pitch angle.
  • in step S1062, calculating the vertex coordinates of the projected image based on the offset information means calculating the vertex coordinates of the original image of the projector as projected onto the wall or curtain, and these vertex coordinates are two-dimensional coordinates.
  • the two-dimensional vertex coordinates of the projected image can be calculated by the following steps:
  • the first preset calculation formula is used to obtain the measurement normal vector of the projected image relative to the projector, wherein the first preset calculation formula is:
  • X 1 is the X-axis coordinate of the measurement normal vector
  • Y 1 is the Y-axis coordinate of the measurement normal vector
  • Z 1 is the Z-axis coordinate of the measurement normal vector
  • H is the yaw angle
  • V is the pitch angle.
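  • Since the first preset calculation formula is not reproduced in this text, the following Python sketch assumes one common convention for rebuilding a unit normal from the yaw H and pitch V; the actual signs and axes may differ from the patent's formula:
```python
import math

def measurement_normal(yaw_h, pitch_v):
    """Rebuild a unit measurement normal vector from yaw H and pitch V (radians).

    Convention assumed here: yaw is measured in the XOZ plane from the Z axis,
    pitch is the elevation of the normal out of the XOZ plane.
    """
    x1 = math.sin(yaw_h) * math.cos(pitch_v)
    y1 = math.sin(pitch_v)
    z1 = math.cos(yaw_h) * math.cos(pitch_v)
    return (x1, y1, z1)
```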
  • based on the measurement normal vector and the coordinate information of a target point, the position information of the plane in which the projected image lies is determined, wherein the target point is a preset center point about which the projected image rotates.
  • that is, when the projected image rotates in yaw, pitch or roll, the coordinate information of the target point remains unchanged.
  • therefore, given the measurement normal vector and the target point, the position information of the plane in which the projected image lies can be determined.
  • based on the position information of the plane in which the projected image lies and the ray vectors, the three-dimensional vertex coordinates of the projected image are obtained, wherein a ray vector is the unit vector of the line between a vertex of the projected image projected by the projector and the optical center of the projector.
  • that is, the projector projects the projected image outward, and the lines connecting the four vertices of the projected image with the optical center can be determined. After the position information of the plane in which the projected image lies has been determined, the intersection of each ray vector with that plane can be found, and these intersection points are the coordinates of the four vertices of the projected image formed by the original image on the projection plane.
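  • A minimal ray-plane intersection sketch in Python (the variable names are illustrative, not taken from the original text):
```python
import numpy as np

def intersect_ray_with_plane(ray_dir, plane_normal, plane_point, origin=np.zeros(3)):
    """Intersect a ray from the projector's optical center with the plane of the projected image.

    ray_dir      : unit ray vector through one vertex of the projected image
    plane_normal : measurement normal vector of the plane of the projected image
    plane_point  : any point lying on that plane (e.g. the target point)
    origin       : optical center of the projector, taken as the coordinate origin here
    """
    denom = float(np.dot(plane_normal, ray_dir))
    if abs(denom) < 1e-9:
        raise ValueError("ray is parallel to the plane")
    t = float(np.dot(plane_normal, np.asarray(plane_point) - origin)) / denom
    return origin + t * np.asarray(ray_dir)  # 3D vertex of the projected image
```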
  • the ray vector can be calculated according to the roll angle, and the specific calculation method is as follows:
  • obtain the optical-mechanical parameters of the projector, wherein the optical-mechanical parameters include a rising angle of the projection light, a throw ratio and an aspect ratio;
  • based on the optical-mechanical parameters, obtain the three-dimensional imaging vertex coordinates of the standard image projected by the projector onto the projection plane under preset conditions, wherein the preset conditions are that the projector is placed horizontally, the projection light of the projector is perpendicular to the projection plane, and the projector is separated from the projection plane by a preset distance threshold;
  • the unit vector of the connecting line between the vertex of the standard image and the optical center of the projector is calculated, and the unit vector is used as the ray vector.
  • due to depth, the image projected by the projector changes only in scale while remaining geometrically similar.
  • under the preset conditions, the image projected onto the projection plane is a rectangle regardless of the depth. Therefore, when the projector projects onto the projection plane under the preset conditions, the three-dimensional imaging vertex coordinates of the standard image projected under the preset conditions can be calculated from the optical-mechanical parameters of the projector.
  • the rising angle refers to the rising angle of the projected light of the projector, and in general, the rising angle is related to the model of the projector.
  • Fig. 5 is a schematic diagram showing the principle of calculating the three-dimensional imaging vertex coordinates of a standard image according to an exemplary embodiment.
  • the standard image has four vertices, namely the first vertex 0, the second vertex 1, the third vertex 2, and the fourth vertex 3, wherein the first vertex is the vertex located in the upper right corner of the projected image, The second vertex is the vertex located at the upper left corner of the projected image, the third vertex is the vertex located at the lower right corner of the projected image, and the fourth vertex is the vertex located at the lower left corner of the projected image.
  • the preset distance threshold is defined as f
  • the throw ratio is throwRatio
  • w is the width of the projected image
  • h is the height of the projected image.
  • the three-dimensional imaging vertex coordinates of the first vertex 0 are:
  • the three-dimensional imaging vertex coordinates of the second vertex 1 are:
  • the three-dimensional imaging vertex coordinates of the third vertex 2 are:
  • the three-dimensional imaging vertex coordinates of the fourth vertex 3 are:
  • srcCoodinate[0][0] is the X-axis coordinate of the first vertex
  • f is the preset distance threshold
  • doffsetAngle is the rising angle
  • srcCoodinate[0][1] is the Y-axis coordinate of the first vertex, and srcCoodinate[1][0] is the X-axis coordinate of the second vertex
  • srcCoodinate[1][1] is the Y-axis coordinate of the second vertex
  • srcCoodinate[0][2] is the Z-axis coordinate of the first vertex
  • srcCoodinate[1][2] is the Z-axis coordinate of the second vertex
  • srcCoodinate[2][0] is the X-axis coordinate of the third vertex
  • srcCoodinate[2][1] is the Y-axis coordinate of the third vertex, and the remaining entries follow the same pattern for the third and fourth vertices.
  • the four ray vectors between the optical center of the projector and the four vertices can then be calculated from these coordinates.
  • the unit vector is the ray vector of a vertex divided by the modulus of that ray vector.
  • the ray vector is related to the opto-mechanical parameters of the projector, and the ray vector is unchanged under the condition that the opto-mechanical parameters of the projector do not change.
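  • The patent's exact vertex expressions are not reproduced above, so the following Python sketch assumes the usual optical model (picture width f/throwRatio at distance f, aspect ratio h/w, bottom edge lifted by the rising angle); treat the formulas as illustrative only:
```python
import math
import numpy as np

def standard_image_vertices(f, throw_ratio, w, h, doffset_angle):
    """Sketch of the 3D imaging vertices of the standard image under the preset
    conditions (projector level, projection light perpendicular to the plane,
    plane at the preset distance threshold f).

    Assumed model: picture width = f / throw_ratio, height = width * h / w,
    bottom edge lifted by f * tan(doffset_angle) because of the rising angle.
    """
    width = f / throw_ratio
    height = width * h / w
    y_bottom = f * math.tan(doffset_angle)
    y_top = y_bottom + height
    half = width / 2.0
    # Vertex order as in FIG. 5: 0 upper-right, 1 upper-left, 2 lower-right, 3 lower-left.
    return np.array([
        [ half, y_top,    f],  # first vertex 0
        [-half, y_top,    f],  # second vertex 1
        [ half, y_bottom, f],  # third vertex 2
        [-half, y_bottom, f],  # fourth vertex 3
    ])

def ray_vectors(vertices):
    """Unit ray vectors from the optical center (origin) to each standard-image vertex."""
    return vertices / np.linalg.norm(vertices, axis=1, keepdims=True)
```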
  • in step 1544, vector decomposition is performed on the three-dimensional imaging vertex coordinates of the projected image to obtain the two-dimensional imaging vertex coordinates of the projected image.
  • the specific method is to decompose each vector into a pair of basis vectors lying in the plane of the projected image: one basis vector is taken along the intersection line of the plane of the projected image and the horizontal plane and serves as the X axis of the coordinate system, and the other basis vector is perpendicular to it. The basis vectors can be calculated by the following formula:
  • horizonPlanN is the normal vector of the horizontal plane
  • the intersection-line vector is obtained as the cross product of the two normal vectors
  • rotatePlanN is the normal vector of the plane of the projected image
  • norm(cossLineU) is the modulus of the vector cossLineU.
  • Fig. 6 is a schematic diagram showing the principle of vector decomposition according to an exemplary embodiment.
  • the projection image has four vertices in total, G, I, J, and H.
  • a coordinate system is established with any one of the points G, I, J and H as the coordinate origin, so as to convert the three-dimensional imaging vertex coordinates into two-dimensional imaging vertex coordinates.
  • taking the coordinate system established with point H as the coordinate origin as an example, the process of calculating the two-dimensional imaging vertex coordinates by vector decomposition is described in detail; the following formula can be used to convert the three-dimensional imaging vertex coordinates of points G, I and J into two-dimensional imaging vertex coordinates.
  • x is the X-axis coordinate of the two-dimensional imaging vertex coordinates, and y is the Y-axis coordinate of the two-dimensional imaging vertex coordinates
  • vectorP(0) is the X-axis coordinate of the vector vectorP, and vectorP(1) is the Y-axis coordinate of the vector vectorP
  • the X-axis and Y-axis coordinates of the basis vectors enter the formula in the same way
  • point3D is the three-dimensional imaging vertex coordinate of the vertex to be solved, that is, any one of the points G, I and J
  • pointOrigin is the coordinate of point H
  • vectorP is one of the vector HG, the vector HJ and the vector HI.
  • for example, when solving for point G, point3D is the three-dimensional imaging vertex coordinates of point G and vectorP is the vector HG.
  • the three-dimensional imaging vertex coordinates of the projected image can be converted into the two-dimensional imaging vertex coordinates of the projected image.
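  • An illustrative Python sketch of this decomposition, assuming an orthonormal in-plane basis built from the two plane normals (the patent's exact formula is not reproduced here):
```python
import numpy as np

def plane_coordinates(vertices_3d, origin_idx, rotate_plan_n, horizon_plan_n=(0.0, 1.0, 0.0)):
    """Convert the 3D imaging vertices of the projected image into 2D in-plane coordinates.

    The in-plane X axis is taken along the intersection of the projected-image
    plane and the horizontal plane (cross product of the two plane normals);
    the in-plane Y axis is perpendicular to it within the image plane.
    """
    rotate_plan_n = np.asarray(rotate_plan_n, dtype=float)
    coss_line_u = np.cross(np.asarray(horizon_plan_n, dtype=float), rotate_plan_n)
    u = coss_line_u / np.linalg.norm(coss_line_u)   # in-plane X axis
    v = np.cross(rotate_plan_n, u)
    v = v / np.linalg.norm(v)                       # in-plane Y axis
    origin = np.asarray(vertices_3d[origin_idx], dtype=float)  # e.g. point H
    coords_2d = []
    for p in vertices_3d:
        d = np.asarray(p, dtype=float) - origin     # vectorP, e.g. the HG vector
        coords_2d.append([float(np.dot(d, u)), float(np.dot(d, v))])
    return np.array(coords_2d)
```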
  • in a possible implementation, the method further includes: when the current roll angle of the projector does not meet a preset threshold, correcting the X-axis coordinates and Y-axis coordinates of the three-dimensional imaging vertex coordinates of the standard image according to a second preset calculation formula, wherein the second preset calculation formula is:
  • ansP [i][x] is the corrected X-axis coordinate of the ith vertex of the standard image
  • ansP [i][y] is the corrected Y-axis of the ith vertex of the standard image Coordinates
  • anyP [i][x] is the X-axis coordinate of the ith vertex of the standard image before correction
  • anyP [i][y] is the Y-axis of the ith vertex of the standard image before correction coordinates
  • rotateP.x is the X-axis coordinate of the rotation center of the projector rolling
  • rotateP.y is the Y-axis coordinate of the rotation center
  • r is the current roll angle
  • the corrected X-axis coordinates and Y-axis coordinates are used as new X-axis coordinates and Y-axis coordinates of the vertices of the standard image.
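  • Since the second preset calculation formula is not reproduced in this text, the Python sketch below assumes the standard two-dimensional rotation about the rotation center rotateP by the roll angle r:
```python
import math

def roll_correct_vertex(any_x, any_y, rotate_x, rotate_y, r):
    """Rotate one standard-image vertex about the roll rotation center by the roll angle r (radians)."""
    dx, dy = any_x - rotate_x, any_y - rotate_y
    ans_x = rotate_x + dx * math.cos(r) - dy * math.sin(r)
    ans_y = rotate_y + dx * math.sin(r) + dy * math.cos(r)
    return ans_x, ans_y
```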
  • the current roll angle of the projector can be obtained through an inertial sensor (Inertial Measurement Unit, IMU) set on the projector.
  • if the current roll angle does not meet the preset threshold, it means that the projector has rolled; for example, if the current roll angle is not 0, the projector has undergone a roll rotation.
  • when the projector rolls, its standard image rolls with the optical-center ray as the rotation axis, and the X-axis and Y-axis coordinates of the three-dimensional imaging vertex coordinates of the standard image change accordingly.
  • the above formula therefore recalculates the X-axis and Y-axis coordinates of the three-dimensional vertex coordinates of the rolled standard image to obtain the corrected X-axis and Y-axis coordinates of each vertex, thereby obtaining the new three-dimensional imaging vertex coordinates of the standard image. The ray vectors are then recalculated based on the new three-dimensional imaging vertex coordinates, and the three-dimensional imaging vertex coordinates of the projected image are solved.
  • for example, the coordinates of the rotation center rotateP can be taken as (0, 0), where the rotation center rotateP refers to the center about which the projector rolls;
  • similarly, the above-mentioned preset center point can be understood as the imaginary point about which the projected image rotates when the projector yaws and pitches, and the offset of the projected image after such rotation can likewise be taken as (0, 0).
  • by taking the roll angle into account, the change of the projected image after the projector rolls is taken into consideration, so that accurate keystone correction can be achieved.
  • a homography is a concept in projective geometry, also called a projective transformation; it maps a point (a three-dimensional homogeneous vector) on one projective plane to another projective plane. If the homography matrix relationship between two images is known, the image of one plane can be transformed to the other plane, and this plane-to-plane transformation allows projection correction to be performed on the same plane. Therefore, once the two-dimensional vertex coordinates of the original image of the projector and the two-dimensional vertex coordinates of the projected image are known, the corresponding homography matrix relationship can be constructed.
  • the homography matrix relationship refers to the mapping relationship between the original image of the projector and the image it projects onto the wall or curtain.
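  • For illustration, with OpenCV the homography between the two sets of four vertices can be built as sketched below; the coordinate values are placeholders, not values from the original text:
```python
import cv2
import numpy as np

# Four 2D vertices of the projector's original image (here its pixel corners)
# and the four computed 2D imaging vertices of the projected image.
original_pts = np.float32([[0, 0], [1920, 0], [1920, 1080], [0, 1080]])
projected_pts = np.float32([[12, 30], [1890, 5], [1910, 1060], [3, 1075]])

# Homography mapping original-image points to projected-image points.
H = cv2.getPerspectiveTransform(original_pts, projected_pts)
H_inv = np.linalg.inv(H)

# Mapping a target rectangle chosen inside the projected image back through the
# inverse homography yields the corrected 2D vertex coordinates of the original image.
target_rect = np.float32([[[100, 80]], [[1820, 80]], [[1820, 1000]], [[100, 1000]]])
corrected_vertices = cv2.perspectiveTransform(target_rect, H_inv)
```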
  • the target rectangle refers to a rectangle selected within the area of the projected image, and may be the rectangle with the largest area within the projected image of the projector; setting the target rectangle to the rectangle with the largest area maximizes the projected area and improves the user experience.
  • the above is only one embodiment of calculating the two-dimensional vertex coordinates of the projected image; other methods may also be used to compute them.
  • for example, the vertex coordinates of the rotated original image are calculated based on the offset information and the vertex coordinates of the original image, where the vertex coordinates of the rotated original image refer to the vertex coordinates of the original image after rotation by the yaw angle, the pitch angle and the roll angle; then, based on the calculated projection depth of the projector, the rotated vertex coordinates of the original image are mapped to the two-dimensional vertex coordinates of the projected image on the projection plane.
  • the projection depth refers to the distance between the projector and the projection plane.
  • selecting a target rectangle from the projected image may include:
  • a point is arbitrarily selected from any side of the projected image, and the point is taken as the vertex of the rectangle to be constructed, and the aspect ratio of the original image is taken as the aspect ratio of the rectangle to be constructed.
  • the rectangle with the largest area is selected from the generated rectangles as the target rectangle.
  • the specific method of selecting the target rectangle may be to arbitrarily select a point on either side of the projected image, and use the point as the vertex of the rectangle to be constructed, and the aspect ratio of the original image as the aspect ratio of the rectangle to be constructed.
  • a rectangle is generated in the area of the projected image, and the rectangle with the largest area is selected as the target rectangle from the generated rectangles.
  • for example, the longest side of the projected image and the sides adjacent to the longest side are traversed, each traversed point is taken in turn as a vertex of the rectangle to be constructed, and a rectangle with an aspect ratio consistent with that of the original image is generated within the area of the projected image.
  • at the end of the traversal, the rectangle with the largest area among all generated rectangles is taken as the target rectangle.
  • by taking the rectangle with the largest area as the target rectangle, it can be ensured that the projected image area viewed by the user is the largest, thereby improving the viewing experience of the user.
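  • An illustrative (and deliberately simplified) search of this kind is sketched below in Python; it grows an axis-aligned rectangle of the original aspect ratio from each sampled anchor point and is not the patent's exact procedure:
```python
import cv2
import numpy as np

def largest_inscribed_rect(quad, aspect, samples=50, steps=60):
    """Greedy sketch of the target-rectangle search.

    quad   : (4, 2) array with the 2D imaging vertices of the projected image, in order
    aspect : width / height of the original image
    Anchors are sampled on the longest side and its two neighbours; from each
    anchor (used as the top-left corner) the largest axis-aligned rectangle with
    the given aspect ratio that stays inside the quadrilateral is kept.
    """
    quad = np.asarray(quad, dtype=np.float32)
    contour = quad.reshape(-1, 1, 2)

    def inside(p):
        return cv2.pointPolygonTest(contour, (float(p[0]), float(p[1])), False) >= 0

    sides = [(quad[i], quad[(i + 1) % 4]) for i in range(4)]
    longest = max(range(4), key=lambda i: float(np.linalg.norm(sides[i][1] - sides[i][0])))
    anchors = []
    for i in (longest - 1, longest, longest + 1):
        a, b = sides[i % 4]
        anchors += [a + t * (b - a) for t in np.linspace(0.0, 1.0, samples)]

    best, best_area = None, 0.0
    max_w = float(quad[:, 0].max() - quad[:, 0].min())
    for p in anchors:
        for w in np.linspace(max_w, 1.0, steps):        # try widths from large to small
            h = w / aspect
            corners = [p, p + [w, 0.0], p + [w, h], p + [0.0, h]]
            if all(inside(c) for c in corners):
                if w * h > best_area:
                    best, best_area = np.array(corners), w * h
                break                                    # keep only the largest fit per anchor
    return best
```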
  • step S1065 after obtaining the two-dimensional vertex coordinates of the target rectangle, the two-dimensional vertex coordinates of the target rectangle can be used as the input value of the homography matrix relationship, and the two-dimensional vertex coordinates of the corrected original image can be obtained by calculating, such that The projector projects according to the two-dimensional vertex coordinates of the corrected original image, so that the projected image presented in the user's field of view is a rectangle. That is, before the correction, the projected image in the user's field of view appears as a trapezoid, while the corrected projected image appears as a rectangle.
  • vertex coordinates mentioned in the above embodiments refer to the coordinates of the four corner points of the projection plane.
  • Fig. 7 is a block diagram of a projection correction apparatus according to an exemplary embodiment. As shown in FIG. 7, the apparatus 400 includes:
  • the response module 401 is configured to control the projector to project a preset image to the projection plane in response to the received correction instruction;
  • a photographing module 402 configured to photograph the preset image projected by the projector through the camera of the projector to obtain a photographed image
  • the target feature point determination module 403 is configured to identify the target feature point of the preset image in the captured image
  • the three-dimensional coordinate determination module 404 is configured to, for each of the target feature points, determine the depth information of the target feature point in the shooting space of the camera according to the mapping relationship pre-calibrated for the target feature point and the camera coordinates of the target feature point on the captured image, so as to obtain the three-dimensional coordinates of the target feature point in the projection space of the projector, wherein the mapping relationship is the association between the depth information of the target feature point calibrated at different depths and the offset of its camera coordinates;
  • the normal vector module 405 is configured to determine the normal vector of the projection plane relative to the projector according to the three-dimensional coordinates of each of the target feature points;
  • the correction module 406 is configured to correct the two-dimensional vertex coordinates of the original image of the projector according to the normal vector of the projection plane and the current pose information of the projector, so as to obtain the two-dimensional vertex coordinates of the corrected original image;
  • the projection module 407 is configured to control the projector to project according to the two-dimensional vertex coordinates of the corrected original image.
  • the preset image is a checkerboard image
  • the response module 401 includes:
  • an instruction response submodule configured to use a corner detection algorithm to determine the initial feature points in the checkerboard image
  • the straight line fitting sub-module is configured to perform straight line fitting on the initial feature points within the same preset area according to the vertical direction and the horizontal direction respectively;
  • the feature point determination sub-module is configured to take the intersection point between any straight line in the vertical direction and any straight line in the horizontal direction obtained by fitting as the target feature point.
  • the three-dimensional coordinate determination module 404 is specifically configured as:
  • for each of the target feature points, calculate the depth information of the target feature point in the shooting space of the camera according to the mapping relationship pre-calibrated for the target feature point and the camera coordinates of the target feature point on the captured image, wherein the mapping relationship is:
  • h is the depth information of the target feature point
  • p 1 is the first preset calibration parameter of the target feature point
  • p 2 is the second preset calibration parameter of the target feature point
  • X is the camera coordinate of the target feature point , wherein the first preset calibration parameter and the second preset calibration parameter are both constants.
  • the three-dimensional coordinate determination module 404 is specifically configured as:
  • when the projector is at a first distance from the projection plane, a preset image is projected onto the projection plane, and the preset image is photographed by the camera of the projector to obtain a first image;
  • when the projector is at a second distance from the projection plane, a preset image is projected onto the projection plane, and the preset image is photographed by the camera of the projector to obtain a second image;
  • first camera coordinates and first depth information of the target feature point are obtained from the first image, and second camera coordinates and second depth information are obtained from the second image;
  • the first preset calibration parameter and the second preset calibration parameter are obtained by substituting the first camera coordinates and the first depth information, and the second camera coordinates and the second depth information, into the mapping relationship, wherein the mapping relationship is:
  • h is the depth coordinate of the target feature point
  • X is the camera coordinate
  • p 1 is the first preset calibration parameter
  • p 2 is the second preset calibration parameter.
  • correction module 406 includes:
  • An offset information calculation module configured to calculate offset information of the projector's normal vector according to the normal vector of the projection plane and the current pose information of the projector, wherein the offset information includes yaw angle, pitch angle and roll angle;
  • a vertex calculation module configured to calculate, based on the offset information, the two-dimensional vertex coordinates of the projected image projected by the projector to the projection plane;
  • a homography matrix module configured to establish a homography matrix relationship between the projected image and the original image based on the two-dimensional vertex coordinates of the projected image and the two-dimensional vertex coordinates of the original image of the projector;
  • a target rectangle selection module configured to select a target rectangle from the projected image of the projector, and determine the two-dimensional vertex coordinates of the target rectangle;
  • the vertex correction module is configured to obtain the two-dimensional vertex coordinates of the corrected original image based on the two-dimensional vertex coordinates of the target rectangle and the homography matrix relationship.
  • the target rectangle selection module is specifically configured as:
  • a point is arbitrarily selected from any side of the projected image, and the point is taken as the vertex of the rectangle to be constructed, and the aspect ratio of the original image is taken as the aspect ratio of the rectangle to be constructed.
  • the rectangle with the largest area is selected from the generated rectangles as the target rectangle.
  • the present disclosure also provides a computer-readable storage medium having a computer program stored thereon, and when the program is executed by a processing device, the steps of the above-mentioned projection correction method are implemented.
  • Fig. 8 is a block diagram of an electronic device according to an exemplary embodiment.
  • the electronic device 500 may include: a processor 501 and a memory 502 .
  • the electronic device 500 may also include one or more of a multimedia component 503 , an input/output (I/O) interface 504 , and a communication component 505 .
  • the processor 501 is used to control the overall operation of the electronic device 500 to complete all or part of the steps in the above-mentioned projection correction method.
  • the memory 502 is used to store various types of data to support operations on the electronic device 500; such data may include, for example, instructions for any application or method operating on the electronic device 500, and application-related data such as contact data, sent and received messages, pictures, audio, video, and so on.
  • the memory 502 can be implemented by any type of volatile or non-volatile storage device or a combination thereof, such as static random access memory (SRAM), electrically erasable programmable read-only memory (EEPROM), erasable programmable read-only memory (EPROM), programmable read-only memory (PROM), read-only memory (ROM), magnetic memory, flash memory, a magnetic disk or an optical disk.
  • Multimedia components 503 may include screen and audio components.
  • the screen can be, for example, a touch screen, and the audio component is used for outputting and/or inputting audio signals.
  • the audio component may include a microphone for receiving external audio signals.
  • the received audio signal may be further stored in memory 502 or transmitted through communication component 505 .
  • the audio assembly also includes at least one speaker for outputting audio signals.
  • the I/O interface 504 provides an interface between the processor 501 and other interface modules, and the above-mentioned other interface modules may be a keyboard, a mouse, a button, and the like. These buttons can be virtual buttons or physical buttons.
  • the communication component 505 is used for wired or wireless communication between the electronic device 500 and other devices.
  • the wireless communication may be, for example, Wi-Fi, Bluetooth, near field communication (NFC), 2G, 3G or 4G, or a combination of one or more of them; accordingly, the corresponding communication component 505 may include a Wi-Fi module, a Bluetooth module and an NFC module.
  • the electronic device 500 may be implemented by one or more application-specific integrated circuits (ASIC), digital signal processors (DSP), digital signal processing devices (DSPD), programmable logic devices (PLD), field programmable gate arrays (FPGA), controllers, microcontrollers, microprocessors or other electronic components, for performing the above-mentioned projection correction method.
  • a computer-readable storage medium comprising program instructions, the program instructions implement the steps of the above-mentioned projection correction method when executed by a processor.
  • the computer-readable storage medium can be the above-mentioned memory 502 including program instructions, and the above-mentioned program instructions can be executed by the processor 501 of the electronic device 500 to complete the above-mentioned projection correction method.

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Transforming Electric Information Into Light Information (AREA)
  • Projection Apparatus (AREA)

Abstract

The present disclosure relates to the technical field of projection, and relates to a projection correction method and apparatus, a storage medium, and an electronic device. The method comprises: acquiring, by means of a camera, a preset image projected by a projector to obtain a photographed image; determining target feature points in the photographed image, and determining, for each target feature point, depth information of the target feature point in a photographing space of the camera to obtain three-dimensional coordinates of the target feature point; and determining, according to the three-dimensional coordinates of all target feature points, a normal vector of a fitting plane constructed by all the target feature points, and correcting an original image of the projector according to the normal vector and current pose information of the projector. The beneficial effects of the present disclosure are: keystone correction of the projection can be realized by means of one camera, so that not only the number of arranged devices is decreased, but also the depth information of the target feature points can be rapidly calculated; and the complexity of calculating the three-dimensional coordinates of the target feature points is reduced.

Description

Projection correction method, device, storage medium and electronic device

This disclosure claims priority to the Chinese patent application No. 202110297235.1, entitled "Projection Correction Method, Apparatus, Storage Medium and Electronic Device", filed with the China Patent Office on March 19, 2021, the entire contents of which are incorporated in this disclosure by reference.

Technical Field

The present disclosure relates to the field of projection technology, and in particular, to a projection correction method, an apparatus, a storage medium, and an electronic device.

Background Art

A projector is a device that projects an image onto a wall or projection screen by optical projection. With a traditional projector, the projector needs to face the wall squarely to ensure that the projected picture is a normal rectangle; once the projector is placed improperly, the projected picture will be distorted.

At present, keystone correction for projectors is mainly based on binocular correction. Binocular correction, however, requires two cameras, or a distance sensor working together with a camera; for a projector, the additional camera or distance sensor increases the hardware cost and introduces more calibration parameters.
SUMMARY OF THE INVENTION

In order to overcome the problems existing in the related art, the present disclosure provides a projection correction method, device, storage medium and electronic device.

In order to achieve the above objects, in a first aspect, the present disclosure provides a projection correction method, the method comprising:

in response to a received correction instruction, controlling a projector to project a preset image onto a projection plane;

photographing, by a camera of the projector, the preset image projected by the projector to obtain a captured image;

identifying target feature points of the preset image in the captured image;

for each target feature point, determining depth information of the target feature point in the shooting space of the camera according to a mapping relationship pre-calibrated for the target feature point and the camera coordinates of the target feature point on the captured image, so as to obtain three-dimensional coordinates of the target feature point in the projection space of the projector, wherein the mapping relationship is the association between the depth information of the target feature point calibrated at different depths and the offset of its camera coordinates;

determining a normal vector of the projection plane relative to the projector according to the three-dimensional coordinates of each target feature point;

correcting two-dimensional vertex coordinates of an original image of the projector according to the normal vector of the projection plane and current pose information of the projector, to obtain two-dimensional vertex coordinates of the corrected original image;

controlling the projector to project according to the two-dimensional vertex coordinates of the corrected original image.
第二方面,本公开提供一种投影校正装置,所述装置包括:In a second aspect, the present disclosure provides a projection correction device, the device comprising:
响应模块,配置为响应于接收到的校正指令,控制投影仪向投影平面投射预设图像;a response module, configured to control the projector to project a preset image to the projection plane in response to the received correction instruction;
拍摄模块,配置为通过所述投影仪的摄像头拍摄所述投影仪投射出的所述预设图像,得到拍摄图像;a photographing module, configured to photograph the preset image projected by the projector through the camera of the projector to obtain a photographed image;
目标特征点确定模块,配置为在所述拍摄图像中,识别出所述预设图像的目标特征点;a target feature point determination module, configured to identify the target feature point of the preset image in the captured image;
三维坐标确定模块,配置为针对每一所述目标特征点,根据针对该目标特征点预先标定的映射关系和该目标特征点在所述拍摄图像上的相机坐标,确定该目标特征点在所述摄像头的拍摄空间中的深度信息,以得到该目标特征点在所述投影仪的投影空间中的三维坐标,其中,所述映射关系是在不同深度下标定的目标特征点的深度信息与相机坐 标的偏移量之间的关联关系;The three-dimensional coordinate determination module is configured to, for each of the target feature points, determine that the target feature point is in the The depth information in the shooting space of the camera to obtain the three-dimensional coordinates of the target feature point in the projection space of the projector, wherein the mapping relationship is the depth information and camera coordinates of the target feature point calibrated at different depths The relationship between the offsets;
法向量模块,配置为根据各个所述目标特征点的三维坐标,确定所述投影平面相对于所述投影仪的法向量;a normal vector module, configured to determine the normal vector of the projection plane relative to the projector according to the three-dimensional coordinates of each of the target feature points;
校正模块,配置为根据所述投影平面的法向量以及所述投影仪的当前位姿信息,对所述投影仪的原始图像的二维顶点坐标进行校正,得到校正后的原始图像的二维顶点坐标;a correction module, configured to correct the two-dimensional vertex coordinates of the original image of the projector according to the normal vector of the projection plane and the current pose information of the projector to obtain the two-dimensional vertex of the corrected original image coordinate;
投影模块,控制所述投影仪根据所述校正后的原始图像的二维顶点坐标进行投影。The projection module controls the projector to project according to the two-dimensional vertex coordinates of the corrected original image.
第三方面,本公开提供一种计算机可读存储介质,其上存储有计算机程序,该程序被处理装置执行时实现上述第一方面提供的所述方法的步骤。In a third aspect, the present disclosure provides a computer-readable storage medium on which a computer program is stored, and when the program is executed by a processing device, implements the steps of the method provided in the first aspect.
第四方面,本公开提供一种电子设备,包括:In a fourth aspect, the present disclosure provides an electronic device, comprising:
存储器,其上存储有计算机程序;a memory on which a computer program is stored;
处理器,用于执行所述存储器中的所述计算机程序,以实现上述第一方面提供的所述方法的步骤。A processor, configured to execute the computer program in the memory, to implement the steps of the method provided in the first aspect above.
Through the above technical solution, in response to the received correction instruction, the preset image projected by the projector is captured by the camera to obtain a captured image; target feature points are identified in the captured image, and for each target feature point, the depth information of the target feature point in the shooting space of the camera is determined according to the mapping relationship pre-calibrated for that target feature point and the camera coordinates of the target feature point on the captured image, so as to obtain the three-dimensional coordinates of the target feature point; the normal vector of the projection plane is then determined from the three-dimensional coordinates of all target feature points, and the two-dimensional vertex coordinates of the original image of the projector are corrected according to that normal vector and the current pose information of the projector. In this way, projection keystone correction can be achieved with a single camera, which not only reduces the number of devices required, but also allows the depth information of the target feature points to be calculated quickly through the pre-calibrated mapping relationship, reducing the complexity of calculating the three-dimensional coordinates of the target feature points and further improving the correction efficiency.
本公开的其他特征和优点将在随后的具体实施方式部分予以详细说明。Other features and advantages of the present disclosure will be described in detail in the detailed description that follows.
附图说明Description of drawings
结合附图并参考以下具体实施方式,本公开各实施例的上述和其他特征、优点及方面将变得更加明显。贯穿附图中,相同或相似的附图标记表示相同或相似的元素。应当理解附图是示意性的,原件和元素不一定按照比例绘制。在附图中:The above and other features, advantages and aspects of various embodiments of the present disclosure will become more apparent when taken in conjunction with the accompanying drawings and with reference to the following detailed description. Throughout the drawings, the same or similar reference numbers refer to the same or similar elements. It should be understood that the drawings are schematic and that the originals and elements are not necessarily drawn to scale. In the attached image:
图1是根据一示例性实施例示出的一种投影校正方法的流程图;1 is a flowchart of a projection correction method according to an exemplary embodiment;
图2是一示例性实施例示出的识别目标特征点的流程示意图;2 is a schematic flowchart of identifying target feature points according to an exemplary embodiment;
图3是根据一示例性实施例示出的映射关系的数学建模示意图;3 is a schematic diagram of mathematical modeling of a mapping relationship shown according to an exemplary embodiment;
图4是根据一示例性实施例示出的一种投影校正方法的另一流程图;4 is another flowchart of a projection correction method according to an exemplary embodiment;
图5是根据一示例性实施例示出的计算标准图像的三维成像顶点坐标的原理示意图;FIG. 5 is a schematic diagram showing the principle of calculating three-dimensional imaging vertex coordinates of a standard image according to an exemplary embodiment;
图6是根据一示例性实施例示出的向量分解的原理示意图;6 is a schematic diagram of the principle of vector decomposition according to an exemplary embodiment;
图7是根据一示例性实施例示出的一种投影校正装置的框图;7 is a block diagram of a projection correction apparatus according to an exemplary embodiment;
图8是根据一示例性实施例示出的一种电子设备的框图。Fig. 8 is a block diagram of an electronic device according to an exemplary embodiment.
具体实施方式Detailed ways
下面将参照附图更详细地描述本公开的实施例。虽然附图中显示了本公开的某些实施例,然而应当理解的是,本公开可以通过各种形式来实现,而且不应该被解释为限于这里阐述的实施例,相反提供这些实施例是为了更加透彻和完整地理解本公开。应当理解的是,本公开的附图及实施例仅用于示例性作用,并非用于限制本公开的保护范围。Embodiments of the present disclosure will be described in more detail below with reference to the accompanying drawings. While certain embodiments of the present disclosure are shown in the drawings, it should be understood that the present disclosure may be embodied in various forms and should not be construed as limited to the embodiments set forth herein, but rather are provided for the purpose of A more thorough and complete understanding of the present disclosure. It should be understood that the drawings and embodiments of the present disclosure are only for exemplary purposes, and are not intended to limit the protection scope of the present disclosure.
图1是根据一示例性实施例示出的一种投影校正方法的流程图。该投影校正方法可以应用于如投影仪等电子设备,如图1所示,该投影校正方法包括如下步骤:Fig. 1 is a flowchart of a projection correction method according to an exemplary embodiment. The projection correction method can be applied to electronic equipment such as projectors. As shown in FIG. 1 , the projection correction method includes the following steps:
S101,响应于接收到的校正指令,控制投影仪向投影平面投射预设图像。S101, in response to the received correction instruction, control the projector to project a preset image to the projection plane.
这里,预设图像是指投射至投影平面的图像,一般地,本实施例预设图像可以采用棋盘格图像。其中,该投影平面指的是墙面或幕布等用于展示投影仪的输出图像的区域。应当理解的是,在投影仪垂直于墙面或幕布时,预设图像是标准的矩形图像,此时棋盘格图不是畸变的,而在投影仪与墙面或幕布不垂直时,投影图像是非矩形图像,此时棋盘格图是畸变的。Here, the preset image refers to an image projected onto the projection plane, and generally, the preset image in this embodiment may adopt a checkerboard image. Wherein, the projection plane refers to an area, such as a wall or a curtain, that is used to display the output image of the projector. It should be understood that when the projector is perpendicular to the wall or the curtain, the preset image is a standard rectangular image, and the checkerboard image is not distorted at this time, and when the projector is not perpendicular to the wall or the curtain, the projected image is not Rectangular image, the checkerboard image is distorted at this time.
在一些实施例中,校正指令可以是自动触发的,也可以是非自动触发的。例如,若是自动触发的,在投影仪自检测到投影图像为非矩形图像的情况下,投影仪可以自动触发一个校正指令;若是非自动触发的,用户可以按下与投影仪通信连接的控制器的按钮,以此,来触发控制器发送一个校正指令至投影仪,该按钮可以是虚拟按钮,也可以是实体按钮。本实施例对此不作限定。In some embodiments, the correction instruction may or may not be automatically triggered. For example, if it is automatically triggered, when the projector self-detects that the projected image is a non-rectangular image, the projector can automatically trigger a correction command; if it is not automatically triggered, the user can press the controller that communicates with the projector. The button is used to trigger the controller to send a correction command to the projector. The button can be a virtual button or a physical button. This embodiment does not limit this.
S102,通过所述投影仪的摄像头拍摄所述投影仪投射出的所述预设图像,得到拍摄图像。S102, the preset image projected by the projector is captured by the camera of the projector to obtain a captured image.
这里,投影仪向投影平面投射预设图像之后,利用摄像头对该预设图像进行拍摄,得到拍摄图像,以根据该拍摄图像对投影仪投影的墙面或幕布进行建模,得到墙面或幕布的三维信息。Here, after the projector projects the preset image to the projection plane, the preset image is photographed by the camera to obtain the photographed image, so as to model the wall or curtain projected by the projector according to the photographed image to obtain the wall or the curtain three-dimensional information.
S103,在所述拍摄图像中,识别出所述预设图像的目标特征点。S103, in the captured image, identify the target feature point of the preset image.
这里,目标特征点是在预设图像上设置的用于对墙面或幕布进行建模的特征点,该特征点可以根据实际情况设定其形式或数量。例如,在预设图像为棋盘格图像时,预设图像中的目标特征点是指该预设图像中黑白格之间的交点。Here, the target feature point is the feature point set on the preset image and used for modeling the wall or the curtain, and the feature point can be set in form or quantity according to the actual situation. For example, when the preset image is a checkerboard image, the target feature points in the preset image refer to intersections between black and white grids in the preset image.
S104,针对每一所述目标特征点,根据针对该目标特征点预先标定的映射关系和该目标特征点在所述拍摄图像上的相机坐标,确定该目标特征点在所述摄像头的拍摄空间中的深度信息,以得到该目标特征点在所述投影仪的投影空间中的三维坐标,其中,所述映射关系是在不同深度下标定的目标特征点的深度信息与相机坐标的偏移量之间的关联关系。S104, for each target feature point, according to the pre-calibrated mapping relationship for the target feature point and the camera coordinates of the target feature point on the captured image, determine that the target feature point is in the shooting space of the camera to obtain the three-dimensional coordinates of the target feature point in the projection space of the projector, wherein the mapping relationship is the difference between the depth information of the target feature point calibrated at different depths and the offset of the camera coordinates relationship between.
Here, since the association between the depth information of the target feature points and the offset of their camera coordinates has been calibrated in advance at different depths, once the camera coordinates of a target feature point are determined in the captured image, the depth of that target feature point can be calculated from those camera coordinates and the mapping relationship. The depth information refers to the depth, relative to the camera, of the target feature point of the preset image projected on the projection plane. For example, the preset image projected by the projector is photographed at depths of 1.2 m and 1.9 m respectively, and the camera coordinates of the target feature points at 1.2 m and 1.9 m are determined, so that the association between the depth information and the camera coordinates of the same target feature point can be calculated.
It should be noted that, after the camera coordinates and the depth information of a target feature point are calculated, the position of that target feature point in space can be determined from them. The depth information is the Z-axis coordinate of the three-dimensional coordinates of the target feature point. The X-axis and Y-axis coordinates of the three-dimensional coordinates are obtained by scaling the camera coordinates of the target feature point according to its depth information. The underlying principle is that the preset image is projected onto the projection plane according to the pinhole imaging model, so the X-axis and Y-axis coordinates of a target feature point on the projection plane are obtained by converting the X-axis and Y-axis coordinates of its camera coordinates using the depth information.
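As an illustration of this step, the following Python sketch back-projects one detected feature point into 3D. It assumes a pinhole model with focal lengths fx, fy and principal point cx, cy, and uses hypothetical calibration constants p1 and p2; none of these values are given in the text, and the pixel x-coordinate is taken as the coordinate X of the mapping relationship.

    import numpy as np

    def feature_point_to_3d(u, v, p1, p2, fx, fy, cx, cy):
        """Recover a 3D point from one detected feature point (u, v) in pixels.

        Depth is obtained from the pre-calibrated mapping h = p1 / (p2 + X),
        where X is taken here as the pixel x-coordinate; the X and Y coordinates
        are then scaled back through the pinhole model using that depth.
        """
        depth = p1 / (p2 + u)              # Z-axis coordinate (depth information)
        x = (u - cx) / fx * depth          # scale normalized image coordinates by depth
        y = (v - cy) / fy * depth
        return np.array([x, y, depth])

    # Hypothetical usage with made-up numbers
    point_3d = feature_point_to_3d(u=812.0, v=455.0, p1=2400.0, p2=-300.0,
                                   fx=1100.0, fy=1100.0, cx=640.0, cy=360.0)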
S105,根据各个所述目标特征点的三维坐标,确定所述投影平面相对于所述投影仪的法向量。S105: Determine the normal vector of the projection plane relative to the projector according to the three-dimensional coordinates of each of the target feature points.
这里,可以对得到的所有目标特征点的三维坐标进行拟合,从而根据目标特征点的三维坐标对投影平面进行建模,得到拟合平面,该拟合平面反映了投影平面的三维信息,进而得到该投影平面相对于投影仪的法向量。其中,拟合可以是采用最小二乘法拟合。Here, the obtained three-dimensional coordinates of all target feature points can be fitted, so as to model the projection plane according to the three-dimensional coordinates of the target feature points to obtain a fitted plane, which reflects the three-dimensional information of the projection plane, and then Get the normal vector of the projection plane relative to the projector. Wherein, the fitting may be fitting using the least squares method.
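A minimal sketch of the plane fitting described here, using a least-squares fit (via SVD) over the reconstructed 3D feature points; the variable names and the orientation convention are illustrative only.

    import numpy as np

    def fit_plane_normal(points_3d):
        """Fit a plane to Nx3 points and return its unit normal and centroid."""
        pts = np.asarray(points_3d, dtype=float)
        centroid = pts.mean(axis=0)
        # The singular vector with the smallest singular value is the plane normal
        _, _, vt = np.linalg.svd(pts - centroid)
        normal = vt[-1]
        # Orient the normal toward the projector (assumed at the origin)
        if normal[2] > 0:
            normal = -normal
        return normal / np.linalg.norm(normal), centroid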
S106,根据所述投影平面的法向量以及所述投影仪的当前位姿信息,对所述投影仪 的原始图像的二维顶点坐标进行校正,得到校正后的原始图像的二维顶点坐标。S106, correct the two-dimensional vertex coordinates of the original image of the projector according to the normal vector of the projection plane and the current pose information of the projector to obtain the two-dimensional vertex coordinates of the corrected original image.
这里,投影平面的法向量是指与投影平面垂直的线段,投影仪的当前位姿信息指的是投影仪当前的摆放位置,该当前位姿信息可以通过姿态传感器(IMU)获得。投影仪在投影时,如果摆放位置是歪斜的,投影仪的投影光线相对于墙面或幕布会相对偏移,从而导致投影仪显示在墙面或幕布的投影平面呈现梯形畸变。根据投影平面的法向量以及投影仪的当前位姿信息,可以计算得到投影仪相对于投影平面(墙面或幕布)的位置偏差,进而根据该位置偏差对投影仪的原始图像的二维顶点坐标进行调整。Here, the normal vector of the projection plane refers to a line segment perpendicular to the projection plane, and the current pose information of the projector refers to the current placement position of the projector, and the current pose information can be obtained through an attitude sensor (IMU). When the projector is projecting, if the placement position is skewed, the projected light of the projector will be offset relative to the wall or curtain, which will cause the projector to display keystone distortion on the projection plane of the wall or curtain. According to the normal vector of the projection plane and the current pose information of the projector, the position deviation of the projector relative to the projection plane (wall or curtain) can be calculated, and then the two-dimensional vertex coordinates of the original image of the projector can be calculated according to the position deviation. make adjustments.
其中,原始图像指的是投影仪的原始输出图像,一般而言,原始图像是一个矩形图像,如宽度为w,高度为h的图像。当投影仪相对于投影平面倾斜设置时,矩形的原始图像投射在投影平面上会呈现为一个不规则四边形,如凸四边形,为了使得投影仪投射在投影平面上的图像呈现为矩形,因此,需要对原始图像的二维顶点坐标进行校正,使得校正后的原始图像投影在投影平面上呈现为矩形。The original image refers to the original output image of the projector. Generally speaking, the original image is a rectangular image, such as an image with a width of w and a height of h. When the projector is tilted relative to the projection plane, the original rectangular image projected on the projection plane will appear as an irregular quadrilateral, such as a convex quadrilateral. In order to make the image projected by the projector on the projection plane appear as a rectangle, it is necessary to Correct the two-dimensional vertex coordinates of the original image, so that the corrected original image is projected as a rectangle on the projection plane.
应当理解的是,二维顶点坐标指的是原始图像的四个顶点坐标。校正原始图像的二维顶点坐标可以是数字调整的方式,在不改变投影仪的镜头结构的情况下,只改变投影仪输出的原始图像的顶点坐标。It should be understood that the two-dimensional vertex coordinates refer to the four vertex coordinates of the original image. Correcting the two-dimensional vertex coordinates of the original image may be in the form of digital adjustment, which only changes the vertex coordinates of the original image output by the projector without changing the lens structure of the projector.
S107,控制所述投影仪根据所述校正后的原始图像的二维顶点坐标进行投影。S107, controlling the projector to project according to the two-dimensional vertex coordinates of the corrected original image.
这里,控制投影仪根据该校正后的原始图像的二维顶点坐标进行投影,使得投影仪的原始图像在投影平面上呈现为矩形。Here, the projector is controlled to project according to the two-dimensional vertex coordinates of the corrected original image, so that the original image of the projector appears as a rectangle on the projection plane.
With the above technical solution, projection keystone correction can be achieved with a single camera, which not only reduces the number of devices required, but also allows the depth information of the target feature points to be calculated quickly through the pre-calibrated mapping relationship, reducing the complexity of calculating the three-dimensional coordinates of the target feature points and further improving the correction efficiency.
In the present disclosure, when the preset image is a checkerboard image, it is considered that the camera is affected by environmental factors during actual shooting, such as exposure, so that the black and white regions of the captured checkerboard image deviate from the ideal ones, and the detected feature points depend on the image quality. Under such conditions, the feature points determined by a conventional corner detection algorithm alone are not accurate, and the parameters subsequently derived from those feature points are also inaccurate, which degrades the correction accuracy. Therefore, the initial feature points detected by the corner detection algorithm are optimized. The identification of the target feature points of the preset image in S103 is described in detail below.
图2是一示例性实施例示出的识别目标特征点的流程示意图。如图2所示,在一个可实现的实施方式中,S103中,识别出所述预设图像的目标特征点,包括:FIG. 2 is a schematic flowchart of identifying target feature points according to an exemplary embodiment. As shown in FIG. 2, in an achievable embodiment, in S103, identifying the target feature points of the preset image, including:
S1031,采用角点检测算法确定所述棋盘格图像中的初始特征点;S1031, using a corner detection algorithm to determine the initial feature points in the checkerboard image;
S1032,分别按照竖直方向和水平方向,对处于同一预设区域范围内的初始特征点进行直线拟合;S1032, according to the vertical direction and the horizontal direction, perform straight line fitting on the initial feature points within the same preset area range;
S1033,将拟合得到的任意一条处于竖直方向上的直线与任意一条处于水平方向上的直线之间的交点作为所述目标特征点。S1033 , taking an intersection between any straight line in the vertical direction and any straight line in the horizontal direction obtained by fitting as the target feature point.
这里,首先采用角点检测算法确定预设图像中的初始特征点,然后分别按照竖直方向和水平方向,对处于同一预设区域范围内的初始特征点进行直线拟合;最后,将拟合得到的任意一条处于竖直方向上的直线与任意一条处于水平方向上的直线之间的交点作为目标特征点。Here, first, the corner detection algorithm is used to determine the initial feature points in the preset image, and then the initial feature points in the same preset area are linearly fitted according to the vertical and horizontal directions respectively; The intersection between any obtained straight line in the vertical direction and any straight line in the horizontal direction is taken as the target feature point.
在本公开中,由于初始特征点的偏移也会是小范围的偏移,因此,同一预设区域范围是一个预先可以确定的范围。其中,直线拟合可以采用最小二乘法实现。In the present disclosure, since the offset of the initial feature point is also a small-range offset, the same preset area range is a range that can be determined in advance. Among them, the straight line fitting can be realized by the least square method.
应当理解的是,在投影仪垂直墙面的情况下,投影图像的黑白交点是由两条直线相交形成的。在该理想情况下,水平直线和竖直直线的数量是已知的。相应地,拟合得到水平方向的直线和竖直方向的直线的数量与理想情况下的数量是等同的,进而特征点的 数量也是等同的。It should be understood that, in the case where the projector is perpendicular to the wall, the black and white intersection of the projected image is formed by the intersection of two straight lines. In this ideal case, the number of horizontal and vertical lines is known. Correspondingly, the number of straight lines in the horizontal direction and the straight lines in the vertical direction obtained by fitting is the same as the number in the ideal situation, and the number of feature points is also the same.
其中,角点检测算法例如可以是OpenCV的findChessboardCorners算法。The corner detection algorithm may be, for example, the findChessboardCorners algorithm of OpenCV.
采用上述技术方案,对角点检测算法检测到的初始特征点进行优化,确保了预设图像的目标特征点的准确性,提高了校正的精度。By adopting the above technical solution, the initial feature points detected by the corner point detection algorithm are optimized, the accuracy of the target feature points of the preset image is ensured, and the correction accuracy is improved.
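The following sketch illustrates one way to implement this refinement with OpenCV: detect the initial checkerboard corners, fit a line to each row and each column of corners, and take the line intersections as the target feature points. The checkerboard size (9x6) and the row/column grouping of the detected corners are assumptions for illustration.

    import cv2
    import numpy as np

    def refined_checkerboard_points(gray, pattern_size=(9, 6)):
        found, corners = cv2.findChessboardCorners(gray, pattern_size)
        if not found:
            return None
        pts = corners.reshape(-1, 2)
        cols, rows = pattern_size
        grid = pts.reshape(rows, cols, 2)      # corners assumed returned row by row

        def fit(line_pts):
            # Least-squares line fit; returns a point on the line and its direction
            vx, vy, x0, y0 = cv2.fitLine(line_pts.astype(np.float32),
                                         cv2.DIST_L2, 0, 0.01, 0.01).ravel()
            return (float(x0), float(y0)), (float(vx), float(vy))

        h_lines = [fit(grid[r]) for r in range(rows)]      # horizontal lines
        v_lines = [fit(grid[:, c]) for c in range(cols)]   # vertical lines

        def intersect(l1, l2):
            (x1, y1), (dx1, dy1) = l1
            (x2, y2), (dx2, dy2) = l2
            a = np.array([[dx1, -dx2], [dy1, -dy2]])
            b = np.array([x2 - x1, y2 - y1])
            t = np.linalg.solve(a, b)[0]
            return (x1 + t * dx1, y1 + t * dy1)

        return [intersect(h, v) for h in h_lines for v in v_lines]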
在一个可实现的实施方式中,S104中,针对每一所述目标特征点,根据针对该目标特征点预先标定的映射关系和该目标特征点在所述拍摄图像上的相机坐标,确定该目标特征点在所述摄像头的拍摄空间中的深度信息,包括:In an achievable embodiment, in S104, for each target feature point, the target is determined according to the pre-calibrated mapping relationship for the target feature point and the camera coordinates of the target feature point on the captured image The depth information of the feature points in the shooting space of the camera, including:
针对每一所述目标特征点,根据针对该目标特征点预先标定的映射关系和该目标特征点在所述拍摄图像上的相机坐标,计算得到该目标特征点在所述摄像头的拍摄空间中的深度信息,其中,所述映射关系为:For each of the target feature points, according to the pre-calibrated mapping relationship for the target feature point and the camera coordinates of the target feature point on the captured image, calculate the distance of the target feature point in the shooting space of the camera. depth information, wherein the mapping relationship is:
h = p1/(p2 + X)
其中,h为该目标特征点的深度信息,p 1为该目标特征点的第一预设标定参数,p 2为目标特征点的第二预设标定参数,X为该目标特征点的相机坐标,其中,所述第一预设标定参数、所述第二预设标定参数均为常数。 Wherein, h is the depth information of the target feature point, p 1 is the first preset calibration parameter of the target feature point, p 2 is the second preset calibration parameter of the target feature point, and X is the camera coordinate of the target feature point , wherein the first preset calibration parameter and the second preset calibration parameter are both constants.
这里,在计算得到目标特征点的相机坐标之后,将该相机坐标代入上述计算式中,即可计算得到该目标特征点的深度信息。其中,在上述计算式中,p 1和p 2为一个常数。 Here, after calculating the camera coordinates of the target feature point, the camera coordinates are substituted into the above calculation formula, and then the depth information of the target feature point can be calculated. Among them, in the above calculation formula, p 1 and p 2 are a constant.
值得说明的是,X可以是相机坐标的横坐标或纵坐标,横坐标与纵坐标在运算过程中是对称的,因此,上述计算式对应相机坐标的横坐标或纵坐标均适用。It is worth noting that X can be the abscissa or ordinate of the camera coordinates, and the abscissa and the ordinate are symmetrical in the operation process. Therefore, the above calculation formula is applicable to the abscissa or the ordinate of the camera coordinates.
由此,在本公开中,通过预先标定的映射关系,可以快速计算出目标特征点的深度信息,从而计算出目标特征点在空间中的三维坐标。Therefore, in the present disclosure, through the pre-calibrated mapping relationship, the depth information of the target feature point can be quickly calculated, thereby calculating the three-dimensional coordinates of the target feature point in space.
接下来,对第一预设标定参数和第二预设标定参数的标定过程进行说明。Next, the calibration process of the first preset calibration parameter and the second preset calibration parameter will be described.
针对每一所述目标特征点,该目标特征点的第一预设标定参数和第二预设标定参数是通过如下方式得到的:For each target feature point, the first preset calibration parameter and the second preset calibration parameter of the target feature point are obtained in the following manner:
在所述投影仪距离投影平面第一距离,且所述投影仪的投影光线垂直于所述投影平面的情况下,向所述投影平面投射预设图像,并通过所述投影仪的摄像头拍摄所述预设图像,得到第一图像;When the projector is at a first distance from the projection plane, and the projection light of the projector is perpendicular to the projection plane, a preset image is projected to the projection plane, and the image is captured by the camera of the projector. the preset image to obtain the first image;
基于所述第一图像以及所述第一距离,确定所述第一图像上的目标特征点的第一相机坐标和第一深度信息;determining, based on the first image and the first distance, first camera coordinates and first depth information of the target feature point on the first image;
在所述投影仪距离所述投影平面第二距离,且所述投影仪的投影光线垂直于所述投影平面的情况下,向所述投影平面投射预设图像,并通过所述投影仪的摄像头拍摄所述预设图像,得到第二图像;When the projector is a second distance away from the projection plane, and the projection light of the projector is perpendicular to the projection plane, a preset image is projected to the projection plane, and passes through the camera of the projector photographing the preset image to obtain a second image;
根据所述第二图像以及所述第二距离,确定所述预设图像上的目标特征点的第二相机坐标和第二深度信息;determining, according to the second image and the second distance, second camera coordinates and second depth information of the target feature point on the preset image;
根据所述第一相机坐标和第一深度信息、以及所述第二相机坐标和第二深度信息,结合预先建立的目标特征点的深度信息与相机坐标之间的映射关系,得到所述第一预设标定参数和所述第二预设标定参数,其中,所述映射关系为:According to the first camera coordinates and the first depth information, as well as the second camera coordinates and the second depth information, combined with the pre-established mapping relationship between the depth information of the target feature point and the camera coordinates, the first camera coordinates are obtained. The preset calibration parameter and the second preset calibration parameter, wherein the mapping relationship is:
h = p1/(p2 + X)
其中,h为目标特征点的深度坐标,X为相机坐标,p 1为第一预设标定参数,p 2为第二预设标定参数。 Wherein, h is the depth coordinate of the target feature point, X is the camera coordinate, p 1 is the first preset calibration parameter, and p 2 is the second preset calibration parameter.
这里,映射关系的标定过程实际上是在两个不同的深度下投影预设图像,并利用摄 像头对该不同的深度下的预设图像进行拍摄,例如,将投影仪的投影光线设置成垂直于投影平面(墙面或幕布),然后投影仪分别向1.2m和1.9m距离下的投影平面投射预设图像,并利用摄像头拍摄,得到第一图像和第二图像。然后,计算第一图像中的目标特征点的相机坐标以及深度信息,其中,第一距离相当于第一图像的目标特征点的深度信息。第二图像的目标特征点的深度信息以及相机坐标的计算过程与第一图像的一致,在此不再赘述。值得说明的是,该相机坐标为一个二维坐标。Here, the process of calibrating the mapping relationship is actually projecting preset images at two different depths, and using the camera to capture the preset images at the different depths, for example, setting the projection light of the projector to be perpendicular to the Projection plane (wall or curtain), and then the projector projects preset images to the projection planes at distances of 1.2m and 1.9m respectively, and uses the camera to shoot to obtain the first image and the second image. Then, the camera coordinates and depth information of the target feature point in the first image are calculated, wherein the first distance is equivalent to the depth information of the target feature point in the first image. The calculation process of the depth information of the target feature points of the second image and the camera coordinates is the same as that of the first image, and details are not repeated here. It is worth noting that the camera coordinate is a two-dimensional coordinate.
在计算得到两个深度下的目标特征点的相机坐标以及深度信息之后,将其代入下述计算式:After calculating the camera coordinates and depth information of the target feature points at two depths, substitute them into the following formula:
h1 = p1/(p2 + X1)
h2 = p1/(p2 + X2)
where (X1, h1) are the first camera coordinate and the first depth information, and (X2, h2) are the second camera coordinate and the second depth information,
即可计算得到第一预设标定参数和第二预设标定参数的数值。The values of the first preset calibration parameter and the second preset calibration parameter can be obtained by calculation.
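Substituting the two calibrated observations into h = p1/(p2 + X) gives two equations in two unknowns, which can be solved in closed form; a small sketch with hypothetical calibration data is shown below.

    def calibrate_p1_p2(x1, h1, x2, h2):
        """Solve h = p1 / (p2 + X) from two (camera coordinate, depth) observations."""
        p2 = (h2 * x2 - h1 * x1) / (h1 - h2)
        p1 = h1 * (p2 + x1)
        return p1, p2

    # Example with made-up data: the same feature seen at X = 700 px at 1.2 m
    # and at X = 620 px at 1.9 m.
    p1, p2 = calibrate_p1_p2(700.0, 1.2, 620.0, 1.9)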
下面结合附图3对上述实施方式进行说明。The above embodiment will be described below with reference to FIG. 3 .
图3是根据一示例性实施例示出的映射关系的数学建模示意图,在图3中,H点为某一目标特征点在第一距离的位置,I点为该目标特征点在第二距离的位置,K点为在第二距离位置的目标特征点投影到相机归一化平面的点,J点为在第一距离位置的目标特征点投影到相机归一化平面的点,L点为辅助点,用来描述H点距离相机归一化平面的深度,Q点为该目标特征点在相机归一化平面投影的点,D点为相机原点,A点为投影仪原点,M点为辅助点,用来描述H点距离相机原点平面的深度,N点为该目标特征点在原图的位置。FIG. 3 is a schematic diagram of mathematical modeling of a mapping relationship according to an exemplary embodiment. In FIG. 3, point H is the position of a certain target feature point at a first distance, and point I is the target feature point at a second distance. The position of , K point is the point where the target feature point at the second distance position is projected to the camera normalization plane, J point is the point where the target feature point at the first distance position is projected to the camera normalization plane, and L point is Auxiliary point, used to describe the depth of point H from the normalized plane of the camera, point Q is the point where the target feature point is projected on the normalized plane of the camera, point D is the origin of the camera, point A is the origin of the projector, and point M is The auxiliary point is used to describe the depth of the H point from the camera origin plane, and the N point is the position of the target feature point in the original image.
根据数学几何关系可以得到:JQ/DN=HL/HM;According to the mathematical geometric relationship, it can be obtained: JQ/DN=HL/HM;
Q点的横坐标记为x q,H点(目标特征点)对于相机的投射点J的横坐标记为X,LM的长度记为f,HM的长度记为h,DN的长度记为b,则变换JQ/DN=HL/HM可得出: The abscissa of point Q is marked as x q , the abscissa of point H (target feature point) to the projection point J of the camera is marked as X, the length of LM is marked as f, the length of HM is marked as h, and the length of DN is marked as b , then transform JQ/DN=HL/HM to get:
(x_q - X)/b = (h - f)/h;
进一步地,将上式化简为:Further, simplify the above formula to:
h = b*f / ((b - x_q) + X);
进一步地,上式可看为:Further, the above formula can be seen as:
h = p1/(p2 + X);
由上式可以得出h和X的关系式,其中,h为目标特征点的深度信息,p 1为目标特征点的第一预设标定参数,p 2为目标特征点的第二预设标定参数,X为该目标特征点的横坐标。因此,将所述第一相机坐标和第一深度信息、以及所述第二相机坐标和第二深度信息代入上述计算式即可得到第一预设标定参数、第二预设标定参数的数值。 The relationship between h and X can be obtained from the above formula, where h is the depth information of the target feature point, p 1 is the first preset calibration parameter of the target feature point, and p 2 is the second preset calibration parameter of the target feature point parameter, X is the abscissa of the target feature point. Therefore, the values of the first preset calibration parameter and the second preset calibration parameter can be obtained by substituting the first camera coordinates and the first depth information, and the second camera coordinates and the second depth information into the above calculation formula.
图4是根据一示例性实施例示出的一种投影校正方法的另一流程图。如图4所示,在一个可实现的实施方式中,S106中,根据所述投影平面的法向量以及所述投影仪的当前位姿信息,对所述投影仪的原始图像的二维顶点坐标进行校正,得到校正后的原始图像的二维顶点坐标,包括:FIG. 4 is another flowchart of a projection correction method according to an exemplary embodiment. As shown in FIG. 4, in an achievable embodiment, in S106, according to the normal vector of the projection plane and the current pose information of the projector, the two-dimensional vertex coordinates of the original image of the projector are Perform correction to obtain the two-dimensional vertex coordinates of the corrected original image, including:
S1061,根据所述投影平面的法向量以及所述投影仪的当前位姿信息,计算得到所述投影仪法向量的偏移信息,其中,所述偏移信息包括偏航角、俯仰角以及滚转角;S1061, according to the normal vector of the projection plane and the current pose information of the projector, calculate and obtain the offset information of the normal vector of the projector, wherein the offset information includes yaw angle, pitch angle and roll angle corner;
S1062,基于所述偏移信息,计算得到所述投影仪投射至所述投影平面的投影图像 的二维顶点坐标;S1062, based on the offset information, calculate the two-dimensional vertex coordinates of the projected image projected by the projector to the projection plane;
S1063,基于所述投影图像的二维顶点坐标以及所述投影仪的原始图像的二维顶点坐标,建立所述投影图像与所述原始图像的单应矩阵关系;S1063, based on the two-dimensional vertex coordinates of the projected image and the two-dimensional vertex coordinates of the original image of the projector, establish a homography matrix relationship between the projected image and the original image;
S1064,从所述投影仪的投影图像中选取目标矩形,并确定该目标矩形的二维顶点坐标;S1064, select a target rectangle from the projected image of the projector, and determine the two-dimensional vertex coordinates of the target rectangle;
S1065,基于所述目标矩形的二维顶点坐标,结合所述单应矩阵关系,得到所述校正后的原始图像的二维顶点坐标。S1065 , based on the two-dimensional vertex coordinates of the target rectangle and in combination with the homography matrix relationship, obtain the two-dimensional vertex coordinates of the corrected original image.
Here, in step S1061, the offset information of the projector is calculated from the normal vector of the projection plane and the current pose information of the projector. The roll angle can be obtained through an inertial measurement unit (IMU) provided on the projector: the IMU acquires the current pose information of the projector, and the roll angle is calculated from that pose information; this calculation is prior art and is not detailed here. The yaw angle and the pitch angle can be calculated from the normal vector of the projection plane. Specifically, the normal vector of the projection plane is projected onto the plane in which the yaw angle is defined, and the angle of this projection is the yaw angle; for example, if the XOY plane is the plane in which the yaw angle is defined, the angle between the projection and the X axis is calculated. Likewise, the normal vector of the projection plane is projected onto the plane in which the pitch angle is defined, and the angle of this projection is the pitch angle; for example, if the XOZ plane is the plane in which the pitch angle is defined, the angle between the projection and the X axis is calculated.
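As one possible reading of this step, the sketch below derives the two angles from the fitted plane normal; the axis convention (X horizontal, Y vertical, Z along the ideal projection direction) is an assumption, since the text does not fix it precisely.

    import numpy as np

    def yaw_pitch_from_normal(normal):
        """Estimate yaw and pitch (radians) from the projection plane normal.

        Assumed convention: Z is the ideal projection direction, X is horizontal,
        Y is vertical. Yaw is the normal's deviation in the X-Z plane, pitch its
        deviation in the Y-Z plane; both are zero for a fronto-parallel wall.
        """
        nx, ny, nz = normal / np.linalg.norm(normal)
        yaw = np.arctan2(nx, nz)     # horizontal tilt of the wall
        pitch = np.arctan2(ny, nz)   # vertical tilt of the wall
        return yaw, pitch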
在步骤S1062中,基于偏移信息计算投影图像的顶点坐标,是要计算得到投影仪的原始图像投射至墙面或幕布上的顶点坐标,该顶点坐标为二维坐标。In step S1062, the vertex coordinates of the projected image are calculated based on the offset information, which is to calculate the vertex coordinates of the original image of the projector projected onto the wall or curtain, and the vertex coordinates are two-dimensional coordinates.
其中,可以通过如下步骤计算投影图像的二维顶点坐标:Among them, the two-dimensional vertex coordinates of the projected image can be calculated by the following steps:
首先,根据偏航角、俯仰角,利用第一预设计算式,计算得到所述投影图像相对于所述投影仪的测量法向量,其中,所述第一预设计算式为:First, according to the yaw angle and the pitch angle, the first preset calculation formula is used to obtain the measurement normal vector of the projected image relative to the projector, wherein the first preset calculation formula is:
[First preset calculation formula: the components X1, Y1 and Z1 of the measurement normal vector expressed as functions of the yaw angle H and the pitch angle V]
其中,X 1为所述测量法向量的X轴坐标,Y 1为所述测量法向量的Y轴坐标,Z 1为所述测量法向量的Z轴坐标,H为所述偏航角,V为所述俯仰角。 Wherein, X 1 is the X-axis coordinate of the measurement normal vector, Y 1 is the Y-axis coordinate of the measurement normal vector, Z 1 is the Z-axis coordinate of the measurement normal vector, H is the yaw angle, V is the pitch angle.
然后基于所述测量法向量,以及所述预设的目标点的坐标信息,确定所述投影图像所在平面的位置信息,其中,所述目标点为所述投影图像进行旋转的预设中心点。Then, based on the measurement normal vector and the coordinate information of the preset target point, the position information of the plane where the projection image is located is determined, wherein the target point is a preset center point where the projection image is rotated.
这里,由于目标点是预设的投影图像进行偏航、俯仰以及滚转等旋转的预设中心点,因此,目标点的坐标信息是不变的。在确定到测量法向量以及目标点之后,可以确定出投影图像所在平面的位置信息。Here, since the target point is the preset center point where the preset projection image is rotated, such as yaw, pitch, and roll, the coordinate information of the target point is unchanged. After the measurement normal vector and the target point are determined, the position information of the plane where the projected image is located can be determined.
接着,基于所述位置信息、结合预先建立的射线向量,得到所述投影图像的三维顶点坐标,其中,所述射线向量为所述投影仪投射的投影图像的顶点与所述投影仪的光心之间的连线的单位向量。Next, based on the position information and in combination with a pre-established ray vector, the three-dimensional vertex coordinates of the projected image are obtained, wherein the ray vector is the vertex of the projected image projected by the projector and the optical center of the projector Unit vector of lines between .
Here, a ray vector is the unit vector along the line connecting a vertex of the projected image and the optical center of the projector; that is, when the projector projects the image outward, the lines connecting the four vertices of the projected figure and the optical center can be determined. After the position information of the plane where the projected image lies is determined, the intersection of each ray vector with that plane can be found, and these intersections are the coordinates of the four vertices of the projected image obtained by projecting the original image onto the projection plane.
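A sketch of the ray-plane intersection implied here: each projected vertex lies where the corresponding ray from the optical center meets the plane of the projected image, which is described by a point on the plane and its normal. Variable names are illustrative.

    import numpy as np

    def intersect_ray_with_plane(ray_dir, plane_point, plane_normal, origin=np.zeros(3)):
        """Intersect the ray origin + t * ray_dir (t > 0) with a plane."""
        ray_dir = np.asarray(ray_dir, dtype=float)
        denom = np.dot(plane_normal, ray_dir)
        if abs(denom) < 1e-9:
            raise ValueError("ray is parallel to the plane")
        t = np.dot(plane_normal, np.asarray(plane_point) - origin) / denom
        return origin + t * ray_dir

    # The four projected-image vertices are the intersections of the four ray
    # vectors (unit vectors through the projector's optical center) with the plane.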
其中,射线向量可以是根据滚转角计算得到的,其具体计算方法如下:Among them, the ray vector can be calculated according to the roll angle, and the specific calculation method is as follows:
获取所述投影仪的光机参数,其中,所述光机参数包括投影光线的上扬角度、投射比以及宽高比;obtaining the optical-mechanical parameters of the projector, wherein the optical-mechanical parameters include a rising angle, a projection ratio, and an aspect ratio of the projected light;
根据所述投影仪的光机参数,得到所述投影仪以预设条件投射在投影平面上的标准图像的三维成像顶点坐标,其中,所述预设条件为所述投影仪水平放置、所述投影仪的投影光线垂直于该投影平面、以及所述投影仪距离该投影平面预设距离阈值;According to the optical-mechanical parameters of the projector, the three-dimensional imaging vertex coordinates of the standard image projected by the projector on the projection plane under preset conditions are obtained, wherein the preset conditions are that the projector is placed horizontally, the The projection light of the projector is perpendicular to the projection plane, and the projector is separated from the projection plane by a preset distance threshold;
根据所述标准图像的三维成像顶点坐标,计算得到所述标准图像的顶点与所述投影仪的光心之间的连线的单位向量,并将该单位向量作为所述射线向量。According to the three-dimensional imaging vertex coordinates of the standard image, the unit vector of the connecting line between the vertex of the standard image and the optical center of the projector is calculated, and the unit vector is used as the ray vector.
这里,投影仪因为深度的远近投影图像会产生相似性的变化,例如,投射到投影平面的投影图像为矩形,不管深度远近,投影图像始终为矩形。因此,投影仪以预设条件向投影平面投射,则可以根据投影仪的光机参数计算得到在预设条件下投射的标准图像的三维成像顶点坐标。其中,上扬角度是指投影仪的投影光线的上扬角度,在一般情况下,上扬角度与投影仪的型号相关。Here, the projector will change the similarity of the projected images due to the depth. For example, the projected image projected onto the projection plane is a rectangle. Regardless of the depth, the projected image is always a rectangle. Therefore, if the projector projects to the projection plane under the preset conditions, the three-dimensional imaging vertex coordinates of the standard image projected under the preset conditions can be calculated according to the optical-mechanical parameters of the projector. The rising angle refers to the rising angle of the projected light of the projector, and in general, the rising angle is related to the model of the projector.
计算标准图像的三维成像顶点坐标的具体过程如下:The specific process of calculating the three-dimensional imaging vertex coordinates of the standard image is as follows:
图5是根据一示例性实施例示出的计算标准图像的三维成像顶点坐标的原理示意图。如图5所示,标准图像存在四个顶点,分别为第一顶点0、第二顶点1、第三顶点2、第四顶点3,其中,第一顶点为位于投影图像的右上角的顶点,第二顶点为位于投影图像的左上角的顶点,第三顶点为位于投影图像的右下角的顶点,第四顶点为位于投影图像的左下角的顶点。Fig. 5 is a schematic diagram showing the principle of calculating the three-dimensional imaging vertex coordinates of a standard image according to an exemplary embodiment. As shown in Figure 5, the standard image has four vertices, namely the first vertex 0, the second vertex 1, the third vertex 2, and the fourth vertex 3, wherein the first vertex is the vertex located in the upper right corner of the projected image, The second vertex is the vertex located at the upper left corner of the projected image, the third vertex is the vertex located at the lower right corner of the projected image, and the fourth vertex is the vertex located at the lower left corner of the projected image.
根据光机参数,定义预设距离阈值为f,投射比为throwRatio,w为投影图像的宽度,h为投影图像的高度,根据三角关系,则存在throwRatio=f/w。则
tan(θ) = w/(2f) = 1/(2*throwRatio)
Since throwRatio = f/w and the aspect ratio aspectRatio = w/h, it follows that w = f/throwRatio and h = f/(throwRatio*aspectRatio); therefore,
tan(γ) = h/f = 1/(throwRatio*aspectRatio)
则第一顶点0的三维成像顶点坐标为:Then the three-dimensional imaging vertex coordinates of the first vertex 0 are:
srcCoodinate[0][0] = (-1)*f*tan(θ)
srcCoodinate[0][1] = f*tan(γ) + f*tan(dOffsetAngle)
srcCoodinate[0][2] = f
The three-dimensional imaging vertex coordinates of the second vertex 1 are:
srcCoodinate[1][0] = (-1)*srcCoodinate[0][0]
srcCoodinate[1][1] = srcCoodinate[0][1]
srcCoodinate[1][2] = f
The three-dimensional imaging vertex coordinates of the third vertex 2 are:
srcCoodinate[2][0] = srcCoodinate[0][0]
srcCoodinate[2][1] = f*tan(dOffsetAngle)
srcCoodinate[2][2] = f
The three-dimensional imaging vertex coordinates of the fourth vertex 3 are:
srcCoodinate[3][0] = (-1)*srcCoodinate[0][0]
srcCoodinate[3][1] = f*tan(dOffsetAngle)
srcCoodinate[3][2] = f
其中,srcCoodinate[0][0]为所述第一顶点的X轴坐标,f为所述预设距离阈值,doffsetAn gle为所述上扬角度,srcCoodinate[0][1]为所述第一顶点的Y轴坐标,srcCoodinate[1][0]为所述第二顶点的X轴坐标,srcCoodinate[1][1]为所述第二顶点的Y轴坐标,srcCoodinate[0][2]为所述第一顶点的Z轴坐标,srcCoodinate[1][2]为所述第二顶点的Z轴坐标,srcCoodinate[2][0]为所述第三顶点的X轴坐标,srcCoodinate[2][1]为所述第三顶点的Y轴坐标,srcCoodinate[2][2]为所述第三顶点的Z轴坐标,srcCoodinate[3][0]为所述第四顶点的X轴坐标,srcCoodinate[3][1]为所述第四顶点的Y轴坐标,srcCoodinate[3][2]为所述第四顶点的Z轴坐标。 Wherein, srcCoodinate[0][0] is the X-axis coordinate of the first vertex, f is the preset distance threshold, doffsetAngle is the rising angle, and srcCoodinate[0][1] is the first The Y-axis coordinate of the vertex, srcCoodinate[1][0] is the X-axis coordinate of the second vertex, srcCoodinate[1][1] is the Y-axis coordinate of the second vertex, and srcCoodinate[0][2] is the Y-axis coordinate of the second vertex The Z-axis coordinate of the first vertex, srcCoodinate[1][2] is the Z-axis coordinate of the second vertex, srcCoodinate[2][0] is the X-axis coordinate of the third vertex, and srcCoodinate[2] [1] is the Y-axis coordinate of the third vertex, srcCoodinate[2][2] is the Z-axis coordinate of the third vertex, srcCoodinate[3][0] is the X-axis coordinate of the fourth vertex, srcCoodinate[3][1] is the Y-axis coordinate of the fourth vertex, and srcCoodinate[3][2] is the Z-axis coordinate of the fourth vertex.
在计算得到标准图像的三维成像顶点坐标之后,可以利用向量计算投影仪的光心与四个顶点的四条射线向量,单位向量即是该顶点的射线向量除以射线向量的模。After the three-dimensional imaging vertex coordinates of the standard image are calculated, vectors can be used to calculate the optical center of the projector and the four ray vectors of the four vertices. The unit vector is the ray vector of the vertex divided by the modulo of the ray vector.
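The following sketch reproduces this construction: the standard-image vertices are computed at the preset distance f from the throw ratio, aspect ratio and upward offset angle, and the four ray vectors are the corresponding unit direction vectors from the optical center (taken as the origin). The sign convention follows the coordinate expressions above.

    import numpy as np

    def standard_vertices_and_rays(f, throw_ratio, aspect_ratio, offset_angle):
        w = f / throw_ratio                    # image width at distance f
        h = w / aspect_ratio                   # image height at distance f
        y0 = f * np.tan(offset_angle)          # bottom edge lifted by the offset angle
        # Vertex order as in the text: 0 upper-right, 1 upper-left, 2 lower-right, 3 lower-left
        src = np.array([[-w / 2, y0 + h, f],
                        [ w / 2, y0 + h, f],
                        [-w / 2, y0,     f],
                        [ w / 2, y0,     f]])
        rays = src / np.linalg.norm(src, axis=1, keepdims=True)   # unit ray vectors
        return src, rays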
应当理解的是,射线向量与投影仪的光机参数相关,在投影仪的光机参数未发生变化的情况下,射线向量是不变的。It should be understood that the ray vector is related to the opto-mechanical parameters of the projector, and the ray vector is unchanged under the condition that the opto-mechanical parameters of the projector do not change.
在步骤1544中,对所述投影图像的三维成像顶点坐标进行向量分解,得到所述投影图像的二维成像顶点坐标。In step 1544, vector decomposition is performed on the three-dimensional imaging vertex coordinates of the projected image to obtain the two-dimensional imaging vertex coordinates of the projected image.
Here, after the three-dimensional imaging vertex coordinates of the projected image are calculated, the three-dimensional vertex coordinates of the four vertices need to be converted into two-dimensional imaging vertex coordinates based on vector decomposition. The specific method is to decompose the vectors onto a pair of basis vectors u and v: u is taken along the intersection line of the projected image plane and the horizontal plane and serves as the X axis of the local coordinate system, and v is perpendicular to u within the plane of the projected image. u can be calculated by the following formulas:
cosslineU = horizonPlanN × rotatePlanN
u = cosslineU / norm(cosslineU)
其中,horizonPlanN为水平面的法向量,×为向量的叉乘,rotatePlanN为投影图像的法向量,norm(coSSlineU)为向量cosslineU的模。Among them, horizonPlanN is the normal vector of the horizontal plane, × is the cross product of the vector, rotatePlanN is the normal vector of the projected image, and norm(coSSlineU) is the modulus of the vector cosslineU.
图6是根据一示例性实施例示出的向量分解的原理示意图。如图6所示,投影图像存在G、I、J以及H共4个顶点。在求取到投影图像的三维成像顶点坐标之后,以点G、I、J以及H中的任一点为坐标原点建立坐标系将三维成像顶点坐标转换为二维成像顶点坐标。在本公开中以H点为坐标原点建立坐标系对向量分解计算二维成像顶点坐标的过程进行详细说明。则可以利用如下计算式将点G、I、J的三维成像顶点坐标转换为二维成像顶点坐标。Fig. 6 is a schematic diagram showing the principle of vector decomposition according to an exemplary embodiment. As shown in FIG. 6 , the projection image has four vertices in total, G, I, J, and H. After the three-dimensional imaging vertex coordinates of the projected image are obtained, a coordinate system is established with any point G, I, J and H as the coordinate origin to convert the three-dimensional imaging vertex coordinates into two-dimensional imaging vertex coordinates. In the present disclosure, a coordinate system is established with point H as the coordinate origin, and the process of vector decomposition to calculate the coordinates of two-dimensional imaging vertexes is described in detail. Then, the following formula can be used to convert the three-dimensional imaging vertex coordinates of points G, I, and J into two-dimensional imaging vertex coordinates.
x = vectorP · u
y = vectorP · v
vectorP = point3D - pointOrigin
Here, · denotes the vector dot product, x and y are the X-axis and Y-axis coordinates of the two-dimensional imaging vertex, u and v are the pair of basis vectors described above, point3D is the three-dimensional imaging vertex coordinate of the vertex being solved (any one of G, I and J), pointOrigin is the coordinate of point H, and vectorP is accordingly one of the HG, HJ and HI vectors. For example, when solving the two-dimensional imaging vertex coordinates of point G, point3D is the three-dimensional imaging vertex coordinate of point G, and vectorP is the HG vector.
由此,通过上述计算式,可以将投影图像的三维成像顶点坐标转换为投影图像的二维成像顶点坐标。Therefore, by the above calculation formula, the three-dimensional imaging vertex coordinates of the projected image can be converted into the two-dimensional imaging vertex coordinates of the projected image.
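A compact sketch of this decomposition, using point H as the local origin and the basis vectors u and v introduced above; constructing v as the in-plane vector perpendicular to u via a cross product is one way to realize the description, not necessarily the only one.

    import numpy as np

    def project_vertices_to_2d(vertices_3d, horizon_plane_n, rotate_plane_n, origin_index=3):
        """Express the projected-image vertices in 2D coordinates of their own plane."""
        verts = np.asarray(vertices_3d, dtype=float)
        u = np.cross(horizon_plane_n, rotate_plane_n)      # cosslineU
        u = u / np.linalg.norm(u)
        v = np.cross(rotate_plane_n, u)                    # in-plane, perpendicular to u
        v = v / np.linalg.norm(v)
        origin = verts[origin_index]                       # point H in the assumed vertex order
        rel = verts - origin                               # vectorP = point3D - pointOrigin
        return np.stack([rel @ u, rel @ v], axis=1)        # (x, y) for each vertex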
在一些可实现的实施方式中,在根据所述投影仪的光机参数,得到所述投影仪以预设条件投射在投影平面上的标准图像的三维成像顶点坐标之后,所述方法还包括:In some achievable embodiments, after obtaining the three-dimensional imaging vertex coordinates of the standard image projected by the projector on the projection plane under preset conditions according to the optical-mechanical parameters of the projector, the method further includes:
获取所述投影仪的当前滚转角;obtain the current roll angle of the projector;
当所述当前滚转角未满足预设阈值时,根据所述当前滚转角,结合第二预设计算式,对所述标准图像的三维成像顶点坐标中的X轴坐标以及Y轴坐标进行修正,其中,所述第二预设计算式为:When the current roll angle does not meet the preset threshold, according to the current roll angle, combined with the second preset calculation formula, the X-axis coordinate and the Y-axis coordinate in the three-dimensional imaging vertex coordinates of the standard image are corrected, Wherein, the second preset calculation formula is:
ansP[i][x] = (anyP[i][x] - rotateP.x)*cos(-r) - (anyP[i][y] - rotateP.y)*sin(-r) + rotateP.x
ansP[i][y] = (anyP[i][x] - rotateP.x)*sin(-r) + (anyP[i][y] - rotateP.y)*cos(-r) + rotateP.y
其中,ansP [i][x]为所述标准图像的第i个顶点的修正后的X轴坐标,ansP [i][y]为所述标准图像的第i个顶点的修正后的Y轴坐标,anyP [i][x]为所述标准图像的第i个顶点的修正前的X轴坐标,anyP [i][y]为所述标准图像的第i个顶点的修正前的Y轴坐标,rotateP.x为所述投影仪进行滚转的旋转中心的X轴坐标,rotateP.y为所述旋转中心的Y轴坐标,r为所述当前滚转角; Wherein, ansP [i][x] is the corrected X-axis coordinate of the ith vertex of the standard image, ansP [i][y] is the corrected Y-axis of the ith vertex of the standard image Coordinates, anyP [i][x] is the X-axis coordinate of the ith vertex of the standard image before correction, anyP [i][y] is the Y-axis of the ith vertex of the standard image before correction coordinates, rotateP.x is the X-axis coordinate of the rotation center of the projector rolling, rotateP.y is the Y-axis coordinate of the rotation center, and r is the current roll angle;
将修正后的X轴坐标以及Y轴坐标作为所述标准图像的顶点的新的X轴坐标和Y轴坐标。The corrected X-axis coordinates and Y-axis coordinates are used as new X-axis coordinates and Y-axis coordinates of the vertices of the standard image.
这里,可以通过设置于投影仪的惯性传感器(Inertial Measurement Unit,简称IMU)来获取投影仪的当前滚转角,当当前滚转角未满足预设阈值,说明投影仪发生了滚转的旋转。例如,当前滚转角不为0,则说明投影仪发生了滚转的旋转。当投影仪发生了滚转,其标准图像会以光心射线为旋转轴进行滚转,则标准图像的三维成像顶点坐标的X轴坐标以及Y轴坐标会发生变化,因此,需要基于第二预设计算式计算发生滚转的标准图像的三维顶点坐标的X轴坐标以及Y轴坐标,得到各个顶点修正后的X轴坐标以及Y轴坐标,从而获得标准图像新的三维成像顶点坐标。然后基于该新的三维成像顶点坐标重新计算射线向量,并求解出投影图像的三维成像顶点坐标。Here, the current roll angle of the projector can be obtained through an inertial sensor (Inertial Measurement Unit, IMU) set on the projector. When the current roll angle does not meet the preset threshold, it means that the projector has rolled. For example, if the current roll angle is not 0, it means that the projector has a roll rotation. When the projector rolls, its standard image will roll with the optical center ray as the rotation axis, and the X-axis and Y-axis coordinates of the three-dimensional imaging vertex coordinates of the standard image will change. The design formula calculates the X-axis and Y-axis coordinates of the three-dimensional vertex coordinates of the rolling standard image, and obtains the corrected X-axis and Y-axis coordinates of each vertex, thereby obtaining the new three-dimensional imaging vertex coordinates of the standard image. Then, the ray vector is recalculated based on the new three-dimensional imaging vertex coordinates, and the three-dimensional imaging vertex coordinates of the projected image are solved.
It should be understood that the coordinates of the rotation center rotateP may be (0, 0). The rotation center rotateP refers to the center about which the projector rolls, whereas the preset target point mentioned above is the assumed center about which the projected image shifts after the projector rotates in yaw and pitch.
由此,通过滚转角可以考虑到投影仪在发送滚转之后的旋转投影图像的变化,从而实现精准的梯形校正。As a result, the roll angle can take into account the change of the projector's rotated projection image after sending the roll, so as to achieve accurate keystone correction.
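The roll compensation above is a standard 2D rotation of each vertex about rotateP by the current roll angle; a small sketch follows, with rotateP defaulting to (0, 0) as suggested in the text.

    import numpy as np

    def roll_correct(points_xy, roll_angle, rotate_p=(0.0, 0.0)):
        """Rotate vertex X/Y coordinates by -roll_angle about rotate_p."""
        pts = np.asarray(points_xy, dtype=float)
        cx, cy = rotate_p
        c, s = np.cos(-roll_angle), np.sin(-roll_angle)
        x = (pts[:, 0] - cx) * c - (pts[:, 1] - cy) * s + cx
        y = (pts[:, 0] - cx) * s + (pts[:, 1] - cy) * c + cy
        return np.stack([x, y], axis=1)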
在步骤S1063中,单应是射影几何中的概念,又称为射影变换。它是把一个射影平面上的点(三维齐次矢量)映射到另一个射影平面上。假设已知两个图像之间的单应矩阵关系,则可以由一个平面的图像转换到另一平面上。通过平面的转换,是为了在同一个平面上进行投影校正。因此,在得知投影仪的原始图像的二维顶点坐标和投影图像的二维顶点坐标之后,可以构建出相应的单应矩阵关系,单应矩阵关系指的是投影仪的原始图像映射在墙面或幕布的投影图像之间的关联关系。In step S1063, a homography is a concept in projective geometry, also called a projective transformation. It is to map a point (three-dimensional homogeneous vector) on a projective plane to another projective plane. Assuming that the homography matrix relationship between the two images is known, the image of one plane can be transformed to the other plane. The transformation through the plane is to perform projection correction on the same plane. Therefore, after knowing the two-dimensional vertex coordinates of the original image of the projector and the two-dimensional vertex coordinates of the projected image, the corresponding homography matrix relationship can be constructed. The homography matrix relationship refers to the mapping of the original image of the projector on the wall. The relationship between projected images of a surface or curtain.
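In practice such a homography can be estimated from the four vertex correspondences between the original image and the projected image, for example with OpenCV; the sketch below builds it and then maps a target rectangle back through the inverse to obtain corrected original-image vertices. All corner values are placeholders.

    import cv2
    import numpy as np

    # Four vertex correspondences: original image corners -> projected image corners
    original_pts = np.float32([[0, 0], [1920, 0], [1920, 1080], [0, 1080]])
    projected_pts = np.float32([[12, 35], [1890, 5], [1930, 1110], [-8, 1060]])  # placeholder values

    H = cv2.getPerspectiveTransform(original_pts, projected_pts)   # original -> projection plane
    H_inv = np.linalg.inv(H)

    # Map the target rectangle chosen inside the projected image back to the
    # original image to obtain the corrected two-dimensional vertex coordinates.
    target_rect = np.float32([[[100, 100]], [[1700, 100]], [[1700, 1000]], [[100, 1000]]])
    corrected_vertices = cv2.perspectiveTransform(target_rect, H_inv)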
在步骤S1064中,目标矩形是指在投影图像的区域内选取的一个矩形,该矩形可以是在所述投影仪的投影图像中区域内的面积最大的矩形。通过将目标矩形设置为面积最大的矩形是为了最大化投影面积,提高用户体验。In step S1064, the target rectangle refers to a rectangle selected in the area of the projected image, and the rectangle may be the rectangle with the largest area in the area in the projected image of the projector. By setting the target rectangle to the rectangle with the largest area, it is to maximize the projected area and improve the user experience.
应当理解的是,在上述实施方式中提出了一种计算投影图像的二维顶点坐标的实施方式,在具体应用中不仅可以使用上述实施方式公开的方法计算投影图像的二维顶点坐标,也可以使用其他方法计算投影图像的二维顶点坐标。例如,基于该偏移信息、以及原始图像的顶点坐标,计算得到经过旋转后的原始图像的顶点坐标。其中,经过旋转后的原始图像的顶点坐标是指原始图像的顶点坐标经过偏航角、俯仰角以及滚转角旋转之后的顶点坐标,然后再基于计算得到的投影仪的投影深度,计算旋转后的原始图像的顶点坐标映射到投影平面的投影图像的二维顶点坐标。其中,投影深度是指投影仪与投影平面的距离。It should be understood that, in the above-mentioned embodiments, an embodiment of calculating the two-dimensional vertex coordinates of the projected image is proposed. Use other methods to compute the 2D vertex coordinates of the projected image. For example, based on the offset information and the vertex coordinates of the original image, the vertex coordinates of the rotated original image are calculated. Among them, the vertex coordinates of the rotated original image refer to the vertex coordinates of the original image after the yaw angle, pitch angle and roll angle are rotated, and then based on the calculated projection depth of the projector, calculate the rotated The vertex coordinates of the original image are mapped to the 2D vertex coordinates of the projected image of the projected plane. The projection depth refers to the distance between the projector and the projection plane.
其中,在一个可实现的实施方式中,步骤S1063中,从所述投影图像中选取目标矩形,可以包括:Wherein, in an achievable implementation manner, in step S1063, selecting a target rectangle from the projected image may include:
从所述投影图像的任一边上任意选取一点,并以该点作为待构建的矩形的顶点、以所述原始图像的宽高比作为所述待构建的矩形的宽高比,在所述投影图像的区域内生成矩形;A point is arbitrarily selected from any side of the projected image, and the point is taken as the vertex of the rectangle to be constructed, and the aspect ratio of the original image is taken as the aspect ratio of the rectangle to be constructed. Generate a rectangle within the area of the image;
从生成的矩形中选取面积最大的矩形作为所述目标矩形。The rectangle with the largest area is selected from the generated rectangles as the target rectangle.
这里,选取目标矩形的具体做法可以是在投影图像任一边上任意选取一点,并以该点作为待构建的矩形的顶点、以原始图像的宽高比作为待构建的矩形的宽高比,在投影图像的区域内生成矩形,并从生成的矩形中选取面积最大的矩形作为目标矩形。Here, the specific method of selecting the target rectangle may be to arbitrarily select a point on either side of the projected image, and use the point as the vertex of the rectangle to be constructed, and the aspect ratio of the original image as the aspect ratio of the rectangle to be constructed. A rectangle is generated in the area of the projected image, and the rectangle with the largest area is selected as the target rectangle from the generated rectangles.
例如,遍历投影图像的最长边以及与该最长边相邻的边,选取任一点作为待构建的矩形的顶点,向投影图像的四周生成宽高比与原始图像一致的宽高比的矩形,在遍历完 成最后,从所有生成的矩形中查找出面积最大的矩形作为目标矩形。For example, traverse the longest side of the projected image and the side adjacent to the longest side, select any point as the vertex of the rectangle to be constructed, and generate a rectangle with an aspect ratio consistent with the original image around the projected image. , at the end of the traversal, find the rectangle with the largest area from all the generated rectangles as the target rectangle.
In this way, by selecting the rectangle with the largest area as the target rectangle, the projected image seen by the user is guaranteed to have the largest possible area, thereby improving the user's viewing experience.
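The exhaustive search described above can be sketched as follows. This is an illustrative, unoptimized rendering that scans candidate corners over a grid rather than only along the quadrilateral's edges; the helper name and the quadrilateral values are hypothetical.

```python
# Sketch: finding a large rectangle of fixed aspect ratio whose four corners all lie
# inside a convex quadrilateral (the distorted projected image).
import numpy as np
from matplotlib.path import Path

def largest_inscribed_rect(quad, aspect, step=5.0):
    """Brute-force search: try candidate corners and widths, keep the largest
    axis-aligned rectangle of the given aspect ratio fully inside quad."""
    region = Path(quad)
    xmin, ymin = quad.min(axis=0)
    xmax, ymax = quad.max(axis=0)
    best, best_area = None, 0.0
    for x in np.arange(xmin, xmax, step):
        for y in np.arange(ymin, ymax, step):
            for w in np.arange(step, xmax - x, step):
                h = w / aspect
                corners = np.array([[x, y], [x + w, y], [x + w, y + h], [x, y + h]])
                if region.contains_points(corners).all() and w * h > best_area:
                    best, best_area = corners, w * h
    return best

quad = np.array([[35, 20], [1880, 60], [1850, 1050], [10, 1005]], dtype=float)
target_rect = largest_inscribed_rect(quad, aspect=16 / 9, step=40.0)
print(target_rect)
```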
In step S1065, after the two-dimensional vertex coordinates of the target rectangle are obtained, they can be used as the input of the homography matrix relationship to calculate the two-dimensional vertex coordinates of the corrected original image. The projector then projects according to the two-dimensional vertex coordinates of the corrected original image, so that the projected image presented in the user's field of view is rectangular. That is, before correction the projected image in the user's field of view appears as an irregular quadrilateral, whereas after correction it appears as a rectangle.
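Continuing the earlier hedged sketches, mapping the target rectangle's vertices back into the projector's original-image coordinates amounts to applying the inverse of the estimated homography; the arrays below are the same hypothetical example values, repeated so the snippet runs on its own.

```python
# Sketch: warping the target rectangle's corners back through the inverse homography
# to obtain the corrected 2D vertex coordinates of the original image.
import cv2
import numpy as np

# Hypothetical data carried over from the earlier sketches.
original_pts = np.array([[0, 0], [1920, 0], [1920, 1080], [0, 1080]], dtype=np.float32)
projected_pts = np.array([[35, 20], [1880, 60], [1850, 1050], [10, 1005]], dtype=np.float32)
target_rect = np.array([[100, 100], [1700, 100], [1700, 1000], [100, 1000]], dtype=np.float32)

H, _ = cv2.findHomography(original_pts, projected_pts, method=0)

# H maps original-image -> projected-image coordinates, so its inverse maps the
# target rectangle (chosen in the projected image) back onto the original image.
corrected = cv2.perspectiveTransform(target_rect.reshape(-1, 1, 2), np.linalg.inv(H))
print(corrected.reshape(-1, 2))  # the projector renders its content into this quadrilateral
```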
It should be noted that the vertex coordinates mentioned in the above embodiments refer to the coordinates of the four corner points on the projection plane.
Fig. 7 is a block diagram of a projection correction apparatus according to an exemplary embodiment. As shown in Fig. 7, the apparatus 400 includes:
a response module 401, configured to control the projector to project a preset image onto the projection plane in response to a received correction instruction;
a photographing module 402, configured to photograph, through the camera of the projector, the preset image projected by the projector to obtain a captured image;
a target feature point determination module 403, configured to identify target feature points of the preset image in the captured image;
a three-dimensional coordinate determination module 404, configured to determine, for each target feature point, the depth information of the target feature point in the shooting space of the camera according to the mapping relationship pre-calibrated for that target feature point and the camera coordinates of that target feature point on the captured image, so as to obtain the three-dimensional coordinates of the target feature point in the projection space of the projector, where the mapping relationship is the association between the depth information of the target feature point calibrated at different depths and the offset of the camera coordinates;
a normal vector module 405, configured to determine the normal vector of the projection plane relative to the projector according to the three-dimensional coordinates of each target feature point (a plane-fitting sketch follows this list);
a correction module 406, configured to correct the two-dimensional vertex coordinates of the original image of the projector according to the normal vector of the projection plane and the current pose information of the projector, to obtain the two-dimensional vertex coordinates of the corrected original image;
a projection module 407, configured to control the projector to project according to the two-dimensional vertex coordinates of the corrected original image.
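As a hedged illustration of what the normal vector module 405 might do internally (the publication does not spell out a fitting method), a least-squares plane fit over the reconstructed 3D target feature points yields the plane normal; the point array below is hypothetical.

```python
# Sketch: estimating the projection plane's normal vector relative to the projector
# by a least-squares plane fit (SVD) over the 3D target feature points.
import numpy as np

def plane_normal(points_3d):
    """Return the unit normal of the best-fit plane through an (N, 3) point set."""
    centered = points_3d - points_3d.mean(axis=0)
    # The right singular vector with the smallest singular value is orthogonal to the plane.
    _, _, vt = np.linalg.svd(centered)
    normal = vt[-1]
    return normal / np.linalg.norm(normal)

# Hypothetical 3D coordinates of target feature points in the projector's space (mm).
pts = np.array([[-100.0, -50.0, 1990.0],
                [ 100.0, -50.0, 2005.0],
                [ 100.0,  50.0, 2010.0],
                [-100.0,  50.0, 1995.0],
                [   0.0,   0.0, 2000.0]])
print(plane_normal(pts))
```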
Optionally, the preset image is a checkerboard image, and the response module 401 includes:
an instruction response sub-module, configured to determine initial feature points in the checkerboard image by using a corner detection algorithm;
a line fitting sub-module, configured to perform straight-line fitting, in the vertical direction and in the horizontal direction respectively, on the initial feature points located within the same preset region;
a feature point determination sub-module, configured to take the intersection of any fitted straight line in the vertical direction with any fitted straight line in the horizontal direction as a target feature point.
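A minimal OpenCV/numpy sketch of this sub-module chain follows. It assumes a 9×6 inner-corner checkerboard, a captured image file name, and row-major corner ordering; all of these are hypothetical choices, not taken from the publication.

```python
# Sketch: checkerboard corner detection, per-row/per-column line fitting, and
# line intersection to obtain refined target feature points.
import cv2
import numpy as np

img = cv2.imread("captured.png", cv2.IMREAD_GRAYSCALE)   # hypothetical captured image
found, corners = cv2.findChessboardCorners(img, (9, 6))
if not found:
    raise RuntimeError("checkerboard not detected")
corners = corners.reshape(-1, 2)                          # initial feature points

def fit_lines(points, rows, cols):
    """Fit one line per checkerboard row (horizontal) and per column (vertical),
    assuming the detector returned the corners row by row."""
    grid = points.reshape(rows, cols, 2)
    horiz = [np.polyfit(row[:, 0], row[:, 1], 1) for row in grid]              # y = a*x + b
    vert = [np.polyfit(grid[:, c, 1], grid[:, c, 0], 1) for c in range(cols)]  # x = a*y + b
    return horiz, vert

def intersect(h, v):
    """Intersection of y = a1*x + b1 with x = a2*y + b2."""
    a1, b1 = h
    a2, b2 = v
    y = (a1 * b2 + b1) / (1 - a1 * a2)
    return np.array([a2 * y + b2, y])

horiz, vert = fit_lines(corners, rows=6, cols=9)
targets = np.array([intersect(h, v) for h in horiz for v in vert])
print(targets.shape)   # refined target feature points at all line intersections
```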
Optionally, the three-dimensional coordinate determination module 404 is specifically configured to:
for each target feature point, calculate the depth information of the target feature point in the shooting space of the camera according to the mapping relationship pre-calibrated for that target feature point and the camera coordinates of that target feature point on the captured image, where the mapping relationship is:
Figure PCTCN2021115160-appb-000025 (formula rendered as an image in the original publication)
where h is the depth information of the target feature point, p1 is the first preset calibration parameter of the target feature point, p2 is the second preset calibration parameter of the target feature point, X is the camera coordinate of the target feature point, and both the first preset calibration parameter and the second preset calibration parameter are constants.
Optionally, the three-dimensional coordinate determination module 404 is specifically configured to:
when the projector is at a first distance from the projection plane and the projection light of the projector is perpendicular to the projection plane, project the preset image onto the projection plane, and photograph the preset image through the camera of the projector to obtain a first image;
determine, based on the first image and the first distance, first camera coordinates and first depth information of the target feature points on the first image;
when the projector is at a second distance from the projection plane and the projection light of the projector is perpendicular to the projection plane, project the preset image onto the projection plane, and photograph the preset image through the camera of the projector to obtain a second image;
determine, based on the second image and the second distance, second camera coordinates and second depth information of the target feature points on the preset image;
obtain the first preset calibration parameter and the second preset calibration parameter according to the first camera coordinates and the first depth information as well as the second camera coordinates and the second depth information, in combination with the pre-established mapping relationship between the depth information of the target feature points and the camera coordinates, where the mapping relationship is:
Figure PCTCN2021115160-appb-000026 (formula rendered as an image in the original publication)
where h is the depth coordinate of the target feature point, X is the camera coordinate, p1 is the first preset calibration parameter, and p2 is the second preset calibration parameter.
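The mapping relationship itself is only available here as an embedded formula image, so the sketch below does not reproduce the publication's actual formula. Purely to illustrate the two-distance calibration idea, it assumes a hypothetical two-parameter model h = p1·X + p2 and shows how two measurements at known distances determine p1 and p2; every value in it is made up.

```python
# Sketch: solving a hypothetical two-parameter depth model h = p1 * X + p2 for one
# target feature point from measurements taken at two known projector distances.
import numpy as np

# Hypothetical calibration data for a single target feature point:
X1, h1 = 812.4, 1000.0   # camera coordinate and depth at the first distance (mm)
X2, h2 = 790.1, 1500.0   # camera coordinate and depth at the second distance (mm)

# Two equations, two unknowns:  h1 = p1*X1 + p2  and  h2 = p1*X2 + p2.
A = np.array([[X1, 1.0], [X2, 1.0]])
b = np.array([h1, h2])
p1, p2 = np.linalg.solve(A, b)
print(p1, p2)

# At run time the same point's depth would then be recovered from a new camera
# coordinate X as h = p1 * X + p2 (again, only under the assumed illustrative model).
```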
Optionally, the correction module 406 includes:
an offset information calculation module, configured to calculate offset information of the normal vector of the projector according to the normal vector of the projection plane and the current pose information of the projector, where the offset information includes a yaw angle, a pitch angle and a roll angle (an angle-extraction sketch follows this list);
a vertex calculation module, configured to calculate, based on the offset information, the two-dimensional vertex coordinates of the projected image projected by the projector onto the projection plane;
a homography matrix module, configured to establish a homography matrix relationship between the projected image and the original image based on the two-dimensional vertex coordinates of the projected image and the two-dimensional vertex coordinates of the original image of the projector;
a target rectangle selection module, configured to select a target rectangle from the projected image of the projector and determine the two-dimensional vertex coordinates of the target rectangle;
a vertex correction module, configured to obtain the two-dimensional vertex coordinates of the corrected original image based on the two-dimensional vertex coordinates of the target rectangle in combination with the homography matrix relationship.
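As a hedged sketch of the offset information calculation (the publication does not fix an angle convention), yaw and pitch can be read off a unit plane normal expressed in the projector's frame; the axis and sign conventions below are assumptions, and the roll component is noted as coming from the pose information rather than the normal.

```python
# Sketch: deriving yaw and pitch offsets from the projection plane's unit normal in
# the projector frame (assumed axes: x right, y up, z along the projection direction).
import numpy as np

def normal_to_offsets(normal):
    nx, ny, nz = normal / np.linalg.norm(normal)
    yaw = np.degrees(np.arctan2(nx, nz))    # rotation about the vertical axis
    pitch = np.degrees(np.arctan2(ny, nz))  # rotation about the horizontal axis
    return yaw, pitch                       # roll would come from the projector's pose, not the normal

print(normal_to_offsets(np.array([0.05, -0.03, 0.998])))
```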
Optionally, the target rectangle selection module is specifically configured to:
arbitrarily select a point on any side of the projected image, and generate rectangles within the region of the projected image with that point as a vertex of the rectangle to be constructed and with the aspect ratio of the original image as the aspect ratio of the rectangle to be constructed;
select the rectangle with the largest area among the generated rectangles as the target rectangle.
Regarding the apparatus in the above embodiments, the specific manner in which each module performs its operations has been described in detail in the embodiments of the method, and will not be elaborated here.
The present disclosure further provides a computer-readable storage medium on which a computer program is stored; when the program is executed by a processor, the steps of the projection correction method provided by the present disclosure are implemented.
Fig. 8 is a block diagram of an electronic device according to an exemplary embodiment. As shown in Fig. 8, the electronic device 500 may include a processor 501 and a memory 502. The electronic device 500 may further include one or more of a multimedia component 503, an input/output (I/O) interface 504, and a communication component 505.
The processor 501 is configured to control the overall operation of the electronic device 500 so as to complete all or part of the steps of the projection correction method described above.
The memory 502 is configured to store various types of data to support operation on the electronic device 500. Such data may include, for example, instructions for any application or method operating on the electronic device 500, as well as application-related data such as contact data, sent and received messages, pictures, audio, video, and so on. The memory 502 may be implemented by any type of volatile or non-volatile storage device or a combination thereof, such as static random access memory (SRAM), electrically erasable programmable read-only memory (EEPROM), erasable programmable read-only memory (EPROM), programmable read-only memory (PROM), read-only memory (ROM), magnetic memory, flash memory, a magnetic disk, or an optical disc.
The multimedia component 503 may include a screen and an audio component. The screen may be, for example, a touch screen, and the audio component is configured to output and/or input audio signals. For example, the audio component may include a microphone for receiving external audio signals. The received audio signals may be further stored in the memory 502 or sent through the communication component 505. The audio component also includes at least one speaker for outputting audio signals.
The I/O interface 504 provides an interface between the processor 501 and other interface modules, such as a keyboard, a mouse, or buttons. The buttons may be virtual buttons or physical buttons.
The communication component 505 is configured for wired or wireless communication between the electronic device 500 and other devices. Wireless communication includes, for example, Wi-Fi, Bluetooth, near field communication (NFC), 2G, 3G or 4G, or a combination of one or more of them; accordingly, the communication component 505 may include a Wi-Fi module, a Bluetooth module, and an NFC module.
In an exemplary embodiment, the electronic device 500 may be implemented by one or more application-specific integrated circuits (ASICs), digital signal processors (DSPs), digital signal processing devices (DSPDs), programmable logic devices (PLDs), field programmable gate arrays (FPGAs), controllers, microcontrollers, microprocessors or other electronic components, so as to perform the projection correction method described above.
In another exemplary embodiment, a computer-readable storage medium including program instructions is also provided, and the program instructions, when executed by a processor, implement the steps of the projection correction method described above. For example, the computer-readable storage medium may be the above memory 502 including program instructions, and the program instructions may be executed by the processor 501 of the electronic device 500 to complete the projection correction method described above.
The preferred embodiments of the present disclosure have been described in detail above with reference to the accompanying drawings; however, the present disclosure is not limited to the specific details of the above embodiments. Within the scope of the technical concept of the present disclosure, various simple modifications may be made to the technical solutions of the present disclosure, and these simple modifications all fall within the protection scope of the present disclosure.
It should be further noted that the specific technical features described in the above specific embodiments may be combined in any suitable manner provided there is no contradiction. In order to avoid unnecessary repetition, the various possible combinations are not described separately in the present disclosure.
In addition, the various embodiments of the present disclosure may also be combined arbitrarily, and as long as such combinations do not depart from the idea of the present disclosure, they should likewise be regarded as content disclosed by the present disclosure.

Claims (10)

  1. A projection correction method, characterized in that the method comprises:
    in response to a received correction instruction, controlling a projector to project a preset image onto a projection plane;
    photographing, through a camera of the projector, the preset image projected by the projector to obtain a captured image;
    identifying target feature points of the preset image in the captured image;
    for each target feature point, determining depth information of the target feature point in a shooting space of the camera according to a mapping relationship pre-calibrated for the target feature point and camera coordinates of the target feature point on the captured image, so as to obtain three-dimensional coordinates of the target feature point in a projection space of the projector, wherein the mapping relationship is an association between the depth information of the target feature point calibrated at different depths and an offset of the camera coordinates;
    determining a normal vector of the projection plane relative to the projector according to the three-dimensional coordinates of each target feature point;
    correcting two-dimensional vertex coordinates of an original image of the projector according to the normal vector of the projection plane and current pose information of the projector, to obtain two-dimensional vertex coordinates of a corrected original image; and
    controlling the projector to project according to the two-dimensional vertex coordinates of the corrected original image.
  2. The method according to claim 1, characterized in that the preset image is a checkerboard image, and identifying the target feature points of the preset image comprises:
    determining initial feature points in the checkerboard image by using a corner detection algorithm;
    performing straight-line fitting, in a vertical direction and in a horizontal direction respectively, on initial feature points located within a same preset region; and
    taking an intersection of any fitted straight line in the vertical direction with any fitted straight line in the horizontal direction as a target feature point.
  3. The method according to claim 1, characterized in that, for each target feature point, determining the depth information of the target feature point in the shooting space of the camera according to the mapping relationship pre-calibrated for the target feature point and the camera coordinates of the target feature point on the captured image comprises:
    for each target feature point, calculating the depth information of the target feature point in the shooting space of the camera according to the mapping relationship pre-calibrated for the target feature point and the camera coordinates of the target feature point on the captured image, wherein the mapping relationship is:
    Figure PCTCN2021115160-appb-100001 (formula rendered as an image in the original publication)
    wherein h is the depth information of the target feature point, p1 is a first preset calibration parameter of the target feature point, p2 is a second preset calibration parameter of the target feature point, X is the camera coordinate of the target feature point, and both the first preset calibration parameter and the second preset calibration parameter are constants.
  4. The method according to claim 3, characterized in that, for each target feature point, the first preset calibration parameter and the second preset calibration parameter of the target feature point are obtained in the following manner:
    when the projector is at a first distance from the projection plane and projection light of the projector is perpendicular to the projection plane, projecting the preset image onto the projection plane, and photographing the preset image through the camera of the projector to obtain a first image;
    determining, based on the first image and the first distance, first camera coordinates and first depth information of the target feature point on the first image;
    when the projector is at a second distance from the projection plane and the projection light of the projector is perpendicular to the projection plane, projecting the preset image onto the projection plane, and photographing the preset image through the camera of the projector to obtain a second image;
    determining, based on the second image and the second distance, second camera coordinates and second depth information of the target feature point on the preset image;
    obtaining the first preset calibration parameter and the second preset calibration parameter according to the first camera coordinates and the first depth information as well as the second camera coordinates and the second depth information, in combination with a pre-established mapping relationship between the depth information of the target feature point and the camera coordinates, wherein the mapping relationship is:
    Figure PCTCN2021115160-appb-100002 (formula rendered as an image in the original publication)
    wherein h is the depth coordinate of the target feature point, X is the camera coordinate, p1 is the first preset calibration parameter, and p2 is the second preset calibration parameter.
  5. The method according to claim 1, characterized in that correcting the two-dimensional vertex coordinates of the original image of the projector according to the normal vector of the projection plane and the current pose information of the projector to obtain the two-dimensional vertex coordinates of the corrected original image comprises:
    calculating offset information of the normal vector of the projector according to the normal vector of the projection plane and the current pose information of the projector, wherein the offset information comprises a yaw angle, a pitch angle and a roll angle;
    calculating, based on the offset information, two-dimensional vertex coordinates of a projected image projected by the projector onto the projection plane;
    establishing a homography matrix relationship between the projected image and the original image based on the two-dimensional vertex coordinates of the projected image and the two-dimensional vertex coordinates of the original image of the projector;
    selecting a target rectangle from the projected image of the projector, and determining two-dimensional vertex coordinates of the target rectangle; and
    obtaining the two-dimensional vertex coordinates of the corrected original image based on the two-dimensional vertex coordinates of the target rectangle in combination with the homography matrix relationship.
  6. The method according to claim 5, characterized in that selecting the target rectangle from the projected image of the projector comprises:
    arbitrarily selecting a point on any side of the projected image, and generating rectangles within a region of the projected image with the point as a vertex of a rectangle to be constructed and with an aspect ratio of the original image as an aspect ratio of the rectangle to be constructed; and
    selecting a rectangle with a largest area among the generated rectangles as the target rectangle.
  7. A projection correction apparatus, characterized in that the apparatus comprises:
    a response module, configured to control a projector to project a preset image onto a projection plane in response to a received correction instruction;
    a photographing module, configured to photograph, through a camera of the projector, the preset image projected by the projector to obtain a captured image;
    a target feature point determination module, configured to identify target feature points of the preset image in the captured image;
    a three-dimensional coordinate determination module, configured to determine, for each target feature point, depth information of the target feature point in a shooting space of the camera according to a mapping relationship pre-calibrated for the target feature point and camera coordinates of the target feature point on the captured image, so as to obtain three-dimensional coordinates of the target feature point in a projection space of the projector, wherein the mapping relationship is an association between the depth information of the target feature point calibrated at different depths and an offset of the camera coordinates;
    a normal vector module, configured to determine a normal vector of the projection plane relative to the projector according to the three-dimensional coordinates of each target feature point;
    a correction module, configured to correct two-dimensional vertex coordinates of an original image of the projector according to the normal vector of the projection plane and current pose information of the projector, to obtain two-dimensional vertex coordinates of a corrected original image; and
    a projection module, configured to control the projector to project according to the two-dimensional vertex coordinates of the corrected original image.
  8. The apparatus according to claim 7, characterized in that the preset image is a checkerboard image, and the response module comprises:
    an instruction response sub-module, configured to determine initial feature points in the checkerboard image by using a corner detection algorithm;
    a line fitting sub-module, configured to perform straight-line fitting, in a vertical direction and in a horizontal direction respectively, on initial feature points located within a same preset region; and
    a feature point determination sub-module, configured to take an intersection of any fitted straight line in the vertical direction with any fitted straight line in the horizontal direction as a target feature point.
  9. A computer-readable storage medium on which a computer program is stored, characterized in that, when the program is executed by a processing device, the steps of the method according to any one of claims 1-6 are implemented.
  10. An electronic device, characterized by comprising:
    a memory on which a computer program is stored; and
    a processor, configured to execute the computer program in the memory to implement the steps of the method according to any one of claims 1-6.
PCT/CN2021/115160 2021-03-19 2021-08-27 Projection correction method and apparatus, storage medium, and electronic device WO2022193559A1 (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN202110297235.1A CN112689135B (en) 2021-03-19 2021-03-19 Projection correction method, projection correction device, storage medium and electronic equipment
CN202110297235.1 2021-03-19

Publications (1)

Publication Number Publication Date
WO2022193559A1 true WO2022193559A1 (en) 2022-09-22

Family

ID=75455702

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2021/115160 WO2022193559A1 (en) 2021-03-19 2021-08-27 Projection correction method and apparatus, storage medium, and electronic device

Country Status (2)

Country Link
CN (1) CN112689135B (en)
WO (1) WO2022193559A1 (en)

Families Citing this family (19)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112689135B (en) * 2021-03-19 2021-07-02 深圳市火乐科技发展有限公司 Projection correction method, projection correction device, storage medium and electronic equipment
CN112804507B (en) * 2021-03-19 2021-08-31 深圳市火乐科技发展有限公司 Projector correction method, projector correction system, storage medium, and electronic device
CN112804508B (en) * 2021-03-19 2021-08-31 深圳市火乐科技发展有限公司 Projector correction method, projector correction system, storage medium, and electronic device
WO2022267027A1 (en) * 2021-06-25 2022-12-29 闻泰科技(深圳)有限公司 Image correction method and apparatus, and electronic device and storage medium
CN113983951B (en) * 2021-09-10 2024-03-29 深圳市辰卓科技有限公司 Three-dimensional target measuring method, device, imager and storage medium
CN113645456B (en) * 2021-09-22 2023-11-07 业成科技(成都)有限公司 Projection image correction method, projection system and readable storage medium
CN113824939B (en) * 2021-09-29 2024-05-28 深圳市火乐科技发展有限公司 Projection image adjusting method, device, projection equipment and storage medium
CN114157848A (en) * 2021-12-01 2022-03-08 深圳市火乐科技发展有限公司 Projection equipment correction method and device, storage medium and projection equipment
CN114383812A (en) * 2022-01-17 2022-04-22 深圳市火乐科技发展有限公司 Method and device for detecting stability of sensor, electronic equipment and medium
CN114449249B (en) * 2022-01-29 2024-02-09 深圳市火乐科技发展有限公司 Image projection method, image projection device, storage medium and projection apparatus
CN114615478B (en) * 2022-02-28 2023-12-01 青岛信芯微电子科技股份有限公司 Projection screen correction method, projection screen correction system, projection apparatus, and storage medium
CN115086625B (en) * 2022-05-12 2024-03-15 峰米(重庆)创新科技有限公司 Correction method, device and system for projection picture, correction equipment and projection equipment
CN115002345B (en) * 2022-05-13 2024-02-13 北京字节跳动网络技术有限公司 Image correction method, device, electronic equipment and storage medium
CN115103169B (en) * 2022-06-10 2024-02-09 深圳市火乐科技发展有限公司 Projection picture correction method, projection picture correction device, storage medium and projection device
CN115314689A (en) * 2022-08-05 2022-11-08 深圳海翼智新科技有限公司 Projection correction method, projection correction device, projector and computer program product
CN115442584B (en) * 2022-08-30 2023-08-18 中国传媒大学 Multi-sensor fusion type special-shaped surface dynamic projection method
CN116033131B (en) * 2022-12-29 2024-05-17 深圳创维数字技术有限公司 Image correction method, device, electronic equipment and readable storage medium
CN116433476B (en) * 2023-06-09 2023-09-08 有方(合肥)医疗科技有限公司 CT image processing method and device
CN117066702B (en) * 2023-08-25 2024-04-19 上海频准激光科技有限公司 Laser marking control system based on laser

Citations (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20140340529A1 (en) * 2011-09-15 2014-11-20 Nec Corporation Automatic Projection Image Correction System, Automatic Projection Image Correction Method, and Non-Transitory Storage Medium
US20140379114A1 (en) * 2013-06-25 2014-12-25 Roland Dg Corporation Projection image correction system and projection image correction method
CN104869377A (en) * 2012-03-14 2015-08-26 海信集团有限公司 Method for correcting colors of projected images and projector
CN107749979A (en) * 2017-09-20 2018-03-02 神画科技(深圳)有限公司 A kind of projector or so trapezoidal distortion correction method
CN110099267A (en) * 2019-05-27 2019-08-06 广州市讯码通讯科技有限公司 Trapezoidal correcting system, method and projector
CN110336987A (en) * 2019-04-03 2019-10-15 北京小鸟听听科技有限公司 A kind of projector distortion correction method, device and projector
CN111093067A (en) * 2019-12-31 2020-05-01 歌尔股份有限公司 Projection apparatus, lens distortion correction method, distortion correction device, and storage medium
CN112422939A (en) * 2021-01-25 2021-02-26 深圳市橙子数字科技有限公司 Trapezoidal correction method and device for projection equipment, projection equipment and medium
CN112689135A (en) * 2021-03-19 2021-04-20 深圳市火乐科技发展有限公司 Projection correction method, projection correction device, storage medium and electronic equipment
CN112804507A (en) * 2021-03-19 2021-05-14 深圳市火乐科技发展有限公司 Projector correction method, projector correction system, storage medium, and electronic device

Family Cites Families (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
TWI592020B (en) * 2016-08-23 2017-07-11 國立臺灣科技大學 Image correction method of projector and image correction system
CN107147888B (en) * 2017-05-16 2020-06-02 深圳市火乐科技发展有限公司 Method and device for automatically correcting distortion by utilizing graphics processing chip
CN110769217A (en) * 2018-10-10 2020-02-07 成都极米科技股份有限公司 Image processing method, projection apparatus, and photographing apparatus

Patent Citations (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20140340529A1 (en) * 2011-09-15 2014-11-20 Nec Corporation Automatic Projection Image Correction System, Automatic Projection Image Correction Method, and Non-Transitory Storage Medium
CN104869377A (en) * 2012-03-14 2015-08-26 海信集团有限公司 Method for correcting colors of projected images and projector
US20140379114A1 (en) * 2013-06-25 2014-12-25 Roland Dg Corporation Projection image correction system and projection image correction method
CN107749979A (en) * 2017-09-20 2018-03-02 神画科技(深圳)有限公司 A kind of projector or so trapezoidal distortion correction method
CN110336987A (en) * 2019-04-03 2019-10-15 北京小鸟听听科技有限公司 A kind of projector distortion correction method, device and projector
CN110099267A (en) * 2019-05-27 2019-08-06 广州市讯码通讯科技有限公司 Trapezoidal correcting system, method and projector
CN111093067A (en) * 2019-12-31 2020-05-01 歌尔股份有限公司 Projection apparatus, lens distortion correction method, distortion correction device, and storage medium
CN112422939A (en) * 2021-01-25 2021-02-26 深圳市橙子数字科技有限公司 Trapezoidal correction method and device for projection equipment, projection equipment and medium
CN112689135A (en) * 2021-03-19 2021-04-20 深圳市火乐科技发展有限公司 Projection correction method, projection correction device, storage medium and electronic equipment
CN112804507A (en) * 2021-03-19 2021-05-14 深圳市火乐科技发展有限公司 Projector correction method, projector correction system, storage medium, and electronic device

Also Published As

Publication number Publication date
CN112689135B (en) 2021-07-02
CN112689135A (en) 2021-04-20

Similar Documents

Publication Publication Date Title
WO2022193559A1 (en) Projection correction method and apparatus, storage medium, and electronic device
WO2021103347A1 (en) Projector keystone correction method, apparatus, and system, and readable storage medium
WO2022193560A1 (en) Projector correction method and system, and storage medium and electronic device
WO2022193558A1 (en) Projector correction method and system, and storage medium and electronic device
CN110099266B (en) Projector picture correction method and device and projector
US7899270B2 (en) Method and apparatus for providing panoramic view with geometric correction
TWI719493B (en) Image projection system, image projection apparatus and calibrating method for display image thereof
JP5596972B2 (en) Control device and control method of imaging apparatus
WO2021031781A1 (en) Method and device for calibrating projection image and projection device
WO2021082264A1 (en) Projection image automatic correction method and system based on binocular vision
CN112272292B (en) Projection correction method, apparatus and storage medium
CN114727081A (en) Projector projection correction method and device and projector
WO2012163259A1 (en) Method and apparatus for adjusting video conference system
JP2011176629A (en) Controller and projection type video display device
JP6990694B2 (en) Projector, data creation method for mapping, program and projection mapping system
CN114286068B (en) Focusing method, focusing device, storage medium and projection equipment
CN111131801B (en) Projector correction system and method and projector
CN108718404B (en) Image correction method and image correction system
TWI766206B (en) Method for correcting distortion image and apparatus thereof
CN114339179B (en) Projection correction method, apparatus, storage medium and projection device
TWI688274B (en) Image calibration method and image calibration system
WO2024080234A1 (en) Projection device, correction device, projection system, correction method, and computer program
JP7463133B2 (en) Area measuring device, area measuring method, and program
WO2023066331A1 (en) Automatic calibration method, and device, system and computer-readable storage medium
US11176644B2 (en) Keystone corrections with quadrilateral objects

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 21931134

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

32PN Ep: public notification in the ep bulletin as address of the adressee cannot be established

Free format text: NOTING OF LOSS OF RIGHTS PURSUANT TO RULE 112(1) EPC (EPO FORM 1205A DATED 12.02.2024)