WO2022193739A1 - Projection image adjustment method and apparatus, storage medium, and electronic device - Google Patents

Projection image adjustment method and apparatus, storage medium, and electronic device

Info

Publication number
WO2022193739A1
Authority
WO
WIPO (PCT)
Prior art keywords
coordinate system
target
image
feature point
mapping relationship
Prior art date
Application number
PCT/CN2021/135440
Other languages
English (en)
French (fr)
Inventor
孙世攀
张聪
胡震宇
Original Assignee
深圳市火乐科技发展有限公司
Priority date
Filing date
Publication date
Application filed by 深圳市火乐科技发展有限公司
Publication of WO2022193739A1 publication Critical patent/WO2022193739A1/zh

Classifications

    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N9/00Details of colour television systems
    • H04N9/12Picture reproducers
    • H04N9/31Projection devices for colour picture display, e.g. using electronic spatial light modulators [ESLM]
    • H04N9/3179Video signal processing therefor
    • H04N9/3185Geometric adjustment, e.g. keystone or convergence

Definitions

  • the present disclosure relates to the field of projection technology, and in particular, to a projection image adjustment method, device, storage medium, and electronic device.
  • a projector is a device that can project images or videos onto a screen. It can be connected to devices such as computers, game consoles, and storage devices through different interfaces to play corresponding video content.
  • the size of the projected image is positively related to the distance between the projector and the wall.
  • an ultra-short-throw projector can project a picture with a diagonal of 250 centimeters at a distance of 50 centimeters from the wall, that is, it can project a larger projection picture in a limited space.
  • however, if the angle of the ultra-short-throw projector changes by 1° during installation or use, it will cause a huge shift in the projected image. Therefore, for ultra-short-throw projectors, it becomes difficult to manually correct the image position, which affects the user experience.
  • the purpose of the present disclosure is to provide a projection image adjustment method, device, storage medium and electronic device to solve the above-mentioned related technical problems.
  • a projection image adjustment method including:
  • acquiring target parameters, where the target parameters are used to describe the relationship between the first coordinate system where the projected image from the camera's perspective is located and the second coordinate system where the source image is located, and the projected image is an image formed by a projection device projecting the source image onto the projection area;
  • the correlation parameter between the first coordinate system and the user perspective coordinate system where the preset user perspective plane is located is obtained based on the first mapping relationship, and the correlation parameter includes a normal vector of the camera relative to the user perspective plane, a rotation matrix between the first coordinate system and the preset user perspective coordinate system, and a translation vector between the first coordinate system and the preset user perspective coordinate system;
  • the source image is adjusted according to the third mapping relationship, and the adjusted source image is projected to the projection area.
  • the obtaining target parameters includes:
  • the recognition model is trained by image samples including feature point annotation information
  • the calculating, according to the target parameter, the first mapping relationship between the first coordinate system and the second coordinate system includes:
  • the first mapping relationship is calculated based on the target feature point pair.
  • the determining a target feature point pair from the plurality of feature point pairs includes:
  • the feature points in the preset area in the target image are used as the target feature points of the target image, wherein the preset area is an area on the target image away from the distorted edge of the target image;
  • the target feature point and the feature point corresponding to the target feature point on the source image are used as the target feature point pair.
  • the determining a target feature point pair from the plurality of feature point pairs includes:
  • the target feature point and the feature point corresponding to the target feature point on the source image are used as the target feature point pair.
  • the target parameters include parameters of the camera, a rotation matrix between the first coordinate system and the second coordinate system, a translation vector between the first coordinate system and the second coordinate system, the normal vector of the wall where the projection area is located, and the intercept between the projection device and the wall; correspondingly, calculating the first mapping relationship between the first coordinate system and the second coordinate system according to the target parameter includes:
  • the first mapping relationship is calculated by the following formula (reconstructed here in the standard plane-induced homography form, since the equation itself was not reproduced in the extraction): H = K(R + t·nᵀ/d)K⁻¹, where:
  • H is a matrix describing the first mapping relationship
  • K is a parameter of the camera
  • R is a rotation matrix between the first coordinate system and the second coordinate system
  • t is a translation vector between the first coordinate system and the second coordinate system
  • n is the normal vector of the camera relative to the wall where the projection area is located
  • d is the intercept between the projection device and the wall.
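The variable list above fits the standard plane-induced homography. A minimal numpy sketch, assuming the form H = K(R + t·nᵀ/d)K⁻¹ (an assumption, since the published text omits the equation itself):

```python
import numpy as np

# Plane-induced homography between two views of a plane (assumed form):
#   H = K (R + t n^T / d) K^-1
def plane_homography(K, R, t, n, d):
    """K: 3x3 camera matrix; R: 3x3 rotation; t: translation vector;
    n: plane normal; d: intercept (distance) of the plane."""
    K = np.asarray(K, dtype=float)
    H = K @ (np.asarray(R, dtype=float) + np.outer(t, n) / d) @ np.linalg.inv(K)
    return H / H[2, 2]  # scale so the bottom-right entry is 1

# Sanity check: with no rotation or translation the mapping is the identity.
K = np.array([[800.0, 0.0, 320.0], [0.0, 800.0, 240.0], [0.0, 0.0, 1.0]])
H = plane_homography(K, np.eye(3), np.zeros(3), np.array([0.0, 0.0, 1.0]), 2.0)
```

The camera matrix and plane distance here are illustrative values, not calibration results from the patent.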
  • the target parameters include parameters of the camera, a rotation matrix between the first coordinate system and the second coordinate system, a translation vector between the first coordinate system and the second coordinate system, an intercept between the projection device and the wall, and a plurality of feature point pairs between the target image and the source image;
  • the calculating the first mapping relationship between the first coordinate system and the second coordinate system according to the target parameter specifically includes:
  • for each normal vector, the rotation matrix between the first coordinate system and the second coordinate system, the translation vector between the first coordinate system and the second coordinate system, the intercept between the projection device and the wall, and the parameters of the camera are used to calculate a candidate first mapping relationship corresponding to the normal vector;
  • the first mapping relationship is determined from the candidate first mapping relationships.
  • the determining the first mapping relationship from the candidate first mapping relationship includes:
  • for each candidate first mapping relationship, the sum of the absolute values of the differences between the matrix corresponding to the candidate first mapping relationship and the matrices corresponding to the other candidate first mapping relationships is calculated;
  • the candidate first mapping relationship with the smallest sum value is used as the first mapping relationship.
  • a projection image adjustment device comprising:
  • the first acquisition module is used to acquire target parameters, and the target parameters are used to describe the relationship between the first coordinate system where the projected image from the camera perspective is located and the second coordinate system where the source image is located, the projected image being the image formed by the projection device projecting the source image onto the projection area;
  • a first calculation module configured to calculate a first mapping relationship between the first coordinate system and the second coordinate system according to the target parameter
  • a second acquiring module configured to acquire, based on the first mapping relationship, a correlation parameter between the first coordinate system and the user perspective coordinate system where the preset user perspective plane is located, where the correlation parameter includes the normal vector of the camera relative to the user perspective plane, the rotation matrix between the first coordinate system and the preset user perspective coordinate system, and the translation vector between the first coordinate system and the preset user perspective coordinate system;
  • a second calculation module configured to calculate a second mapping relationship between the first coordinate system and the user viewing angle coordinate system according to the associated parameter
  • a determining module configured to determine a third mapping relationship between the second coordinate system and the user perspective coordinate system based on the first mapping relationship and the second mapping relationship;
  • An execution module configured to adjust the source image according to the third mapping relationship, and project the adjusted source image to the projection area.
  • a computer-readable storage medium on which a computer program is stored, and when the program is executed by a processor, implements the steps of the method in any one of the foregoing first aspects.
  • an electronic device comprising:
  • a processor configured to execute the computer program in the memory, to implement the steps of the method in any one of the above-mentioned first aspects.
  • a first mapping relationship between the first coordinate system and the second coordinate system where the source image is located can be calculated based on the target parameters, and based on the first mapping relationship, a correlation parameter between the first coordinate system and the user viewing angle coordinate system where the preset user viewing angle plane is located is calculated.
  • a second mapping relationship between the first coordinate system and the user viewing angle coordinate system may be calculated based on the associated parameters, and based on the first mapping relationship and the second mapping relationship, the third mapping relationship between the second coordinate system and the user viewing angle coordinate system may be determined, that is, the mapping relationship between the projection source image and the image under the user's perspective. That is to say, the source image can be adjusted through the third mapping relationship, so as to obtain a desired projection image.
  • a trapezoidal projection picture may be adjusted to a rectangular projection picture based on the third mapping relationship, thereby realizing keystone correction of the projection picture.
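The chaining of the three mappings can be sketched numerically. The direction conventions below are assumptions (the patent does not state them): H1 takes camera-view coordinates to source coordinates, H2 takes camera-view coordinates to user-view coordinates, so the third mapping from source to user view composes H2 with the inverse of H1:

```python
import numpy as np

# Hypothetical composition: H1 maps camera view -> source coordinates,
# H2 maps camera view -> user-view coordinates, so source -> user view
# is H2 applied after inv(H1).
def third_mapping(H1, H2):
    H3 = np.asarray(H2, dtype=float) @ np.linalg.inv(np.asarray(H1, dtype=float))
    return H3 / H3[2, 2]

# Sanity check: identical first and second mappings cancel to the identity.
H = np.array([[1.2, 0.1, 5.0], [0.0, 0.9, -3.0], [0.0, 0.0, 1.0]])
H3 = third_mapping(H, H)
```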
  • FIG. 1 is a flowchart of a method for adjusting a projection image according to an exemplary embodiment of the present disclosure.
  • FIG. 2 is a flowchart of a method for adjusting a projection image according to an exemplary embodiment of the present disclosure.
  • FIG. 3 is a schematic diagram of a target image according to an exemplary embodiment of the present disclosure.
  • FIG. 4 is a flowchart of a method for adjusting a projection image according to an exemplary embodiment of the present disclosure.
  • FIG. 5 is a block diagram of a projection image adjustment apparatus according to an exemplary embodiment of the present disclosure.
  • FIG. 6 is a block diagram of an electronic device according to an exemplary embodiment of the present disclosure.
  • the application scenario of the present disclosure is first introduced.
  • the embodiments provided by the present disclosure can be applied to the adjustment scenarios of projection images.
  • the projection image may be, for example, an image formed by projecting the source image onto the projection area by various types of projection devices.
  • the ultra-short-throw projector can project a picture with a diagonal of 250 centimeters at a distance of 50 centimeters from the wall, that is, it can project a larger projection picture in a limited space.
  • the projected image of the ultra-short-throw projector is also easily affected by positional offset. For example, in the case of an angle change of 1°, the projected image may also be offset by 6 cm. Therefore, for ultra-short-throw projectors, it becomes difficult to manually correct the image position, which affects the user experience.
  • FIG. 1 is a flowchart of a method for adjusting a projected image according to an exemplary embodiment of the present disclosure, and the method includes:
  • step S11 target parameters are acquired.
  • the target parameter is used to describe the relationship between the first coordinate system where the projected image from the camera perspective is located and the second coordinate system where the source image is located, and the projected image is an image formed by a projection device (for example, an ultra-short-throw projection device) projecting the source image onto the projection area.
  • the first coordinate system may be established for the projected image from the perspective of the camera, and the second coordinate system may be established based on the source image.
  • the target parameters include parameters of the camera, such as correction coefficients, focal lengths, and the like.
  • the target parameters also include a rotation matrix between the first coordinate system and the second coordinate system, a translation vector between the first coordinate system and the second coordinate system, and a wall where the projection area is located. The normal vector of the surface, and the intercept between the projection device and the wall.
  • in the normal vector of the wall, the parameter y may be related to the pitch angle of the projection device, and z may be related to the roll angle of the projection device.
  • the parameters y and z in the normal vector can be acquired through an IMU (Inertial Measurement Unit, inertial measurement unit).
  • one or more feature point pairs between the projected image and the source image may also be acquired based on a camera or a TOF (Time Of Flight) sensor.
  • the feature point pair may include a pixel point A on the projected image and a pixel point B on the source image, where the pixel point A and the pixel point B represent the position of the same object in the projected image and in the source image, respectively.
  • the parameter x can be calculated based on the feature point pair, and then the normal vector of the camera relative to the wall where the projection area is located is obtained.
  • a rotation matrix between the first coordinate system and the second coordinate system may be obtained by calculation based on the normal vector.
  • the translation vector between the first coordinate system and the second coordinate system can be obtained by pre-calibration.
  • step S12 a first mapping relationship between the first coordinate system and the second coordinate system is calculated according to the target parameter.
  • the first mapping relationship can be calculated by the following formula (reconstructed here in the standard plane-induced homography form): H = K(R + t·nᵀ/d)K⁻¹, where:
  • H is a matrix describing the first mapping relationship
  • K is a parameter of the camera
  • R is a rotation matrix between the first coordinate system and the second coordinate system
  • t is a translation vector between the first coordinate system and the second coordinate system
  • n is the normal vector of the camera relative to the wall where the projection area is located
  • d is the intercept between the projection device and the wall.
  • step S13 an association parameter between the first coordinate system and the user perspective coordinate system where the preset user perspective plane is located is obtained based on the first mapping relationship.
  • the user viewing angle plane may be an assumed plane of the projected image viewed by the user.
  • the user's viewing angle plane may be a plane parallel to the plane where the projection area is located.
  • the user viewing angle plane may be a plane parallel to the wall at a preset distance (eg, two meters, three meters, etc.) from the wall.
  • a coordinate system may also be established for the user perspective plane, that is, the user perspective coordinate system.
  • the associated parameters may include a normal vector of the camera relative to the user perspective plane, a rotation matrix between the first coordinate system and the preset user perspective coordinate system, and a translation vector between the first coordinate system and the preset user perspective coordinate system. It should be understood that since the user's viewing angle plane and the plane where the projection area is located are parallel to each other, the normal vector of the camera relative to the user's viewing angle plane and the normal vector of the camera relative to the wall where the projection area is located may be the same.
  • a rotation matrix between the first coordinate system and the preset user's viewing angle coordinate system may be calculated based on the normal vector.
  • the translation vector between the first coordinate system and the preset user viewing angle coordinate system can be obtained by pre-calibration.
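One common way to realize "calculating a rotation matrix from the normal vector" is to rotate the camera's optical axis onto the plane normal with Rodrigues' formula. This is a sketch of an assumed approach for illustration, not necessarily the patent's exact computation:

```python
import numpy as np

# Rotation matrix that turns the camera's z-axis onto a given plane normal
# (Rodrigues' rotation formula; an assumed construction).
def rotation_from_normal(n):
    n = np.asarray(n, dtype=float)
    n = n / np.linalg.norm(n)
    z = np.array([0.0, 0.0, 1.0])
    v = np.cross(z, n)            # rotation axis, scaled by sin(angle)
    c = float(np.dot(z, n))       # cos(angle)
    if np.allclose(v, 0.0):       # normal already along +/- z
        return np.eye(3) if c > 0 else np.diag([1.0, -1.0, -1.0])
    vx = np.array([[0.0, -v[2], v[1]],
                   [v[2], 0.0, -v[0]],
                   [-v[1], v[0], 0.0]])  # cross-product (skew) matrix [v]x
    # Rodrigues: R = I + [v]x + [v]x^2 * (1 - c) / |v|^2, and |v|^2 = 1 - c^2
    return np.eye(3) + vx + vx @ vx / (1.0 + c)

R = rotation_from_normal([0.1, 0.2, 1.0])
```

The returned matrix is orthonormal and maps the z-axis exactly onto the normalized input normal.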
  • step S14 a second mapping relationship between the first coordinate system and the user viewing angle coordinate system is calculated according to the associated parameter.
  • the second mapping relationship can be calculated by the following formula (reconstructed in the same form as the first): H' = K(R' + t'·n'ᵀ/d')K⁻¹, where:
  • H' is a matrix describing the second mapping relationship
  • K is a parameter of the camera
  • R' is a rotation matrix between the first coordinate system and the preset user viewing angle coordinate system
  • t' is a translation vector between the first coordinate system and the preset user viewing angle coordinate system
  • n' is the normal vector of the camera relative to the user viewing angle plane
  • d' is the intercept between the projection device and the wall.
  • in step S15, a third mapping relationship between the second coordinate system and the user viewing angle coordinate system may be determined based on the first mapping relationship and the second mapping relationship. It should be understood that the source image is in the second coordinate system, and the image from the user's perspective is in the user perspective coordinate system. Therefore, the above step S15 may refer to obtaining the mapping relationship between the source image of the projection device and the image from the user's perspective.
  • step S16 the source image is adjusted according to the third mapping relationship, and the adjusted source image is projected to the projection area.
  • the user's viewing angle plane is the assumed plane of the projected image viewed by the user, and since it is parallel to the plane of the projection area, a difference in depth only produces a similar (scaled) change in shape. Therefore, if the projected image projected onto the user's viewing angle plane is a rectangle, the projected image formed in the projection area is still a rectangle.
  • a rectangular projection picture may be assumed in the user viewing angle coordinate system.
  • the source image can be adjusted based on the third mapping relationship, so that the projection image formed by the source image on the user's viewing angle plane is a rectangle; in this way, the projection picture formed by the source image in the projection area is also a rectangle, which finally achieves the effect of keystone correction and improves the convenience of use of the projection device.
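The keystone idea can be illustrated with plain point arithmetic: pre-warp the source corners by the inverse of the third mapping so that, once the distortion acts on them again, the viewer sees the original rectangle. The H3 matrix below is a made-up shear for illustration, not a real calibration result:

```python
import numpy as np

# Map 2D points through a 3x3 homography using homogeneous coordinates.
def apply_h(H, pts):
    pts = np.asarray(pts, dtype=float)
    homog = np.hstack([pts, np.ones((len(pts), 1))]) @ np.asarray(H, dtype=float).T
    return homog[:, :2] / homog[:, 2:3]

H3 = np.array([[1.0, 0.1, 0.0],   # hypothetical source -> user-view
               [0.0, 1.0, 0.0],   # distortion (a slight shear)
               [0.0, 0.0, 1.0]])
corners = [[0, 0], [1920, 0], [1920, 1080], [0, 1080]]
prewarped = apply_h(np.linalg.inv(H3), corners)  # adjusted source corners
seen = apply_h(H3, prewarped)                    # what the viewer sees
```

After pre-warping, projecting the adjusted corners through the same distortion recovers the original rectangle.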
  • the third mapping relationship can be calculated by using modules such as TOF and IMU, that is, the third mapping relationship can be calculated without using an image captured by a camera. Therefore, even if the camera module fails, the projected image can still be adjusted by using the solution of this embodiment.
  • FIG. 2 is a flowchart of a projection image adjustment method shown in an exemplary embodiment of the present disclosure. As shown in FIG. 2 , the method is based on FIG. 1 , and the step S11 includes:
  • S111 Acquire a target image obtained by photographing a projection area by a camera, where the projection area displays a projection image.
  • the feature point recognition model may be obtained by training image samples including feature point annotation information.
  • the feature point recognition model can be constructed based on the SuperPoint algorithm, and feature point pairs are marked on the source image and on the target image obtained by photographing the projection area (displaying the projection image formed by projecting the source image) with the camera, so that the source image and the target image can form a training sample.
  • multiple sets of training samples can be input into the model to be trained, and the feature point pair recognition results of the model can be obtained.
  • the parameters of the model can be adjusted until the model converges, and finally the feature point recognition model is obtained.
  • the feature point recognition model may also be constructed based on other feature point matching algorithms of the same type, which is not limited in the present disclosure.
  • the target image and the source image may be input into the feature point recognition model to obtain the plurality of feature point pairs between the target image and the source image output by the feature point recognition model.
  • a target feature point pair may be determined from the plurality of feature point pairs. It should be understood that the number of feature point pairs identified by the feature point model may be multiple, so a target feature point pair may be determined from the multiple feature point pairs to facilitate the calculation of the first mapping relationship.
  • the projection area may include a projection screen (eg, a curtain).
  • the projection screen may not fit flush with the wall (for example, there may be a certain distance between a hanging curtain and the wall); in this case, there may be an error in the optical path between a light spot that falls on the projection screen and a light spot in the projection area that falls on the wall rather than on the screen.
  • the determining a target feature point pair from the plurality of feature point pairs may include:
  • the line segment information in the target image may be extracted based on a line segment detection algorithm such as LSD and Hough transform.
  • the line segments in the target image are clustered. It should be understood that by clustering the line segments in the target image, multiple line segment clusters can be obtained, and each cluster can correspond to one type of line segment. For example, the four edges of the projection screen can correspond to four clusters.
  • edge line segments for characterizing the boundary of the projection screen are determined from the line segments in the target image based on the clustering result. For example, in some implementation scenarios, the clusters may be combined, so that line segments from clusters that can be combined into a quadrilateral whose aspect ratio meets a preset ratio requirement (for example, an aspect ratio of 16:9) are used as the edge line segments of the projection screen.
  • the clustered line segments may also include edge line segments of the projected image.
  • since the difference in brightness between the two sides of an edge line segment of the projection screen is small, while the difference in brightness between the two sides of an edge line segment of the projected image is large, the line segment with the smaller brightness difference between its two sides can be used as an edge line segment of the projection screen.
  • the intersections of the edge line segments can be used as the vertices of the projection screen, and after obtaining the edge line segments of the projection screen and the vertices of the projection screen, the area enclosed by the projection screen can be determined.
  • the target feature point can be determined from the feature points located in the projection screen area, and the target feature point and the feature point corresponding to the target feature point on the source image can be used as the target feature point pair, so as to reduce the calculation error.
  • the determining a target feature point pair from the plurality of feature point pairs includes:
  • the feature points in the preset area in the target image are used as the target feature points of the target image, wherein the preset area is an area on the target image away from the distorted edge of the target image;
  • the target feature point and the feature point corresponding to the target feature point on the source image are used as the target feature point pair.
  • the projection device forms a projection image in the projection area by projecting light upward.
  • since the camera captures the target image from below, there may be corresponding distortion in the upper part of the target image.
  • the effective pixels in the area located in the lower part of the image in the target image are more than the effective pixels in the area located in the upper part of the image in the target image. Therefore, in this case, the preset area may refer to an area in the target image that is located within a preset distance range at the lower edge of the image, and the distance range may be set according to application requirements, which is not limited in the present disclosure .
  • FIG. 3 illustrates the above solution by taking the camera shooting the target image from below as an example, but those skilled in the art will appreciate that, in a specific implementation, the camera may also shoot the target image from top to bottom, from left to right, and so on. Therefore, the preset area is an area on the target image away from the distorted edge of the target image.
  • the selection accuracy of the feature point pair can be improved, thereby improving the calculation accuracy of the first mapping relationship.
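A minimal sketch of such a preset-area filter, assuming the distorted edge is the top of the image (camera shooting from below) so that only points in a band along the lower edge are kept; `band_ratio` is an illustrative parameter, not a value from the patent:

```python
import numpy as np

# Keep only feature points whose y-coordinate lies in the lower band of the
# image, i.e. away from the (assumed) distorted upper edge.
def filter_to_preset_area(points, img_height, band_ratio=0.5):
    pts = np.asarray(points, dtype=float)
    return pts[pts[:, 1] >= img_height * (1.0 - band_ratio)]

pts = [(100, 50), (200, 900), (300, 600), (400, 1000)]
kept = filter_to_preset_area(pts, img_height=1080, band_ratio=0.5)
```

With a 1080-pixel-tall image and a 50% band, only points with y >= 540 survive the filter.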
  • the first mapping relationship may be calculated based on the target feature point pair. For example, in some embodiments, 4 sets of feature point pairs may be selected, so that the first mapping relationship may be calculated based on the coordinate information of 8 points in the 4 sets of feature point pairs.
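The 4-pair computation described above is the classical Direct Linear Transform; the patent does not specify the solver, so the following is a self-contained sketch of one standard way to do it:

```python
import numpy as np

# Direct Linear Transform: estimate a 3x3 homography from 4 point pairs
# (8 coordinates total) by solving A h = 0 with SVD.
def homography_from_pairs(src, dst):
    A = []
    for (x, y), (u, v) in zip(src, dst):
        A.append([-x, -y, -1.0, 0.0, 0.0, 0.0, u * x, u * y, u])
        A.append([0.0, 0.0, 0.0, -x, -y, -1.0, v * x, v * y, v])
    _, _, Vt = np.linalg.svd(np.asarray(A, dtype=float))
    H = Vt[-1].reshape(3, 3)   # null-space vector holds the homography entries
    return H / H[2, 2]

src = [(0, 0), (1, 0), (1, 1), (0, 1)]
dst = [(0, 0), (2, 0), (2, 2), (0, 2)]   # a pure 2x scaling for checking
H = homography_from_pairs(src, dst)
```

For the scaling example above, the recovered homography is the scaling matrix itself (up to the usual normalization).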
  • step S13 the first mapping relationship may be decomposed to obtain the associated parameter.
  • steps S14 to S16 please refer to the above-mentioned description of the embodiment in FIG. 1 .
  • the present disclosure will not repeat them here.
  • the above technical solution can obtain the feature point pairs between the projected image and the source image based on a deep learning model, thereby reducing the false matching rate and increasing the number of matched feature points, which helps to improve the selection accuracy of the feature point pairs and finally improves the accuracy of the calculated first mapping relationship and the adjustment accuracy of the projected image.
  • FIG. 4 is a flowchart of a method for adjusting a projection image according to an exemplary embodiment of the present disclosure. As shown in FIG. 4 , the method includes:
  • the target parameter is used to describe the relationship between the first coordinate system where the projected image from the perspective of the camera is located and the second coordinate system where the source image is located, and the projected image is an image formed by the projection device projecting the source image onto the projection area.
  • the first coordinate system may be established for the projected image from the perspective of the camera, and the second coordinate system may be established based on the source image.
  • the target parameters include the parameters of the camera, the rotation matrix between the first coordinate system and the second coordinate system, the translation vector between the first coordinate system and the second coordinate system, and the intercept between the projection device and the wall.
  • the above parameters please refer to the description of the embodiment in FIG. 1 , which will not be repeated in the present disclosure.
  • the target parameter may also include a plurality of feature point pairs between the target image and the source image.
  • the target image and the source image may be input into a feature point recognition model to obtain a plurality of feature point pairs between the target image and the source image output by the feature point recognition model.
  • the feature point recognition model please refer to the above-mentioned description of the embodiment in FIG. 2 , which will not be repeated in the present disclosure.
  • for each feature point pair, the normal vector of the wall where the projection area is located can be obtained by calculation.
  • the candidate first mapping relationship corresponding to the normal vector can be calculated by the following formula (the same plane-induced homography form as above): H = K(R + t·nᵀ/d)K⁻¹, where:
  • H is the matrix describing the candidate first mapping relationship
  • K is the parameter of the camera
  • R is the rotation matrix between the first coordinate system and the second coordinate system
  • t is the translation vector between the first coordinate system and the second coordinate system
  • n is the normal vector of the camera relative to the wall where the projection area is located
  • d is the intercept between the projection device and the wall.
  • since the number of candidate first mapping relationships obtained by calculation may be more than one, iterative optimization may also be performed on the multiple candidate first mapping relationships so as to select the first mapping relationship.
  • the determining the first mapping relationship from the candidate first mapping relationship includes:
  • for each candidate first mapping relationship, the sum of the absolute values of the differences between the matrix corresponding to the candidate first mapping relationship and the matrices corresponding to the other candidate first mapping relationships is calculated, for example S_i = Σ_{j≠i} |H_i − H_j|, where H_i is the matrix corresponding to the i-th candidate first mapping relationship among the n candidate first mapping relationships;
  • the candidate first mapping relationship with the smallest sum value is used as the first mapping relationship, so that the calculated candidate first mapping relationships can be optimally selected in combination with the feature points identified by the model, and finally the first mapping relationship is obtained.
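The smallest-sum selection rule can be sketched directly; the three candidate matrices below are invented for illustration:

```python
import numpy as np

# Pick the candidate whose total absolute difference from all other
# candidates is smallest (a simple consensus criterion).
def select_first_mapping(candidates):
    Hs = [np.asarray(H, dtype=float) for H in candidates]
    sums = [sum(np.abs(Hi - Hj).sum() for Hj in Hs) for Hi in Hs]
    return Hs[int(np.argmin(sums))]

# Two agreeing candidates and one outlier: an agreeing candidate wins.
cands = [np.eye(3), np.eye(3), 5.0 * np.eye(3)]
best = select_first_mapping(cands)
```

Candidates that agree with the majority accumulate small difference sums, so the outlier homography is rejected.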
  • when the first mapping relationship is determined based on the candidate first mapping relationships, the first mapping relationship may also be iteratively selected based on methods such as Kalman filtering, which is not limited in the present disclosure.
  • a third mapping relationship between the second coordinate system and the user viewing angle coordinate system may be determined based on the first mapping relationship and the second mapping relationship.
  • S16 Adjust the source image according to the third mapping relationship, and project the adjusted source image to the projection area.
  • steps S13 to S16 please refer to the above-mentioned description of the embodiment of FIG. 1, and for the sake of brevity of the description, the present disclosure will not repeat them here.
  • the above technical solution can calculate candidate first mapping relationships from the feature points identified by the model, and obtain the first mapping relationship by making an optimal selection among the calculated candidates.
  • in this way, the calculation accuracy of the first mapping relationship can be improved, and thus the accuracy of the image adjustment.
  • based on the same inventive concept, the present disclosure also provides a projection image adjustment apparatus.
  • the apparatus 500 includes:
  • the first acquisition module 501 is used to acquire target parameters, which describe the relationship between the first coordinate system in which the projected image from the camera perspective is located and the second coordinate system in which the source image is located; the projected image is the image formed by the projection device projecting the source image onto the projection area;
  • a first calculation module 502 configured to calculate a first mapping relationship between the first coordinate system and the second coordinate system according to the target parameter
  • the second acquiring module 503 is configured to acquire, based on the first mapping relationship, correlation parameters between the first coordinate system and the user perspective coordinate system in which the preset user perspective plane is located, where the correlation parameters include the normal vector of the camera relative to the user's viewing angle plane, the rotation matrix between the first coordinate system and the preset user perspective coordinate system, and the translation vector between the first coordinate system and the preset user perspective coordinate system;
  • a second calculation module 504 configured to calculate a second mapping relationship between the first coordinate system and the user perspective coordinate system according to the associated parameter
  • a determining module 505 configured to determine a third mapping relationship between the second coordinate system and the user viewing angle coordinate system based on the first mapping relationship and the second mapping relationship;
  • the execution module 506 is configured to adjust the source image according to the third mapping relationship, and project the adjusted source image to the projection area.
  • with the above arrangement, the first mapping relationship between the first coordinate system and the second coordinate system in which the source image is located can be calculated based on the target parameters, and the correlation parameters between the first coordinate system and the user perspective coordinate system in which the preset user viewing angle plane is located can be obtained based on the first mapping relationship.
  • a second mapping relationship between the first coordinate system and the user perspective coordinate system can then be calculated from the correlation parameters, and a third mapping relationship between the second coordinate system and the user perspective coordinate system, that is, the mapping between the projected source image and the image from the user's perspective, can be determined based on the first and second mapping relationships. That is to say, the source image can be adjusted through the third mapping relationship, so as to obtain the desired projection picture.
  • for example, a trapezoidal projection picture may be adjusted into a rectangular projection picture based on the third mapping relationship, thereby realizing keystone correction of the projection picture.
  • the first obtaining module 501 includes:
  • a first acquisition sub-module configured to acquire a target image obtained by photographing a projection area by a camera, and the projection area displays a projection image
  • an input submodule configured to input the target image and the source image into a feature point recognition model, to obtain a plurality of feature point pairs between the target image and the source image output by the feature point recognition model;
  • the feature point recognition model is obtained by training image samples including feature point annotation information;
  • the first computing module 502 includes:
  • a first determination submodule used for determining a target feature point pair from the plurality of feature point pairs
  • a first calculation submodule configured to calculate the first mapping relationship based on the target feature point pair.
  • the first determination submodule includes:
  • the first execution subunit is configured to use the feature points within a preset area of the target image as the target feature points of the target image, according to the positions of the feature points in the target image, wherein the preset area is an area of the target image far from the edge of the target image where distortion occurs;
  • the second execution subunit is configured to use the target feature point and the feature point corresponding to the target feature point on the source image as the target feature point pair.
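The preset-area filtering above can be sketched as follows (an illustrative assumption: the preset area is represented as an axis-aligned rectangle on the target image, and each match is stored as a (target point, source point) pair):

```python
def filter_target_feature_pairs(point_pairs, preset_area):
    # point_pairs: [((x_t, y_t), (x_s, y_s)), ...] matches between the
    # target (captured) image and the source image.
    # preset_area: (x0, y0, x1, y1) rectangle on the target image that
    # lies far from the distorted edge.
    x0, y0, x1, y1 = preset_area
    return [(tp, sp) for tp, sp in point_pairs
            if x0 <= tp[0] <= x1 and y0 <= tp[1] <= y1]
```

Only pairs whose target-image point falls inside the preset area survive; the corresponding source-image points are kept with them, so the surviving pairs can be used directly for the homography computation.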
  • optionally, the first determination submodule includes:
  • an acquisition subunit for acquiring line segment information in the target image
  • a clustering subunit for clustering the line segments in the target image
  • a first determination subunit for determining, from the line segments in the target image based on the clustering result, an edge line segment for characterizing the boundary of the projection screen
  • the third execution subunit is used for taking the intersection between each edge line segment as the vertex of the projection screen
  • a second determination subunit used to determine the target feature point from the feature points located in the projection screen area
  • the fourth execution subunit is configured to use the target feature point, and the feature point on the source image corresponding to the target feature point, as the target feature point pair.
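Taking the intersections between edge line segments as the screen vertices can be sketched with homogeneous line coordinates (one common approach; the disclosure does not specify the exact computation):

```python
import numpy as np

def edge_intersection(p1, p2, p3, p4):
    # Vertex of the projection screen: intersection of the infinite
    # lines through segment (p1, p2) and segment (p3, p4), computed
    # with homogeneous cross products.
    l1 = np.cross([p1[0], p1[1], 1.0], [p2[0], p2[1], 1.0])
    l2 = np.cross([p3[0], p3[1], 1.0], [p4[0], p4[1], 1.0])
    x = np.cross(l1, l2)
    if abs(x[2]) < 1e-9:
        return None  # the two edges are (near-)parallel
    return (x[0] / x[2], x[1] / x[2])
```

Applying this to each pair of adjacent edge segments yields the four vertices of the quadrilateral screen region, inside which the target feature points are then selected.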
  • the target parameters include the parameters of the camera, the rotation matrix between the first coordinate system and the second coordinate system, the translation vector between the first coordinate system and the second coordinate system, the normal vector of the wall where the projection area is located, and the intercept between the projection device and the wall; correspondingly, the first calculation module 502 is used for:
  • the first mapping relationship is calculated by the following formula:
  • H = K (R − t·nᵀ / d) K⁻¹
  • where H is the matrix describing the first mapping relationship;
  • K is the parameter (intrinsic) matrix of the camera;
  • R is the rotation matrix between the first coordinate system and the second coordinate system;
  • t is the translation vector between the first coordinate system and the second coordinate system;
  • n is the normal vector of the camera relative to the wall where the projection area is located;
  • d is the intercept between the projection device and the wall.
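A minimal sketch of this computation, assuming the standard plane-induced homography form H = K(R − t·nᵀ/d)K⁻¹ with K as the 3×3 camera intrinsic matrix (variable names are illustrative):

```python
import numpy as np

def first_mapping(K, R, t, n, d):
    # Plane-induced homography between the two coordinate systems:
    # H = K (R - t * n^T / d) K^{-1}
    t = np.asarray(t, dtype=float).reshape(3, 1)  # translation, 3x1
    n = np.asarray(n, dtype=float).reshape(1, 3)  # plane normal, 1x3
    return K @ (R - (t @ n) / d) @ np.linalg.inv(K)
```

As a sanity check, with R equal to the identity and t equal to zero the formula collapses to K·K⁻¹, i.e. the identity mapping, as expected when the two views coincide.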
  • the target parameters include the parameters of the camera, the rotation matrix between the first coordinate system and the second coordinate system, the translation vector between the first coordinate system and the second coordinate system, the intercept between the projection device and the wall, and a plurality of feature point pairs between the target image and the source image;
  • the first computing module 502 includes:
  • the second calculation submodule is used to calculate, for each of the feature point pairs, the normal vector of the wall where the projection area is located according to the feature point pair;
  • the third calculation submodule is used, for each normal vector of the wall where the projection area is located, to calculate a candidate first mapping relationship corresponding to that normal vector according to the normal vector, the rotation matrix between the first coordinate system and the second coordinate system, the translation vector between the first coordinate system and the second coordinate system, the intercept between the projection device and the wall, and the parameters of the camera;
  • the second determination submodule is configured to determine the first mapping relationship from the candidate first mapping relationship.
  • the second determination submodule includes:
  • a first calculation subunit configured to calculate, for each candidate first mapping relationship, the absolute value of the difference between the matrix corresponding to the candidate first mapping relationship and the matrices corresponding to other candidate first mapping relationships;
  • a second calculation subunit configured to calculate the sum of absolute values corresponding to each of the candidate first mapping relationships
  • the fifth execution subunit uses the candidate first mapping relationship with the smallest sum value as the first mapping relationship.
  • the present disclosure also provides a computer-readable storage medium on which a computer program is stored, and when the program is executed by a processor, implements the steps of the method in any one of the foregoing embodiments.
  • the present disclosure also provides an electronic device, comprising:
  • a processor configured to execute the computer program in the memory, to implement the steps of the method in any one of the above embodiments.
  • FIG. 6 is a block diagram of an electronic device 600 according to an exemplary embodiment.
  • the electronic device 600 may include: a processor 601 and a memory 602 .
  • the electronic device 600 may also include one or more of a multimedia component 603 , an input/output (I/O) interface 604 , and a communication component 605 .
  • the processor 601 is used to control the overall operation of the electronic device 600 to complete all or part of the steps in the above-mentioned projection image adjustment method.
  • the memory 602 is used to store various types of data to support operations on the electronic device 600; such data may include, for example, instructions for any application or method operating on the electronic device 600, and application-related data such as pictures, audio, video, and so on.
  • the memory 602 can be implemented by any type of volatile or non-volatile storage device or a combination thereof, such as Static Random Access Memory (SRAM), Electrically Erasable Programmable Read-Only Memory (EEPROM), Erasable Programmable Read-Only Memory (EPROM), Programmable Read-Only Memory (PROM), Read-Only Memory (ROM), magnetic memory, flash memory, magnetic disk, or optical disk.
  • multimedia components 603 may include a screen and an audio component. The screen can be, for example, a touch screen, and the audio component is used for outputting and/or inputting audio signals.
  • for example, the audio component may include a microphone for receiving external audio signals.
  • the received audio signal may be further stored in the memory 602 or transmitted through the communication component 605.
  • the audio component also includes at least one speaker for outputting audio signals.
  • the I/O interface 604 provides an interface between the processor 601 and other interface modules, and the above-mentioned other interface modules may be a keyboard, a mouse, a button, and the like. These buttons can be virtual buttons or physical buttons.
  • the communication component 605 is used for wired or wireless communication between the electronic device 600 and other devices. Wireless communication includes, for example, Wi-Fi, Bluetooth, Near Field Communication (NFC), 2G, 3G, 4G, NB-IoT, eMTC, other 5G technologies, or a combination of one or more of them, which is not limited here. Accordingly, the communication component 605 may include a Wi-Fi module, a Bluetooth module, an NFC module, and so on.
  • the electronic device 600 may be implemented by one or more Application Specific Integrated Circuits (ASICs), Digital Signal Processors (DSPs), Digital Signal Processing Devices (DSPDs), Programmable Logic Devices (PLDs), Field Programmable Gate Arrays (FPGAs), controllers, microcontrollers, microprocessors, or other electronic components, for performing the above projection image adjustment method.
  • a computer-readable storage medium comprising program instructions, the program instructions implementing the steps of the above-mentioned projection image adjustment method when executed by a processor.
  • the computer-readable storage medium can be the above-mentioned memory 602 including program instructions, and the above-mentioned program instructions can be executed by the processor 601 of the electronic device 600 to complete the above-mentioned projection image adjustment method.
  • a computer program product is also provided, comprising a computer program executable by a programmable apparatus, the computer program having code portions for performing the above projection image adjustment method when executed by the programmable apparatus.

Abstract

The present disclosure relates to a projection image adjustment method and apparatus, a storage medium, and an electronic device. The method includes: acquiring target parameters, the target parameters being used to describe the relationship between a first coordinate system in which a projected image from the camera's perspective is located and a second coordinate system in which a source image is located, the projected image being an image formed by a projection device projecting the source image onto a projection area; calculating a first mapping relationship between the first coordinate system and the second coordinate system according to the target parameters; acquiring, based on the first mapping relationship, correlation parameters between the first coordinate system and a user perspective coordinate system in which a preset user viewing angle plane is located; calculating a second mapping relationship between the first coordinate system and the user perspective coordinate system according to the correlation parameters; determining a third mapping relationship between the second coordinate system and the user perspective coordinate system based on the first and second mapping relationships; and adjusting the source image according to the third mapping relationship and projecting the adjusted source image onto the projection area.

Description

投影图像调整方法、装置、存储介质及电子设备
相关申请的交叉引用
本公开要求在2021年03月19日提交中国专利局、申请号为202110297322.7、名称为“投影图像调整方法、装置、存储介质及电子设备”的中国专利申请的优先权,其全部内容通过引用结合在本公开中。
技术领域
本公开涉及投影技术领域,具体地,涉及一种投影图像调整方法、装置、存储介质及电子设备。
背景技术
投影机是一种可以将图像或视频投射到幕布上的设备,其可以通过不同的接口与计算机、游戏机、存储器等设备相连接,从而播放相应的视频内容。
一般来说,投影画面的大小与投影仪距墙面的距离正相关。而对于超短焦投影仪而言,其在距墙面50厘米的位置就可以投影出对角线达到250厘米的画面,即能够在有限的空间下投射出更大的投影画面。然而,超短焦投影仪在安装使用过程中即便发生1°的角度变化,也会导致投影画面出现巨大的偏移。因此,对于超短焦投影仪而言,人工校正图像位置变得较为困难,影响了用户的使用体验。
发明内容
本公开的目的是提供一种投影图像调整方法、装置、存储介质及电子设备,以解决上述相关技术问题。
为了实现上述目的,根据本公开实施例的第一方面,提供一种投影图像调整方法,包括:
获取目标参数,所述目标参数用于描述相机视角下的投影图像所在的第一坐标系与源图像所在的第二坐标系之间的关联关系,所述投影图像是投影装置将源图像投射到投影区域形成的图像;
根据所述目标参数计算所述第一坐标系与所述第二坐标系之间的第一映射关系;
基于所述第一映射关系获取所述第一坐标系与预设的用户视角平面所在的用户视角 坐标系之间的关联参数,所述关联参数包括所述相机相对于所述用户视角平面的法向量、所述第一坐标系与预设的用户视角坐标系之间的旋转矩阵以及所述第一坐标系与预设的用户视角坐标系之间的平移向量;
根据所述关联参数计算所述第一坐标系与所述用户视角坐标系之间的第二映射关系;
基于所述第一映射关系、所述第二映射关系确定所述第二坐标系与所述用户视角坐标系之间的第三映射关系;
根据所述第三映射关系对所述源图像进行调整,并将调整后的源图像投射至所述投影区域。
可选地,所述获取目标参数包括:
获取相机对投影区域进行拍摄得到的目标图像,所述投影区域显示有投影图像;
将所述目标图像和所述源图像输入至特征点识别模型,得到所述特征点识别模型输出的所述目标图像和所述源图像之间的多个特征点对;其中,所述特征点识别模型由包括特征点标注信息的图像样本训练得到;
所述根据所述目标参数计算所述第一坐标系与所述第二坐标系之间的第一映射关系,包括:
从所述多个特征点对中确定目标特征点对;
基于所述目标特征点对计算所述第一映射关系。
可选地,所述从所述多个特征点对中确定目标特征点对,包括:
根据所述目标图像中的特征点的位置,将处于所述目标图像中的预设区域内的特征点作为所述目标图像的目标特征点,其中,所述预设区域为所述目标图像上远离所述目标图像发生畸变的边缘的区域;
将所述目标特征点,以及所述源图像上与所述目标特征点相对应的特征点作为所述目标特征点对。
可选地,所述从所述多个特征点对中确定目标特征点对,包括:
获取所述目标图像中的线段信息;
对所述目标图像中的线段进行聚类;
基于聚类结果从所述目标图像中的线段中确定用于表征投影屏幕的边界的边缘线段;
将各边缘线段之间的交点作为所述投影屏幕的顶点;
从位于投影屏幕区域内的特征点中确定目标特征点;
将所述目标特征点，以及所述源图像上与所述目标特征点相对应的特征点作为所述目标特征点对。
可选地,所述目标参数包括所述相机的参数、所述第一坐标系与所述第二坐标系之间的旋转矩阵,所述第一坐标系与所述第二坐标系之间的平移向量,所述投影区域所在墙面的法向量,所述投影装置与所述墙面的截距;相应地,所述根据所述目标参数计算所述第一坐标系与所述第二坐标系之间的第一映射关系,包括:
通过如下计算式计算所述第一映射关系:
H = K (R − t·nᵀ / d) K⁻¹
其中,H为描述所述第一映射关系的矩阵,K为所述相机的参数、R为所述第一坐标系与所述第二坐标系之间的旋转矩阵,t为所述第一坐标系与所述第二坐标系之间的平移向量,n为所述相机相对所述投影区域所在墙面的法向量,d为所述投影装置与所述墙面的截距。
可选地,所述目标参数包括所述相机的参数、所述第一坐标系与所述第二坐标系之间的旋转矩阵,所述第一坐标系与所述第二坐标系之间的平移向量、所述投影装置与所述墙面的截距、以及所述目标图像和所述源图像之间的多个特征点对;
所述根据所述目标参数计算所述第一坐标系与所述第二坐标系之间的第一映射关系具体包括:
针对每一所述特征点对,根据该特征点对计算所述投影区域所在墙面的法向量;
针对每一所述投影区域所在墙面的法向量,根据该法向量、所述第一坐标系与所述第二坐标系之间的旋转矩阵,所述第一坐标系与所述第二坐标系之间的平移向量、所述投影装置与所述墙面的截距以及所述相机的参数,计算得到对应于该法向量的候选第一映射关系;
从所述候选第一映射关系中确定所述第一映射关系。
可选地,所述从所述候选第一映射关系中确定所述第一映射关系,包括:
针对每一候选第一映射关系,计算该候选第一映射关系所对应的矩阵与其他候选第一映射关系所对应的矩阵之间的差值的绝对值;
计算每一所述候选第一映射关系所对应的绝对值的和值;
将和值最小的候选第一映射关系作为所述第一映射关系。
根据本公开实施例的第二方面,提供一种投影图像调整装置,包括:
第一获取模块,用于获取目标参数,所述目标参数用于描述相机视角下的投影图像所在的第一坐标系与源图像所在的第二坐标系之间的关联关系,所述投影图像是投影装置将源图像投射到投影区域形成的图像;
第一计算模块,用于根据所述目标参数计算所述第一坐标系与所述第二坐标系之间的第一映射关系;
第二获取模块,用于基于所述第一映射关系获取所述第一坐标系与预设的用户视角平面所在的用户视角坐标系之间的关联参数,所述关联参数包括所述相机相对于所述用户视角平面的法向量、所述第一坐标系与预设的用户视角坐标系之间的旋转矩阵以及所述第一坐标系与预设的用户视角坐标系之间的平移向量;
第二计算模块,用于根据所述关联参数计算所述第一坐标系与所述用户视角坐标系之间的第二映射关系;
确定模块,用于基于所述第一映射关系、所述第二映射关系确定所述第二坐标系与所述用户视角坐标系之间的第三映射关系;
执行模块,用于根据所述第三映射关系对所述源图像进行调整,并将调整后的源图像投射至所述投影区域。
根据本公开实施例的第三方面,提供一种计算机可读存储介质,其上存储有计算机程序,该程序被处理器执行时实现上述第一方面中任一项所述方法的步骤。
根据本公开实施例的第四方面,提供一种电子设备,包括:
存储器,其上存储有计算机程序;
处理器,用于执行所述存储器中的所述计算机程序,以实现上述第一方面中任一项所述方法的步骤。
上述技术方案,至少可以包括如下有益效果:
通过获取用于描述相机视角下的投影图像所在的第一坐标系与源图像所在的第二坐标系之间的关联关系的目标参数,从而可以基于所述目标参数计算所述第一坐标系与所述第二坐标系之间的第一映射关系,并基于所述第一映射关系计算所述第一坐标系与预设的用户视角平面所在的用户视角坐标系之间的关联参数。这样,可以基于所述关联参数计算述第一坐标系与所述用户视角坐标系之间的第二映射关系,并基于所述第一映射关系、所述第二映射关系确定所述第二坐标系与所述用户视角坐标系之间的第三映射关系,即投影源图像与用户视角下的图像之间的映射关系。也就是说,可以通过所述第三 映射关系对所述源图像进行调整,从而得到需求的投影画面。例如,可以基于所述第三映射关系将梯形的投影画面调整为矩形的投影画面,从而实现投影画面的梯形校正。
本公开的其他特征和优点将在随后的具体实施方式部分予以详细说明。
附图说明
附图是用来提供对本公开的进一步理解,并且构成说明书的一部分,与下面的具体实施方式一起用于解释本公开,但并不构成对本公开的限制。在附图中:
图1是本公开一示例性实施例所示出的一种投影图像调整方法的流程图。
图2是本公开一示例性实施例所示出的一种投影图像调整方法的流程图。
图3是本公开一示例性实施例所示出的一种目标图像的示意图。
图4是本公开一示例性实施例所示出的一种投影图像调整方法的流程图。
图5是本公开一示例性实施例所示出的一种投影图像调整装置的框图。
图6是本公开一示例性实施例所示出的一种电子设备的框图。
具体实施方式
以下结合附图对本公开的具体实施方式进行详细说明。应当理解的是,此处所描述的具体实施方式仅用于说明和解释本公开,并不用于限制本公开。
在介绍本公开的投影图像调整方法、装置、存储介质及电子设备之前,首先对本公开的应用场景进行介绍,本公开所提供的各实施例可以应用于投影图像的调整场景。其中,所述投影图像例如可以是各类投影装置将源图像投射到投影区域所形成的图像。
以超短焦投影仪为例,其在距墙面50厘米的位置就可以投影出对角线达到250厘米的画面,即能够在有限的空间下投射出更大的投影画面。相应的,超短焦投影仪的投影画面也容易受到位置偏移的影响。例如,在出现1°的角度变化的情况下,其投影画面也可能出现6厘米的偏移。因此,对于超短焦投影仪而言,人工校正图像位置变得较为困难,影响了用户的使用体验。
为此,本公开提供一种投影图像调整方法,图1是本公开一示例实施例所示出的一种投影图像调整方法的流程图,所述方法包括:
在步骤S11中,获取目标参数。
其中,所述目标参数用于描述相机视角下的投影图像所在的第一坐标系与源图像所 在的第二坐标系之间的关联关系,所述投影图像是投影装置(例如超短焦投影装置)将源图像投射到投影区域形成的图像。在具体实施时,可以对相机视角下的投影图像建立所述第一坐标系,并基于所述源图像建立所述第二坐标系。
举例来讲,在一些可能的实施场景中,所述目标参数包括所述相机的参数,如校正系数、焦距等等。所述目标参数还包括所述第一坐标系与所述第二坐标系之间的旋转矩阵,所述第一坐标系与所述第二坐标系之间的平移向量,所述投影区域所在墙面的法向量,所述投影装置与所述墙面的截距。
针对所述投影区域所在墙面的法向量,该法向量可以表示为n=[x,y,z],其中x可以与所述投影装置的偏航角相关,y可以与所述投影装置的俯仰角相关,z可以与所述投影装置的滚转角相关。例如,可以通过IMU(Inertial Measurement Unit,惯性测量单元)获取所述法向量中的参数y和z。此外,还可以基于相机或TOF(Time Of Flight,飞行时间)传感器获取所述投影图像和所述源图像之间的一个或多个特征点对。其中,特征点对可以包括所述投影图像上的像素点A以及所述源图像上的像素点B,所述像素点A与所述像素点B表征同一对象在所述投影图像以及所述源图像上的位置。在获取到所述特征点对之后,可以基于所述特征点对计算所述参数x,进而得到相机相对于投影区域所在墙面的法向量。
在得到所述法向量之后,可以基于所述法向量计算得到所述第一坐标系与所述第二坐标系之间的旋转矩阵。此外,所述第一坐标系与所述第二坐标系之间的平移向量可以预先标定得到,具体请参照相关技术中的说明,本公开在此不做赘述。
在步骤S12中,根据所述目标参数计算所述第一坐标系与所述第二坐标系之间的第一映射关系。
沿用上述例子,在获取到所述目标参数之后,可以通过如下计算式计算所述第一映射关系:
H = K (R − t·nᵀ / d) K⁻¹
其中,H为描述所述第一映射关系的矩阵,K为所述相机的参数、R为所述第一坐标系与所述第二坐标系之间的旋转矩阵,t为所述第一坐标系与所述第二坐标系之间的平移向量,n为所述相机相对所述投影区域所在墙面的法向量,d为所述投影装置与所述墙面的截距。
在步骤S13中,基于所述第一映射关系获取所述第一坐标系与预设的用户视角平面 所在的用户视角坐标系之间的关联参数。
其中,所述用户视角平面可以是假定的用户所观看的投影图像的平面。当用户期望在正对投影区域的位置观测到矩形的投影画面时,所述用户视角平面可以为与投影区域所在平面相平行的平面。例如,当投影区域为墙面时,所述用户视角平面可以为距离所述墙面预设距离(如两米、三米等等)的与所述墙面平行的平面。进一步的,还可以为所述用户视角平面建立坐标系,即所述用户视角坐标系。
所述关联参数可以包括所述相机相对于所述用户视角平面的法向量、第一坐标系与预设的用户视角坐标系之间的旋转矩阵以及所述第一坐标系与预设的用户视角坐标系之间的平移向量。应当理解,由于所述用户视角平面与所述投影区域所在的平面相互平行,因此相机相对于所述用户视角平面的法向量与所述相机相对所述投影区域所在墙面的法向量可以相同。
类似的,在得到所述相机相对于所述用户视角平面的法向量之后,可以基于该法向量计算第一坐标系与预设的用户视角坐标系之间的旋转矩阵。例如,可以将所述相机相对于所述用户视角平面的法向量取反(n=[-x,-y,-z]),并与所述第一坐标系与所述第二坐标系之间的旋转矩阵相乘,从而得到所述第一坐标系与预设的用户视角坐标系之间的旋转矩阵。此外,所述第一坐标系与预设的用户视角坐标系之间的平移向量可以预先标定得到,具体请参照相关技术中的说明,本公开在此不做赘述。
在步骤S14中,根据所述关联参数计算所述第一坐标系与所述用户视角坐标系之间的第二映射关系。
示例地,可以通过如下计算式计算所述第二映射关系:
H′ = K (R′ − t′·n′ᵀ / d′) K⁻¹
其中,H’为描述所述第二映射关系的矩阵,K为所述相机的参数、R’为所述第一坐标系与预设的用户视角坐标系的旋转矩阵,t’为所述第一坐标系与预设的用户视角坐标系之间的平移向量,n’所述相机相对于所述用户视角平面的法向量,d’为所述投影装置与所述墙面的截距。
在步骤S15中,可以基于所述第一映射关系、所述第二映射关系确定所述第二坐标系与所述用户视角坐标系之间的第三映射关系。应当理解,源图像处于所述第二坐标系,用户视角下的图像处于用户视角坐标系,因此上述步骤S15可以是指获取到投影装置的 源图像与用户视角下的图像之间的映射关系。
这样,在步骤S16中,根据所述第三映射关系对所述源图像进行调整,并将调整后的源图像投射至所述投影区域。
应当理解,由于用户视角平面为假定的用户所观看的投影图像的平面。而深度的远近产生的是相似形的变化。因此,如果投射到所述用户视角平面的投影图像为矩形,那么在投影区域所形成的投影图像仍然为矩形。
因此,在一些实施场景中,当需要得到矩形的投影画面时,可以在所述用户视角坐标系中假定一个矩形的投影画面。这样,可以基于所述第三映射关系,对所述源图像进行调整,使得所述源图像在所述用户视角平面所形成的投影画面为矩形,进而使得所述源图像在所述投影区域所形成的投影画面为矩形,最终起到梯形校正的效果,也提升了投影装置的使用便捷度。并且,本实施例中能够通过tof、imu等模块计算所述第三映射关系,即能够在不借助相机拍摄的图像的情况下计算所述第三映射关系。因此,即便相机模组失效,采用本实施例的方案,仍能够对投影图像进行调整。
图2是本公开一示例性实施例所示出的一种投影图像调整方法的流程图，如图2所示，所述方法在图1的基础上，所述步骤S11包括：
S111,获取相机对投影区域进行拍摄得到的目标图像,所述投影区域显示有投影图像。
S112,将所述目标图像和所述源图像输入至特征点识别模型,得到所述特征点识别模型输出的所述目标图像和所述源图像之间的多个特征点对。
针对所述特征点识别模型,所述特征点识别模型可以由包括特征点标注信息的图像样本训练得到。举例来讲,可以基于Superpoint算法构建所述特征点识别模型,通过对源图像以及相机对投影区域(显示有所述源图像投影形成的投影图像)进行拍摄得到的目标图像标记特征点对,从而可以将该源图像以及目标图像构成一个训练样本。这样,可以将多组训练样本输入至训练模型中,得到所述模型的特征点对识别结果。进一步的,可以基于所述特征点对识别结果以及所述样本中所标记的特征点对信息,对所述模型的参数进行调整,直至模型收敛,最终得到所述特征点识别模型。当然,在具体实施时,还可以基于其他同类型的特征点匹配算法构建所述特征点识别模型,本公开对此不做限定。
在获得所述特征点识别模型之后,可以将所述目标图像和所述源图像输入至特征点 识别模型,得到所述特征点识别模型输出的所述目标图像和所述源图像之间的多个特征点对。
这样，在获得所述特征点对之后，在步骤S121中，可以从所述多个特征点对中确定目标特征点对。应当理解，所述特征点识别模型识别出的特征点对的数量可以为多个，因此可以从所述多个特征点对中确定目标特征点对，以便于计算所述第一映射关系。
具体来讲,在一种可能的实施方式中,投影区域可以包括投影屏幕(如幕布)。值得说明的是,投影屏幕可能不与墙面贴合(例如悬挂的幕布可能与墙面存在一定距离的间隙),在这种情况下所述投影区域中处于投影屏幕上的光点与所述投影区域中不处于所述投影屏幕上的光点之间可能存在着光路上的误差。
因此，所述从所述多个特征点对中确定目标特征点对（步骤S121）可以包括：
获取所述目标图像中的线段信息,举例来讲,可以基于LSD、霍夫变换等线段检测算法提取所述目标图像中的线段信息。
对所述目标图像中的线段进行聚类。应当理解，通过对所述目标图像中的线段进行聚类，可以得到多个线段聚类簇，每一个聚类簇可以对应于一类线段，例如投影屏幕的四条边缘可以对应于四个聚类簇。
基于聚类结果从所述目标图像中的线段中确定用于表征投影屏幕的边界的边缘线段。举例来讲,在一些实施场景中可以将各聚类簇进行组合,从而可以将能够组合成四边形、且长宽比例能够满足预设比例要求(例如长宽比为16:9)的聚类簇中的线段作为所述投影屏幕的边缘线段。
值得注意的是,在一些实施场景中,聚类出的线段也可能包括投影图像的边缘线段。在这种情况下,由于投影屏幕的边缘线段两端的亮度值差异较小,而投影图像边缘线段两端的亮度差值较大,因此可以将线段两端亮度差值较小的线段作为所述投影屏幕的边缘线段。
这样,可以将各边缘线段之间的交点作为所述投影屏幕的顶点,在得到所述投影屏幕的边缘线段以及所述投影屏幕的顶点之后,则可以确定出所述投影屏幕所包括的区域。
进一步的，可以从位于投影屏幕区域内的特征点中确定目标特征点，并将所述目标特征点，以及所述源图像上与所述目标特征点相对应的特征点作为所述目标特征点对，以降低计算误差。
在另一种可能的实施场景中，所述从所述多个特征点对中确定目标特征点对（步骤S121），包括：
根据所述目标图像中的特征点的位置,将处于所述目标图像中的预设区域内的特征点作为所述目标图像的目标特征点,其中,所述预设区域为所述目标图像上远离所述目标图像发生畸变的边缘的区域;
将所述目标特征点,以及所述源图像上与所述目标特征点相对应的特征点作为所述目标特征点对。
参照图3所示出一种目标图像的示意图,在图3的示例中,投影装置通过向上方投射光线,从而在投影区域形成投影图像。参照图3可知,由于相机从下方拍摄所述目标图像,因此所述目标图像的上部可能存在着相应的畸变。并且,在同样的长度尺寸下,所述目标图像中位于图像下部的区域内的有效像素多于目标图像中位于图像上部的区域内的有效像素。因此,在这种情况下,所述预设区域可以是指所述目标图像中位于图像下边缘预设距离范围的区域,所述距离范围可以根据应用需求进行设置,本公开对此不做限制。
当然,图3以相机从下方拍摄得到所述目标图像为例对上述方案进行了说明,但本领域技术人员知晓,在具体实施时所述相机也可能从上往下拍摄得到所述目标图像,或是从左往右拍摄得到所述目标图像等等。因此,所述预设区域为所述目标图像上远离所述目标图像发生畸变的边缘的区域。
采用上述技术方案,通过选择位于预设区域内的特征点,能够提升特征点对的选择精度,进而提升第一映射关系的计算准确度。
仍参照图2,在获取到目标特征点对之后,可以基于所述目标特征点对计算所述第一映射关系。例如,在一些实施方式中,可以选择4组特征点对,这样,基于所述4组特征点对中的8个点的坐标信息,可以计算得到所述第一映射关系。
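The "4 feature point pairs → first mapping relationship" computation above can be sketched with a standard direct linear transform (DLT); this is an illustrative implementation, since the disclosure does not specify the exact solver used:

```python
import numpy as np

def homography_from_pairs(src_pts, dst_pts):
    # Direct linear transform: solve for the 3x3 homography H with
    # dst ~ H @ src from >= 4 point correspondences, taking the SVD
    # null-space vector of the stacked constraint matrix.
    A = []
    for (x, y), (u, v) in zip(src_pts, dst_pts):
        A.append([x, y, 1, 0, 0, 0, -u * x, -u * y, -u])
        A.append([0, 0, 0, x, y, 1, -v * x, -v * y, -v])
    _, _, vt = np.linalg.svd(np.asarray(A, dtype=float))
    H = vt[-1].reshape(3, 3)
    return H / H[2, 2]  # normalize so H[2, 2] == 1
```

With exactly 4 pairs (8 coordinates), the constraint matrix has a one-dimensional null space and the homography is recovered exactly up to scale.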
在步骤S13中,可以将所述第一映射关系进行分解,从而得到所述关联参数。此外,关于步骤S14至S16,请参照上述关于图1的实施例说明,为了说明书的简洁,本公开在此不做赘述。
上述技术方案能够基于深度学习模型获取到投影图像与源图像之间的特征点对,从而能够降低误匹配率并提升所匹配得到的特征点的数量,进而有助于提升特征点对的选取精度,最终起到提升所计算得到的第一映射关系的准确度、提升投影图像的调整准确度的效果。
图4是本公开一示例性实施例所示出的一种投影图像调整方法的流程图,如图4所示,所述方法包括:
S11,获取目标参数。
其中,所述目标参数用于描述相机视角下的投影图像所在的第一坐标系与源图像所在的第二坐标系之间的关联关系,所述投影图像是投影装置将源图像投射到投影区域形成的图像。在具体实施时,可以对相机视角下的投影图像建立所述第一坐标系,并基于所述源图像建立所述第二坐标系。
所述目标参数包括所述相机的参数、所述第一坐标系与所述第二坐标系之间的旋转矩阵,所述第一坐标系与所述第二坐标系之间的平移向量、所述投影装置与所述墙面的截距。关于上述参数,请参照关于图1的实施例说明,本公开对此不再赘述。
所述目标参数还可以包括所述目标图像和所述源图像之间的多个特征点对。例如,可以将所述目标图像和所述源图像输入至特征点识别模型,得到所述特征点识别模型输出的所述目标图像和所述源图像之间的多个特征点对。关于所述特征点识别模型,请参照上述关于图2的实施例说明,本公开在此不做赘述。
S123,针对每一所述特征点对,根据该特征点对计算所述投影区域所在墙面的法向量。
应当理解,投影区域所在墙面的法向量可以表示为n=[x,y,z],在具体实施时可以基于IMU获取所述法向量中的参数y和z。因此,可以根据所述投影图像和所述源图像之间的一个特征点对计算得到所述参数x,从而得到相机相对于投影区域所在墙面的法向量。
也就是说,针对每一所述特征点对,都可以计算得到对应于该特征点对的投影区域所在墙面的法向量。
S124,针对每一所述投影区域所在墙面的法向量,根据该法向量、所述第一坐标系与所述第二坐标系之间的旋转矩阵,所述第一坐标系与所述第二坐标系之间的平移向量、所述投影装置与所述墙面的截距以及所述相机的参数,计算得到对应于该法向量的候选第一映射关系。
举例来讲,可以通过如下计算式计算所述对应于该法向量的候选第一映射关系:
N = K (R − t·nᵀ / d) K⁻¹
其中,N为描述所述候选第一映射关系的矩阵,K为所述相机的参数、R为所述第一坐标系与所述第二坐标系之间的旋转矩阵,t为所述第一坐标系与所述第二坐标系之间 的平移向量,n为所述相机相对所述投影区域所在墙面的法向量,d为所述投影装置与所述墙面的截距。
S125,从所述候选第一映射关系中确定所述第一映射关系。
由于计算得到的候选第一映射关系的数量可以为多个,因此还可以对所述多个候选第一映射关系进行迭代优化,从而选择出所述第一映射关系。
以最小二乘法为例,在一种可能的实施方式中,所述从所述候选第一映射关系中确定所述第一映射关系,包括:
针对每一候选第一映射关系,计算该候选第一映射关系所对应的矩阵与其他候选第一映射关系所对应的矩阵之间的差值的绝对值;
计算每一所述候选第一映射关系所对应的绝对值的和值;
将和值最小的候选第一映射关系作为所述第一映射关系。
例如,针对候选第一映射关系所对应的矩阵H0,可以计算该候选第一映射关系所对应的矩阵与其他候选第一映射关系所对应的矩阵之间的差值的绝对值的和值:
S_0 = Σ_{i=1}^{n} |H_0 − H_i|
其中，H_i为n个候选第一映射关系中的第i个候选第一映射关系所对应的矩阵。
这样,可以将和值最小的候选第一映射关系作为所述第一映射关系,从而能够结合模型识别出的特征点对计算得到的候选第一映射关系进行优化选择,最终所得到所述第一映射关系。
当然,在基于所述候选第一映射关系确定所述第一映射关系时,还可以基于卡尔曼滤波法等方法迭代选择所述第一映射关系,本公开对此不做限定。
S13,基于所述第一映射关系获取所述第一坐标系与预设的用户视角平面所在的用户视角坐标系之间的关联参数。
S14,根据所述关联参数计算所述第一坐标系与所述用户视角坐标系之间的第二映射关系。
S15,可以基于所述第一映射关系、所述第二映射关系确定所述第二坐标系与所述用户视角坐标系之间的第三映射关系。
S16,根据所述第三映射关系对所述源图像进行调整,并将调整后的源图像投射至所述投影区域。
关于步骤S13至S16,请参照上述关于图1的实施例说明,为了说明书的简洁,本 公开在此不做赘述。
上述技术方案能够结合模型识别出的特征点分别计算候选第一映射关系,并通过对计算得到的候选第一映射关系进行优化选择,最终得到所述第一映射关系。采用上述技术方案,能够提升所述第一映射关系的计算准确度,有助于提升图像调整的准确度。
基于同样的发明构思,本公开还提供一种投影图像调整装置,参照图5所示出的一种投影图像调整装置的框图,所述装置500包括:
第一获取模块501,用于获取目标参数,所述目标参数用于描述相机视角下的投影图像所在的第一坐标系与源图像所在的第二坐标系之间的关联关系,所述投影图像是投影装置将源图像投射到投影区域形成的图像;
第一计算模块502,用于根据所述目标参数计算所述第一坐标系与所述第二坐标系之间的第一映射关系;
第二获取模块503,用于基于所述第一映射关系获取所述第一坐标系与预设的用户视角平面所在的用户视角坐标系之间的关联参数,所述关联参数包括所述相机相对于所述用户视角平面的法向量、所述第一坐标系与预设的用户视角坐标系之间的旋转矩阵以及所述第一坐标系与预设的用户视角坐标系之间的平移向量;
第二计算模块504,用于根据所述关联参数计算所述第一坐标系与所述用户视角坐标系之间的第二映射关系;
确定模块505,用于基于所述第一映射关系、所述第二映射关系确定所述第二坐标系与所述用户视角坐标系之间的第三映射关系;
执行模块506,用于根据所述第三映射关系对所述源图像进行调整,并将调整后的源图像投射至所述投影区域。
上述技术方案,可以包括如下有益效果:
通过获取用于描述相机视角下的投影图像所在的第一坐标系与源图像所在的第二坐标系之间的关联关系的目标参数,从而可以基于所述目标参数计算所述第一坐标系与所述第二坐标系之间的第一映射关系,并基于所述第一映射关系计算所述第一坐标系与预设的用户视角平面所在的用户视角坐标系之间的关联参数。这样,可以基于所述关联参数计算述第一坐标系与所述用户视角坐标系之间的第二映射关系,并基于所述第一映射关系、所述第二映射关系确定所述第二坐标系与所述用户视角坐标系之间的第三映射关系,即投影源图像与用户视角下的图像之间的映射关系。也就是说,可以通过所述第三 映射关系对所述源图像进行调整,从而得到需求的投影画面。例如,可以基于所述第三映射关系将梯形的投影画面调整为矩形的投影画面,从而实现投影画面的梯形校正。
可选地,所述第一获取模块501包括:
第一获取子模块,用于获取相机对投影区域进行拍摄得到的目标图像,所述投影区域显示有投影图像;
输入子模块,用于将所述目标图像和所述源图像输入至特征点识别模型,得到所述特征点识别模型输出的所述目标图像和所述源图像之间的多个特征点对;其中,所述特征点识别模型由包括特征点标注信息的图像样本训练得到;
所述第一计算模块502,包括:
第一确定子模块,用于从所述多个特征点对中确定目标特征点对;
第一计算子模块,用于基于所述目标特征点对计算所述第一映射关系。
可选地,所述第一确定子模块,包括:
第一执行子单元,用于根据所述目标图像中的特征点的位置,将处于所述目标图像中的预设区域内的特征点作为所述目标图像的目标特征点,其中,所述预设区域为所述目标图像上远离所述目标图像发生畸变的边缘的区域;
第二执行子单元,用于将所述目标特征点,以及所述源图像上与所述目标特征点相对应的特征点作为所述目标特征点对。
可选地,所述从所述第一确定子模块,包括:
获取子单元,用于获取所述目标图像中的线段信息;
聚类子单元,用于对所述目标图像中的线段进行聚类;
第一确定子单元,用于基于聚类结果从所述目标图像中的线段中确定用于表征投影屏幕的边界的边缘线段;
第三执行子单元,用于将各边缘线段之间的交点作为所述投影屏幕的顶点;
第二确定子单元,用于从位于投影屏幕区域内的特征点中确定目标特征点;
第四执行子单元，用于将所述目标特征点，以及所述源图像上与所述目标特征点相对应的特征点作为所述目标特征点对。
可选地,所述目标参数包括所述相机的参数、所述第一坐标系与所述第二坐标系之间的旋转矩阵,所述第一坐标系与所述第二坐标系之间的平移向量,所述投影区域所在墙面的法向量,所述投影装置与所述墙面的截距;相应地,所述第一计算模块502用于:
通过如下计算式计算所述第一映射关系:
Figure PCTCN2021135440-appb-000006
其中,H为描述所述第一映射关系的矩阵,K为所述相机的参数、R为所述第一坐标系与所述第二坐标系之间的旋转矩阵,t为所述第一坐标系与所述第二坐标系之间的平移向量,n为所述相机相对所述投影区域所在墙面的法向量,d为所述投影装置与所述墙面的截距。
可选地,所述目标参数包括所述相机的参数、所述第一坐标系与所述第二坐标系之间的旋转矩阵,所述第一坐标系与所述第二坐标系之间的平移向量、所述投影装置与所述墙面的截距、以及所述目标图像和所述源图像之间的多个特征点对;
所述第一计算模块502,包括:
第二计算子模块,用于针对每一所述特征点对,根据该特征点对计算所述投影区域所在墙面的法向量;
第三计算子模块,用于针对每一所述投影区域所在墙面的法向量,根据该法向量、所述第一坐标系与所述第二坐标系之间的旋转矩阵,所述第一坐标系与所述第二坐标系之间的平移向量、所述投影装置与所述墙面的截距以及所述相机的参数,计算得到对应于该法向量的候选第一映射关系;
第二确定子模块,用于从所述候选第一映射关系中确定所述第一映射关系。
可选地,所述第二确定子模块,包括:
第一计算子单元,用于针对每一候选第一映射关系,计算该候选第一映射关系所对应的矩阵与其他候选第一映射关系所对应的矩阵之间的差值的绝对值;
第二计算子单元,用于计算每一所述候选第一映射关系所对应的绝对值的和值;
第五执行子单元,将和值最小的候选第一映射关系作为所述第一映射关系。
关于上述实施例中的装置,其中各个模块执行操作的具体方式已经在有关该方法的实施例中进行了详细描述,此处将不做详细阐述说明。
本公开还提供一种计算机可读存储介质,其上存储有计算机程序,该程序被处理器执行时实现上述实施例中任一项所述方法的步骤。
本公开还提供一种电子设备,包括:
存储器,其上存储有计算机程序;
处理器,用于执行所述存储器中的所述计算机程序,以实现上述实施例中任一项所 述方法的步骤。
图6是根据一示例性实施例示出的一种电子设备600的框图。如图6所示,该电子设备600可以包括:处理器601,存储器602。该电子设备600还可以包括多媒体组件603,输入/输出(I/O)接口604,以及通信组件605中的一者或多者。
其中,处理器601用于控制该电子设备600的整体操作,以完成上述的投影图像调整方法中的全部或部分步骤。存储器602用于存储各种类型的数据以支持在该电子设备600的操作,这些数据例如可以包括用于在该电子设备600上操作的任何应用程序或方法的指令,以及应用程序相关的数据,例如图片、音频、视频等等。该存储器602可以由任何类型的易失性或非易失性存储设备或者它们的组合实现,例如静态随机存取存储器(Static Random Access Memory,简称SRAM),电可擦除可编程只读存储器(Electrically Erasable Programmable Read-Only Memory,简称EEPROM),可擦除可编程只读存储器(Erasable Programmable Read-Only Memory,简称EPROM),可编程只读存储器(Programmable Read-Only Memory,简称PROM),只读存储器(Read-Only Memory,简称ROM),磁存储器,快闪存储器,磁盘或光盘。多媒体组件603可以包括屏幕和音频组件。其中屏幕例如可以是触摸屏,音频组件用于输出和/或输入音频信号。例如,音频组件可以包括一个麦克风,麦克风用于接收外部音频信号。所接收的音频信号可以被进一步存储在存储器602或通过通信组件605发送。音频组件还包括至少一个扬声器,用于输出音频信号。I/O接口604为处理器601和其他接口模块之间提供接口,上述其他接口模块可以是键盘,鼠标,按钮等。这些按钮可以是虚拟按钮或者实体按钮。通信组件605用于该电子设备600与其他设备之间进行有线或无线通信。无线通信,例如Wi-Fi,蓝牙,近场通信(Near Field Communication,简称NFC),2G、3G、4G、NB-IOT、eMTC、或其他5G等等,或它们中的一种或几种的组合,在此不做限定。因此相应的该通信组件605可以包括:Wi-Fi模块,蓝牙模块,NFC模块等等。
在一示例性实施例中,电子设备600可以被一个或多个应用专用集成电路(Application Specific Integrated Circuit,简称ASIC)、数字信号处理器(Digital Signal Processor,简称DSP)、数字信号处理设备(Digital Signal Processing Device,简称DSPD)、可编程逻辑器件(Programmable Logic Device,简称PLD)、现场可编程门阵列(Field Programmable Gate Array,简称FPGA)、控制器、微控制器、微处理器或其他电子元件实现,用于执行上述的投影图像调整方法。
在另一示例性实施例中,还提供了一种包括程序指令的计算机可读存储介质,该程序指令被处理器执行时实现上述的投影图像调整方法的步骤。例如,该计算机可读存储介质可以为上述包括程序指令的存储器602,上述程序指令可由电子设备600的处理器601执行以完成上述的投影图像调整方法。
在另一示例性实施例中,还提供一种计算机程序产品,该计算机程序产品包含能够由可编程的装置执行的计算机程序,该计算机程序具有当由该可编程的装置执行时用于执行上述的投影图像调整方法的代码部分。
以上结合附图详细描述了本公开的优选实施方式,但是,本公开并不限于上述实施方式中的具体细节,在本公开的技术构思范围内,可以对本公开的技术方案进行多种简单变型,这些简单变型均属于本公开的保护范围。
另外需要说明的是,在上述具体实施方式中所描述的各个具体技术特征,在不矛盾的情况下,可以通过任何合适的方式进行组合,为了避免不必要的重复,本公开对各种可能的组合方式不再另行说明。
此外,本公开的各种不同的实施方式之间也可以进行任意组合,只要其不违背本公开的思想,其同样应当视为本公开所公开的内容。

Claims (20)

  1. 一种投影图像调整方法,其特征在于,包括:
    获取目标参数,所述目标参数用于描述相机视角下的投影图像所在的第一坐标系与源图像所在的第二坐标系之间的关联关系,所述投影图像是投影装置将源图像投射到投影区域形成的图像;
    根据所述目标参数计算所述第一坐标系与所述第二坐标系之间的第一映射关系;
    基于所述第一映射关系获取所述第一坐标系与预设的用户视角平面所在的用户视角坐标系之间的关联参数,所述关联参数包括所述相机相对于所述用户视角平面的法向量、所述第一坐标系与预设的用户视角坐标系之间的旋转矩阵以及所述第一坐标系与预设的用户视角坐标系之间的平移向量;
    根据所述关联参数计算所述第一坐标系与所述用户视角坐标系之间的第二映射关系;
    基于所述第一映射关系、所述第二映射关系确定所述第二坐标系与所述用户视角坐标系之间的第三映射关系;
    根据所述第三映射关系对所述源图像进行调整,并将调整后的源图像投射至所述投影区域。
  2. 根据权利要求1所述的方法,其特征在于,所述获取目标参数包括:
    获取相机对投影区域进行拍摄得到的目标图像,所述投影区域显示有投影图像;
    将所述目标图像和所述源图像输入至特征点识别模型,得到所述特征点识别模型输出的所述目标图像和所述源图像之间的多个特征点对;其中,所述特征点识别模型由包括特征点标注信息的图像样本训练得到;
    所述根据所述目标参数计算所述第一坐标系与所述第二坐标系之间的第一映射关系,包括:
    从所述多个特征点对中确定目标特征点对;
    基于所述目标特征点对计算所述第一映射关系。
  3. 根据权利要求2所述的方法,其特征在于,所述从所述多个特征点对中确定目标特征点对,包括:
    根据所述目标图像中的特征点的位置,将处于所述目标图像中的预设区域内的特征 点作为所述目标图像的目标特征点,其中,所述预设区域为所述目标图像上远离所述目标图像发生畸变的边缘的区域;
    将所述目标特征点,以及所述源图像上与所述目标特征点相对应的特征点作为所述目标特征点对。
  4. 根据权利要求3所述的方法,其特征在于,所述投影图像为所述投影装置向所述投影装置的上方投射所述源图像时,在所述投影区域形成的图像,所述预设区域为所述目标图像中位于图像下边缘预设距离范围的区域。
  5. 根据权利要求2所述的方法,其特征在于,所述从所述多个特征点对中确定目标特征点对,包括:
    获取所述目标图像中的线段信息;
    对所述目标图像中的线段进行聚类;
    基于聚类结果从所述目标图像中的线段中确定用于表征投影屏幕的边界的边缘线段;
    将各边缘线段之间的交点作为所述投影屏幕的顶点;
    从位于投影屏幕区域内的特征点中确定目标特征点;
    将所述目标特征点，以及所述源图像上与所述目标特征点相对应的特征点作为所述目标特征点对。
  6. 根据权利要求5所述的方法,其特征在于,所述基于聚类结果从所述目标图像中的线段中确定用于表征投影屏幕的边界的边缘线段,包括:
    将聚类得到的各聚类簇进行组合;
    将能够组合成四边形、且长宽比例能够满足预设比例要求的聚类簇中的线段作为所述边缘线段。
  7. 根据权利要求1所述的方法,其特征在于,所述目标参数包括所述相机的参数、所述第一坐标系与所述第二坐标系之间的旋转矩阵,所述第一坐标系与所述第二坐标系之间的平移向量,所述投影区域所在墙面的法向量,所述投影装置与所述墙面的截距;相应地,所述根据所述目标参数计算所述第一坐标系与所述第二坐标系之间的第一映射关系,包括:
    通过如下计算式计算所述第一映射关系:
    H = K (R − t·nᵀ / d) K⁻¹
    其中,H为描述所述第一映射关系的矩阵,K为所述相机的参数、R为所述第一坐标系与所述第二坐标系之间的旋转矩阵,t为所述第一坐标系与所述第二坐标系之间的平移向量,n为所述相机相对所述投影区域所在墙面的法向量,d为所述投影装置与所述墙面的截距。
  8. 根据权利要求1所述的方法,其特征在于,所述目标参数包括所述相机的参数、所述第一坐标系与所述第二坐标系之间的旋转矩阵,所述第一坐标系与所述第二坐标系之间的平移向量、所述投影装置与所述墙面的截距、以及所述目标图像和所述源图像之间的多个特征点对;
    所述根据所述目标参数计算所述第一坐标系与所述第二坐标系之间的第一映射关系具体包括:
    针对每一所述特征点对,根据该特征点对计算所述投影区域所在墙面的法向量;
    针对每一所述投影区域所在墙面的法向量,根据该法向量、所述第一坐标系与所述第二坐标系之间的旋转矩阵,所述第一坐标系与所述第二坐标系之间的平移向量、所述投影装置与所述墙面的截距以及所述相机的参数,计算得到对应于该法向量的候选第一映射关系;
    从所述候选第一映射关系中确定所述第一映射关系。
  9. 根据权利要求8所述的方法,其特征在于,所述从所述候选第一映射关系中确定所述第一映射关系,包括:
    针对每一候选第一映射关系,计算该候选第一映射关系所对应的矩阵与其他候选第一映射关系所对应的矩阵之间的差值的绝对值;
    计算每一所述候选第一映射关系所对应的绝对值的和值;
    将和值最小的候选第一映射关系作为所述第一映射关系。
  10. 一种投影图像调整装置,其特征在于,包括:
    第一获取模块,用于获取目标参数,所述目标参数用于描述相机视角下的投影图像 所在的第一坐标系与源图像所在的第二坐标系之间的关联关系,所述投影图像是投影装置将源图像投射到投影区域形成的图像;
    第一计算模块,用于根据所述目标参数计算所述第一坐标系与所述第二坐标系之间的第一映射关系;
    第二获取模块,用于基于所述第一映射关系获取所述第一坐标系与预设的用户视角平面所在的用户视角坐标系之间的关联参数,所述关联参数包括所述相机相对于所述用户视角平面的法向量、所述第一坐标系与预设的用户视角坐标系之间的旋转矩阵以及所述第一坐标系与预设的用户视角坐标系之间的平移向量;
    第二计算模块,用于根据所述关联参数计算所述第一坐标系与所述用户视角坐标系之间的第二映射关系;
    确定模块,用于基于所述第一映射关系、所述第二映射关系确定所述第二坐标系与所述用户视角坐标系之间的第三映射关系;
    执行模块,用于根据所述第三映射关系对所述源图像进行调整,并将调整后的源图像投射至所述投影区域。
  11. 根据权利要求10所述的装置,其特征在于,所述第一获取模块包括:
    第一获取子模块,用于获取相机对投影区域进行拍摄得到的目标图像,所述投影区域显示有投影图像;
    输入子模块,用于将所述目标图像和所述源图像输入至特征点识别模型,得到所述特征点识别模型输出的所述目标图像和所述源图像之间的多个特征点对;其中,所述特征点识别模型由包括特征点标注信息的图像样本训练得到;
    所述第一计算模块,包括:
    第一确定子模块,用于从所述多个特征点对中确定目标特征点对;
    第一计算子模块,用于基于所述目标特征点对计算所述第一映射关系。
  12. 根据权利要求11所述的装置,其特征在于,所述第一确定子模块,包括:
    第一执行子单元,用于根据所述目标图像中的特征点的位置,将处于所述目标图像中的预设区域内的特征点作为所述目标图像的目标特征点,其中,所述预设区域为所述目标图像上远离所述目标图像发生畸变的边缘的区域;
    第二执行子单元,用于将所述目标特征点,以及所述源图像上与所述目标特征点相对应的特征点作为所述目标特征点对。
  13. 根据权利要求12所述的装置,其特征在于,所述投影图像为所述投影装置向所述投影装置的上方投射所述源图像时,在所述投影区域形成的图像,所述预设区域为所述目标图像中位于图像下边缘预设距离范围的区域。
  14. The apparatus according to claim 11, wherein the first determination submodule comprises:
    an acquisition subunit, configured to acquire line segment information in the target image;
    a clustering subunit, configured to cluster the line segments in the target image;
    a first determination subunit, configured to determine, from the line segments in the target image based on the clustering result, edge line segments representing boundaries of a projection screen;
    a third execution subunit, configured to take intersections between the edge line segments as vertices of the projection screen;
    a second determination subunit, configured to determine target feature points from the feature points located within the projection screen region; and
    a fourth execution subunit, configured to take the target feature points and the feature points on the source image corresponding to the target feature points as the target feature point pairs.
  15. The apparatus according to claim 14, wherein the first determination subunit is configured to: combine the clusters obtained by clustering; and take, as the edge line segments, line segments in clusters that can be combined into a quadrilateral and whose aspect ratio satisfies a preset ratio requirement.
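Taking intersections of edge line segments as the vertices of the projection screen (claims 14 and 15) reduces to intersecting lines. A minimal sketch in homogeneous coordinates, with illustrative edge coordinates assumed for the example:

```python
import numpy as np

def line_through(p, q):
    """Homogeneous line through two image points: the cross product of
    their homogeneous coordinates."""
    return np.cross([p[0], p[1], 1.0], [q[0], q[1], 1.0])

def intersection(l1, l2):
    """Intersection point of two homogeneous lines as (x, y); assumes the
    lines are not parallel (third homogeneous coordinate is nonzero)."""
    x = np.cross(l1, l2)
    return (x[0] / x[2], x[1] / x[2])

# A horizontal top edge (y = 10) and a vertical left edge (x = 20) of the
# detected screen meet at the top-left vertex
top = line_through((0.0, 10.0), (100.0, 10.0))
left = line_through((20.0, 0.0), (20.0, 100.0))
vertex = intersection(top, left)
```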
  16. The apparatus according to claim 10, wherein the target parameters comprise parameters of the camera, a rotation matrix between the first coordinate system and the second coordinate system, a translation vector between the first coordinate system and the second coordinate system, a normal vector of the wall surface where the projection region is located, and an intercept between the projection device and the wall surface; and correspondingly, the first calculation module is configured to:
    calculate the first mapping relationship by the following formula:
    H = K(R - t·nᵀ/d)K⁻¹
    where H is a matrix describing the first mapping relationship, K denotes the parameters of the camera, R is the rotation matrix between the first coordinate system and the second coordinate system, t is the translation vector between the first coordinate system and the second coordinate system, n is the normal vector of the camera relative to the wall surface where the projection region is located, and d is the intercept between the projection device and the wall surface.
  17. The apparatus according to claim 10, wherein the target parameters comprise parameters of the camera, a rotation matrix between the first coordinate system and the second coordinate system, a translation vector between the first coordinate system and the second coordinate system, an intercept between the projection device and the wall surface, and a plurality of feature point pairs between the target image and the source image; and
    the first calculation module comprises:
    a second calculation submodule, configured to calculate, for each of the feature point pairs, a normal vector of the wall surface where the projection region is located according to the feature point pair;
    a third calculation submodule, configured to calculate, for each normal vector of the wall surface where the projection region is located, a candidate first mapping relationship corresponding to the normal vector according to the normal vector, the rotation matrix between the first coordinate system and the second coordinate system, the translation vector between the first coordinate system and the second coordinate system, the intercept between the projection device and the wall surface, and the parameters of the camera; and
    a second determination submodule, configured to determine the first mapping relationship from the candidate first mapping relationships.
  18. The apparatus according to claim 17, wherein the second determination submodule comprises:
    a first calculation subunit, configured to calculate, for each candidate first mapping relationship, absolute values of differences between the matrix corresponding to the candidate first mapping relationship and the matrices corresponding to the other candidate first mapping relationships;
    a second calculation subunit, configured to calculate, for each candidate first mapping relationship, a sum of the corresponding absolute values; and
    a fifth execution subunit, configured to take the candidate first mapping relationship with the smallest sum as the first mapping relationship.
  19. A computer-readable storage medium having a computer program stored thereon, wherein the program, when executed by a processor, implements the steps of the method according to any one of claims 1 to 9.
  20. An electronic device, comprising:
    a memory having a computer program stored thereon; and
    a processor, configured to execute the computer program in the memory to implement the steps of the method according to any one of claims 1 to 9.
PCT/CN2021/135440 2021-03-19 2021-12-03 Projection image adjustment method and apparatus, storage medium, and electronic device WO2022193739A1 (zh)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN202110297322.7A CN113099198B (zh) 2021-03-19 2021-03-19 Projection image adjustment method and apparatus, storage medium, and electronic device
CN202110297322.7 2021-03-19

Publications (1)

Publication Number Publication Date
WO2022193739A1 true WO2022193739A1 (zh) 2022-09-22

Family

ID=76668535

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2021/135440 WO2022193739A1 (zh) 2021-03-19 2021-12-03 Projection image adjustment method and apparatus, storage medium, and electronic device

Country Status (2)

Country Link
CN (1) CN113099198B (zh)
WO (1) WO2022193739A1 (zh)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116542847A (zh) * 2023-07-05 2023-08-04 海豚乐智科技(成都)有限责任公司 High-speed image simulation method for low-slow-small targets, storage medium and device
CN117724610A (zh) * 2023-12-13 2024-03-19 广东聚华新型显示研究院 Data processing method and apparatus for head-mounted display device, head-mounted device and medium

Families Citing this family (5)

Publication number Priority date Publication date Assignee Title
CN113099198B (zh) 2021-03-19 2023-01-10 深圳市火乐科技发展有限公司 Projection image adjustment method and apparatus, storage medium, and electronic device
CN116074482A (zh) * 2021-11-03 2023-05-05 深圳光峰科技股份有限公司 Projection picture superimposition method and apparatus, electronic device and storage medium
CN114466173A (zh) * 2021-11-16 2022-05-10 海信视像科技股份有限公司 Projection device and projection display control method for automatically projecting onto a curtain region
CN114286066B (zh) * 2021-12-23 2024-07-23 深圳市火乐科技发展有限公司 Projection correction method and apparatus, storage medium and projection device
CN114827562B (zh) * 2022-03-11 2024-06-25 深圳海翼智新科技有限公司 Projection method and apparatus, projection device and computer storage medium

Citations (4)

Publication number Priority date Publication date Assignee Title
CN106127745A (zh) * 2016-06-17 2016-11-16 凌云光技术集团有限责任公司 Joint calibration method and device for structured-light 3D vision system and line-scan camera
CN110336987A (zh) * 2019-04-03 2019-10-15 北京小鸟听听科技有限公司 Projector distortion correction method and device, and projector
JP2020150481A (ja) * 2019-03-15 2020-09-17 キヤノン株式会社 Information processing device, projection system, information processing method, and program
CN113099198A (zh) * 2021-03-19 2021-07-09 深圳市火乐科技发展有限公司 Projection image adjustment method and apparatus, storage medium, and electronic device

Family Cites Families (6)

Publication number Priority date Publication date Assignee Title
CN105227881B (zh) * 2015-09-15 2019-02-26 海信集团有限公司 Projection picture correction method and projection device
CN107547879B (zh) * 2016-06-24 2019-12-24 上海顺久电子科技有限公司 Projection imaging correction method and device, and laser television
JP2018084686A (ja) * 2016-11-24 2018-05-31 株式会社リコー Image projection device and control method of image projection device
CN110784691B (zh) * 2018-07-31 2022-02-18 中强光电股份有限公司 Projection device, projection system and image correction method
CN110111262B (zh) * 2019-03-29 2021-06-04 北京小鸟听听科技有限公司 Projector projection distortion correction method and device, and projector
CN110830781B (zh) * 2019-10-30 2021-03-23 歌尔科技有限公司 Binocular-vision-based automatic projection image correction method and system

Patent Citations (4)

Publication number Priority date Publication date Assignee Title
CN106127745A (zh) * 2016-06-17 2016-11-16 凌云光技术集团有限责任公司 Joint calibration method and device for structured-light 3D vision system and line-scan camera
JP2020150481A (ja) * 2019-03-15 2020-09-17 キヤノン株式会社 Information processing device, projection system, information processing method, and program
CN110336987A (zh) * 2019-04-03 2019-10-15 北京小鸟听听科技有限公司 Projector distortion correction method and device, and projector
CN113099198A (zh) * 2021-03-19 2021-07-09 深圳市火乐科技发展有限公司 Projection image adjustment method and apparatus, storage medium, and electronic device

Cited By (3)

Publication number Priority date Publication date Assignee Title
CN116542847A (zh) * 2023-07-05 2023-08-04 海豚乐智科技(成都)有限责任公司 High-speed image simulation method for low-slow-small targets, storage medium and device
CN116542847B (zh) * 2023-07-05 2023-10-10 海豚乐智科技(成都)有限责任公司 High-speed image simulation method for low-slow-small targets, storage medium and device
CN117724610A (zh) * 2023-12-13 2024-03-19 广东聚华新型显示研究院 Data processing method and apparatus for head-mounted display device, head-mounted device and medium

Also Published As

Publication number Publication date
CN113099198B (zh) 2023-01-10
CN113099198A (zh) 2021-07-09

Similar Documents

Publication Publication Date Title
WO2022193739A1 (zh) Projection image adjustment method and apparatus, storage medium, and electronic device
US9946954B2 (en) Determining distance between an object and a capture device based on captured image data
WO2022179108A1 (zh) Projection correction method and apparatus, storage medium and electronic device
US8571350B2 (en) Image processing system with image alignment mechanism and method of operation thereof
US11282232B2 (en) Camera calibration using depth data
US10558881B2 (en) Parallax minimization stitching method and apparatus using control points in overlapping region
US8538084B2 (en) Method and apparatus for depth sensing keystoning
US8398246B2 (en) Real-time projection management
US9838614B1 (en) Multi-camera image data generation
US10586308B2 (en) Digital media environment for removal of obstructions in a digital image scene
TW202042548A Projection system, projection apparatus, and method for correcting the displayed image thereof
CN112272292B Projection correction method and device, and storage medium
CN106125994B Coordinate matching method, and control method and terminal using the coordinate matching method
WO2023103377A1 (zh) Calibration method and apparatus, electronic device, storage medium and computer program product
WO2019041650A1 (zh) Camera calibration parameter correction method, apparatus, device and storage medium
WO2019076027A1 (zh) White balance information synchronization method and apparatus, and computer-readable medium
WO2023098045A1 (zh) Image alignment method and apparatus, computer device and storage medium
CN112689136B Projection image adjustment method and apparatus, storage medium, and electronic device
CN112233189B Multi-depth-camera extrinsic parameter calibration method, device and storage medium
CN109690611B Image correction method and device
US20220405968A1 (en) Method, apparatus and system for image processing
JP2009301181A Image processing device, image processing program, image processing method, and electronic apparatus
WO2018152710A1 (zh) Image correction method and device
CN115174878B Projection picture correction method and device, and storage medium
JP2009302731A Image processing device, image processing program, image processing method, and electronic apparatus

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 21931313

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

32PN Ep: public notification in the ep bulletin as address of the addressee cannot be established

Free format text: NOTING OF LOSS OF RIGHTS PURSUANT TO RULE 112(1) EPC (EPO FORM 1205A DATED 01.02.2024)

122 Ep: pct application non-entry in european phase

Ref document number: 21931313

Country of ref document: EP

Kind code of ref document: A1