CN113099198A - Projection image adjusting method and device, storage medium and electronic equipment


Info

Publication number
CN113099198A
Authority
CN
China
Prior art keywords
coordinate system
target
image
mapping relation
projection
Prior art date
Legal status
Granted
Application number
CN202110297322.7A
Other languages
Chinese (zh)
Other versions
CN113099198B (en)
Inventor
孙世攀
张聪
胡震宇
Current Assignee
Shenzhen Huole Science and Technology Development Co Ltd
Original Assignee
Shenzhen Huole Science and Technology Development Co Ltd
Priority date
Filing date
Publication date
Application filed by Shenzhen Huole Science and Technology Development Co Ltd filed Critical Shenzhen Huole Science and Technology Development Co Ltd
Priority to CN202110297322.7A priority Critical patent/CN113099198B/en
Publication of CN113099198A publication Critical patent/CN113099198A/en
Priority to PCT/CN2021/135440 priority patent/WO2022193739A1/en
Application granted granted Critical
Publication of CN113099198B publication Critical patent/CN113099198B/en
Legal status: Active

Classifications

    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 9/00 Details of colour television systems
    • H04N 9/12 Picture reproducers
    • H04N 9/31 Projection devices for colour picture display, e.g. using electronic spatial light modulators [ESLM]
    • H04N 9/3179 Video signal processing therefor
    • H04N 9/3185 Geometric adjustment, e.g. keystone or convergence

Landscapes

  • Physics & Mathematics (AREA)
  • Geometry (AREA)
  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Projection Apparatus (AREA)

Abstract

The present disclosure relates to a method, an apparatus, a storage medium, and an electronic device for adjusting a projected image. The method includes: acquiring target parameters, where the target parameters describe the association relationship between a first coordinate system in which the projected image under the camera's viewing angle is located and a second coordinate system in which a source image is located, the projected image being the image formed by a projection device projecting the source image onto a projection area; calculating a first mapping relationship between the first coordinate system and the second coordinate system according to the target parameters; acquiring, based on the first mapping relationship, association parameters between the first coordinate system and a user view coordinate system in which a preset user view plane is located; calculating a second mapping relationship between the first coordinate system and the user view coordinate system according to the association parameters; determining a third mapping relationship between the second coordinate system and the user view coordinate system based on the first mapping relationship and the second mapping relationship; and adjusting the source image according to the third mapping relationship and projecting the adjusted source image onto the projection area.

Description

Projection image adjusting method and device, storage medium and electronic equipment
Technical Field
The present disclosure relates to the field of projection technologies, and in particular, to a method and an apparatus for adjusting a projected image, a storage medium, and an electronic device.
Background
A projector is a device that projects images or video onto a screen. It can be connected to a computer, a game console, a storage device, and the like through different interfaces to play the corresponding video content.
Generally, the size of the projected picture is positively correlated with the distance between the projector and the wall surface. An ultra-short-focus projector, however, can project a picture with a 250 cm diagonal from a position only 50 cm away from the wall surface, that is, it can produce a large projection picture in a limited space. The trade-off is sensitivity: an angle change of as little as 1° during installation or use causes a large deviation of the projection picture. Manually correcting the picture position of an ultra-short-focus projector therefore becomes difficult, which degrades the user experience.
Disclosure of Invention
An object of the present disclosure is to provide a method, an apparatus, a storage medium, and an electronic device for adjusting a projected image, so as to solve the above related technical problems.
In order to achieve the above object, according to a first aspect of an embodiment of the present disclosure, there is provided a projection image adjustment method including:
acquiring target parameters, wherein the target parameters are used for describing the association relationship between a first coordinate system in which a projected image under the viewing angle of a camera is located and a second coordinate system in which a source image is located, and the projected image is an image formed by a projection device projecting the source image onto a projection area;
calculating a first mapping relation between the first coordinate system and the second coordinate system according to the target parameters;
acquiring, based on the first mapping relationship, association parameters between the first coordinate system and a user view coordinate system in which a preset user view plane is located, wherein the association parameters comprise a normal vector of the camera relative to the user view plane, a rotation matrix between the first coordinate system and the preset user view coordinate system, and a translation vector between the first coordinate system and the preset user view coordinate system;
calculating a second mapping relationship between the first coordinate system and the user view coordinate system according to the association parameters;
determining a third mapping relationship between the second coordinate system and the user view coordinate system based on the first mapping relationship and the second mapping relationship;
and adjusting the source image according to the third mapping relationship, and projecting the adjusted source image onto the projection area.
Optionally, the obtaining the target parameter includes:
acquiring a target image obtained by shooting a projection area by a camera, wherein the projection area displays a projection image;
inputting the target image and the source image into a feature point recognition model to obtain a plurality of feature point pairs between the target image and the source image output by the feature point recognition model; the feature point identification model is obtained by training an image sample comprising feature point marking information;
the calculating a first mapping relationship between the first coordinate system and the second coordinate system according to the target parameter includes:
determining a target characteristic point pair from the plurality of characteristic point pairs;
and calculating the first mapping relation based on the target characteristic point pairs.
Optionally, the determining a target feature point pair from the plurality of feature point pairs includes:
according to the positions of the feature points in the target image, taking feature points located in a preset region of the target image as target feature points of the target image, wherein the preset region is a region of the target image away from the edge at which the target image is distorted;
and taking the target characteristic points and the characteristic points corresponding to the target characteristic points on the source image as the target characteristic point pairs.
Optionally, the determining a target feature point pair from the plurality of feature point pairs includes:
acquiring line segment information in the target image;
clustering line segments in the target image;
determining edge line segments for representing the boundary of the projection screen from the line segments in the target image based on the clustering result;
taking the intersection points among the edge line segments as the vertexes of the projection screen;
determining target characteristic points from the characteristic points in the projection screen area;
and taking the target characteristic points and the characteristic points corresponding to the target characteristic points on the source image as the target characteristic point pairs.
Optionally, the target parameters include parameters of the camera, a rotation matrix between the first coordinate system and the second coordinate system, a translation vector between the first coordinate system and the second coordinate system, a normal vector of a wall surface where the projection area is located, and an intercept between the projection device and the wall surface; correspondingly, the calculating a first mapping relationship between the first coordinate system and the second coordinate system according to the target parameter includes:
calculating the first mapping relationship by:
H = K(R + t·nᵀ/d)K⁻¹
where H is the matrix describing the first mapping relationship, K is the parameter matrix of the camera, R is the rotation matrix between the first coordinate system and the second coordinate system, t is the translation vector between the first coordinate system and the second coordinate system, n is the normal vector of the camera relative to the wall surface where the projection area is located, and d is the intercept between the projection device and the wall surface.
Optionally, the target parameters include parameters of the camera, a rotation matrix between the first coordinate system and the second coordinate system, a translation vector between the first coordinate system and the second coordinate system, an intercept of the projection device with the wall, and a plurality of pairs of feature points between the target image and the source image;
the calculating a first mapping relationship between the first coordinate system and the second coordinate system according to the target parameter specifically includes:
for each feature point pair, calculating a normal vector of the wall surface where the projection area is located according to that feature point pair;
for each normal vector of the wall surface so obtained, calculating the candidate first mapping relationship corresponding to that normal vector according to the normal vector, the rotation matrix between the first coordinate system and the second coordinate system, the translation vector between the first coordinate system and the second coordinate system, the intercept between the projection device and the wall surface, and the parameters of the camera;
determining the first mapping relationship from the candidate first mapping relationships.
Optionally, the determining the first mapping relation from the candidate first mapping relations includes:
for each candidate first mapping relationship, calculating the absolute value of the difference between the matrix corresponding to that candidate first mapping relationship and the matrix corresponding to each of the other candidate first mapping relationships;
calculating the sum of absolute values corresponding to each candidate first mapping relation;
and taking the candidate first mapping relation with the minimum sum value as the first mapping relation.
According to a second aspect of the embodiments of the present disclosure, there is provided a projection image adjusting apparatus including:
the first acquisition module is used for acquiring target parameters, the target parameters are used for describing the association relationship between a first coordinate system where a projected image under the view angle of a camera is located and a second coordinate system where a source image is located, and the projected image is an image formed by projecting the source image to a projection area by a projection device;
the first calculation module is used for calculating a first mapping relation between the first coordinate system and the second coordinate system according to the target parameters;
a second obtaining module, configured to obtain, based on the first mapping relationship, association parameters between the first coordinate system and a user view coordinate system in which a preset user view plane is located, where the association parameters include a normal vector of the camera relative to the user view plane, a rotation matrix between the first coordinate system and the preset user view coordinate system, and a translation vector between the first coordinate system and the preset user view coordinate system;
the second calculation module is used for calculating a second mapping relationship between the first coordinate system and the user view coordinate system according to the association parameters;
a determining module, configured to determine a third mapping relationship between the second coordinate system and the user view coordinate system based on the first mapping relationship and the second mapping relationship;
and the execution module is used for adjusting the source image according to the third mapping relation and projecting the adjusted source image to the projection area.
Optionally, the first obtaining module includes:
the first acquisition submodule is used for acquiring a target image obtained by shooting a projection area by a camera, and the projection area displays a projection image;
the input submodule is used for inputting the target image and the source image into a feature point recognition model to obtain a plurality of feature point pairs between the target image and the source image output by the feature point recognition model; the feature point identification model is obtained by training an image sample comprising feature point marking information;
the first computing module, comprising:
a first determination submodule configured to determine a target feature point pair from the plurality of feature point pairs;
and the first calculation submodule is used for calculating the first mapping relation based on the target characteristic point pairs.
Optionally, the first determining sub-module includes:
the first execution subunit is configured to, according to the positions of the feature points in the target image, use feature points located in a preset region of the target image as target feature points of the target image, where the preset region is a region of the target image away from the edge at which the target image is distorted;
and the second execution subunit is used for taking the target feature points and the feature points corresponding to the target feature points on the source image as the target feature point pairs.
Optionally, the first determining sub-module includes:
the acquisition subunit is used for acquiring the line segment information in the target image;
the clustering subunit is used for clustering the line segments in the target image;
a first determining subunit, configured to determine, from the line segments in the target image, edge line segments for characterizing a boundary of a projection screen based on the clustering result;
the third execution subunit is used for taking the intersection point between each edge line segment as the vertex of the projection screen;
the second determining subunit is used for determining target characteristic points from the characteristic points in the projection screen area;
and the fourth execution subunit is configured to use the target feature point and a feature point corresponding to the target feature point on the source image as the target feature point pair.
Optionally, the target parameters include parameters of the camera, a rotation matrix between the first coordinate system and the second coordinate system, a translation vector between the first coordinate system and the second coordinate system, a normal vector of a wall surface where the projection area is located, and an intercept between the projection device and the wall surface; accordingly, the first computing module is configured to:
calculating the first mapping relationship by:
H = K(R + t·nᵀ/d)K⁻¹
where H is the matrix describing the first mapping relationship, K is the parameter matrix of the camera, R is the rotation matrix between the first coordinate system and the second coordinate system, t is the translation vector between the first coordinate system and the second coordinate system, n is the normal vector of the camera relative to the wall surface where the projection area is located, and d is the intercept between the projection device and the wall surface.
Optionally, the target parameters include parameters of the camera, a rotation matrix between the first coordinate system and the second coordinate system, a translation vector between the first coordinate system and the second coordinate system, an intercept of the projection device with the wall, and a plurality of pairs of feature points between the target image and the source image;
the first computing module, comprising:
the second calculation submodule is used for calculating a normal vector of the wall surface where the projection area is located according to each characteristic point pair;
a third calculation submodule, configured to calculate, for a normal vector of a wall surface where each projection area is located, according to the normal vector, a rotation matrix between the first coordinate system and the second coordinate system, a translation vector between the first coordinate system and the second coordinate system, an intercept between the projection apparatus and the wall surface, and a parameter of the camera, to obtain a candidate first mapping relationship corresponding to the normal vector;
a second determining submodule, configured to determine the first mapping relationship from the candidate first mapping relationships.
Optionally, the second determining sub-module includes:
a first calculating subunit, configured to calculate, for each candidate first mapping relationship, an absolute value of a difference between a matrix corresponding to the candidate first mapping relationship and matrices corresponding to other candidate first mapping relationships;
the second calculating subunit is used for calculating the sum of absolute values corresponding to each candidate first mapping relation;
and the fifth execution subunit takes the candidate first mapping relation with the minimum sum value as the first mapping relation.
According to a third aspect of embodiments of the present disclosure, there is provided a computer-readable storage medium having stored thereon a computer program which, when executed by a processor, performs the steps of the method of any one of the above-mentioned first aspects.
According to a fourth aspect of the embodiments of the present disclosure, there is provided an electronic apparatus including:
a memory having a computer program stored thereon;
a processor for executing the computer program in the memory to implement the steps of the method of any of the first aspects above.
The technical scheme at least has the following beneficial effects:
by acquiring target parameters for describing the association relationship between a first coordinate system of a projected image under a camera viewing angle and a second coordinate system of a source image, a first mapping relationship between the first coordinate system and the second coordinate system can be calculated based on the target parameters, and association parameters between the first coordinate system and a user viewing angle coordinate system of a preset user viewing angle plane can be calculated based on the first mapping relationship. In this way, a second mapping relationship between the first coordinate system and the user view coordinate system may be calculated based on the association parameter, and a third mapping relationship between the second coordinate system and the user view coordinate system, that is, a mapping relationship between the projection source image and the image at the user view angle, may be determined based on the first mapping relationship and the second mapping relationship. That is, the source image can be adjusted through the third mapping relationship, so as to obtain a required projection picture. For example, the trapezoidal projection screen may be adjusted to a rectangular projection screen based on the third mapping relationship, thereby realizing trapezoidal correction of the projection screen.
Additional features and advantages of the disclosure will be set forth in the detailed description which follows.
Drawings
The accompanying drawings, which are included to provide a further understanding of the disclosure and are incorporated in and constitute a part of this specification, illustrate embodiments of the disclosure and together with the description serve to explain the disclosure without limiting the disclosure. In the drawings:
fig. 1 is a flowchart illustrating a method for adjusting a projection image according to an exemplary embodiment of the present disclosure.
Fig. 2 is a flowchart illustrating a method for adjusting a projection image according to an exemplary embodiment of the present disclosure.
FIG. 3 is a schematic diagram of a target image shown in an exemplary embodiment of the present disclosure.
Fig. 4 is a flowchart illustrating a method for adjusting a projected image according to an exemplary embodiment of the present disclosure.
Fig. 5 is a block diagram of a projected image adjusting apparatus according to an exemplary embodiment of the present disclosure.
FIG. 6 is a block diagram of an electronic device shown in an exemplary embodiment of the present disclosure.
Detailed Description
The following detailed description of specific embodiments of the present disclosure is provided in connection with the accompanying drawings. It should be understood that the detailed description and specific examples, while indicating the present disclosure, are given by way of illustration and explanation only, not limitation.
Before describing the projection image adjustment method, apparatus, storage medium, and electronic device of the present disclosure, an application scenario of the present disclosure is first described, and various embodiments provided by the present disclosure may be applied to an adjustment scenario of a projection image. The projection image may be, for example, an image formed by various projection devices projecting the source image to the projection area.
Taking the ultra-short-focus projector as an example, an image with a 250 cm diagonal can be projected from a position 50 cm away from the wall surface, that is, a large projection image can be produced in a limited space. Accordingly, the projection picture of the ultra-short-focus projector is also susceptible to positional deviation: in the case of an angular change of just 1°, the projection picture may be shifted by 6 cm. Manually correcting the picture position of an ultra-short-focus projector therefore becomes difficult, which degrades the user experience.
To this end, the present disclosure provides a method for adjusting a projected image, and fig. 1 is a flowchart of a method for adjusting a projected image according to an exemplary embodiment of the present disclosure, where the method includes:
in step S11, target parameters are acquired.
The target parameters are used for describing the association relationship between a first coordinate system where a projection image under the view angle of the camera is located and a second coordinate system where a source image is located, and the projection image is an image formed by a projection device (such as an ultra-short-focus projection device) projecting the source image to a projection area. In an implementation, the first coordinate system may be established for a projected image from a camera perspective and the second coordinate system may be established based on the source image.
For example, in some possible implementation scenarios, the target parameters include parameters of the camera, such as correction coefficients, focal length, and so on. The target parameters further include a rotation matrix between the first coordinate system and the second coordinate system, a translation vector between the first coordinate system and the second coordinate system, a normal vector of a wall surface where the projection area is located, and an intercept between the projection device and the wall surface.
For the normal vector of the wall surface where the projection area is located, the normal vector may be represented as n = [x, y, z], where x may be related to the yaw angle of the projection device, y may be related to the pitch angle of the projection device, and z may be related to the roll angle of the projection device. For example, the parameters y and z of the normal vector may be obtained by an IMU (Inertial Measurement Unit). Furthermore, one or more pairs of feature points between the projected image and the source image may be acquired based on a camera or a ToF (Time of Flight) sensor. A feature point pair may include a pixel point A on the projected image and a pixel point B on the source image, where pixel point A and pixel point B represent the positions of the same object on the projected image and the source image, respectively. After the feature point pairs are obtained, the parameter x can be calculated from them, yielding the normal vector of the camera relative to the wall surface where the projection area is located.
After obtaining the normal vector, a rotation matrix between the first coordinate system and the second coordinate system may be calculated based on the normal vector. In addition, the translation vector between the first coordinate system and the second coordinate system may be obtained by pre-calibration, which is specifically referred to the description in the related art and is not repeated herein.
In step S12, a first mapping relationship between the first coordinate system and the second coordinate system is calculated according to the target parameter.
Following the above example, after the target parameters are obtained, the first mapping relationship may be calculated as follows:
H = K(R + t·nᵀ/d)K⁻¹
where H is the matrix describing the first mapping relationship, K is the parameter matrix of the camera, R is the rotation matrix between the first coordinate system and the second coordinate system, t is the translation vector between the first coordinate system and the second coordinate system, n is the normal vector of the camera relative to the wall surface where the projection area is located, and d is the intercept between the projection device and the wall surface.
In step S13, the association parameters between the first coordinate system and the user view coordinate system in which the preset user view plane is located are obtained based on the first mapping relationship.
The user view plane may be understood as the plane of the projected image as seen by an assumed user. When the user expects to observe a rectangular projection picture from a position directly facing the projection area, the user view plane may be a plane parallel to the plane of the projection area. For example, when the projection area is a wall surface, the user view plane may be a plane parallel to the wall surface at a predetermined distance from it (e.g., two meters, three meters, etc.). Further, a coordinate system may be established on the user view plane, namely the user view coordinate system.
The association parameters may include a normal vector of the camera relative to the user view plane, a rotation matrix between the first coordinate system and the preset user view coordinate system, and a translation vector between the first coordinate system and the preset user view coordinate system. It should be understood that, since the user view plane and the plane of the projection area are parallel to each other, the normal vector of the camera relative to the user view plane may be the same as the normal vector of the camera relative to the wall surface where the projection area is located.
Similarly, after the normal vector of the camera relative to the user view plane is obtained, the rotation matrix between the first coordinate system and the preset user view coordinate system may be calculated based on that normal vector. For example, the normal vector of the camera relative to the user view plane may be inverted (n′ = [−x, −y, −z]) and multiplied by the rotation matrix between the first coordinate system and the second coordinate system, thereby obtaining the rotation matrix between the first coordinate system and the preset user view coordinate system. In addition, the translation vector between the first coordinate system and the preset user view coordinate system may be obtained by pre-calibration, for which reference is made to the description in the related art; the details are not repeated here.
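The construction of this rotation is not spelled out further here. One common way to realize a rotation that faces the user view plane is to rotate the camera's optical axis onto the inverted normal; the following Python sketch does so with a Rodrigues-style formula and should be read as an illustrative assumption, not necessarily the exact construction intended above.

    import numpy as np

    def rotation_taking(a, b):
        # Minimal Rodrigues construction of the rotation taking unit vector a
        # onto unit vector b (undefined when a == -b; handle that separately).
        v, c = np.cross(a, b), float(np.dot(a, b))
        vx = np.array([[0.0, -v[2], v[1]],
                       [v[2], 0.0, -v[0]],
                       [-v[1], v[0], 0.0]])
        return np.eye(3) + vx + vx @ vx / (1.0 + c)

    n = np.array([0.10, 0.05, -0.99])    # illustrative normal, pointing back toward the camera
    n /= np.linalg.norm(n)
    # Rotate the optical axis (z) onto the inverted normal -n, so the derived
    # user view faces the projection plane head-on.
    R_view = rotation_taking(np.array([0.0, 0.0, 1.0]), -n)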
In step S14, a second mapping relationship between the first coordinate system and the user view coordinate system is calculated according to the association parameters.
For example, the second mapping relationship may be calculated as follows:
H′ = K(R′ + t′·n′ᵀ/d′)K⁻¹
where H′ is the matrix describing the second mapping relationship, K is the parameter matrix of the camera, R′ is the rotation matrix between the first coordinate system and the preset user view coordinate system, t′ is the translation vector between the first coordinate system and the preset user view coordinate system, n′ is the normal vector of the camera relative to the user view plane, and d′ is the intercept between the projection device and the wall surface.
In step S15, a third mapping relationship between the second coordinate system and the user view coordinate system may be determined based on the first mapping relationship and the second mapping relationship. It should be understood that the source image lies in the second coordinate system and the image at the user's viewing angle lies in the user view coordinate system, so the third mapping relationship can be understood as the mapping between the source image supplied to the projection device and the image seen from the user's viewing angle.
In this way, in step S16, the source image is adjusted according to the third mapping relationship, and the adjusted source image is projected to the projection area.
It should be understood that the user view plane is the plane of the projected image as seen by the assumed user, and that varying its depth (its distance from the wall) only changes the picture by a similarity transformation. Thus, if the picture projected onto the user view plane is rectangular, the projected image formed at the projection area is still rectangular.
Therefore, in some implementation scenarios, when a rectangular projection picture is required, a rectangular picture may be assumed in the user view coordinate system. The source image can then be adjusted based on the third mapping relationship so that the picture it forms on the user view plane is rectangular, and hence the picture it forms in the projection area is rectangular as well, finally achieving the keystone-correction (trapezoidal correction) effect and improving the convenience of the projection device. In the present embodiment, the third mapping relationship can be calculated from modules such as the ToF sensor and the IMU, that is, without using an image captured by a camera. Therefore, even if the camera module fails, the projected image can still be adjusted with the scheme of this embodiment.
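To make the composition of the mappings concrete, the following minimal Python sketch covers steps S15 and S16 under the assumption that H1 maps source-image coordinates into the camera view and H2 maps the camera view onto the user view plane; the matrices and image size are illustrative stand-ins.

    import numpy as np
    import cv2

    H1 = np.array([[1.02,  0.03,  5.0],       # first mapping (assumed source -> camera view)
                   [0.01,  0.98, -3.0],
                   [1e-5,  2e-5,  1.0]])
    H2 = np.array([[0.99, -0.02,  2.0],       # second mapping (assumed camera view -> user view)
                   [0.02,  1.01,  1.0],
                   [1e-5,  1e-5,  1.0]])

    H3 = H2 @ H1                              # third mapping: source image -> user view plane

    src = np.zeros((720, 1280, 3), np.uint8)  # stand-in for the source image
    h, w = src.shape[:2]
    # Pre-warping the source with the inverse of H3 makes the picture on the
    # user view plane (and hence on the wall) appear as the undistorted rectangle.
    adjusted = cv2.warpPerspective(src, np.linalg.inv(H3), (w, h))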
Fig. 2 is a flowchart of a projected image adjusting method according to an exemplary embodiment of the disclosure. As shown in fig. 2, step S11 includes:
and S111, acquiring a target image obtained by shooting a projection area by a camera, wherein the projection area displays a projection image.
S112, inputting the target image and the source image into a feature point recognition model to obtain a plurality of feature point pairs between the target image and the source image output by the feature point recognition model.
The feature point recognition model can be trained from image samples that include feature point annotation information. For example, the model may be built on the SuperPoint algorithm: a source image and the target image obtained by shooting the projection area (displaying the projected image formed from that source image) with a camera are annotated with feature point pairs, so that the source image and the target image form one training sample. Multiple groups of such training samples can then be fed to the model, and the model's feature-point-pair recognition results are obtained. The parameters of the model can be adjusted based on the recognition results and the feature point pair information annotated in the samples until the model converges, finally yielding the feature point recognition model. Of course, in a specific implementation, the feature point recognition model may also be built on other feature point matching algorithms of the same type, which is not limited by this disclosure.
After obtaining the feature point recognition model, the target image and the source image may be input to the feature point recognition model, and a plurality of feature point pairs between the target image and the source image output by the feature point recognition model are obtained.
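The disclosure relies on a trained SuperPoint-style model for this step. Purely as a classical stand-in for illustration, the following Python sketch produces feature point pairs between the source image and the target image with ORB descriptors; the file names are assumptions.

    import cv2

    source = cv2.imread("source.png", cv2.IMREAD_GRAYSCALE)
    target = cv2.imread("target.png", cv2.IMREAD_GRAYSCALE)

    orb = cv2.ORB_create(nfeatures=2000)
    kp_s, des_s = orb.detectAndCompute(source, None)
    kp_t, des_t = orb.detectAndCompute(target, None)

    matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
    matches = sorted(matcher.match(des_s, des_t), key=lambda m: m.distance)

    # Each match is one feature point pair: a pixel on the source image and the
    # pixel representing the same content on the target image.
    pairs = [(kp_s[m.queryIdx].pt, kp_t[m.trainIdx].pt) for m in matches]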
In this way, after the feature point pairs are obtained, a target feature point pair may be determined from the plurality of feature point pairs in step S121. It should be understood that the feature point recognition model may identify a large number of feature point pairs, so target feature point pairs are selected from among them in order to calculate the first mapping relationship.
Specifically, in one possible implementation, the projection area may comprise a projection screen (e.g., a curtain). It is worth noting that the projection screen may not be in close contact with the wall (for example, a hanging curtain may stand off the wall by some distance). In that case, the optical path to a light spot that falls on the projection screen differs from the optical path to a light spot that falls outside the screen but inside the projection area, which introduces error.
Thus, determining a target feature point pair from the plurality of feature point pairs (step S121) may include:
the line segment information in the target image is obtained, for example, the line segment information in the target image may be extracted based on a line segment detection algorithm such as LSD and hough transform.
Next, the line segments in the target image are clustered. It should be appreciated that by clustering the line segments in the target image, a plurality of line segment clusters can be obtained, each cluster corresponding to one class of line segments; for example, the four edges of the projection screen may correspond to four clusters.
Edge line segments characterizing the boundary of the projection screen are then determined from the line segments in the target image based on the clustering result. For example, in some implementation scenarios, combinations of the clusters may be examined, and line segments that can be combined into a quadrilateral whose aspect ratio meets a preset requirement (e.g., 16:9) are taken as the edge line segments of the projection screen.
It is noted that in some implementations, the clustered line segments may also include the edge line segments of the projected image itself. In this case, since the difference in luminance on the two sides of an edge line segment of the projection screen is small while the difference in luminance on the two sides of an edge line segment of the projected image is large, the line segments with the smaller luminance difference across them can be taken as the edge line segments of the projection screen.
In this way, the intersection points between the edge line segments can be used as the vertices of the projection screen. Once the edge line segments and the vertices of the projection screen are obtained, the region covered by the projection screen can be determined.
Further, a target feature point can be determined from feature points located in the projection screen area, and the target feature point and a feature point corresponding to the target feature point on the source image are used as the target feature point pair to reduce the calculation error.
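For illustration, two edge line segments can be intersected in homogeneous coordinates to obtain a screen vertex, as in the following minimal Python sketch (segment endpoints are made up):

    import numpy as np

    def segment_intersection(s1, s2):
        # In homogeneous coordinates the line through two points is their cross
        # product, and the intersection of two lines is the cross product of the
        # two lines (the segments must not be parallel).
        p1, p2 = np.array([*s1[0], 1.0]), np.array([*s1[1], 1.0])
        p3, p4 = np.array([*s2[0], 1.0]), np.array([*s2[1], 1.0])
        x = np.cross(np.cross(p1, p2), np.cross(p3, p4))
        return x[:2] / x[2]                   # back to pixel coordinates

    top  = ((100.0, 80.0), (1180.0, 95.0))    # illustrative top edge of the screen
    left = ((110.0, 70.0), (90.0, 640.0))     # illustrative left edge of the screen
    vertex = segment_intersection(top, left)  # one vertex of the projection screen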
In another possible implementation scenario, determining a target feature point pair from the plurality of feature point pairs (step S121) includes:
according to the positions of the feature points in the target image, taking feature points located in a preset region of the target image as target feature points of the target image, wherein the preset region is a region of the target image away from the edge at which the target image is distorted;
and taking the target characteristic points and the characteristic points corresponding to the target characteristic points on the source image as the target characteristic point pairs.
Fig. 3 is a schematic view of a target image. In the example of fig. 3, the projection device forms the projected image in the projection area by projecting light upward, while the camera shoots the target image from below, so there may be corresponding distortion in the upper portion of the target image. For the same physical length, a region in the lower part of the target image therefore contains more effective pixels than a region in the upper part. In this case, the preset region may refer to the region of the target image lying within a preset distance range of the lower edge of the image; the distance range may be set according to application requirements, which is not limited by the present disclosure.
Of course, fig. 3 illustrates the scheme with the camera shooting the target image from below; those skilled in the art will recognize that in practice the camera may equally shoot from above, or from the left or the right. In general, therefore, the preset region is the region of the target image away from the edge at which the target image is distorted.
By selecting feature points located in the preset region, this technical solution improves the selection precision of the feature point pairs and, in turn, the calculation accuracy of the first mapping relationship.
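A minimal Python sketch of this filter, assuming, as in fig. 3, that distortion concentrates near the top of the target image so that the preset region is a band along the bottom edge (the 40% band width is an illustrative choice, not a value from this disclosure):

    # Keep only feature point pairs whose target-image point lies in a band
    # along the bottom edge of the target image (the assumed preset region).
    def filter_pairs(pairs, image_height, band=0.4):
        y_min = image_height * (1.0 - band)
        return [(p_src, p_tgt) for p_src, p_tgt in pairs if p_tgt[1] >= y_min]

    pairs = [((10.0, 20.0), (120.0, 700.0)),   # lands in the band: kept
             ((50.0, 60.0), (400.0, 100.0))]   # near the distorted top edge: dropped
    target_pairs = filter_pairs(pairs, image_height=720)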
Still referring to fig. 2, after the target feature point pairs are acquired, the first mapping relationship may be calculated based on them. For example, in some embodiments, 4 feature point pairs may be selected, so that the first mapping relationship can be calculated from the coordinate information of the 8 points in those 4 pairs.
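With exactly four pairs the homography is determined directly; the following Python sketch uses illustrative coordinates (with more than four pairs, a robust fit such as cv2.findHomography with RANSAC would be a natural choice):

    import numpy as np
    import cv2

    # Four target feature point pairs: source-image corners and the matching
    # points found in the camera's target image (coordinates illustrative).
    src_pts = np.array([[0, 0], [1280, 0], [1280, 720], [0, 720]], np.float32)
    dst_pts = np.array([[35, 60], [1230, 42], [1265, 700], [18, 690]], np.float32)

    H = cv2.getPerspectiveTransform(src_pts, dst_pts)   # the first-mapping matrix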
In step S13, the first mapping relationship may be decomposed to obtain the association parameters. For steps S14 to S16, please refer to the description of the embodiment of fig. 1 above; for brevity, they are not repeated here.
According to the above technical solution, the feature point pairs between the projected image and the source image can be acquired with a deep learning model, which reduces the mismatching rate and increases the number of matched feature points. This improves the selection precision of the feature point pairs and, in turn, the accuracy of the calculated first mapping relationship and of the projected-image adjustment.
Fig. 4 is a flowchart illustrating a method for adjusting a projection image according to an exemplary embodiment of the disclosure, where the method includes:
and S11, acquiring the target parameters.
The target parameters are used for describing the association relationship between a first coordinate system where a projection image under the camera view angle is located and a second coordinate system where a source image is located, and the projection image is an image formed by a projection device projecting the source image to a projection area. In an implementation, the first coordinate system may be established for a projected image from a camera perspective and the second coordinate system may be established based on the source image.
The target parameters comprise parameters of the camera, a rotation matrix between the first coordinate system and the second coordinate system, a translation vector between the first coordinate system and the second coordinate system, and an intercept between the projection device and the wall surface. For the above parameters, please refer to the description of the embodiment of fig. 1, which is not repeated herein.
The target parameters may also include a plurality of pairs of feature points between the target image and the source image. For example, the target image and the source image may be input to a feature point recognition model, resulting in a plurality of feature point pairs between the target image and the source image output by the feature point recognition model. For the feature point identification model, please refer to the above description of the embodiment of fig. 2, which is not repeated herein.
S123, for each feature point pair, calculating a normal vector of the wall surface where the projection area is located according to that feature point pair.
It should be understood that the normal vector of the wall surface where the projection area is located may be represented as n = [x, y, z]. In a specific implementation, the parameters y and z of the normal vector may be obtained from the IMU, and the parameter x can then be calculated from a feature point pair between the projected image and the source image, yielding the normal vector of the camera relative to the wall surface where the projection area is located.
That is, for each feature point pair, a normal vector of the wall surface where the projection area is located can be calculated from that pair.
S124, for each normal vector of the wall surface so obtained, calculating the candidate first mapping relationship corresponding to that normal vector according to the normal vector, the rotation matrix between the first coordinate system and the second coordinate system, the translation vector between the first coordinate system and the second coordinate system, the intercept between the projection device and the wall surface, and the parameters of the camera.
For example, the candidate first mapping relationship corresponding to the normal vector may be calculated as follows:
N = K(R + t·nᵀ/d)K⁻¹
where N is the matrix describing the candidate first mapping relationship, K is the parameter matrix of the camera, R is the rotation matrix between the first coordinate system and the second coordinate system, t is the translation vector between the first coordinate system and the second coordinate system, n is the normal vector of the camera relative to the wall surface where the projection area is located (as computed from the corresponding feature point pair), and d is the intercept between the projection device and the wall surface.
S125, determining the first mapping relation from the candidate first mapping relations.
Since multiple candidate first mapping relationships may be obtained by calculation, iterative optimization can be performed over them so as to select the first mapping relationship.
Taking the least squares method as an example, in one possible implementation, the determining the first mapping relation from the candidate first mapping relations includes:
for each candidate first mapping relationship, calculating the absolute value of the difference between the matrix corresponding to that candidate first mapping relationship and the matrix corresponding to each of the other candidate first mapping relationships;
calculating the sum of absolute values corresponding to each candidate first mapping relation;
and taking the candidate first mapping relation with the minimum sum value as the first mapping relation.
For example, for the matrix H₀ corresponding to one candidate first mapping relationship, the sum of the absolute values of the differences between it and the matrices corresponding to the other candidate first mapping relationships may be calculated as:
S(H₀) = Σᵢ |H₀ − Hᵢ|, i = 1, …, n
where Hᵢ is the matrix corresponding to the i-th of the n candidate first mapping relationships.
In this way, the candidate first mapping relationship with the smallest sum can be taken as the first mapping relationship; the candidates computed from the feature points identified by the model are thereby screened optimally, finally yielding the first mapping relationship.
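A minimal Python sketch of this selection rule (the random candidates are illustrative only):

    import numpy as np

    def select_first_mapping(candidates):
        # For each candidate matrix, sum the absolute element-wise differences
        # against every other candidate; keep the candidate with the smallest sum.
        sums = [sum(np.abs(h - other).sum() for other in candidates if other is not h)
                for h in candidates]
        return candidates[int(np.argmin(sums))]

    candidates = [np.eye(3) + 0.01 * np.random.randn(3, 3) for _ in range(5)]
    H_first = select_first_mapping(candidates)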
Of course, when the first mapping relationship is determined from the candidate first mapping relationships, it may also be selected iteratively with a method such as Kalman filtering, which is not limited by this disclosure.
S13, acquiring, based on the first mapping relationship, the association parameters between the first coordinate system and the user view coordinate system in which the preset user view plane is located.
S14, calculating a second mapping relationship between the first coordinate system and the user view coordinate system according to the association parameters.
S15, determining a third mapping relationship between the second coordinate system and the user view coordinate system based on the first mapping relationship and the second mapping relationship.
S16, adjusting the source image according to the third mapping relationship, and projecting the adjusted source image onto the projection area.
Regarding steps S13 to S16, please refer to the above description of the embodiment of fig. 1, and for brevity of description, the disclosure is not repeated herein.
According to the above technical solution, a candidate first mapping relationship can be calculated for each feature point pair identified by the model, and the first mapping relationship is obtained by optimally selecting among the computed candidates. This improves the calculation accuracy of the first mapping relationship and thus the accuracy of the image adjustment.
Based on the same inventive concept, the present disclosure also provides a projected image adjusting apparatus, referring to a block diagram of a projected image adjusting apparatus shown in fig. 5, the apparatus 500 including:
a first obtaining module 501, configured to obtain a target parameter, where the target parameter is used to describe an association relationship between a first coordinate system where a projection image in a camera view is located and a second coordinate system where a source image is located, where the projection image is an image formed by a projection device projecting the source image to a projection area;
a first calculating module 502, configured to calculate a first mapping relationship between the first coordinate system and the second coordinate system according to the target parameter;
a second obtaining module 503, configured to obtain, based on the first mapping relationship, association parameters between the first coordinate system and a user view coordinate system in which a preset user view plane is located, where the association parameters include a normal vector of the camera relative to the user view plane, a rotation matrix between the first coordinate system and the preset user view coordinate system, and a translation vector between the first coordinate system and the preset user view coordinate system;
a second calculating module 504, configured to calculate a second mapping relationship between the first coordinate system and the user view coordinate system according to the association parameters;
a determining module 505, configured to determine a third mapping relationship between the second coordinate system and the user view coordinate system based on the first mapping relationship and the second mapping relationship;
and the execution module 506 is configured to adjust the source image according to the third mapping relationship, and project the adjusted source image to the projection area.
The above technical solution can include the following beneficial effects:
by acquiring target parameters for describing the association relationship between a first coordinate system of a projected image under a camera viewing angle and a second coordinate system of a source image, a first mapping relationship between the first coordinate system and the second coordinate system can be calculated based on the target parameters, and association parameters between the first coordinate system and a user viewing angle coordinate system of a preset user viewing angle plane can be calculated based on the first mapping relationship. In this way, a second mapping relationship between the first coordinate system and the user view coordinate system may be calculated based on the association parameter, and a third mapping relationship between the second coordinate system and the user view coordinate system, that is, a mapping relationship between the projection source image and the image at the user view angle, may be determined based on the first mapping relationship and the second mapping relationship. That is, the source image can be adjusted through the third mapping relationship, so as to obtain a required projection picture. For example, the trapezoidal projection screen may be adjusted to a rectangular projection screen based on the third mapping relationship, thereby realizing trapezoidal correction of the projection screen.
Optionally, the first obtaining module 501 includes:
the first acquisition submodule is used for acquiring a target image obtained by shooting a projection area by a camera, and the projection area displays a projection image;
the input submodule is used for inputting the target image and the source image into a feature point recognition model to obtain a plurality of feature point pairs between the target image and the source image output by the feature point recognition model; the feature point identification model is obtained by training an image sample comprising feature point marking information;
the first calculation module 502 includes:
a first determination submodule configured to determine a target feature point pair from the plurality of feature point pairs;
and the first calculation submodule is used for calculating the first mapping relation based on the target characteristic point pairs.
Optionally, the first determining sub-module includes:
the first execution subunit is configured to, according to the positions of the feature points in the target image, use feature points located in a preset region of the target image as target feature points of the target image, where the preset region is a region of the target image away from the edge at which the target image is distorted;
and the second execution subunit is used for taking the target feature points and the feature points corresponding to the target feature points on the source image as the target feature point pairs.
Optionally, the first determining sub-module includes:
the acquisition subunit is used for acquiring the line segment information in the target image;
the clustering subunit is used for clustering the line segments in the target image;
a first determining subunit, configured to determine, from the line segments in the target image, edge line segments for characterizing a boundary of a projection screen based on the clustering result;
the third execution subunit is used for taking the intersection point between each edge line segment as the vertex of the projection screen;
the second determining subunit is used for determining target characteristic points from the characteristic points in the projection screen area;
and the fourth execution subunit is configured to use the target feature point and a feature point corresponding to the target feature point on the source image as the target feature point pair.
Optionally, the target parameters include parameters of the camera, a rotation matrix between the first coordinate system and the second coordinate system, a translation vector between the first coordinate system and the second coordinate system, a normal vector of a wall surface where the projection area is located, and an intercept between the projection device and the wall surface; accordingly, the first calculation module 502 is configured to:
calculating the first mapping relationship by:
H = K(R + t·nᵀ/d)K⁻¹
where H is the matrix describing the first mapping relationship, K is the parameter matrix of the camera, R is the rotation matrix between the first coordinate system and the second coordinate system, t is the translation vector between the first coordinate system and the second coordinate system, n is the normal vector of the camera relative to the wall surface where the projection area is located, and d is the intercept between the projection device and the wall surface.
Optionally, the target parameters include parameters of the camera, a rotation matrix between the first coordinate system and the second coordinate system, a translation vector between the first coordinate system and the second coordinate system, an intercept of the projection device with the wall, and a plurality of pairs of feature points between the target image and the source image;
the first calculation module 502 includes:
the second calculation submodule is used for calculating a normal vector of the wall surface where the projection area is located according to each characteristic point pair;
a third calculation submodule, configured to calculate, for a normal vector of a wall surface where each projection area is located, according to the normal vector, a rotation matrix between the first coordinate system and the second coordinate system, a translation vector between the first coordinate system and the second coordinate system, an intercept between the projection apparatus and the wall surface, and a parameter of the camera, to obtain a candidate first mapping relationship corresponding to the normal vector;
a second determining submodule, configured to determine the first mapping relationship from the candidate first mapping relationships.
Optionally, the second determining sub-module includes:
a first calculating subunit, configured to calculate, for each candidate first mapping relationship, the absolute values of the differences between the matrix corresponding to that candidate and the matrices corresponding to the other candidate first mapping relationships;
a second calculating subunit, configured to calculate the sum of the absolute values corresponding to each candidate first mapping relationship;
and a fifth execution subunit, configured to take the candidate first mapping relationship with the smallest sum as the first mapping relationship.
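This selection step is effectively a consensus vote: the candidate whose matrix is closest, in summed absolute difference, to all the other candidates wins. A minimal sketch, assuming the candidates are normalized 3x3 NumPy arrays:

```python
import numpy as np

def select_first_mapping(candidates):
    """Return the candidate homography with the smallest total absolute
    difference to all the other candidates."""
    Hs = np.stack(candidates)                  # shape (N, 3, 3)
    diffs = np.abs(Hs[:, None] - Hs[None, :])  # pairwise |Hi - Hj|
    scores = diffs.sum(axis=(1, 2, 3))         # one sum per candidate
    return candidates[int(np.argmin(scores))]
```

Each candidate's difference with itself is zero, so including it in the sum does not change the ranking.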
With regard to the apparatus in the above-described embodiment, the specific manner in which each module performs the operation has been described in detail in the embodiment related to the method, and will not be elaborated here.
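To make the overall flow concrete, the modules can be chained roughly as below. This is a simplified, hypothetical composition: the mapping directions (H1 taken as source image to camera view, H2 as camera view to user perspective plane) are assumptions, and a real implementation would additionally fit the pre-warped content to the projector panel, which this sketch omits.

```python
import numpy as np
import cv2

def adjust_source_image(source_img, H1, H2):
    """Compose the third mapping relation and pre-warp the source image.

    H1: first mapping relation, source image -> camera view (assumed)
    H2: second mapping relation, camera view -> user perspective plane
        (assumed)
    """
    H3 = H2 @ H1  # third mapping: source image -> user perspective plane
    h, w = source_img.shape[:2]
    # Warping with the inverse of H3 pre-distorts the source so that,
    # once projected, it appears undistorted on the user perspective plane.
    return cv2.warpPerspective(source_img, np.linalg.inv(H3), (w, h))
```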
The present disclosure also provides a computer-readable storage medium having stored thereon a computer program which, when executed by a processor, performs the steps of the method of any of the above embodiments.
The present disclosure also provides an electronic device, comprising:
a memory having a computer program stored thereon;
a processor for executing the computer program in the memory to implement the steps of the method of any of the above embodiments.
Fig. 6 is a block diagram of an electronic device 600 according to an exemplary embodiment. As shown in Fig. 6, the electronic device 600 may include a processor 601 and a memory 602. The electronic device 600 may also include one or more of a multimedia component 603, an input/output (I/O) interface 604, and a communication component 605.
The processor 601 is configured to control the overall operation of the electronic device 600 to complete all or part of the steps in the projection image adjustment method described above. The memory 602 is used to store various types of data to support operation on the electronic device 600, such as instructions for any application or method operating on the electronic device 600, as well as application-related data such as pictures, audio, and video. The memory 602 may be implemented by any type of volatile or non-volatile memory device or a combination thereof, such as Static Random Access Memory (SRAM), Electrically Erasable Programmable Read-Only Memory (EEPROM), Erasable Programmable Read-Only Memory (EPROM), Programmable Read-Only Memory (PROM), Read-Only Memory (ROM), magnetic memory, flash memory, magnetic disk, or optical disk. The multimedia component 603 may include a screen and an audio component. The screen may be, for example, a touch screen, and the audio component is used for outputting and/or inputting audio signals. For example, the audio component may include a microphone for receiving external audio signals. The received audio signal may further be stored in the memory 602 or transmitted through the communication component 605. The audio component also includes at least one speaker for outputting audio signals. The I/O interface 604 provides an interface between the processor 601 and other interface modules, such as a keyboard, a mouse, or buttons. These buttons may be virtual buttons or physical buttons. The communication component 605 is used for wired or wireless communication between the electronic device 600 and other devices. The wireless communication may be, for example, Wi-Fi, Bluetooth, Near Field Communication (NFC), 2G, 3G, 4G, NB-IoT, eMTC, 5G, or others, or a combination of one or more of them, which is not limited herein. The corresponding communication component 605 may therefore include a Wi-Fi module, a Bluetooth module, an NFC module, and so on.
In an exemplary embodiment, the electronic device 600 may be implemented by one or more Application Specific Integrated Circuits (ASICs), Digital Signal Processors (DSPs), Digital Signal Processing Devices (DSPDs), Programmable Logic Devices (PLDs), Field Programmable Gate Arrays (FPGAs), controllers, microcontrollers, microprocessors, or other electronic components for performing the projection image adjustment method described above.
In another exemplary embodiment, there is also provided a computer-readable storage medium including program instructions which, when executed by a processor, implement the steps of the projection image adjustment method described above. For example, the computer-readable storage medium may be the memory 602 described above, which includes program instructions executable by the processor 601 of the electronic device 600 to perform the projection image adjustment method described above.
In another exemplary embodiment, a computer program product is also provided, which comprises a computer program executable by a programmable apparatus, the computer program having code portions for performing the above-mentioned projection image adjustment method when executed by the programmable apparatus.
The preferred embodiments of the present disclosure have been described in detail above with reference to the accompanying drawings; however, the present disclosure is not limited to the specific details of the above embodiments. Various simple modifications may be made to the technical solution of the present disclosure within the scope of its technical concept, and all such simple modifications fall within the protection scope of the present disclosure.
It should be noted that the various features described in the above embodiments may be combined in any suitable manner; to avoid unnecessary repetition, the possible combinations are not separately described in the present disclosure.
In addition, the various embodiments of the present disclosure may be combined with one another in any manner, and such combinations should likewise be regarded as content disclosed herein, provided that they do not depart from the spirit of the present disclosure.

Claims (10)

1. A method for adjusting a projected image, comprising:
acquiring target parameters, wherein the target parameters are used for describing the association relation between a first coordinate system where a projected image under the view angle of a camera is located and a second coordinate system where a source image is located, and the projected image is an image formed by projecting the source image to a projection area by a projection device;
calculating a first mapping relation between the first coordinate system and the second coordinate system according to the target parameters;
acquiring association parameters between the first coordinate system and a user perspective coordinate system where a preset user perspective plane is located based on the first mapping relation, wherein the association parameters comprise a normal vector of the user perspective plane relative to the camera, a rotation matrix between the first coordinate system and the user perspective coordinate system, and a translation vector between the first coordinate system and the user perspective coordinate system;
calculating a second mapping relation between the first coordinate system and the user perspective coordinate system according to the association parameters;
determining a third mapping relation between the second coordinate system and the user perspective coordinate system based on the first mapping relation and the second mapping relation;
and adjusting the source image according to the third mapping relation, and projecting the adjusted source image to the projection area.
2. The method of claim 1, wherein the obtaining target parameters comprises:
acquiring a target image obtained by a camera shooting the projection area, wherein the projection area displays the projected image;
inputting the target image and the source image into a feature point recognition model to obtain a plurality of feature point pairs between the target image and the source image output by the feature point recognition model, wherein the feature point recognition model is obtained by training on image samples comprising feature point annotation information;
the calculating a first mapping relationship between the first coordinate system and the second coordinate system according to the target parameter includes:
determining target feature point pairs from the plurality of feature point pairs;
and calculating the first mapping relation based on the target feature point pairs.
3. The method of claim 2, wherein the determining target feature point pairs from the plurality of feature point pairs comprises:
according to the positions of the feature points in the target image, taking the feature points in a preset area of the target image as the target feature points of the target image, wherein the preset area is an area kept away from the edges of the target image, where distortion occurs;
and taking the target feature points, together with their corresponding feature points on the source image, as the target feature point pairs.
4. The method of claim 2, wherein the determining target feature point pairs from the plurality of feature point pairs comprises:
acquiring line segment information in the target image;
clustering line segments in the target image;
determining edge line segments for representing the boundary of the projection screen from the line segments in the target image based on the clustering result;
taking the intersection points between the edge line segments as the vertices of the projection screen;
determining target feature points from the feature points in the projection screen area;
and taking the target feature points, together with their corresponding feature points on the source image, as the target feature point pairs.
5. The method of claim 1, wherein the target parameters comprise parameters of the camera, a rotation matrix between the first coordinate system and the second coordinate system, a translation vector between the first coordinate system and the second coordinate system, a normal vector of a wall surface where the projection area is located, and an intercept of the projection device and the wall surface; correspondingly, the calculating a first mapping relationship between the first coordinate system and the second coordinate system according to the target parameter includes:
calculating the first mapping relationship by:
H = K(R + t·nᵀ/d)K⁻¹
where H is the matrix describing the first mapping relation, K is the parameter matrix of the camera, R is the rotation matrix between the first coordinate system and the second coordinate system, t is the translation vector between the first coordinate system and the second coordinate system, n is the normal vector, relative to the camera, of the wall surface where the projection area is located, and d is the intercept between the projection device and the wall surface.
6. The method of claim 1, wherein the target parameters include parameters of the camera, a rotation matrix between the first coordinate system and the second coordinate system, a translation vector between the first coordinate system and the second coordinate system, an intercept of the projection device with the wall, and a plurality of pairs of feature points between the target image and the source image;
the calculating a first mapping relationship between the first coordinate system and the second coordinate system according to the target parameter specifically includes:
for each feature point pair, calculating a normal vector of the wall surface where the projection area is located according to the feature point pair;
for each such normal vector, calculating a candidate first mapping relation corresponding to the normal vector according to the normal vector, the rotation matrix between the first coordinate system and the second coordinate system, the translation vector between the first coordinate system and the second coordinate system, the intercept between the projection device and the wall surface, and the parameters of the camera;
determining the first mapping relationship from the candidate first mapping relationships.
7. The method of claim 6, wherein determining the first mapping relationship from the candidate first mapping relationships comprises:
for each candidate first mapping relation, calculating the absolute values of the differences between the matrix corresponding to that candidate first mapping relation and the matrices corresponding to the other candidate first mapping relations;
calculating the sum of the absolute values corresponding to each candidate first mapping relation;
and taking the candidate first mapping relation with the smallest sum as the first mapping relation.
8. An apparatus for adjusting a projected image, comprising:
the first acquisition module is used for acquiring target parameters, wherein the target parameters are used for describing the association relationship between a first coordinate system where a projected image under the view angle of a camera is located and a second coordinate system where a source image is located, and the projected image is an image formed by projecting the source image to a projection area by a projection device;
the first calculation module is used for calculating a first mapping relation between the first coordinate system and the second coordinate system according to the target parameters;
a second obtaining module, configured to obtain, based on the first mapping relationship, association parameters between the first coordinate system and a user perspective coordinate system in which a preset user perspective plane is located, where the association parameters include a normal vector of the user perspective plane relative to the camera, a rotation matrix between the first coordinate system and the user perspective coordinate system, and a translation vector between the first coordinate system and the user perspective coordinate system;
the second calculation module is used for calculating a second mapping relation between the first coordinate system and the user perspective coordinate system according to the association parameters;
a determining module, configured to determine a third mapping relation between the second coordinate system and the user perspective coordinate system based on the first mapping relation and the second mapping relation;
and the execution module is used for adjusting the source image according to the third mapping relation and projecting the adjusted source image to the projection area.
9. A computer-readable storage medium, on which a computer program is stored which, when being executed by a processor, carries out the steps of the method according to any one of claims 1 to 7.
10. An electronic device, comprising:
a memory having a computer program stored thereon;
a processor for executing the computer program in the memory to carry out the steps of the method of any one of claims 1 to 7.
CN202110297322.7A 2021-03-19 2021-03-19 Projection image adjusting method and device, storage medium and electronic equipment Active CN113099198B (en)

Priority Applications (2)

Application Number Priority Date Filing Date Title
CN202110297322.7A CN113099198B (en) 2021-03-19 2021-03-19 Projection image adjusting method and device, storage medium and electronic equipment
PCT/CN2021/135440 WO2022193739A1 (en) 2021-03-19 2021-12-03 Projection image adjustment method and apparatus, storage medium, and electronic device

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202110297322.7A CN113099198B (en) 2021-03-19 2021-03-19 Projection image adjusting method and device, storage medium and electronic equipment

Publications (2)

Publication Number Publication Date
CN113099198A true CN113099198A (en) 2021-07-09
CN113099198B CN113099198B (en) 2023-01-10

Family

ID=76668535

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110297322.7A Active CN113099198B (en) 2021-03-19 2021-03-19 Projection image adjusting method and device, storage medium and electronic equipment

Country Status (2)

Country Link
CN (1) CN113099198B (en)
WO (1) WO2022193739A1 (en)

Cited By (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114286066A (en) * 2021-12-23 2022-04-05 深圳市火乐科技发展有限公司 Projection correction method, projection correction device, storage medium and projection equipment
CN114827562A (en) * 2022-03-11 2022-07-29 深圳海翼智新科技有限公司 Projection method, projection device, projection equipment and computer storage medium
WO2022193739A1 (en) * 2021-03-19 2022-09-22 深圳市火乐科技发展有限公司 Projection image adjustment method and apparatus, storage medium, and electronic device
WO2023078091A1 (en) * 2021-11-03 2023-05-11 深圳光峰科技股份有限公司 Projection image superposition method and apparatus, electronic device and storage medium
WO2023088304A1 (en) * 2021-11-16 2023-05-25 海信视像科技股份有限公司 Projection device and projection area correction method

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116542847B (en) * 2023-07-05 2023-10-10 海豚乐智科技(成都)有限责任公司 Low-small slow target high-speed image simulation method, storage medium and device
CN117724610A (en) * 2023-12-13 2024-03-19 广东聚华新型显示研究院 Data processing method and device for head display equipment, head-mounted equipment and medium

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105227881A (en) * 2015-09-15 2016-01-06 海信集团有限公司 A kind of projected picture correcting method and projector equipment
CN107547879A (en) * 2016-06-24 2018-01-05 上海顺久电子科技有限公司 A kind of bearing calibration of projection imaging, device and laser television
US20180146179A1 (en) * 2016-11-24 2018-05-24 Ricoh Company, Ltd. Image projection apparatus
CN110111262A (en) * 2019-03-29 2019-08-09 北京小鸟听听科技有限公司 A kind of projector distortion correction method, device and projector
CN110784691A (en) * 2018-07-31 2020-02-11 中强光电股份有限公司 Projection device, projection system and image correction method
CN110830781A (en) * 2019-10-30 2020-02-21 歌尔科技有限公司 Automatic projected image correction method and system based on binocular vision

Family Cites Families (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106127745B (en) * 2016-06-17 2019-01-29 凌云光技术集团有限责任公司 The combined calibrating method and device of structure light 3 D vision system and line-scan digital camera
JP2020150481A (en) * 2019-03-15 2020-09-17 キヤノン株式会社 Information processing device, projection system, information processing method, and program
CN110336987B (en) * 2019-04-03 2021-10-08 北京小鸟听听科技有限公司 Projector distortion correction method and device and projector
CN113099198B (en) * 2021-03-19 2023-01-10 深圳市火乐科技发展有限公司 Projection image adjusting method and device, storage medium and electronic equipment

Also Published As

Publication number Publication date
CN113099198B (en) 2023-01-10
WO2022193739A1 (en) 2022-09-22

Similar Documents

Publication Publication Date Title
CN113099198B (en) Projection image adjusting method and device, storage medium and electronic equipment
US8571350B2 (en) Image processing system with image alignment mechanism and method of operation thereof
US10558881B2 (en) Parallax minimization stitching method and apparatus using control points in overlapping region
US10326979B2 (en) Imaging system comprising real-time image registration
US11282232B2 (en) Camera calibration using depth data
JP2017091079A (en) Image processing device and method for extracting image of object to be detected from input data
CN110809786A (en) Calibration device, calibration chart, chart pattern generation device, and calibration method
CN112272292B (en) Projection correction method, apparatus and storage medium
US20210044725A1 (en) Camera-specific distortion correction
CN109919971B (en) Image processing method, image processing device, electronic equipment and computer readable storage medium
US10339662B2 (en) Registering cameras with virtual fiducials
KR20180022539A (en) Method and apparatus of stitching for minimizing parallax using control points in overlapping region
KR20220073824A (en) Image processing method, image processing apparatus, and electronic device applying the same
EP3067860B1 (en) Hybrid depth estimation
US11184525B2 (en) Method and apparatus for focusing
US20220405968A1 (en) Method, apparatus and system for image processing
ES2717186T3 (en) Procedure and device for the detection of moving objects in a sequence of video images
CN112689136B (en) Projection image adjusting method and device, storage medium and electronic equipment
US9654749B2 (en) Projection methods and projection devices
CN113407030B (en) Visual positioning method, related device, equipment and storage medium
CN105335959A (en) Quick focusing method and device for imaging apparatus
CN108780572A (en) The method and device of image rectification
CN111832558A (en) Character image correction method, device, storage medium and electronic equipment
US11026048B1 (en) Indoor positioning system for a mobile electronic device
CN115278184B (en) Projection picture correction method and device

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant