CN114286066A - Projection correction method, projection correction device, storage medium and projection equipment - Google Patents

Info

Publication number
CN114286066A
CN114286066A (application number CN202111593336.XA)
Authority
CN
China
Prior art keywords
projection
area
picture
position information
target
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202111593336.XA
Other languages
Chinese (zh)
Inventor
谈润杰
孙世攀
张聪
胡震宇
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shenzhen Huole Science and Technology Development Co Ltd
Original Assignee
Shenzhen Huole Science and Technology Development Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shenzhen Huole Science and Technology Development Co Ltd filed Critical Shenzhen Huole Science and Technology Development Co Ltd
Priority to CN202111593336.XA priority Critical patent/CN114286066A/en
Publication of CN114286066A publication Critical patent/CN114286066A/en
Pending legal-status Critical Current

Abstract

The present disclosure relates to the field of projection technology, and provides a projection correction method, a projection correction device, a storage medium and a projection apparatus. The method includes: determining a current line-of-sight angle of a target object; determining a display area according to the current line-of-sight angle and first position information of a projection picture projected by the projection apparatus onto a target projection area, wherein the display area is the area where the projection picture is orthographically projected onto a virtual viewing angle plane perpendicular to the line of sight; determining a target picture area within the display area; and correcting the original image based on second position information of the display area and third position information of the target picture area to obtain a corrected original image. In this way, the user always obtains the best picture-viewing experience no matter how the viewing angle changes, and the method is suitable for projection correction on both a single plane and complex multiple planes.

Description

Projection correction method, projection correction device, storage medium and projection equipment
Technical Field
The present disclosure relates to the field of projection technology, and in particular to a projection correction method, a projection correction device, a storage medium, and a projection apparatus.
Background
With the popularization of mobile and rotatable projection devices, the projection picture is often projected onto a complex three-dimensional surface, such as the internal and external corners of a wall or the corners of a ceiling.
In the related art, when a projection picture is projected onto a complex three-dimensional surface, the projection picture is generally corrected by encoding structured light projected by the projection device and then comparing the distorted image of the current projection picture with a pre-calibrated encoded picture. However, this method requires a lengthy calibration process, and the pre-encoded picture can satisfy only one viewing angle: when the user's viewing angle changes, the projection picture viewed by the user still exhibits significant distortion.
Disclosure of Invention
An object of the present disclosure is to provide a projection correction method, a projection correction device, a storage medium and a projection apparatus that can maintain the best picture-viewing experience as the user's line-of-sight angle changes.
In a first aspect, an embodiment of the present disclosure provides a projection correction method, including:
determining a current sight angle of the target object;
determining a display area according to first position information of a projection picture projected on a target projection area by projection equipment and the current sight angle, wherein the display area is an area where the projection picture is orthographically projected on a virtual visual angle plane perpendicular to a sight corresponding to the current sight angle;
determining a target picture area in the display area;
and correcting the original image based on the second position information of the display area and the third position information of the target picture area to obtain a corrected original image, so that the picture projected in the target projection area by the corrected original image is overlapped with the target picture area.
In a second aspect, an embodiment of the present disclosure provides a projection correction apparatus, including:
a first determination module configured to determine a current gaze angle of the target object;
the second determining module is configured to determine a display area according to first position information of a projection picture projected on a target projection area by a projection device and the current sight angle, wherein the display area is an area where the projection picture is orthographically projected on a virtual visual angle plane perpendicular to a sight corresponding to the current sight angle;
a third determination module configured to determine a target screen area in the display area;
and the correction module is configured to correct the original image based on the second position information of the display area and the third position information of the target picture area to obtain a corrected original image, so that a picture projected by the corrected original image in the target projection area is overlapped with the target picture area.
In a third aspect, the disclosed embodiments provide a non-transitory computer readable storage medium, on which a computer program is stored, which when executed by a processor, implements the steps of the method of the first aspect.
In a fourth aspect, an embodiment of the present disclosure provides a projection apparatus, including:
a memory having a computer program stored thereon;
a processor for executing the computer program in the memory to implement the steps of the method of the first aspect.
Based on the above technical solution, the display area is determined according to the current line-of-sight angle of the target object and the first position information of the projection picture. A target picture area is then determined in the display area, and the original image is corrected according to the second position information of the display area and the third position information of the target picture area to obtain a corrected original image. When the projection device projects the corrected original image onto the target projection area, the picture presented in the viewing angle plane of the target object coincides with the target picture area. Because the projection correction method provided by the embodiments of the present disclosure corrects the original image according to changes in the user's line-of-sight angle, the user always obtains the best picture-viewing experience no matter how the viewing angle changes, and the method is suitable for projection correction not only on a single plane but also on complex multiple planes.
Additional features and advantages of the disclosure will be set forth in the detailed description which follows.
Drawings
The accompanying drawings, which are included to provide a further understanding of the disclosure and are incorporated in and constitute a part of this specification, illustrate embodiments of the disclosure and together with the description serve to explain the disclosure without limiting the disclosure. In the drawings:
FIG. 1 is a schematic flow chart diagram of a projection correction method provided in accordance with an exemplary embodiment;
FIG. 2 is a schematic illustration of a projected picture provided in accordance with an exemplary embodiment;
FIG. 3 is a schematic illustration of a target picture region provided in accordance with an exemplary embodiment;
FIG. 4 is a schematic illustration of a corrected original image provided in accordance with an exemplary embodiment;
FIG. 5 is a schematic flow chart illustrating the step 130 shown in FIG. 1;
FIG. 6 is a schematic illustration of a display area provided in accordance with an exemplary embodiment;
FIG. 7 is a schematic flow chart of step 140 shown in FIG. 1;
FIG. 8 is a schematic diagram of a correction of an original image provided in accordance with an exemplary embodiment;
FIG. 9 is a diagram of effects after projection correction provided in accordance with an exemplary embodiment;
FIG. 10 is a schematic diagram of obtaining a corrected original image provided in accordance with another exemplary embodiment;
FIG. 11 is a flow chart providing for determining first location information for a projected picture according to an exemplary embodiment;
FIG. 12 is a schematic diagram of determining first location information provided in accordance with an exemplary embodiment;
FIG. 13 is a flow chart providing for determining a current gaze angle in accordance with an exemplary embodiment;
FIG. 14 is a flow chart providing for determining a current gaze angle in accordance with another exemplary embodiment;
FIG. 15 is a block diagram illustrating the connection of modules of a projection correction apparatus according to an exemplary embodiment;
fig. 16 is a block diagram illustrating a projection device according to an example embodiment.
Detailed Description
The following is a detailed description of specific embodiments of the present disclosure in conjunction with the accompanying drawings. It should be understood that the detailed description and specific examples are given by way of illustration and explanation only, and do not limit the present disclosure.
Embodiments of the present disclosure will be described in more detail below with reference to the accompanying drawings. While certain embodiments of the present disclosure are shown in the drawings, it is to be understood that the present disclosure may be embodied in various forms and should not be construed as limited to the embodiments set forth herein, but rather are provided for a more thorough and complete understanding of the present disclosure. It should be understood that the drawings and embodiments of the disclosure are for illustration purposes only and are not intended to limit the scope of the disclosure.
It should be understood that the various steps recited in the method embodiments of the present disclosure may be performed in a different order, and/or performed in parallel. Moreover, method embodiments may include additional steps and/or omit performing the illustrated steps. The scope of the present disclosure is not limited in this respect.
The term "include" and variations thereof as used herein are open-ended, i.e., "including but not limited to". The term "based on" is "based, at least in part, on". The term "one embodiment" means "at least one embodiment"; the term "another embodiment" means "at least one additional embodiment"; the term "some embodiments" means "at least some embodiments". Relevant definitions for other terms will be given in the following description.
It should be noted that the terms "first", "second", and the like in the present disclosure are only used for distinguishing different devices, modules or units, and are not used for limiting the order or interdependence relationship of the functions performed by the devices, modules or units.
It is noted that references to "a", "an", and "the" modifications in this disclosure are intended to be illustrative rather than limiting, and that those skilled in the art will recognize that "one or more" may be used unless the context clearly dictates otherwise.
The names of messages or information exchanged between devices in the embodiments of the present disclosure are for illustrative purposes only, and are not intended to limit the scope of the messages or information.
Fig. 1 is a flowchart illustrating a projection correction method according to an exemplary embodiment. The projection correction method disclosed in this embodiment may be executed by a projection device, and specifically may be executed by a projection correction apparatus, which may be implemented by software and/or hardware and configured in the projection device. Of course, the projection correction method disclosed in this embodiment may be executed by a server, and the server sends the calculated corrected original image to the projection device, so that the projection device projects according to the corrected original image.
In step 110, the current gaze angle of the target object is determined.
Here, the target object may refer to a user who views the projection screen. Illustratively, when one user is viewing the projection screen, that user is the target object. When multiple users are viewing the projection screen, the target object may be a subject user among them. For example, when 3 users are viewing the projection screen, the user in the middle position is taken as the target object. As another example, the user who first appears in the center of a captured image may be taken as the target object using human-body detection techniques.
The current line-of-sight angle may be the angle, at the current time, between the projection screen and the line connecting the eyes of the target object to the center point of the projection screen. It should be understood that this definition applies in the case where the projection screen does not change as the gaze angle of the target object changes.
In other embodiments, the projection screen of the projection device may follow the change of the target object's line-of-sight angle; in that case the current line-of-sight angle of the target object may refer to the orientation of the target object's head. Note that the projection screen following the line-of-sight angle means that when the line-of-sight angle of the target object changes, the center point of the projected screen moves with it. For example, when the viewing angle of the target object shifts 1° to the left, the projection screen simultaneously shifts 1° to the left, so that the target object can still view the center point of the projection screen after the change. In a specific implementation, the projection device may be mounted on a spherical gimbal so that it can move with the viewing angle of the target object. Alternatively, the projection device may shift the projection to follow the line-of-sight angle by deflecting the projected light.
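The line-of-sight angle defined above can be computed from geometry alone. The following is a minimal illustrative sketch, not taken from the patent; the function name, the use of NumPy, and the assumption that the eye position, screen center point, and projection-plane normal are already known in one common reference coordinate system are all illustrative choices:

```python
import numpy as np

def gaze_angle(eye_pos, screen_center, plane_normal):
    """Angle (degrees) between the projection plane and the line joining
    the target object's eyes to the center point of the projection screen.

    The angle to the plane equals 90 degrees minus the angle to the plane
    normal. All inputs are 3-D vectors in the same reference coordinate system.
    """
    sight = np.asarray(screen_center, float) - np.asarray(eye_pos, float)
    n = np.asarray(plane_normal, float)
    cos_to_normal = abs(sight @ n) / (np.linalg.norm(sight) * np.linalg.norm(n))
    return 90.0 - np.degrees(np.arccos(np.clip(cos_to_normal, -1.0, 1.0)))
```

A viewer standing directly in front of the screen center yields 90°; moving off to one side reduces the angle.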
In step 120, a display area is determined according to the first position information of the projection picture projected on the target projection area by the projection device and the current sight angle, wherein the display area is an area where the projection picture is orthographically projected on a virtual perspective plane perpendicular to the sight corresponding to the current sight angle.
Here, the target projection area may refer to a medium for carrying a projection screen, such as a wall surface or a curtain. The projection picture is a display picture of an image projected by the projection device on the target projection area. Fig. 2 is a schematic diagram of a projection screen provided according to an exemplary embodiment, and as shown in fig. 2, when the target projection area 201 is a wall with internal and external corners, the projection screen 202 projected in the target projection area 201 by the projection device is also divided into two left and right parts by an intersection line between the internal and external corners. Wherein, the internal and external corners are one kind of building structure, and the internal corners refer to the recessed wall corners, such as the included angles between the top surface and the surrounding walls; the external corner refers to a protruding wall corner, such as an included angle formed by two walls at a turning part of a walkway.
It should be noted that the target projection area shown in fig. 2 is only illustrative and does not limit the shape of the target projection area. In a practical use scenario, the target projection area may be a single plane or two or more intersecting planes.
The display area refers to the orthographic projection, onto a virtual viewing angle plane perpendicular to the line of sight corresponding to the current line-of-sight angle, of the projection picture projected by the projection device onto the target projection area. The virtual viewing angle plane is in effect the imaging plane of the target object at the current line-of-sight angle. It should be appreciated that in step 120 the spatial position of the virtual viewing angle plane is not fixed; only its perpendicularity to the line of sight is. The display area, being an orthographic projection of the projection picture onto this plane, is likewise a virtual plane.
For example, the projection screen may be projected on a virtual viewing angle plane with the current viewing angle as a projection angle, to obtain a display area.
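The orthographic projection in step 120 can be sketched as follows. This is an illustrative sketch, not the patent's prescribed implementation; it assumes the 3-D coordinates of the projection-picture points and the gaze direction are already expressed in one reference coordinate system:

```python
import numpy as np

def project_to_view_plane(points, gaze_dir):
    """Orthographically project 3-D points onto a virtual plane perpendicular
    to gaze_dir, returning 2-D coordinates within that plane."""
    g = np.asarray(gaze_dir, float)
    g = g / np.linalg.norm(g)
    # Build an orthonormal basis (u, v) spanning the view plane.
    helper = np.array([0.0, 1.0, 0.0]) if abs(g[1]) < 0.9 else np.array([1.0, 0.0, 0.0])
    u = np.cross(g, helper)
    u = u / np.linalg.norm(u)
    v = np.cross(g, u)
    pts = np.asarray(points, float)
    return np.stack([pts @ u, pts @ v], axis=1)
```

The returned 2-D coordinates are positions in the virtual viewing angle plane; the outline traced by the projection picture's boundary points is the display area.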
In some embodiments, the first position information of the projection screen may be obtained by capturing an image of the target projection area with a capture device and determining the first position information of the projection screen from the captured image.
In other embodiments, the first position information of the projection picture may be obtained by modeling a space in which the projection device is located, obtaining a three-dimensional model of the space, and determining the first position information of the projection picture according to the pose and the parameters of the projection device.
It is noted that the first position information of the projection screen and the current gaze angle may be determined in the same reference coordinate system. For example, the first position information of the projection picture and the current sight angle are determined in a reference coordinate system corresponding to the space where the projection device is located. Of course, the first position information of the projection screen and the current gaze angle may not be determined in the same reference coordinate system, and when the display area is determined, the first position information of the projection screen and the current gaze angle need to be converted into the same reference coordinate system.
In step 130, a target screen area is determined in the display area.
Here, the target screen region refers to a screen shape finally presented in the visual plane of the target object. Since the display area is on a virtual viewing angle plane perpendicular to the line of sight corresponding to the current line of sight angle of the target object, the target screen area determined in the display area is also in the viewing angle plane corresponding to the current line of sight angle.
In some embodiments, when the target projection area is a single plane, determining the target screen area in the display area may proceed as follows: select any point on any side of the display area, use that point as a vertex of the target screen area to be constructed and the aspect ratio of the original image as the aspect ratio of the target screen area to be constructed, generate rectangles in the display area accordingly, and select the rectangle with the largest area as the target screen area. The target screen area obtained in this way is the largest inscribed rectangle in the display area with the aspect ratio of the original image.
It should be appreciated that, in the above embodiment, the target picture area is the largest inscribed rectangle determined in the display area, which can provide the largest projection picture for the user and provide the best viewing experience for the user.
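The single-plane construction above amounts to a largest-inscribed-rectangle search under a fixed aspect ratio. Below is a minimal brute-force sketch; the grid-search strategy, the function names, and the restriction to convex display areas are illustrative assumptions, not taken from the patent:

```python
import numpy as np

def point_in_polygon(pt, poly):
    """Ray-casting point-in-polygon test (poly: list of (x, y) vertices)."""
    x, y = pt
    inside = False
    for i in range(len(poly)):
        x1, y1 = poly[i]
        x2, y2 = poly[(i + 1) % len(poly)]
        if (y1 > y) != (y2 > y):
            if x < x1 + (y - y1) * (x2 - x1) / (y2 - y1):
                inside = not inside
    return inside

def largest_inscribed_rect(poly, aspect, samples=25):
    """Grid-search an (approximately) largest axis-aligned rectangle with
    width/height == aspect inscribed in a convex polygon `poly`.
    Returns (x0, y0, width, height) of the best rectangle found."""
    xs = [p[0] for p in poly]
    ys = [p[1] for p in poly]
    span = max(max(xs) - min(xs), max(ys) - min(ys))
    best_area, best = 0.0, None
    for x0 in np.linspace(min(xs), max(xs), samples):
        for y0 in np.linspace(min(ys), max(ys), samples):
            lo, hi = 0.0, span  # binary-search the rectangle width
            for _ in range(30):
                w = (lo + hi) / 2
                h = w / aspect
                corners = [(x0, y0), (x0 + w, y0), (x0 + w, y0 + h), (x0, y0 + h)]
                # For a convex polygon, corner containment implies containment.
                if all(point_in_polygon(c, poly) for c in corners):
                    lo = w
                else:
                    hi = w
            if lo * (lo / aspect) > best_area:
                best_area, best = lo * (lo / aspect), (x0, y0, lo, lo / aspect)
    return best
```

For non-convex display areas such as the one in fig. 6, corner containment alone is not sufficient and edge-intersection checks would also be needed.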
Fig. 3 is a schematic diagram of a target picture area provided according to an exemplary embodiment, and as shown in fig. 3, when the target projection area 301 is a wall with internal and external corners, the projection picture 302 projected in the target projection area 301 by the projection device is also divided into two parts, i.e., left and right parts, by the internal and external corners. According to the current gaze angle 303 of the target object, a virtual viewing angle plane 304 perpendicular to the current gaze angle 303 can be determined, and then the projection picture 302 is projected on the virtual viewing angle plane 304 with the current gaze angle 303 as a projection angle, resulting in a display area (not shown in fig. 3). And a target screen area 305 is determined in the display area.
In step 140, the original image is corrected based on the second position information of the display area and the third position information of the target screen area to obtain a corrected original image, so that the screen of the corrected original image projected in the target projection area coincides with the target screen area.
Here, the original image refers to the content that the user wants to project, which may be a video, a picture, or the like. Correcting the original image means adjusting each vertex of the original image so that when the projection device projects the corrected original image onto the target projection area, the picture imaged at the user's viewing angle coincides with the target picture area. Note that the target picture area is the picture visually presented to the user; it is not an area that physically bears the projection picture.
It should be noted that the first position information of the projection screen is coordinate information of the projection screen in a reference coordinate system that is a global coordinate system corresponding to a space where the projection apparatus is located. The second position information of the display area and the third position information of the target screen area may be coordinate information of the display area and the target screen area in a reference coordinate system constructed with an arbitrary point in the display area as a coordinate origin.
In some embodiments, the position information of each vertex of the display region and the position information of each corner of the target picture region may be determined based on the second position information of the display region and the third position information of the target picture region, then a homography matrix relationship between the display region and the original image is established according to the determined position information of each vertex of the display region and the position information of each corner of the original image corresponding to the display region in the modulation plane, and then the corrected original image is obtained according to the position information of each corner of the target picture region and the homography matrix relationship.
Wherein, the modulation plane refers to a plane where an image is generated by a light modulator (chip) of the projection apparatus. The chip corresponding to the modulation plane comprises a reflection type image modulation chip or a transmission type image modulation chip. The reflective image modulation chip includes a DMD chip (Digital Micromirror Device) or an LCOS chip (Liquid Crystal on Silicon ), and the transmissive image modulation chip includes an LCD chip (Liquid Crystal Display ), and the like.
Fig. 4 is a schematic diagram of a corrected original image provided according to an exemplary embodiment. As shown in fig. 4, when the projection scene of the projection device is as shown in fig. 3, the target picture area 305 may be mapped into the modulation plane according to the homography matrix relationship between the display area and the original image 401, in combination with the third position information of the target picture area 305, so as to obtain the corrected original image 402. After the corrected original image 402 is projected onto the target projection area 301, the picture presented in the viewing angle plane of the target object coincides with the target picture area 305.
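The mapping in the embodiment above relies on a homography between the display area and the modulation plane. Below is a minimal direct-linear-transform sketch; the patent does not prescribe this particular estimator, and in practice a library routine such as OpenCV's findHomography would serve the same purpose:

```python
import numpy as np

def homography_from_points(src, dst):
    """Estimate the 3x3 homography H with dst ~ H @ src from 4+ point pairs
    via the direct linear transform (smallest singular vector of A)."""
    A = []
    for (x, y), (u, v) in zip(src, dst):
        A.append([-x, -y, -1, 0, 0, 0, u * x, u * y, u])
        A.append([0, 0, 0, -x, -y, -1, v * x, v * y, v])
    _, _, vt = np.linalg.svd(np.asarray(A, float))
    return vt[-1].reshape(3, 3)

def map_points(H, pts):
    """Apply homography H to 2-D points, with homogeneous normalisation."""
    p = np.column_stack([pts, np.ones(len(pts))]) @ H.T
    return p[:, :2] / p[:, 2:3]
```

With H estimated from the four display-area vertices and the four corresponding original-image corners, applying map_points to the corners of the target picture area yields the corner positions of the corrected original image in the modulation plane.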
It should be understood that, in the present embodiment, whether the target projection area is a single plane or a stereoscopic surface composed of a plurality of non-coplanar planes, the corrected original image can be calculated by the above embodiment.
Thus, the display area is determined according to the current line-of-sight angle of the target object and the first position information of the projection picture. A target picture area is then determined in the display area, and the original image is corrected according to the second position information of the display area and the third position information of the target picture area to obtain a corrected original image. When the projection device projects the corrected original image onto the target projection area, the picture presented in the viewing angle plane of the target object coincides with the target picture area. Because the projection correction method provided by the embodiments of the present disclosure corrects the original image according to changes in the user's line-of-sight angle, the user always obtains the best picture-viewing experience no matter how the viewing angle changes, and the method is suitable for projection correction not only on a single plane but also on complex multiple planes.
Fig. 5 is a schematic flow chart of step 130 shown in fig. 1. In some implementations, as shown in fig. 5, the step 130 of determining the target screen area in the display area may include the following steps.
In step 131, in case the target projection area comprises at least two sub-projection areas located on intersecting planes, fourth position information of a first intersection line between the intersecting planes is determined.
Here, the target projection area including at least two sub-projection areas located on the intersecting planes means that the projection screen is located in the sub-projection areas of the at least two intersecting planes. For example, the projected picture falls in the inside and outside corners of the wall.
The first intersection line refers to the intersection line between two sub-projection areas. As shown in fig. 3, the target projection area 301 is a wall surface with internal and external corners, so the first intersection line is the line along which the two wall planes meet.
In some embodiments, the fourth position information of the first intersection line may be obtained by modeling the space in which the projection device is located and extracting the fourth position information of the first intersection line from the modeling result.
It should be understood that the meaning of the first intersection line is consistent for other more complex target projection areas and is not exhaustive in the embodiments of the disclosure.
In step 132, fifth position information of a second intersection corresponding to the first intersection in the display area is determined according to the fourth position information of the first intersection.
Here, since the first intersection line divides the projection screen into two parts and the display area is an orthographic projection of the projection screen on the virtual viewing angle plane, a second intersection line corresponding to the first intersection line also exists in the display area.
FIG. 6 is a schematic illustration of a display area provided in accordance with an exemplary embodiment. As shown in fig. 6 (a), the first intersection line CF divides the target projection area 601 into a first sub-projection area ABCF and a second sub-projection area CDEF. Similarly, the first intersection line CF divides the projection picture 602 into a first sub-projection picture GHKL and a second sub-projection picture HIJK. As shown in fig. 6 (b), after the projection picture 602 is projected onto a virtual viewing angle plane (not shown in fig. 6), a display area 603 corresponding to the projection picture 602 is obtained. The display area 603 includes a first sub-display area G1H1K1L1 and a second sub-display area H1I1J1K1, and line segment H1K1 is the second intersection line.
It should be noted that the fifth position information of the second intersecting line refers to coordinate information of two end points of the second intersecting line in a reference coordinate system constructed by using any one point in the display area as a coordinate origin.
In step 133, according to the fifth position information of the second intersection line, sixth position information of a third intersection line corresponding to the second intersection line on the original image corresponding to the projection screen is determined.
Here, the original image corresponding to the projection screen refers to the image in the modulation plane; projecting it onto the target projection area through the projection device yields the projection picture. Since the second intersection line divides the display area into at least two parts, there is correspondingly a third intersection line on the original image dividing it into at least two parts.
The sixth position information of the third intersecting line refers to coordinate information of two end points of the third intersecting line in a reference coordinate system which is constructed by taking any point in the modulation plane as a coordinate origin.
In some embodiments, the picture ratio information between the sub-display areas in the display area may be determined according to fifth position information of the second intersection line, where the sub-display areas are obtained by dividing the display area according to the second intersection line; and determining sixth position information of a third intersecting line corresponding to the second intersecting line on the original image according to the picture proportion information.
The picture proportion information refers to the width ratio between the sub-display areas into which the second intersection line divides the display area. As shown in fig. 6 (b), the second intersection line H1K1 divides the display area 603 into the first sub-display area G1H1K1L1 and the second sub-display area H1I1J1K1. The picture proportion information between the first sub-display area G1H1K1L1 and the second sub-display area H1I1J1K1 can then be obtained, the original image is divided into two parts according to this picture proportion information, and the sixth position information of the third intersection line is acquired.
It should be understood that the sixth position information of the third intersection line can also be determined by obtaining the picture proportion information between the sub-projection pictures according to the first intersection line and the projection picture; the principle is the same.
In step 134, the original image is mapped to the display area according to the fifth position information of the second intersection line and the sixth position information of the third intersection line to construct the target screen area.
Here, after the fifth position information of the second intersection line and the sixth position information of the third intersection line are obtained, the original image in the modulation plane may be mapped into the display area to determine the target picture area.
In some embodiments, based on the fifth position information of the second intersection line and the sixth position information of the third intersection line, the original image is moved to the display area with the second intersection line and the third intersection line being overlapped as a constraint condition; and scaling the original image moved into the display area in an equal ratio, and taking the original image after scaling in the equal ratio as a target picture area, wherein the original image after scaling in the equal ratio is the largest rectangle in the display area.
Here, the original image in the modulation plane is moved into the display area with the coincidence of the second intersection line and the third intersection line as a constraint condition. After the original image is moved into the display area, it is scaled in equal ratio, and the scaled original image is used as the target picture area, where the target picture area is the largest equal-ratio-scaled image in the display area.
Specifically, the second intersection line and the third intersection line may be aligned such that the bottom side of the original image passes through the lower endpoint of the second intersection line. Then the original image is enlarged or reduced until it reaches its maximum size within the display area, and the maximum image is used as the target picture area.
It should be understood that since the target picture area is obtained by enlarging or reducing the original image, the aspect ratio of the target picture area coincides with the aspect ratio of the original image in the modulation plane. Moreover, using the maximum rectangle as the target picture area provides the user with the largest viewing picture, improving the user's projection viewing experience.
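The move-and-scale construction of the target picture area can be sketched as a scale-factor computation: pin the third intersection line to the second intersection line, then take the largest equal-ratio scale at which both halves of the image and its height still fit in the display area. The function name and the example dimensions are illustrative assumptions:

```python
def max_scale(img_w, img_h, split_x, disp_left_w, disp_right_w, disp_h):
    """Largest equal-ratio scale for the original image inside the display
    area, with the image's third intersection line (at split_x) pinned to
    the display area's second intersection line.
    """
    s_left = disp_left_w / split_x               # left part fits left sub-area
    s_right = disp_right_w / (img_w - split_x)   # right part fits right sub-area
    s_height = disp_h / img_h                    # height fits display area
    return min(s_left, s_right, s_height)

# 1920x1080 image split at x=1152; display sub-areas 600 and 500 wide, 700 tall.
s = max_scale(1920, 1080, 1152, 600, 500, 700)
target_w, target_h = 1920 * s, 1080 * s
```

Because a single scale factor is used, the aspect ratio of the target picture area automatically matches that of the original image, as noted above.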
In addition, since the display area is in the virtual viewing plane, the display area actually corresponds to the visual plane of the user, and thus the target screen area determined in the display area is the projection screen finally presented in the visual plane of the user.
Therefore, when the projection picture is located on a complex plane, the target picture area can be quickly and accurately positioned through the first intersection line, the second intersection line and the third intersection line, and data support is provided for subsequent projection correction.
Fig. 7 is a schematic flow chart of step 140 shown in fig. 1. In some possible embodiments, as shown in fig. 7, in step 140, the correcting the original image based on the second position information of the display area and the third position information of the target screen area to obtain a corrected original image may include the following steps.
In step 141, for each sub-display area in the display area, a homography matrix relationship between the sub-display area and the sub-image is established based on the first coordinate information of the sub-display area and the second coordinate information of the sub-image corresponding to the sub-display area, where the sub-display area is obtained by dividing the display area according to the second intersection line, and the sub-image is obtained by dividing the original image according to the third intersection line.
Here, the sub display region is a region into which the display region is divided at the second intersection line. A sub-picture refers to a picture area where the sub-display area is mapped on the original picture in the modulation plane. The sub-image is obtained by dividing the original image according to the third intersection line. It should be appreciated that each sub-display region actually corresponds to a sub-projection picture in the sub-projection region, the sub-projection picture corresponding to a sub-image in the original image at the modulation plane.
Fig. 8 is a schematic diagram of a principle of correcting an original image according to an exemplary embodiment, as shown in fig. 8, a sub-diagram (a) in fig. 8 is a schematic diagram in a display region, and a sub-diagram (b) in fig. 8 is a schematic diagram of a modulation plane of a projection device.
As shown in fig. 8 (a), the display area 801 includes a first sub-display area ABEF and a second sub-display area BCDE. As shown in fig. 8 (b), the original image 803 is divided by the third intersection line MN into a first sub-image A1MND1 and a second sub-image MB1C1N.
A first homography matrix relationship between the first sub-display area ABEF and the first sub-image A1MND1 is established according to the first coordinate information corresponding to each vertex of the first sub-display area ABEF and the second coordinate information corresponding to each corner point of the first sub-image A1MND1. Similarly, a second homography matrix relationship between the second sub-display area BCDE and the second sub-image MB1C1N is established according to the first coordinate information corresponding to each vertex of the second sub-display area BCDE and the second coordinate information corresponding to each corner point of the second sub-image MB1C1N.
It should be noted that the first coordinate information is coordinate information of the sub-display area in a reference coordinate system constructed with an arbitrary point in the display area as a coordinate origin. The second coordinate information is coordinate information of the sub-image in a reference coordinate system constructed with an arbitrary point in the modulation plane as a coordinate origin.
It should be appreciated that the homography matrix relationship acts as a perspective transformation matrix that reflects the change in position of the pixels in the modulation plane when they are projected into the target projection area.
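As a sketch of how such a homography matrix relationship can be established in step 141, the standard direct linear transform recovers a 3x3 homography from the four vertex correspondences of a sub-display area and its sub-image (the function name and test points below are illustrative):

```python
import numpy as np

def homography_4pt(src, dst):
    """Direct linear transform: the homography mapping 4 src points to
    4 dst points (the sub-display-area -> sub-image relationship)."""
    A = []
    for (x, y), (u, v) in zip(src, dst):
        A.append([x, y, 1, 0, 0, 0, -u * x, -u * y, -u])
        A.append([0, 0, 0, x, y, 1, -v * x, -v * y, -v])
    # The homography is the null-space vector of the 8x9 system A h = 0.
    _, _, Vt = np.linalg.svd(np.asarray(A, dtype=float))
    H = Vt[-1].reshape(3, 3)
    return H / H[2, 2]

square = [(0, 0), (1, 0), (1, 1), (0, 1)]
quad = [(0, 0), (2, 0.1), (1.9, 1.2), (0.1, 1.0)]
H = homography_4pt(square, quad)
```

In practice a library routine such as OpenCV's `getPerspectiveTransform` performs the same four-point computation.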
In step 142, for each sub-picture area in the target picture area, fourth coordinate information of an image area of the sub-picture area mapped on the modulation plane is determined according to a homography matrix relationship corresponding to the sub-picture area and third coordinate information of the sub-picture area, wherein the sub-picture area is obtained by dividing the target picture area according to a second intersection line.
Here, the sub-picture area is obtained by dividing the target picture area according to the second intersection line. As shown in fig. 8 (a), the target picture area 802 is divided into a first sub-picture area GHEK and a second sub-picture area HIJE by the second intersection line BE.
The third coordinate information of the sub-picture area may refer to the coordinate information corresponding to each vertex of the sub-picture area. Since the position information of the target picture area and the second intersection line is determined, the third coordinate information corresponding to each sub-picture area is also available. It should be noted that the third coordinate information is coordinate information of the sub-picture area in a reference coordinate system constructed with an arbitrary point in the display area as the coordinate origin.
After the third coordinate information of the sub-picture area is obtained, for each sub-picture area, the coordinate information of the image area of the sub-picture area mapped on the modulation plane is obtained according to the homography matrix relationship corresponding to the sub-picture area and the third coordinate information of the sub-picture area. Specifically, the third coordinate information of the sub-picture area is multiplied by the corresponding homography matrix to obtain the coordinate information of the image area mapped on the modulation plane. When the image area mapped on the modulation plane is projected onto the corresponding sub-projection area, it appears as the corresponding sub-picture area.
As shown in fig. 8, for the first sub-picture area GHEK, the fourth coordinate information of the first image area E1F1NJ1 of the first sub-picture area GHEK mapped on the modulation plane can be obtained according to the third coordinate information of the first sub-picture area GHEK and the first homography matrix relationship. For the second sub-picture area HIJE, the fourth coordinate information of the second image area F1G1H1N of the second sub-picture area HIJE mapped on the modulation plane can be obtained according to the third coordinate information of the second sub-picture area HIJE and the second homography matrix relationship.
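The mapping of step 142 amounts to a matrix multiplication in homogeneous coordinates followed by de-homogenization. A minimal sketch (the identity homography is used only to keep the example checkable; names are illustrative):

```python
import numpy as np

def map_region(H, vertices):
    """Map sub-picture-area vertices into the modulation plane with the
    sub-area's homography matrix, yielding the fourth coordinate info."""
    pts = np.hstack([np.asarray(vertices, float),
                     np.ones((len(vertices), 1))])   # homogeneous coordinates
    mapped = pts @ H.T
    return mapped[:, :2] / mapped[:, 2:3]            # de-homogenize

# The identity homography leaves the vertices unchanged.
out = map_region(np.eye(3), [(0.0, 0.0), (4.0, 2.0)])
```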
In step 143, the original image is corrected according to the fourth coordinate information of the image area of each sub-picture area mapped on the modulation plane, and a corrected original image is obtained.
Here, when an image formed by the fourth coordinate information of the image area in which each sub-picture area is mapped on the modulation plane is projected onto the target projection area, the presented projection picture coincides with the target picture area. As shown in fig. 8, the coordinate information of the target image area E1F1G1H1NJ1 of the target picture area mapped on the modulation plane can be obtained according to the first image area E1F1NJ1 and the second image area F1G1H1N.
The projection device corrects the original image based on the coordinate information of the target image area E1F1G1H1NJ1 to obtain the corrected original image.
It should be noted that, since the projection correction is performed for a projection picture on a three-dimensional surface, the focal length of the projection device can be determined during projection by the distance between the projection device and the intersection line between the sub-projection areas, so as to ensure a good focusing effect.
Fig. 9 is a diagram of the effect after projection correction provided according to an exemplary embodiment. As shown in fig. 9, projection picture 901 is the picture of an original image, not corrected by the projection correction method provided by the embodiment of the present disclosure, projected at a corner; the image presented in the user's viewing-angle plane is not rectangular. Projection picture 902 is the picture of the original image, corrected by the projection correction method provided by the embodiment of the present disclosure, projected at the corner; the image presented in the user's viewing-angle plane is rectangular.
It should be noted that, in the above embodiment, only the target projection area including two sub-projection areas is illustrated, and the projection correction method provided by the embodiment of the present disclosure is not limited to be only used for the application scenario of the target projection area including two sub-projection areas to perform projection correction. For a target projection area including one sub-projection area or three or more sub-projection areas, projection correction can be performed by the projection correction method provided by the embodiment of the disclosure. When the target projection area comprising three or more sub-projection areas is faced, the projection correction is carried out according to the intersecting lines among the sub-projection areas.
Fig. 10 is a schematic diagram of obtaining a corrected original image according to another exemplary embodiment. As shown in fig. 10, when the current gaze angle 811 of the target object is perpendicular to the sub-projection area on the right side of the target projection area 812, the target picture area falls on the sub-projection area on the right side of the target projection area 812. At this time, the original image 814 in the modulation plane may be corrected into the corrected original image 815 (as indicated by the dashed box on the right side of the original image 814) according to the projection picture 813 and the target picture area. As can be seen, in the embodiment of the present disclosure, when the projection picture remains still, if the user's gaze angle moves from the corner to the sub-projection area on the right side of the target projection area 812, accurate projection correction can still be performed according to the user's gaze angle.
In addition, it should be understood that the projection correction method provided by the embodiment of the disclosure is not only applicable to application scenarios in which the projection picture projected by the projection device remains still. For example, when the projection device is fixed to project toward a corner of a wall, the projection device may correct the projection picture according to the change of the user's gaze angle, so that the corrected projection picture gives the best viewing experience at the user's current gaze angle. The method is also suitable for application scenarios in which the center point of the projection picture changes along with the user's gaze angle. For example, when the user's gaze angle falls on a flat wall surface, the projection picture also falls on the flat wall surface, and projection correction can be performed according to the projection correction method provided by the embodiment of the disclosure; when the user's gaze angle moves to a corner, projection correction can likewise be performed, so that the user has the best viewing experience when the projection picture falls on the corner.
Fig. 11 is a flowchart providing for determining first location information of a projected picture according to an example embodiment. As shown in fig. 11, the first position information of the projection screen may be determined by the following steps.
In step 1101, first current pose information of the projection device and a three-dimensional model of a space where the projection device is located are obtained, wherein the three-dimensional model is obtained by modeling the space where the projection device is located through a depth camera.
Here, the first current pose information of the projection device includes position information of the projection device and attitude information, wherein the attitude information includes a yaw angle, a pitch angle, and a roll angle of the projection device. For example, the pose information of the projection apparatus may be obtained by an IMU (Inertial Measurement Unit). The position information of the projection device may be determined by an indoor positioning method, such as a WiFi positioning method.
The three-dimensional model of the space in which the projection device is located is obtained by modeling the space in which the projection device is located with a depth camera. Illustratively, the depth camera may be disposed in the projection device. Of course, the depth camera may be disposed at any position in the space where the projection device is located, as long as the space where the projection device is located can be scanned three-dimensionally to complete modeling.
The three-dimensional model is specifically established as follows: the method comprises the steps of establishing a global coordinate system at an initial position of a depth camera, collecting image data and point cloud data corresponding to the initial position, controlling the depth camera to rotate, continuously collecting the image data and the point cloud data in the rotating process, and simultaneously carrying out odometer tracking according to the image data and the point cloud data to obtain position change information of the depth camera. After the depth camera rotates 360 degrees, image data and point cloud data acquired in the rotating process are fused into image data and point cloud data under a global coordinate system constructed by a first frame by an incremental method according to the obtained position change information. After the depth camera completes 360-degree rotation, all point cloud data form a closed loop through a loop detection algorithm, and a three-dimensional model of a space where the projection equipment is located is obtained.
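The incremental fusion step above can be sketched as transforming each newly captured cloud into the first frame's global coordinate system with the tracked odometry pose and appending it (a simplified sketch; the disclosure's pipeline additionally performs loop-closure detection, which is omitted here):

```python
import numpy as np

def fuse(global_cloud, local_cloud, cam_pose):
    """Fuse a local point cloud (Nx3, camera frame) into the global frame
    of the first frame, using the odometry pose (4x4 homogeneous matrix)."""
    pts = np.hstack([local_cloud, np.ones((len(local_cloud), 1))])
    world = (pts @ cam_pose.T)[:, :3]      # transform into the global frame
    return np.vstack([global_cloud, world])

# A camera translated 1 m along x maps the local origin to (1, 0, 0).
pose = np.eye(4)
pose[0, 3] = 1.0
cloud = fuse(np.empty((0, 3)), np.zeros((1, 3)), pose)
```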
In step 1102, based on the relative position relationship between the optical center of the projection device and the depth camera and the first current pose information, pose information of an optical axis corresponding to the optical center in a global coordinate system corresponding to the three-dimensional model is determined.
Here, the relative positional relationship between the optical center of the projection apparatus and the depth camera refers to a relative position between a setting position of the optical engine of the projection apparatus and a setting position of the depth camera. According to the relative position relation between the optical center of the projection equipment and the depth camera, the first current pose information can be converted into a global coordinate system corresponding to the three-dimensional model, and the pose information of the optical axis corresponding to the optical center in the global coordinate system is obtained.
It should be understood that if the installation position of the depth camera is infinitely close to the optical center, the relative positional relationship may be understood as "0". At this time, the first current pose information can be understood as the pose information of the optical axis corresponding to the optical center in the global coordinate system without conversion. Certainly, in an actual application process, the depth camera and the optical machine are generally installed on the same vertical plane and have the same orientation, at this time, a value of a physical height difference between the depth camera and the optical machine can be converted into a spatial displacement transformation matrix, and then the first current pose information is converted into the pose information of the optical axis corresponding to the optical center in the global coordinate system based on the spatial displacement transformation matrix.
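Converting the height difference into a spatial displacement transformation matrix, as described above, can be sketched as a 4x4 homogeneous transform applied to the camera pose (the names, axis convention, and the 5 cm offset are illustrative assumptions, not values from the disclosure):

```python
import numpy as np

def optical_axis_pose(cam_pose_global, height_offset):
    """Convert the depth-camera pose (4x4 homogeneous matrix in the global
    frame) into the optical-center pose, assuming camera and light engine
    share a vertical plane and orientation and differ only by a height
    offset along the camera's y axis."""
    T = np.eye(4)
    T[1, 3] = height_offset          # the spatial displacement transform
    return cam_pose_global @ T

pose = optical_axis_pose(np.eye(4), -0.05)  # light engine 5 cm below camera
```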
In step 1103, first position information of a projection picture projected by the projection device in the target projection area is determined according to the pose information, the optical mechanical parameter information of the projection device, and the three-dimensional model.
Here, the pose information actually reflects the position and the posture of the optical axis corresponding to the optical center in the global coordinate system corresponding to the three-dimensional model. The light engine parameter information includes a throw ratio and an aspect ratio of the light engine, for example, the throw ratio is 1.2, and the aspect ratio is 16:9, which is determined according to the specific model of the projection equipment. And according to the pose information and the optical machine parameter information of the projection equipment, the position information of the projection picture projected by the projection equipment in the space can be obtained. And determining the projection picture projected in the target projection area by the projection equipment according to the intersection between the position information of the projection picture projected by the projection equipment and the three-dimensional model. Since the projection picture projected in the target projection area by the projection device is in the three-dimensional model, the first position information of the projection picture projected in the target projection area by the projection device can be calculated according to the size information of the three-dimensional model.
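From the throw ratio and aspect ratio, the size of the projected picture at a given projection distance follows directly. This is a sketch for the simplest case of a wall perpendicular to the optical axis; the general case of step 1103 intersects the projection frustum with the three-dimensional model:

```python
def picture_size(distance, throw_ratio=1.2, aspect=16 / 9):
    """Size of the projected picture on a wall perpendicular to the optical
    axis at `distance`, given throw ratio = distance / picture width."""
    width = distance / throw_ratio
    height = width / aspect
    return width, height

# 3 m from a 1.2 throw-ratio, 16:9 projector.
w, h = picture_size(3.0)
```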
Fig. 12 is a schematic diagram of determining first location information provided in accordance with an example embodiment. As shown in fig. 12, a projection picture 1202 projected by the projection apparatus can be obtained according to the pose information of the optical axis 1201 and the optical machine parameter information, and then first position information of the projection picture projected by the projection apparatus in the target projection area is obtained according to the intersection of the projection picture 1202 and the surface in the three-dimensional model 1203.
FIG. 13 is a flow chart providing for determining a current gaze angle according to an exemplary embodiment. As shown in fig. 13, in the case where the projection screen does not follow the gaze angle of the target object, the current gaze angle of the target object may be determined by the following steps.
In step 1301, a first captured image of a target object is acquired by a depth camera.
Here, the related concept of the target object has been described in detail in the above embodiments and is not repeated here. Illustratively, the width of the captured image of the depth camera may be defined as W; if any coordinate point of the human body detection frame falls within the range (W/3, W/2), it may be determined that a human body appears in the captured image. At this time, the target object is determined from the human body appearing in the captured image.
It should be appreciated that the depth camera may continuously acquire captured images of the target object to update the current gaze angle of the target object based on the newly acquired captured images. For example, the depth camera may track the target object in real time based on a deep-learning human tracking algorithm to update the current gaze angle of the target object as the target object moves. When the movement of the target object is too large for the depth camera to keep tracking it, the depth camera may be initialized and the first captured image of the target object acquired again.
In step 1302, seventh position information of human eyes of the target object in a space where the target projection area is located is determined based on the first photographed image and the rotation angle of the depth camera.
Here, the rotation angle of the depth camera refers to an angle by which the depth camera rotates from an initial position to when the first captured image is acquired. Wherein the rotation angle may be obtained by an inclinometer.
The first shot image of the target object acquired by the depth camera contains the depth information of the target object in the space, and meanwhile, the position information and the depth information of human eyes of the target object in the first shot image can be determined through an image recognition algorithm. Then, in conjunction with the rotation angle of the depth camera, the position information of the human eyes of the target object in the first captured image and the depth information are converted into a space in which the target projection area is located, and seventh position information is obtained. And the space where the target projection area is located is the three-dimensional model.
In step 1303, a current gaze angle is obtained based on the seventh position information and the first position information of the projection screen.
Here, the seventh position information represents the position information of the human eyes of the target object in the three-dimensional model, and the current gaze angle can be obtained based on the seventh position information and the first position information of the projection picture. The current gaze angle is the angle between the line connecting the human eyes of the target object with the center point of the projection picture and the normal of the projection picture.
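Under this definition, the current gaze angle can be sketched as the angle between the eye-to-center line and the picture's normal vector (a hedged sketch; the coordinate conventions and names are illustrative):

```python
import numpy as np

def gaze_angle_deg(eye_pos, picture_center, picture_normal):
    """Angle between the eye-to-picture-center line and the picture's
    normal; 0 degrees means the user views the picture head-on."""
    v = np.asarray(picture_center, float) - np.asarray(eye_pos, float)
    n = np.asarray(picture_normal, float)
    cos_a = abs(v @ n) / (np.linalg.norm(v) * np.linalg.norm(n))
    return np.degrees(np.arccos(np.clip(cos_a, -1.0, 1.0)))

# Eye 1 m to the side and 1 m in front of the picture center -> 45 degrees.
a = gaze_angle_deg([1.0, 0.0, 1.0], [0.0, 0.0, 0.0], [0.0, 0.0, 1.0])
```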
Therefore, the current gaze angle of the target object can be obtained quickly through the depth camera, and the gaze angle can be tracked in real time, so that the corrected image always best matches the user's gaze angle.
It should be noted that the method for determining the current gaze angle provided by the foregoing embodiment may be applied to an application scenario in which the projection device remains still. When the projection screen of the projection apparatus moves following the gaze angle of the target object, the current gaze angle of the target object may be determined by the following method.
Fig. 14 is a flow chart providing for determining a current gaze angle according to another exemplary embodiment. As shown in fig. 14, in a case where the projection screen of the projection apparatus moves following the gaze angle of the target object, the current gaze angle of the target object may be obtained by the following steps.
In step 1401, a second captured image of the target object is acquired by the depth camera.
Here, the method of acquiring the second captured image is identical to the method of acquiring the first captured image, and will not be described again.
In step 1402, second current pose information of the head of the target object is determined from the second captured image.
Here, the second current pose information includes position information of the head of the target object in a space in which the projection apparatus is located and pose information including orientation information of the head of the target object. The position information of the head of the target object in the second shot image can be determined according to a target detection algorithm, and the position information of the head of the target object in the space where the projection device is located can be determined according to the position information and the depth information included in the second shot image.
For the pose information, the head key points may be detected by using an image detection algorithm, and then weighted calculation is performed based on the detected head key points to obtain the orientation information of the head of the target object. For example, when the detected key points of the head include key points such as the eyes, nose, mouth, face contour, etc., a weighting calculation is performed based on the key points such as the eyes, nose, mouth, face contour, etc., thereby obtaining orientation information of the head of the target object.
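The disclosure does not fix the exact weighting over head keypoints. As an illustrative stand-in only, a rough yaw proxy can be computed from the nose's offset relative to the eye midpoint:

```python
import numpy as np

def head_yaw_estimate(left_eye, right_eye, nose):
    """Rough head-yaw proxy from 2D keypoints (an illustrative formula,
    not the patent's): the nose's horizontal deviation from the eye
    midpoint, normalized by the inter-eye distance."""
    mid = (np.asarray(left_eye, float) + np.asarray(right_eye, float)) / 2
    eye_dist = np.linalg.norm(np.asarray(right_eye) - np.asarray(left_eye))
    return (nose[0] - mid[0]) / eye_dist   # 0 when facing the camera

# Nose centered between the eyes -> facing the camera.
yaw = head_yaw_estimate((100, 120), (140, 120), (120, 140))
```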
In step 1403, a current gaze angle of the target object is determined from the second current pose information.
Here, the second current pose information of the head of the target object may be used directly as the current gaze angle of the target object.
It should be understood that, in the present embodiment, the current gaze angle determined by the second current pose information is independent of the position information of the projection screen. The position information of the projection picture is determined according to the current sight angle determined by the second current pose information. For example, the projection device tracks the gaze angle of the target object through the depth camera, and the center point of the projection screen changes to follow the gaze angle of the target object when the gaze angle of the target object changes. For example, when the line-of-sight angle of the target object is shifted by 1 ° to the left, the center point of the projection screen is simultaneously shifted by 1 ° to the left.
Therefore, when the projection picture moves along with the gaze angle of the target object, the current gaze angle of the target object can still be located through the pose information of the head of the target object, providing data support for subsequent projection correction.
Fig. 15 is a schematic block diagram of a projection correction apparatus according to an exemplary embodiment. As shown in fig. 15, the projection correction apparatus 1500 proposed by the present disclosure may include:
a first determination module 1501 configured to determine a current gaze angle of the target object;
a second determining module 1502 configured to determine a display area according to first position information of a projection picture projected on a target projection area by a projection device and a current sight angle, wherein the display area is an area where an orthographic projection of the projection picture on a virtual perspective plane perpendicular to a sight corresponding to the current sight angle is located;
a third determination module 1503 configured to determine a target screen region in the display region;
and a correcting module 1504 configured to correct the original image based on the second position information of the display area and the third position information of the target screen area to obtain a corrected original image, so that the screen of the corrected original image projected in the target projection area coincides with the target screen area.
Optionally, the third determination module 1503 includes:
a first intersection line determining unit configured to determine fourth position information of a first intersection line between the intersecting planes in a case where the target projection area includes at least two sub-projection areas located on the intersecting planes;
a second intersection line determining unit configured to determine fifth position information of a second intersection line corresponding to the first intersection line in the display area according to fourth position information of the first intersection line;
the third intersection line determining unit is configured to determine, according to fifth position information of the second intersection line, sixth position information of a third intersection line corresponding to the second intersection line on the original image corresponding to the projection picture;
and the picture construction unit is configured to map the original image into the display area according to the fifth position information of the second intersecting line and the sixth position information of the third intersecting line so as to construct the target picture area.
Optionally, the picture construction unit includes:
the moving unit is configured to move the original image to the display area based on fifth position information of the second intersection line and sixth position information of the third intersection line by taking the superposition of the second intersection line and the third intersection line as a constraint condition;
and the zooming unit is configured to perform equal-ratio zooming on the original image moved into the display area, and take the original image after equal-ratio zooming as a target picture area, wherein the original image after equal-ratio zooming is the largest rectangle positioned in the display area.
Optionally, the third intersection determining unit includes:
the proportion determining unit is configured to determine picture proportion information among sub-display areas in the display area according to fifth position information of a second intersection line, wherein the sub-display areas are obtained by dividing the display area according to the second intersection line;
and the intersection line determining unit is configured to determine sixth position information of a third intersection line corresponding to the second intersection line on the original image according to the picture proportion information.
Optionally, the correction module 1504 includes:
a matrix establishing unit configured to establish, for each sub-display area in the display area, a homography matrix relation between the sub-display area and its corresponding sub-image based on first coordinate information of the sub-display area and second coordinate information of the sub-image, wherein the sub-display areas are obtained by dividing the display area according to the second intersection line, and the sub-images are obtained by dividing the original image according to the third intersection line;
the mapping unit is configured to determine, for each sub-picture area in the target picture area, fourth coordinate information of an image area of the sub-picture area mapped on the modulation plane according to a homography matrix relationship corresponding to the sub-picture area and third coordinate information of the sub-picture area, wherein the sub-picture area is obtained by dividing the target picture area according to a second intersection line;
and the image correction unit is configured to correct the original image according to the fourth coordinate information of the image area of each sub-picture area mapped on the modulation plane, so as to obtain a corrected original image.
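The homography relation between a sub-display area and its sub-image can be estimated from four corner correspondences; the following is a minimal direct-linear-transform sketch in NumPy (an assumption about one possible implementation, not the patent's own code):

```python
import numpy as np


def homography(src, dst):
    """Estimate the 3x3 homography H mapping each (x, y) in src to the
    corresponding (u, v) in dst (four correspondences, DLT + SVD)."""
    A = []
    for (x, y), (u, v) in zip(src, dst):
        A.append([-x, -y, -1, 0, 0, 0, u * x, u * y, u])
        A.append([0, 0, 0, -x, -y, -1, v * x, v * y, v])
    _, _, Vt = np.linalg.svd(np.asarray(A, dtype=float))
    H = Vt[-1].reshape(3, 3)   # null-space vector of A
    return H / H[2, 2]         # normalise so H[2, 2] == 1


def apply_h(H, pt):
    """Map a 2-D point through homography H (homogeneous division)."""
    x, y, w = H @ np.array([pt[0], pt[1], 1.0])
    return x / w, y / w
```

In practice a library routine such as OpenCV's `cv2.findHomography` would typically be used instead of hand-rolled DLT; the sketch only illustrates the mapping the patent relies on.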
Optionally, the second determining module 1502 includes:
the system comprises an acquisition unit, a processing unit and a display unit, wherein the acquisition unit is configured to acquire first current pose information of the projection equipment and a three-dimensional model of a space where the projection equipment is located, and the three-dimensional model is obtained by modeling the space where the projection equipment is located through a depth camera;
the pose determining unit is configured to determine pose information of an optical axis corresponding to the optical center in a global coordinate system corresponding to the three-dimensional model based on the relative position relation between the optical center of the projection equipment and the depth camera and the first current pose information;
and the projection picture determining unit is configured to determine first position information of a projection picture projected in the target projection area by the projection equipment according to the pose information, the optical machine parameter information of the projection equipment and the three-dimensional model.
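Determining where the projection picture falls in the target projection area amounts to intersecting rays from the optical centre (along the light-engine frustum) with the modelled surface; for a single planar surface this reduces to a ray-plane intersection, sketched below under that simplifying assumption:

```python
import numpy as np


def ray_plane_intersection(origin, direction, plane_point, plane_normal):
    """Point where a ray from the projector's optical centre hits a wall
    plane given by a point on the plane and the plane normal."""
    o = np.asarray(origin, dtype=float)
    d = np.asarray(direction, dtype=float)
    n = np.asarray(plane_normal, dtype=float)
    # Solve o + t*d on the plane: dot(o + t*d - plane_point, n) == 0
    t = np.dot(np.asarray(plane_point, dtype=float) - o, n) / np.dot(d, n)
    return o + t * d
```

Intersecting the four frustum-corner rays this way gives the four corners of the projection picture on the wall, i.e. the first position information.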
Optionally, the first determining module 1501 includes:
a shooting unit configured to acquire a second captured image of the target object through the depth camera while the projection picture of the projection equipment follows the sight angle of the target object;
a head pose determining unit configured to determine second current pose information of the head of the target object from the second captured image;
and an angle determining unit configured to determine the current sight angle of the target object according to the second current pose information.
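The sight angle can be derived from the head pose by rotating a reference gaze direction; below is a hedged sketch (assuming the gaze is the head frame's +Z axis, which the disclosure does not specify):

```python
import numpy as np


def sight_angles(R):
    """Yaw and pitch of the sight direction, in degrees, from a 3x3 head
    rotation matrix; the unrotated gaze is assumed to be the +Z axis."""
    gaze = R @ np.array([0.0, 0.0, 1.0])
    yaw = np.degrees(np.arctan2(gaze[0], gaze[2]))
    pitch = np.degrees(np.arctan2(-gaze[1], np.hypot(gaze[0], gaze[2])))
    return yaw, pitch
```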
With regard to the apparatus in the above embodiments, the specific manner in which each module performs its operations has been described in detail in the embodiments of the method, and will not be elaborated here.
Fig. 16 is a block diagram illustrating a projection device according to an example embodiment. As shown in fig. 16, the projection apparatus 700 may include: a processor 701 and a memory 702. The projection device 700 may also include one or more of a multimedia component 703, an input/output (I/O) interface 704, and a communication component 705.
The processor 701 is configured to control the overall operation of the projection device 700 so as to complete all or part of the steps of the projection correction method. The memory 702 is used to store various types of data to support operation of the projection device 700, such as instructions for any application or method operating on the projection device 700 and application-related data such as contacts, messages, pictures, audio, and video. The memory 702 may be implemented by any type of volatile or non-volatile memory device, or a combination thereof, such as static random access memory (SRAM), electrically erasable programmable read-only memory (EEPROM), erasable programmable read-only memory (EPROM), programmable read-only memory (PROM), read-only memory (ROM), magnetic memory, flash memory, magnetic disk, or optical disk. The multimedia components 703 may include a screen and an audio component. The screen may be, for example, a touch screen, and the audio component is used for outputting and/or inputting audio signals. For example, the audio component may include a microphone for receiving external audio signals; a received audio signal may further be stored in the memory 702 or transmitted through the communication component 705. The audio component also includes at least one speaker for outputting audio signals. The I/O interface 704 provides an interface between the processor 701 and other interface modules, such as a keyboard, a mouse, or buttons; these buttons may be virtual or physical. The communication component 705 is used for wired or wireless communication between the projection device 700 and other devices. The wireless communication may be, for example, Wi-Fi, Bluetooth, near field communication (NFC), 2G, 3G, 4G, NB-IoT, eMTC, 5G, or a combination of one or more of them, which is not limited herein.
Accordingly, the communication component 705 may include a Wi-Fi module, a Bluetooth module, an NFC module, and so on.
In an exemplary embodiment, the projection device 700 may be implemented by one or more application-specific integrated circuits (ASICs), digital signal processors (DSPs), digital signal processing devices (DSPDs), programmable logic devices (PLDs), field-programmable gate arrays (FPGAs), controllers, microcontrollers, microprocessors, or other electronic components, for performing the projection correction method described above.
In another exemplary embodiment, a computer-readable storage medium is also provided, comprising program instructions which, when executed by a processor, implement the steps of the projection correction method described above. For example, the computer-readable storage medium may be the memory 702 described above, including program instructions executable by the processor 701 of the projection device 700 to perform the projection correction method described above.
In another exemplary embodiment, a computer program product is also provided, which comprises a computer program executable by a programmable apparatus, the computer program having code portions for performing the projection correction method described above when executed by the programmable apparatus.
The preferred embodiments of the present disclosure are described in detail above with reference to the accompanying drawings; however, the present disclosure is not limited to the specific details of the above embodiments. Various simple modifications may be made to the technical solution of the present disclosure within the scope of its technical idea, and such simple modifications all fall within the protection scope of the present disclosure.
It should also be noted that the specific features described in the above embodiments may be combined in any suitable manner; to avoid unnecessary repetition, the various possible combinations are not described separately.
In addition, the various embodiments of the present disclosure may be combined arbitrarily, and such combinations should likewise be regarded as part of the present disclosure as long as they do not depart from its spirit.

Claims (10)

1. A projection correction method, comprising:
determining a current sight angle of the target object;
determining a display area according to first position information of a projection picture projected by projection equipment on a target projection area and the current sight angle, wherein the display area is the area in which the projection picture is orthographically projected onto a virtual viewing-angle plane perpendicular to the line of sight corresponding to the current sight angle;
determining a target picture area in the display area;
and correcting an original image based on second position information of the display area and third position information of the target picture area to obtain a corrected original image, so that the picture projected onto the target projection area from the corrected original image coincides with the target picture area.
2. The projection correction method according to claim 1, wherein the determining a target picture area in the display area includes:
determining fourth position information of a first intersection line between intersecting planes in the case that the target projection area includes at least two sub-projection areas located on the intersecting planes;
determining fifth position information of a second intersection line corresponding to the first intersection line in the display area according to the fourth position information of the first intersection line;
determining sixth position information of a third intersection line corresponding to the second intersection line on the original image corresponding to the projection picture according to the fifth position information of the second intersection line;
and mapping the original image to the display area according to fifth position information of the second intersection line and sixth position information of the third intersection line to construct the target picture area.
3. The projection correction method according to claim 2, wherein the mapping the original image into the display area according to the fifth position information of the second intersection line and the sixth position information of the third intersection line to construct the target picture area comprises:
moving the original image into the display area based on the fifth position information of the second intersection line and the sixth position information of the third intersection line, with the coincidence of the second intersection line and the third intersection line as a constraint condition;
and scaling the original image moved into the display area at an equal ratio, and taking the scaled original image as the target picture area, wherein the scaled original image is the largest rectangle that fits within the display area.
4. The projection correction method according to claim 2, wherein the determining, according to the fifth position information of the second intersection line, sixth position information of a third intersection line corresponding to the second intersection line on the original image corresponding to the projection picture includes:
determining picture proportion information among sub-display areas in the display area according to fifth position information of the second intersection line, wherein the sub-display areas are obtained by dividing the display area according to the second intersection line;
and determining sixth position information of a third intersecting line corresponding to the second intersecting line on the original image according to the picture proportion information.
5. The projection correction method according to any one of claims 2 to 4, wherein the correcting an original image based on the second position information of the display region and the third position information of the target screen region to obtain a corrected original image includes:
aiming at each sub-display area in the display areas, establishing a homography matrix relation between the sub-display area and a sub-image corresponding to the sub-display area based on first coordinate information of the sub-display area and second coordinate information of the sub-image, wherein the sub-display area is obtained by dividing the display area according to the second intersection line, and the sub-image is obtained by dividing the original image according to the third intersection line;
determining fourth coordinate information of an image area of the sub-picture area mapped on the modulation plane according to a homography matrix relation corresponding to the sub-picture area and third coordinate information of the sub-picture area aiming at each sub-picture area in the target picture area, wherein the sub-picture area is obtained by dividing the target picture area according to the second intersection line;
and correcting the original image according to the fourth coordinate information of the image area of each sub-picture area mapped on the modulation plane to obtain the corrected original image.
6. The projection correction method according to claim 1, wherein the first position information of the projection picture is determined by:
acquiring first current pose information of projection equipment and a three-dimensional model of a space where the projection equipment is located, wherein the three-dimensional model is obtained by modeling the space where the projection equipment is located through a depth camera;
determining pose information of an optical axis corresponding to the optical center in a global coordinate system corresponding to the three-dimensional model based on the relative position relation between the optical center of the projection equipment and the depth camera and the first current pose information;
and determining first position information of a projection picture projected in the target projection area by the projection equipment according to the pose information, the optical machine parameter information of the projection equipment and the three-dimensional model.
7. The projection correction method according to claim 1, wherein the determining a current gaze angle of a target object comprises:
under the condition that a projection picture of the projection equipment moves along with the sight angle of the target object, acquiring a second shot image of the target object through a depth camera;
determining second current pose information of the head of the target object according to the second shot image;
and determining the current sight angle of the target object according to the second current pose information.
8. A projection correction apparatus, comprising:
a first determination module configured to determine a current gaze angle of the target object;
a second determining module configured to determine a display area according to first position information of a projection picture projected by projection equipment on a target projection area and the current sight angle, wherein the display area is the area in which the projection picture is orthographically projected onto a virtual viewing-angle plane perpendicular to the line of sight corresponding to the current sight angle;
a third determination module configured to determine a target screen area in the display area;
and a correction module configured to correct an original image based on second position information of the display area and third position information of the target picture area to obtain a corrected original image, so that the picture projected onto the target projection area from the corrected original image coincides with the target picture area.
9. A non-transitory computer readable storage medium, on which a computer program is stored, which, when being executed by a processor, carries out the steps of the method according to any one of claims 1 to 7.
10. A projection device, comprising:
a memory having a computer program stored thereon;
a processor for executing the computer program in the memory to carry out the steps of the method of any one of claims 1 to 7.
CN202111593336.XA 2021-12-23 2021-12-23 Projection correction method, projection correction device, storage medium and projection equipment Pending CN114286066A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202111593336.XA CN114286066A (en) 2021-12-23 2021-12-23 Projection correction method, projection correction device, storage medium and projection equipment


Publications (1)

Publication Number Publication Date
CN114286066A true CN114286066A (en) 2022-04-05

Family

ID=80874989

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202111593336.XA Pending CN114286066A (en) 2021-12-23 2021-12-23 Projection correction method, projection correction device, storage medium and projection equipment

Country Status (1)

Country Link
CN (1) CN114286066A (en)


Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20140247287A1 (en) * 2013-03-01 2014-09-04 Seiko Epson Corporation Image processing device, projector, and image processing method
CN105182662A (en) * 2015-09-28 2015-12-23 神画科技(深圳)有限公司 Projection method and system with augmented reality effect
US9355431B1 (en) * 2012-09-21 2016-05-31 Amazon Technologies, Inc. Image correction for physical projection-surface irregularities
US20200184684A1 (en) * 2018-12-07 2020-06-11 Industrial Technology Research Institute Depth camera calibration device and method thereof
CN112650461A (en) * 2020-12-15 2021-04-13 广州舒勇五金制品有限公司 Relative position-based display system
CN113099198A (en) * 2021-03-19 2021-07-09 深圳市火乐科技发展有限公司 Projection image adjusting method and device, storage medium and electronic equipment


Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116156132A (en) * 2022-12-27 2023-05-23 深圳市宇通联发科技有限公司 Projection image correction method, projection image correction device, electronic equipment and readable storage medium
CN116156132B (en) * 2022-12-27 2023-11-14 深圳市宇通联发科技有限公司 Projection image correction method, projection image correction device, electronic equipment and readable storage medium
CN116433476A (en) * 2023-06-09 2023-07-14 有方(合肥)医疗科技有限公司 CT image processing method and device
CN116433476B (en) * 2023-06-09 2023-09-08 有方(合肥)医疗科技有限公司 CT image processing method and device

Similar Documents

Publication Publication Date Title
CN112689135B (en) Projection correction method, projection correction device, storage medium and electronic equipment
US10165179B2 (en) Method, system, and computer program product for gamifying the process of obtaining panoramic images
US9723203B1 (en) Method, system, and computer program product for providing a target user interface for capturing panoramic images
JP5740884B2 (en) AR navigation for repeated shooting and system, method and program for difference extraction
TWI535285B (en) Conference system, surveillance system, image processing device, image processing method and image processing program, etc.
JP6494239B2 (en) Control device, control method, and program
JP2016062486A (en) Image generation device and image generation method
JP2001061121A (en) Projector
CN114286066A (en) Projection correction method, projection correction device, storage medium and projection equipment
WO2019080047A1 (en) Augmented reality image implementation method, device, terminal device and storage medium
CN113454685A (en) Cloud-based camera calibration
KR101989087B1 (en) Distortion correction method and distortion correction system for projection display using personal digital imaging device
CN114449249B (en) Image projection method, image projection device, storage medium and projection apparatus
CN114286068B (en) Focusing method, focusing device, storage medium and projection equipment
JP4199641B2 (en) Projector device
WO2018167918A1 (en) Projector, method of creating data for mapping, program, and projection mapping system
CN113870213A (en) Image display method, image display device, storage medium, and electronic apparatus
JP2004326179A (en) Image processing device, image processing method, image processing program, and recording medium storing it
CN114827564A (en) Projection equipment control method and device, storage medium and projection equipment
CN114339179A (en) Projection correction method, projection correction device, storage medium and projection equipment
WO2017057426A1 (en) Projection device, content determination device, projection method, and program
JP4839858B2 (en) Remote indication system and remote indication method
JP6859763B2 (en) Program, information processing device
CN115103169B (en) Projection picture correction method, projection picture correction device, storage medium and projection device
CN110060355B (en) Interface display method, device, equipment and storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination