CN115311133A - Image processing method and device, electronic equipment and storage medium


Info

Publication number
CN115311133A
Authority
CN
China
Prior art keywords
target, coordinates, dot matrix, coordinate, pixel point
Prior art date
Legal status
Pending
Application number
CN202210952430.8A
Other languages
Chinese (zh)
Inventor
周航
宋良多
孙怀义
Current Assignee
Beijing Tricolor Technology Co ltd
Original Assignee
Beijing Tricolor Technology Co ltd
Priority date
Filing date
Publication date
Application filed by Beijing Tricolor Technology Co ltd
Priority to CN202210952430.8A
Publication of CN115311133A


Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T3/00Geometric image transformations in the plane of the image
    • G06T3/08Projecting images onto non-planar surfaces, e.g. geodetic screens
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N9/00Details of colour television systems
    • H04N9/12Picture reproducers
    • H04N9/31Projection devices for colour picture display, e.g. using electronic spatial light modulators [ESLM]
    • H04N9/3179Video signal processing therefor
    • H04N9/3185Geometric adjustment, e.g. keystone or convergence

Landscapes

  • Physics & Mathematics (AREA)
  • Engineering & Computer Science (AREA)
  • Geometry (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Processing Or Creating Images (AREA)

Abstract

The application provides an image processing method and apparatus, an electronic device, and a storage medium. The method comprises the following steps: acquiring a first dot matrix corresponding to a target projection channel; rotating a virtual plane from an initial position to a target position corresponding to current eyepoint information, and determining the coordinates of a plurality of target vertexes on the rotated virtual plane; determining the coordinates of each second pixel point based on the rotated coordinates of the plurality of target vertexes; for each original pixel point in an original dot matrix, determining a third pixel point corresponding to the original pixel point based on the positional relationship between a plurality of target first pixel points and the original pixel point, the plurality of third pixel points forming a third dot matrix; and performing deformation processing on the input source image by using the third dot matrix to obtain a projection image corresponding to the first dot matrix. The method, apparatus, electronic device, and storage medium solve the problems that correction and debugging of the projected image are inefficient and cannot adapt to application scenarios in which the eyepoint position changes in real time.

Description

Image processing method and device, electronic equipment and storage medium
Technical Field
The present application relates to the field of image processing technologies, and in particular, to an image processing method and apparatus, an electronic device, and a storage medium.
Background
Virtual simulation is a computer system that creates and experiences a Virtual World (Virtual World). Such a virtual world may be generated by a computer, may be a reproduction of the real world, or may be a purely imagined world, and a user may naturally interact with the virtual world through various sensing channels such as the visual, auditory, and tactile channels. In the virtual simulation process, a virtual simulation fusion technology is generally used: virtual simulation fusion refers to the creation of a three-dimensional interface in a projection fusion mode, with an observer standing at the projected visual center to form an immersive interactive scene. Virtual simulation fusion is a typical application of fusion splicing and has a large application market in fields such as education, scientific research, military industry, and aerospace.
However, in existing virtual simulation fusion scenarios, for example a flight simulator scene, the eyepoint position of the pilot is fixed at the pilot seat; once the eyepoint position changes, the view-frustum calibration graph also changes, and geometric correction and debugging of the projected image must be carried out again, resulting in low correction and debugging efficiency.
Disclosure of Invention
In view of this, an object of the present application is to provide an image processing method, an image processing apparatus, an electronic device, and a storage medium, so as to solve the problems that correction and debugging of a projected image are inefficient and that existing methods cannot adapt to application scenarios in which the eyepoint position dynamically changes in real time.
In a first aspect, an embodiment of the present application provides an image processing method, including:
acquiring a first dot matrix corresponding to a target projection channel, wherein the first dot matrix is a known dot matrix which is used for describing a projection effect expected to be achieved by an input source image projected by the target projection channel;
in a virtual simulation scene, rotating a virtual plane corresponding to a target projection channel from an initial position to a target position corresponding to current eyepoint information, and determining rotated coordinates of a plurality of target vertexes on the virtual plane;
determining the coordinates of each second pixel point in a second dot matrix based on the rotated coordinates of the target vertexes, wherein the second dot matrix is a matrix formed by a plurality of intersection points of a plurality of connecting lines and a virtual plane, and the connecting lines are connecting lines between the pixel points projected on the virtual projection screen by the input source image and the current eye point position;
determining a third pixel point corresponding to the original pixel point in the rotated virtual plane based on the position relationship between a plurality of target first pixel points corresponding to the original pixel point and the original pixel point for each original pixel point in the original dot matrix, wherein the plurality of third pixel points form the third dot matrix, and the plurality of target first pixel points are pixel points in the first dot matrix;
and performing deformation processing on the input source image by using the third dot matrix to obtain a projection image which is projected on the actual projection screen and corresponds to the first dot matrix.
Optionally, the current eyepoint information includes a current eyepoint position, a current observation direction, and a physical visual angle parameter, where the physical visual angle parameter is a projection angle of the input source image; and rotating the virtual plane corresponding to the target projection channel from the initial position to the target position corresponding to the current eyepoint information, and determining the rotated coordinates of the plurality of target vertexes on the virtual plane, includes: determining initial coordinates of the plurality of target vertexes when the virtual plane is located at the initial position based on the physical visual angle parameter and the current eyepoint position, where the initial position is a position corresponding to the projection direction of the target projection channel; determining the rotation angle of the virtual plane according to the offset angle between the current observation direction and the projection direction of the target projection channel; rotating the virtual plane according to the rotation angle to obtain the rotated virtual plane; and determining the rotated coordinates of the plurality of target vertexes on the virtual plane based on the initial coordinates and the rotation angle of the plurality of target vertexes.
Optionally, the plurality of target vertexes includes a first target vertex, a second target vertex, and a third target vertex, and the physical visual angle parameter includes a first horizontal included angle, a second horizontal included angle, a first vertical included angle, and a second vertical included angle; and determining the initial coordinates of the plurality of target vertexes when the virtual plane is at the initial position based on the physical visual angle parameter and the current eyepoint position includes: taking the sum of the X coordinate of the current eyepoint position and a first horizontal distance, the sum of the Y coordinate of the current eyepoint position and a first vertical distance, and the sum of the Z coordinate of the current eyepoint position and a set distance respectively as the X-axis initial coordinate, the Y-axis initial coordinate, and the Z-axis initial coordinate of the first target vertex, where the first horizontal distance is the product of the tangent of the first horizontal included angle and the set distance, and the first vertical distance is the product of the tangent of the first vertical included angle and the set distance; taking the sum of the X coordinate of the current eyepoint position and the first horizontal distance, the sum of the Y coordinate of the current eyepoint position and a second vertical distance, and the sum of the Z coordinate of the current eyepoint position and the set distance respectively as the X-axis initial coordinate, the Y-axis initial coordinate, and the Z-axis initial coordinate of the second target vertex, where the second vertical distance is the product of the tangent of the second vertical included angle and the set distance; and taking the sum of the X coordinate of the current eyepoint position and a second horizontal distance, the sum of the Y coordinate of the current eyepoint position and the first vertical distance, and the sum of the Z coordinate of the current eyepoint position and the set distance respectively as the X-axis initial coordinate, the Y-axis initial coordinate, and the Z-axis initial coordinate of the third target vertex, where the second horizontal distance is the product of the tangent of the second horizontal included angle and the set distance.
Optionally, determining the rotated coordinates of the plurality of target vertices on the virtual plane based on the initial coordinates and the rotation angles of the plurality of target vertices includes: sequentially determining an X-axis rotation matrix, a Y-axis rotation matrix and a Z-axis rotation matrix according to a set rotation sequence; taking the product of the X-axis rotation matrix, the Y-axis rotation matrix and the Z-axis rotation matrix as a target rotation matrix; and taking the product of the target rotation matrix and the initial coordinate of the first target vertex, the product of the target rotation matrix and the initial coordinate of the second target vertex, and the product of the target rotation matrix and the initial coordinate of the third target vertex as the rotated coordinate of the first target vertex, the rotated coordinate of the second target vertex and the rotated coordinate of the third target vertex respectively.
Optionally, determining coordinates of each second pixel point in the second lattice based on the rotated coordinates of the plurality of target vertices includes: acquiring an effective field angle range, wherein the effective field angle range is an observation range corresponding to an actual projection screen; acquiring coordinates of fourth pixel points in a fourth dot matrix on the virtual simulation screen according to the number of pixel points of the actual projection screen in the horizontal direction and the vertical direction, the pixel intervals in the horizontal direction and the vertical direction and the effective field angle range; aiming at each fourth pixel point in the fourth dot matrix, connecting the fourth pixel point with the current eye point position to obtain a corresponding connecting line of the fourth pixel point; and determining the coordinates of the intersection point of each connecting line and the virtual plane to obtain the coordinates of each second pixel point in the second dot matrix.
Optionally, determining a third pixel point corresponding to the original pixel point in the rotated virtual plane based on a position relationship between the plurality of target first pixel points corresponding to the original pixel point and the original pixel point, including: selecting four first pixel points adjacent to the original pixel point from the first dot matrix as a plurality of target first pixel points; determining the horizontal proportional relation of the original pixel point and a plurality of target first pixel points in the horizontal direction and the vertical proportional relation in the vertical direction; selecting a plurality of target second pixel points corresponding to the plurality of target first pixel points from the second dot matrix; and applying the horizontal proportional relation and the vertical proportional relation to a plurality of target second pixel points, and determining a third pixel point corresponding to the original pixel point in the rotated virtual plane.
Optionally, the method further comprises: selecting a plurality of actual projection pixel points projected on the actual projection screen by the target projection channel; determining actual coordinates of the plurality of actual projection pixel points in a world coordinate system and virtual coordinates in the virtual simulation scene respectively; establishing an equation set based on the actual coordinates and the virtual coordinates of the plurality of actual projection pixel points to obtain a mapping matrix; and determining the current eyepoint information corresponding to the target projection channel in the virtual simulation scene by using the mapping matrix, the current eyepoint position of the observer in the actual observation scene, and the current observation direction.
In a second aspect, an embodiment of the present application further provides an image processing apparatus, including:
the first dot matrix acquisition module is used for acquiring a first dot matrix corresponding to the target projection channel, wherein the first dot matrix is a known dot matrix and is used for describing a projection effect expected to be achieved by an input source image projected by the target projection channel;
the vertex coordinate determination module is used for rotating a virtual plane corresponding to the target projection channel from an initial position to a target position corresponding to the current eyepoint information in a virtual simulation scene and determining the rotated coordinates of a plurality of target vertexes on the virtual plane;
the second dot matrix determining module is used for determining the coordinates of each second pixel point in the second dot matrix based on the rotated coordinates of the multiple target vertexes, the second dot matrix is a matrix formed by multiple intersection points of multiple connecting lines and a virtual plane, and the multiple connecting lines are connecting lines between the pixel points projected on the virtual projection screen by the input source image and the current eye point position;
the third pixel array determining module is used for determining a third pixel point corresponding to the original pixel point in the rotated virtual plane based on the position relation between a plurality of target first pixel points corresponding to the original pixel point and the original pixel point aiming at each original pixel point in the original dot array, wherein the plurality of target first pixel points are pixel points in the first dot array;
and the deformation processing module is used for carrying out deformation processing on the input source image by utilizing the third dot matrix to obtain a projection image which is projected on the actual projection screen and corresponds to the first dot matrix.
In a third aspect, an embodiment of the present application further provides an electronic device, including: a processor, a memory and a bus, the memory storing machine-readable instructions executable by the processor, the processor and the memory communicating via the bus when the electronic device is running, the machine-readable instructions when executed by the processor performing the steps of the image processing method as described above.
In a fourth aspect, the present application further provides a computer-readable storage medium, on which a computer program is stored, where the computer program is executed by a processor to perform the steps of the image processing method as described above.
The embodiment of the application brings the following beneficial effects:
according to the image processing method, the device, the electronic device and the storage medium provided by the embodiment of the application, the expected projection effect of the input source image after deformation is determined, when the current eye point information of an observer is dynamically changed, the virtual plane can be adjusted to the target position corresponding to the current eye point information, the deformation parameter corresponding to the target projection channel after the current eye point information is changed, namely, the third dot matrix is further determined, the input source image can be directly projected onto an actual projection screen after being subjected to deformation processing by the third dot matrix to obtain the projection image corresponding to the shape of the first dot matrix, multiple times of geometric correction of the input source image are not needed, and compared with the image processing method in the prior art, the problems that the correction and debugging efficiency of the projection image is low, and the projection image cannot adapt to the application scene with the real-time dynamic change of the eye point position are solved.
In order to make the aforementioned objects, features and advantages of the present application more comprehensible, preferred embodiments accompanied with figures are described in detail below.
Drawings
To more clearly illustrate the technical solutions of the embodiments of the present application, the drawings needed in the embodiments are briefly described below. It should be understood that the following drawings only illustrate some embodiments of the present application and therefore should not be considered as limiting the scope; those skilled in the art can also obtain other related drawings based on these drawings without inventive effort.
FIG. 1 is a flow chart illustrating an image processing method provided by an embodiment of the present application;
FIG. 2 is a schematic diagram of a virtual plane in an initial position provided by an embodiment of the present application;
FIG. 3 is a schematic diagram of a virtual plane located at a target location provided by an embodiment of the present application;
FIG. 4 is a schematic diagram illustrating horizontal and vertical scaling provided by an embodiment of the present application;
fig. 5 is a schematic diagram illustrating a location of a third pixel provided in the embodiment of the present application;
fig. 6 is a schematic structural diagram illustrating an image processing apparatus provided in an embodiment of the present application;
fig. 7 shows a schematic structural diagram of an electronic device provided in an embodiment of the present application.
Detailed Description
In order to make the objects, technical solutions and advantages of the embodiments of the present application clearer, the technical solutions in the embodiments of the present application will be clearly and completely described below with reference to the drawings in the embodiments of the present application, and it is obvious that the described embodiments are only a part of the embodiments of the present application, and not all the embodiments. The components of the embodiments of the present application, generally described and illustrated in the figures herein, can be arranged and designed in a wide variety of different configurations. Thus, the following detailed description of the embodiments of the present application, presented in the accompanying drawings, is not intended to limit the scope of the claimed application, but is merely representative of selected embodiments of the application. Every other embodiment that can be obtained by a person skilled in the art without making creative efforts based on the embodiments of the present application falls within the protection scope of the present application.
It is noted that, prior to the present application, virtual simulation was a computer system that creates and experiences a Virtual World (Virtual World). Such a virtual world may be generated by a computer, may be a reproduction of the real world, or may be a purely imagined world, and a user may naturally interact with the virtual world through various sensing channels such as the visual, auditory, and tactile channels. In the virtual simulation process, a virtual simulation fusion technology is generally used: virtual simulation fusion refers to the creation of a three-dimensional interface in a projection fusion mode, with an observer standing at the projected visual center to form an immersive interactive scene. Virtual simulation fusion is a typical application of fusion splicing and has a large application market in fields such as education, scientific research, military industry, and aerospace. However, in existing virtual simulation fusion scenarios, for example a flight simulator scene, the eyepoint position of the pilot is fixed at the pilot seat; once the eyepoint position changes, the view-frustum calibration graph also changes, and geometric correction and debugging of the projected image must be carried out again, resulting in low correction and debugging efficiency.
Based on this, the embodiment of the application provides an image processing method to improve the efficiency of correcting and debugging a projected image.
Referring to fig. 1, fig. 1 is a flowchart of an image processing method according to an embodiment of the present disclosure. As shown in fig. 1, an image processing method provided in an embodiment of the present application includes:
step S101, a first lattice corresponding to a target projection channel is obtained.
In this step, the target projection channel may refer to a projection channel selected from a plurality of actual projection channels, and the target projection channel is used to project the input source image corresponding to the projection channel onto the actual projection screen.
The first lattice is a lattice that is known and used to describe the desired projection effect of the input source image projected by the target projection channel.
The data in the first lattice are, for each pixel point of the original lattice on the output image (i.e., the projection image), the geometrically corrected position coordinates of that pixel point in the input source image; these position coordinates may be fractional.
The original lattice is a lattice corresponding to a default picture of the projector, the default picture is a rectangular picture, the original lattice is also a rectangular lattice, and the original lattice is also known.
The original lattice is explained in detail below.
Taking as an example a projector with projection resolution 1024 × 768, and expressing the size of the original dot matrix as M × N, then M = ceil(w/horin) + 1 and N = ceil(h/verin) + 1, where w denotes the number of pixel points of the projector default picture in the horizontal direction, h denotes the number of pixel points of the projector default picture in the vertical direction, horin denotes the pixel interval of the projector default picture in the horizontal direction, and verin denotes the pixel interval of the projector default picture in the vertical direction. Taking horin and verin both equal to 32, M = ceil(1024/32) + 1 = 33 and N = ceil(768/32) + 1 = 25, where ceil represents rounding up. The original lattice is of the form:
(0,0)、(32,0)、(64,0)……(1024,0);
(0,32)、(32,32)、(64,32)……(1024,32);
……
(0,768)、(32,768)、(64,768)……(1024,768)。
here, the lattice size of the first lattice obtained after the geometric correction is the same as that of the original lattice, and both are 33 × 25 lattices.
In the embodiment of the application, after the input source images corresponding to the plurality of actual projection channels are projected on the actual projection screen, due to the influence of the projection angle, the projection position and the type of the actual projection screen, the problem that the projected images are blurred or the projection positions are disordered may be caused.
Here, each actual projection channel may be geometrically corrected by using the prior art to obtain a deformation parameter a corresponding to each actual projection channel, where the deformation parameter a is a lattice formed by a plurality of coordinate points, and is referred to as a first lattice. The first dot matrix is a shape parameter which is expected to be reached after an input source image is projected to an actual projection screen and is known, and the pixel points in the first dot matrix are first pixel points.
In an optional embodiment, the method further comprises: selecting a plurality of actual projection pixel points projected on the actual projection screen by the target projection channel; determining actual coordinates of the plurality of actual projection pixel points in a world coordinate system and virtual coordinates in the virtual simulation scene respectively; establishing an equation set based on the actual coordinates and the virtual coordinates of the plurality of actual projection pixel points to obtain a mapping matrix; and determining the current eyepoint information corresponding to the target projection channel in the virtual simulation scene by using the mapping matrix, the current eyepoint position of the observer in the actual observation scene, and the current observation direction.
Here, first, an XYZ three-dimensional coordinate system, that is, a world coordinate system, is established for a field including an actual projection screen and a movable area where an observer is located, and three-dimensional coordinates of an eyepoint of the observer in the world coordinate system are determined in real time using a motion capture technique. Where an eye point may refer to a point near the eye of an observer (user), motion capture techniques include, but are not limited to: optical motion capture technology, inertial motion capture technology, image recognition motion capture technology.
The optical motion capture technology is that infrared reflective marker balls or marker points are adhered to the head or body of a user, a motion capture camera is used for obtaining a picture of the user, two-dimensional coordinates of all the marker points in the picture under a pixel coordinate system are calculated, software is used for collecting the two-dimensional coordinates of the marker points calculated by all the motion capture cameras at the same moment, and then three-dimensional coordinates of the marker points at the moment under a world coordinate system are calculated. The three-dimensional coordinates of the eyepoint can be calculated according to the three-dimensional coordinates of the mark point and the relative positions of the mark point and the eyepoint. Optical motion capture techniques are well known in the art and will not be described in detail herein.
The inertial motion capture technology is that an inertial sensor is bound on the body of a user, software is used for acquiring the data of the inertial sensor and calculating the three-dimensional coordinates of the inertial sensor, and the eyepoint position information in a world coordinate system can be calculated according to the three-dimensional coordinates. Inertial motion capture techniques are well known in the art and will not be described in detail herein.
The image recognition motion capture technology is to use a monocular or binocular camera to obtain the picture of a user, recognize the joint and skeleton information of the user through an algorithm, particularly the information of characteristic points of two eyes, two ears and the like of the head, and further calculate the eyepoint position under a world coordinate system. Image recognition motion capture techniques are also known in the art and will not be described in detail herein.
Specifically, at least 4 actual projection pixel points are selected on the actual projection screen, and their actual coordinates in the world coordinate system and virtual coordinates in the virtual simulation system are obtained. An equation set is established using the actual coordinates and the virtual coordinates of these actual projection pixel points, and the specific values of a rotation matrix and a translation matrix are obtained by solving it. The rotation matrix and the translation matrix form the mapping matrix.
The mapping matrix may be represented by the following formula:

$$\begin{bmatrix} X \\ Y \\ Z \\ 1 \end{bmatrix} = \begin{bmatrix} R & t \\ 0^{T} & 1 \end{bmatrix} \begin{bmatrix} x_w \\ y_w \\ z_w \\ 1 \end{bmatrix}$$

In the above formula, R represents the rotation matrix, t represents the translation matrix, and 0^T represents a three-dimensional row vector with values of 0; X, Y, and Z represent the X-axis, Y-axis, and Z-axis coordinates in the virtual simulation scene, and x_w, y_w, and z_w represent the X-axis, Y-axis, and Z-axis coordinates in the world coordinate system.
The obtained mapping matrix can be used for mapping the current eye point position and the current observation direction of the observer in the actual observation scene to the current eye point information in the virtual simulation scene.
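As a hedged illustration of this step, the sketch below recovers R and t from point correspondences with an SVD-based least-squares fit. The patent only states that an equation set is established and solved, so this concrete solving method (a Kabsch-style fit) and the function names are assumptions:

```python
# A minimal sketch, assuming n >= 4 paired points (world -> virtual scene).
import numpy as np

def solve_mapping(world_pts, virtual_pts):
    P = np.asarray(world_pts, dtype=float)    # (n, 3) world coordinates
    Q = np.asarray(virtual_pts, dtype=float)  # (n, 3) virtual coordinates
    cp, cq = P.mean(axis=0), Q.mean(axis=0)
    H = (P - cp).T @ (Q - cq)                 # cross-covariance matrix
    U, _, Vt = np.linalg.svd(H)
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])
    R = Vt.T @ D @ U.T                        # proper rotation matrix
    t = cq - R @ cp                           # translation
    M = np.eye(4)                             # 4x4 homogeneous mapping matrix
    M[:3, :3], M[:3, 3] = R, t
    return M

# Usage: map the observer's eyepoint from the world frame into the scene.
# M = solve_mapping(world_pts, virtual_pts)
# eye_virtual = (M @ np.append(eye_world, 1.0))[:3]
```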
Step S102, in a virtual simulation scene, rotating a virtual plane corresponding to a target projection channel from an initial position to a target position corresponding to current eyepoint information, and determining the rotated coordinates of a plurality of target vertexes on the virtual plane.
In this step, the virtual simulation scene may refer to a simulation scene corresponding to the actual observation scene, and the virtual simulation scene is used to perform simulation on the observer position, the actual projection screen information, and the actual projection channel in the actual observation scene.
The actual projection screen information includes: an actual projection screen size, an actual projection screen type, and an actual projection screen position.
Exemplary, actual projection screen types include, but are not limited to: spherical screen, cylindrical screen, planar screen.
The actual projection channels comprise a plurality of actual projection channels, and the plurality of actual projection channels project the actual projection screen together to obtain a fused projection image.
Each actual projection channel includes: the input source image is sent to the projector through the input source, and the projector projects the input source image onto an actual projection screen to obtain a projection image.
The projection angles and projector positions of the multiple actual projection channels may be different, resulting in different projection paths.
The virtual plane may refer to a rectangular plane formed at a set distance after the current eye point position is projected in the direction indicated by the preset physical view angle parameter in the virtual simulation scene.
The initial position may refer to a position of the virtual plane when the observer has an eyepoint declination angle of 0 at the current eyepoint position.
The initial position of the virtual plane corresponds to the current eyepoint position, and when the virtual plane is at the initial position, the virtual plane is perpendicular to the Z axis.
In the embodiment of the present application, if the actual projection screen is a spherical screen, a virtual spherical screen is established in the virtual simulation scene; the spherical radius of the virtual spherical screen is r, and the sphere center O is located at the origin of the spatial rectangular coordinate system, with coordinates (0, 0, 0). If the actual projection screen is a cylindrical screen, a virtual cylindrical screen is established in the virtual simulation scene; its straight generatrix satisfies x = 0, z = r, and the directrix of the cylinder is x² + z² = r². If the actual projection screen is a plane screen, a virtual plane screen is established in the virtual simulation scene, and the plane equation of the virtual plane screen is z = r.
In an optional embodiment, the current eyepoint information includes a current eyepoint position, a current observation direction, and a physical visual angle parameter, where the physical visual angle parameter is the projection angle of the input source image; and rotating the virtual plane corresponding to the target projection channel from the initial position to the target position corresponding to the current eyepoint information, and determining the rotated coordinates of the plurality of target vertexes on the virtual plane, includes: determining initial coordinates of the plurality of target vertexes when the virtual plane is located at the initial position based on the physical visual angle parameter and the current eyepoint position, where the initial position is a position corresponding to the projection direction of the target projection channel; determining the rotation angle of the virtual plane according to the offset angle between the current observation direction and the projection direction of the target projection channel; rotating the virtual plane according to the rotation angle to obtain the rotated virtual plane; and determining the rotated coordinates of the plurality of target vertexes on the virtual plane based on the initial coordinates and the rotation angle of the plurality of target vertexes.
Here, the physical view parameter may refer to a range parameter indicating the input source image, and the physical view parameter may represent a screen size of the input source image.
The initial coordinates of the plurality of target vertices may refer to coordinates of the plurality of target vertices when the virtual plane is located at an initial position corresponding to the current eyepoint position.
Specifically, since the initial position of the virtual plane and the current eyepoint position correspond to each other, when the current eyepoint position changes, the initial position of the virtual plane also changes accordingly. Since the initial position of the virtual plane is the position when the eyepoint declination angle is 0, the Z-axis coordinate of the virtual plane is equal to the sum of the Z-axis coordinate of the current eyepoint position and the set distance, and the size of the virtual plane is determined by the physical visual angle parameter.
In order to determine the coordinates of each pixel point on the virtual plane, the initial coordinates of a plurality of target vertexes on the virtual plane can be determined, and then the coordinates of each pixel point in the rotated virtual plane can be calculated according to the initial coordinates and the rotation angle of the plurality of target vertexes.
In an optional embodiment, the plurality of target vertexes includes a first target vertex, a second target vertex, and a third target vertex, and the physical visual angle parameter includes a first horizontal included angle, a second horizontal included angle, a first vertical included angle, and a second vertical included angle; and determining the initial coordinates of the plurality of target vertexes when the virtual plane is at the initial position based on the physical visual angle parameter and the current eyepoint position includes: taking the sum of the X coordinate of the current eyepoint position and a first horizontal distance, the sum of the Y coordinate of the current eyepoint position and a first vertical distance, and the sum of the Z coordinate of the current eyepoint position and a set distance respectively as the X-axis initial coordinate, the Y-axis initial coordinate, and the Z-axis initial coordinate of the first target vertex, where the first horizontal distance is the product of the tangent of the first horizontal included angle and the set distance, and the first vertical distance is the product of the tangent of the first vertical included angle and the set distance; taking the sum of the X coordinate of the current eyepoint position and the first horizontal distance, the sum of the Y coordinate of the current eyepoint position and a second vertical distance, and the sum of the Z coordinate of the current eyepoint position and the set distance respectively as the X-axis initial coordinate, the Y-axis initial coordinate, and the Z-axis initial coordinate of the second target vertex, where the second vertical distance is the product of the tangent of the second vertical included angle and the set distance; and taking the sum of the X coordinate of the current eyepoint position and a second horizontal distance, the sum of the Y coordinate of the current eyepoint position and the first vertical distance, and the sum of the Z coordinate of the current eyepoint position and the set distance respectively as the X-axis initial coordinate, the Y-axis initial coordinate, and the Z-axis initial coordinate of the third target vertex, where the second horizontal distance is the product of the tangent of the second horizontal included angle and the set distance.
The virtual plane is described below with reference to fig. 2.
Fig. 2 is a schematic diagram illustrating a virtual plane located at an initial position according to an embodiment of the present application.
As shown in fig. 2, the virtual projection screen is a virtual dome screen, and the coordinates of the current eyepoint position corresponding to the target projection channel are P0(x0, y0, z0). A perpendicular is drawn from the current eyepoint position to the virtual plane, with foot point S. Three target vertexes lie on the virtual plane, namely a first target vertex a, a second target vertex b, and a third target vertex c. Points A and B are two points on the straight line that is parallel to the X axis and passes through point S, and points C and D are two points on the straight line that is parallel to the Y axis and passes through point S. The physical visual angle parameter comprises a first horizontal included angle ∠AP0S, denoted α1; a second horizontal included angle ∠BP0S, denoted α2; a first vertical included angle ∠CP0S, denoted β1; and a second vertical included angle ∠DP0S, denoted β2. The set distance is the distance P0S between the current eyepoint position and the virtual plane; for convenience of calculation, let P0S = 1.
Through the right-triangle angle calculation formula, the following can be obtained:

The coordinates of point a are denoted (xa, ya, za), where xa = x0 + tanα1, ya = y0 + tanβ1, za = z0 + 1.

The coordinates of point b are denoted (xb, yb, zb), where xb = x0 + tanα2, yb = y0 + tanβ1, zb = z0 + 1.

The coordinates of point c are denoted (xc, yc, zc), where xc = x0 + tanα1, yc = y0 + tanβ2, zc = z0 + 1.
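A minimal sketch of these initial-vertex formulas follows the fig. 2 expressions above, assuming the included angles are signed and given in radians, and the set distance P0S = 1:

```python
# A sketch of computing the initial coordinates of vertices a, b, c.
import math

def initial_vertices(eye, alpha1, alpha2, beta1, beta2, dist=1.0):
    x0, y0, z0 = eye  # current eyepoint position P0
    # Angles are assumed signed so that alpha1/alpha2 (and beta1/beta2)
    # fall on opposite sides of the foot point S.
    a = (x0 + math.tan(alpha1) * dist, y0 + math.tan(beta1) * dist, z0 + dist)
    b = (x0 + math.tan(alpha2) * dist, y0 + math.tan(beta1) * dist, z0 + dist)
    c = (x0 + math.tan(alpha1) * dist, y0 + math.tan(beta2) * dist, z0 + dist)
    return a, b, c
```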
It should be noted that, for different target projection channels, the current eyepoint position in the virtual simulation scene is the same, but because the projection directions of the different target projection channels are different, the included angles between the different target projection channels and the current observation direction are also different.
In an alternative embodiment, determining the rotated coordinates of the plurality of target vertices on the virtual plane based on the initial coordinates and the rotation angles of the plurality of target vertices includes: sequentially determining an X-axis rotation matrix, a Y-axis rotation matrix and a Z-axis rotation matrix according to a set rotation sequence; taking the product of the X-axis rotation matrix, the Y-axis rotation matrix and the Z-axis rotation matrix as a target rotation matrix; and respectively taking the product of the target rotation matrix and the initial coordinate of the first target vertex, the product of the target rotation matrix and the initial coordinate of the second target vertex, and the product of the target rotation matrix and the initial coordinate of the third target vertex as the rotated coordinate of the first target vertex, the rotated coordinate of the second target vertex and the rotated coordinate of the third target vertex.
Specifically, the virtual plane has the same pixel points as the actual projection screen: the number of pixel points in the horizontal direction of the virtual plane is w, and the number in the vertical direction is h. The virtual plane can rotate around the x axis, the y axis, and the z axis in the virtual simulation scene; the rotation angles are denoted α, β, and γ in turn, and the virtual plane must rotate around the x axis, the y axis, and the z axis in that order. According to the mathematical calculation formulas, the rotation matrices of the three axes can be constructed: RX(α), RY(β), RZ(γ).
Wherein the rotation matrices of the three axes are respectively:

$$R_X(\alpha)=\begin{bmatrix}1&0&0\\0&\cos\alpha&-\sin\alpha\\0&\sin\alpha&\cos\alpha\end{bmatrix},\qquad R_Y(\beta)=\begin{bmatrix}\cos\beta&0&\sin\beta\\0&1&0\\-\sin\beta&0&\cos\beta\end{bmatrix},\qquad R_Z(\gamma)=\begin{bmatrix}\cos\gamma&-\sin\gamma&0\\\sin\gamma&\cos\gamma&0\\0&0&1\end{bmatrix}$$

After the virtual plane is rotated, the rotated coordinates of the first target vertex a are denoted (x1, y1, z1), the rotated coordinates of the second target vertex b are denoted (x2, y2, z2), and the rotated coordinates of the third target vertex c are denoted (x3, y3, z3). Their specific values can be calculated from the rotation matrices by the following formulas:

$$[x_1\ y_1\ z_1]^T = R_Z(\gamma)\times R_Y(\beta)\times R_X(\alpha)\times[x_a\ y_a\ z_a]^T$$
$$[x_2\ y_2\ z_2]^T = R_Z(\gamma)\times R_Y(\beta)\times R_X(\alpha)\times[x_b\ y_b\ z_b]^T$$
$$[x_3\ y_3\ z_3]^T = R_Z(\gamma)\times R_Y(\beta)\times R_X(\alpha)\times[x_c\ y_c\ z_c]^T$$

In the above formulas, RZ(γ) × RY(β) × RX(α) is the target rotation matrix.
Step S103, determining the coordinates of each second pixel point in the second lattice based on the rotated coordinates of the plurality of target vertexes.
In this step, the second pixel points are the intersection points of a plurality of connecting lines with the virtual plane; there are a plurality of second pixel points, and each second pixel point is the intersection point of one connecting line with the virtual plane.
The plurality of connecting lines are connecting lines between each pixel point projected by the input source image on the virtual projection screen and the current eyepoint position.
The second lattice is a matrix formed by a plurality of second pixel points, namely a matrix formed by a plurality of intersection points of a plurality of connecting lines and a virtual plane.
The second lattice is the lattice on the rotated virtual plane, and it is obtained by mapping the fourth lattice.
In an alternative embodiment, determining the coordinates of each second pixel point in the second lattice based on the rotated coordinates of the plurality of target vertices includes: acquiring an effective field angle range, wherein the effective field angle range is an observation range corresponding to the actual projection screen; acquiring coordinates of fourth pixel points in a fourth dot matrix on the virtual simulation screen according to the number of pixel points of the actual projection screen in the horizontal direction and the vertical direction, the pixel intervals in the horizontal direction and the vertical direction and the effective field angle range; aiming at each fourth pixel point in the fourth dot matrix, connecting the fourth pixel point with the current eye point position to obtain a corresponding connecting line of the fourth pixel point; and determining the coordinates of the intersection point of each connecting line and the virtual plane to obtain the coordinates of each second pixel point in the second dot matrix.
Here, the effective field angle range may refer to the field angle range that can finally be seen on the actual projection screen, and it is related to the extent of the actual projection screen; for example, assuming the up-down angle range of the dome screen is ±30°, the effective field angle in the vertical direction is 60°. It should be noted that the dome screen in the actual observation scene is not a full 360° dome but a part of one.
The effective field angle range includes an effective horizontal field angle range, represented by [sta_h, end_h], and an effective vertical field angle range, represented by [sta_v, end_v].
The fourth lattice may refer to a lattice corresponding to the input source image, that is, a lattice on the original video image, and also a lattice on the curved surface video.
The second lattice will now be described with reference to fig. 3.
Fig. 3 is a schematic diagram illustrating a virtual plane located at a target position according to an embodiment of the present application.
As shown in fig. 3, the first target vertex a, the second target vertex b, and the third target vertex c are at new positions on the rotated virtual plane; the rotated coordinates of the first target vertex a are (x1, y1, z1), those of the second target vertex b are (x2, y2, z2), and those of the third target vertex c are (x3, y3, z3). A point U1 on the virtual simulation screen is a fourth pixel point in the fourth dot matrix. Connecting point U1 with the current eyepoint position P0 gives the corresponding connecting line P0U1; the line P0U1 passes through the virtual plane and intersects it at a point T1, and point T1 is the second pixel point corresponding to point U1 in the second lattice. In this way, the second pixel point corresponding to each fourth pixel point in the fourth dot matrix can be determined, and all the second pixel points form the second dot matrix.
Specifically, an equally spaced M × N dot matrix can be calculated on the virtual simulation screen according to the effective field angle range, and is referred to as the fourth dot matrix. On the spherical screen, the fourth pixel point U in the j-th row and i-th column of the fourth dot matrix is represented by a longitude θ(j, i) and a latitude φ(j, i); on the cylindrical screen, by a longitude θ(j, i) and a vertical-direction coordinate φ(j, i); on the flat screen, by a horizontal coordinate θ(j, i) and a vertical coordinate φ(j, i). On the different types of virtual simulation screens, the coordinate meanings of the fourth pixel point U(θ(j, i), φ(j, i)) are different, but the calculation formulas are the same.
The coordinate calculation formulas of the fourth pixel point are:

θ(j, i) = sta_h + (end_h − sta_h) × (i − 1) × horin / w;
φ(j, i) = sta_v + (end_v − sta_v) × (j − 1) × verin / h.

In the above formulas, w represents the number of pixel points of the actual projection screen in the horizontal direction, h represents the number of pixel points of the actual projection screen in the vertical direction, horin represents the pixel interval of the actual projection screen in the horizontal direction, and verin represents the pixel interval of the actual projection screen in the vertical direction.
After the coordinates of each fourth pixel point in the fourth dot matrix are determined, each fourth pixel point is connected with the current eyepoint position P0. The resulting connecting line intersects the rotated virtual plane, the intersection point being the second pixel point corresponding to that fourth pixel point; the coordinates of the second pixel point at this stage are three-dimensional. The three-dimensional coordinates of the second pixel point can be calculated by the following formulas:

z(j, i) = −(u × xB − v × yB + t) / (u × xA − v × yA + w);
x(j, i) = xA × z(j, i) + xB;
y(j, i) = yA × z(j, i) + yB.

In the above formulas:

u = (y2 − y1) × (z3 − z1) − (y3 − y1) × (z2 − z1);
v = (x2 − x1) × (z3 − z1) − (x3 − x1) × (z2 − z1);
w = (x2 − x1) × (y3 − y1) − (x3 − x1) × (y2 − y1);
t = −x1 × u + y1 × v − z1 × w;

and xA, xB, yA, yB are the coefficients of the connecting line expressed as x = xA × z + xB and y = yA × z + yB, determined by the current eyepoint position P0 and the three-dimensional position of the fourth pixel point on the virtual simulation screen.
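A sketch of this intersection step follows. The u, v, w, t expressions come from the formulas above, while the xA, xB, yA, yB line coefficients are derived from the assumed parametric form x = xA·z + xB, y = yA·z + yB of the line P0U (the publication renders those formulas as images):

```python
# A sketch: intersect the line from eyepoint p0 through the fourth pixel
# point pu (its 3-D position on the virtual screen) with the rotated plane
# through vertices v1(x1,y1,z1), v2(x2,y2,z2), v3(x3,y3,z3).
import numpy as np

def second_point(p0, pu, v1, v2, v3):
    (x1, y1, z1), (x2, y2, z2), (x3, y3, z3) = v1, v2, v3
    u = (y2 - y1) * (z3 - z1) - (y3 - y1) * (z2 - z1)
    v = (x2 - x1) * (z3 - z1) - (x3 - x1) * (z2 - z1)
    w = (x2 - x1) * (y3 - y1) - (x3 - x1) * (y2 - y1)
    t = -x1 * u + y1 * v - z1 * w
    xA = (pu[0] - p0[0]) / (pu[2] - p0[2])  # assumed line-coefficient form
    xB = p0[0] - xA * p0[2]
    yA = (pu[1] - p0[1]) / (pu[2] - p0[2])
    yB = p0[1] - yA * p0[2]
    z = -(u * xB - v * yB + t) / (u * xA - v * yA + w)
    return np.array([xA * z + xB, yA * z + yB, z])
```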
After the three-dimensional coordinates of the second pixel points are determined, each second pixel point is mapped to a two-dimensional plane; that is, the second dot matrix is mapped to the two-dimensional plane corresponding to the virtual plane to obtain the two-dimensional coordinates of the second pixel points. The two-dimensional coordinates of the second pixel point in the j-th row and i-th column of the second dot matrix are denoted (xpoint(j, i), ypoint(j, i)) and can be calculated by the following formulas, which project the vector from the first target vertex a to the second pixel point onto the two edge directions of the rotated virtual plane:

xpoint(j, i) = ((X − x1) × (x2 − x1) + (Y − y1) × (y2 − y1) + (Z − z1) × (z2 − z1)) / √((x2 − x1)² + (y2 − y1)² + (z2 − z1)²);
ypoint(j, i) = ((X − x1) × (x3 − x1) + (Y − y1) × (y3 − y1) + (Z − z1) × (z3 − z1)) / √((x3 − x1)² + (y3 − y1)² + (z3 − z1)²).

In the above formulas, X, Y, and Z respectively represent the X-axis, Y-axis, and Z-axis coordinates of the second pixel point in the second lattice.
Step S104, aiming at each original pixel point in the original dot matrix, determining a third pixel point corresponding to the original pixel point in the rotated virtual plane based on the position relation between a plurality of target first pixel points corresponding to the original pixel point and the original pixel point, and forming a third dot matrix by a plurality of third pixel points.
In this step, the position relationship between the plurality of target first pixel points corresponding to the original pixel point and the original pixel point may refer to a horizontal proportional relationship in a horizontal direction and a vertical proportional relationship in a vertical direction.
The plurality of target first pixel points are pixel points in the first dot matrix.
The target first pixel is a pixel selected from a plurality of first pixels.
In an optional embodiment, determining, based on a positional relationship between a plurality of target first pixel points corresponding to the original pixel point and the original pixel point, a third pixel point corresponding to the original pixel point in the rotated virtual plane includes: selecting four first pixel points adjacent to the original pixel point from the first dot matrix as a plurality of target first pixel points; determining a horizontal proportional relation of the original pixel point and a plurality of target first pixel points in the horizontal direction and a vertical proportional relation of the original pixel point and the plurality of target first pixel points in the vertical direction; selecting a plurality of target second pixel points corresponding to the plurality of target first pixel points from the second dot matrix; and applying the horizontal proportional relation and the vertical proportional relation to a plurality of target second pixel points, and determining a third pixel point corresponding to the original pixel point in the rotated virtual plane.
Here, the M × N lattice interpolated uniformly on the actual projection screen is the original lattice, and the pixel coordinates of the j-th row and i-th column of the original lattice are EP(ex(j, i), ey(j, i)), where ex(j, i) = (i − 1) × horin and ey(j, i) = (j − 1) × verin.
Specifically, for each original pixel point in the original dot matrix, 4 first pixel points adjacent to the original pixel point are searched for in the first dot matrix as the target first pixel points, so that the original pixel point is located in the minimum grid cell surrounded by the 4 target first pixel points. The coordinates of the selected 4 target first pixel points are respectively denoted op1(ox1, oy1), op2(ox2, oy2), op3(ox3, oy3), op4(ox4, oy4). The 4 target first pixel points may be the 2 first pixel points that have the same row number as the original pixel point but column numbers adjacent to the left and right of the original pixel point's column number, and the 2 first pixel points that have the same column number as the original pixel point but row numbers adjacent above and below the original pixel point's row number.
For each target first pixel point, the row number and column number of the target first pixel point are determined, and the target second pixel point with the corresponding row number and column number is looked up in the second dot matrix, thereby determining 4 target second pixel points, whose coordinates are respectively denoted ip1(ix1, iy1), ip2(ix2, iy2), ip3(ix3, iy3), ip4(ix4, iy4).

The horizontal proportional relationship U and the vertical proportional relationship V of the point EP(ex(j, i), ey(j, i)) within the 4 adjacent target first pixel points op1(ox1, oy1), op2(ox2, oy2), op3(ox3, oy3), op4(ox4, oy4) are then calculated.
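A brute-force sketch of locating the enclosing cell; the patent does not spell out the search strategy, so the point-in-quad test below and the function names are assumptions:

```python
# A sketch: find the smallest first-lattice cell op1..op4 enclosing EP.
def _in_quad(pt, quad):
    # Ray-casting point-in-polygon test over the cell's four corners.
    x, y = pt
    inside = False
    for k in range(4):
        (x1, y1), (x2, y2) = quad[k], quad[(k + 1) % 4]
        if (y1 > y) != (y2 > y) and x < x1 + (y - y1) * (x2 - x1) / (y2 - y1):
            inside = not inside
    return inside

def find_cell(ep, first_lattice):
    # first_lattice is an N x M list of (x, y) first pixel points.
    n, m = len(first_lattice), len(first_lattice[0])
    for j in range(n - 1):
        for i in range(m - 1):
            op1 = first_lattice[j][i]        # top-left
            op2 = first_lattice[j][i + 1]    # top-right
            op3 = first_lattice[j + 1][i]    # bottom-left
            op4 = first_lattice[j + 1][i + 1]
            if _in_quad(ep, (op1, op2, op4, op3)):
                return (j, i), (op1, op2, op3, op4)
    return None
```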
The calculation method of the horizontal proportional relationship and the vertical proportional relationship will be described below with reference to fig. 4.
Fig. 4 shows a schematic diagram of the horizontal and vertical scaling relationships provided by an embodiment of the present application.
As shown in fig. 4, a point op 5 The coordinates of (d) are recorded as: op(s) 5 (ox 5 ,oy 5 ) Point op 6 The coordinates of (c) are recorded as: op 6 (ox 6 ,oy 6 ) Set point op 5 To point op 6 Length between is 1, point op 3 To point op 4 The length between the vertical and horizontal scales is 1, the horizontal proportional relationship and the vertical proportional relationship can be obtained by the following formula:
Figure BDA0003789743130000201
Figure BDA0003789743130000211
In the above formulas:

a = xq × yt - yq × xt, b = yp × xt - yq × xs - xp × yt + xq × ys, c = yp × xs - xp × ys;

oy5 = (oy2 - oy1) × U + oy1, oy6 = (oy4 - oy3) × U + oy3;

yp = ey(j, i) - oy1, yq = oy2 - oy1, ys = oy3 - oy1, yt = oy4 - oy3 - oy2 + oy1;

xp = ex(j, i) - ox1, xq = ox2 - ox1, xs = ox3 - ox1, xt = ox4 - ox3 - ox2 + ox1.
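The two formulas above amount to solving the quadratic a × U² + b × U + c = 0 for U and then reading V off the segment from op5 to op6. A minimal sketch under that reading follows (the handling of the degenerate case a = 0 and the choice of the root in [0, 1] are assumptions, since the original formula images are not legible):

```python
import math

def inverse_bilinear(ep, op1, op2, op3, op4):
    # ep and op1..op4 are (x, y) tuples; returns (U, V) for EP inside
    # the quad, using the coefficients a, b, c defined in the text.
    xp, yp = ep[0] - op1[0], ep[1] - op1[1]
    xq, yq = op2[0] - op1[0], op2[1] - op1[1]
    xs, ys = op3[0] - op1[0], op3[1] - op1[1]
    xt = op4[0] - op3[0] - op2[0] + op1[0]
    yt = op4[1] - op3[1] - op2[1] + op1[1]

    a = xq * yt - yq * xt
    b = yp * xt - yq * xs - xp * yt + xq * ys
    c = yp * xs - xp * ys

    if abs(a) < 1e-12:            # opposite edges parallel: linear case
        U = -c / b
    else:
        disc = math.sqrt(b * b - 4.0 * a * c)
        U = (-b + disc) / (2.0 * a)
        if not 0.0 <= U <= 1.0:   # pick the root that lies inside the cell
            U = (-b - disc) / (2.0 * a)

    oy5 = op1[1] + (op2[1] - op1[1]) * U
    oy6 = op3[1] + (op4[1] - op3[1]) * U
    V = (ep[1] - oy5) / (oy6 - oy5)
    return U, V
```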
Then, the horizontal proportional relation and the vertical proportional relation are applied to the selected 4 target second pixel points to obtain the third pixel point corresponding to the original pixel point, which is recorded as CP(cx(j, i), cy(j, i)).
The calculation process of the third pixel point will be described with reference to fig. 5.
Fig. 5 is a schematic diagram illustrating a position of a third pixel provided in the embodiment of the present application.
As shown in FIG. 5, the coordinates of point ip5 are recorded as ip5(ix5, iy5), and the coordinates of point ip6 are recorded as ip6(ix6, iy6). The coordinates of the third pixel point CP(cx(j, i), cy(j, i)) can be obtained by the following formulas:

cx(j, i) = ix5 + (ix6 - ix5) × V;

cy(j, i) = iy5 + (iy6 - iy5) × V.
in the above formula, ix 5 =ix t +(ix 2 -ix 1 )×U,iy 5 =iy 1 +(iy 2 -iy 1 )×U,ix 6 =ix 3 +(ix 4 -ix 3 )×U,iy 6 =iy 3 +(iy 4 -iy 3 )×U。
And S105, carrying out deformation processing on the input source image by using the third dot matrix to obtain a projection image which is projected on the actual projection screen and corresponds to the first dot matrix.
In this step, the third dot matrix is used as the deformation parameter; the input source image is subjected to deformation processing with this parameter, and the projection image obtained after the deformation processing is projected onto the actual projection screen.
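The concrete warping method is deferred to the prior-art patent cited below; purely for orientation, one conventional way to realize such a deformation, which is an assumption here rather than the disclosed method, is to upsample the third dot matrix to a per-pixel coordinate map and resample the input source image with it, e.g. via OpenCV:

```python
import cv2
import numpy as np

def warp_with_map(src, map_x, map_y):
    # map_x/map_y hold, for every output pixel, the source image
    # coordinates to sample from (derived from the third dot matrix).
    return cv2.remap(src, map_x.astype(np.float32),
                     map_y.astype(np.float32),
                     interpolation=cv2.INTER_LINEAR)
```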
It should be noted that there are a plurality of projection channels in the actual observation scene, each projection channel needs its own deformation processing, and the plurality of deformed projection images are fused together to obtain the final projection picture.
In a specific implementation, new input source images are rendered in real time by the underlying software based on the current eyepoint information and the orientation data of each projection channel. Meanwhile, the virtual simulation fusion system calculates the third dot matrix and uses it to deform the input source image into the projection image finally projected onto the actual projection screen, thereby realizing real-time deformation of the new input source image. Because the third dot matrix is calculated from the first dot matrix and can reflect its shape, the projection image finally projected onto the actual projection screen corresponds to the shape of the first dot matrix. Since deforming the input source image with the third dot matrix belongs to the prior art and is consistent with common fusion rules, reference may be made to the patent with application number CN202111643036, and details are not repeated here. It can be understood that the image processing method of the present application is applied to a virtual simulation fusion system.
Compared with image processing methods in the prior art, the present application first determines the projection effect expected of the deformed input source image. When the current eyepoint information of the observer changes dynamically, the virtual plane can be adjusted to the target position corresponding to the current eyepoint information, and the deformation parameter of the target projection channel after the change, namely the third dot matrix, is then determined. After deformation processing with the third dot matrix, the input source image can be projected directly onto the actual projection screen to obtain a projection image corresponding to the shape of the first dot matrix, without repeated geometric correction of the input source image. This solves the problems that correction and debugging of the projected image are inefficient and cannot adapt to application scenarios in which the eyepoint position changes dynamically in real time.
Based on the same inventive concept, an image processing apparatus corresponding to the image processing method is also provided in the embodiments of the present application, and since the principle of solving the problem of the apparatus in the embodiments of the present application is similar to the image processing method described above in the embodiments of the present application, reference may be made to the implementation of the apparatus for the method, and repeated details are not described herein.
Referring to fig. 6, fig. 6 is a schematic structural diagram of an image processing apparatus according to an embodiment of the present disclosure. As shown in fig. 6, the image processing apparatus 200 includes:
a first dot matrix obtaining module 201, configured to obtain a first dot matrix corresponding to the target projection channel, where the first dot matrix is a known dot matrix and is used to describe a projection effect that is expected to be achieved by an input source image projected by the target projection channel;
a vertex coordinate determining module 202, configured to rotate, in a virtual simulation scene, a virtual plane corresponding to the target projection channel from an initial position to a target position corresponding to the current eyepoint information, and determine rotated coordinates of multiple target vertices on the virtual plane;
the second dot matrix determining module 203 is configured to determine coordinates of each second pixel point in a second dot matrix based on the rotated coordinates of the multiple target vertices, where the second dot matrix is a matrix formed by multiple intersection points where multiple connecting lines are intersected with the virtual plane, and the multiple connecting lines are connecting lines between each pixel point projected by the input source image on the virtual projection screen and the current eye point position;
a third pixel array determining module 204, configured to determine, for each original pixel point in the original dot matrix, a third pixel point corresponding to the original pixel point in the rotated virtual plane based on a positional relationship between a plurality of target first pixel points corresponding to the original pixel point and the original pixel point, where the plurality of third pixel points form the third dot matrix, and the plurality of target first pixel points are pixel points in the first dot matrix;
and the deformation processing module 205 is configured to perform deformation processing on the input source image by using the third dot matrix to obtain a projection image corresponding to the first dot matrix and projected on the actual projection screen.
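As an illustrative aside (not part of the disclosed apparatus), the computation performed by the second dot matrix determining module 203 is an ordinary line-plane intersection; the sketch below assumes the rotated virtual plane is described by a point on it and its normal vector:

```python
import numpy as np

def ray_plane_intersection(eye, pixel, plane_point, plane_normal):
    # Intersect the connecting line from the current eye point through
    # a pixel point on the virtual projection screen with the rotated
    # virtual plane; returns a second pixel point, or None if parallel.
    eye = np.asarray(eye, dtype=float)
    d = np.asarray(pixel, dtype=float) - eye        # line direction
    n = np.asarray(plane_normal, dtype=float)
    denom = n.dot(d)
    if abs(denom) < 1e-12:
        return None
    t = n.dot(np.asarray(plane_point, dtype=float) - eye) / denom
    return eye + t * d
```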
Referring to fig. 7, fig. 7 is a schematic structural diagram of an electronic device according to an embodiment of the present disclosure. As shown in fig. 7, the electronic device 300 includes a processor 310, a memory 320, and a bus 330.
The memory 320 stores machine-readable instructions executable by the processor 310. When the electronic device 300 runs, the processor 310 communicates with the memory 320 through the bus 330, and when the machine-readable instructions are executed by the processor 310, the steps of the image processing method in the method embodiment shown in fig. 1 may be executed.
An embodiment of the present application further provides a computer-readable storage medium, where a computer program is stored on the computer-readable storage medium, and when the computer program is executed by a processor, the steps of the image processing method in the method embodiment shown in fig. 1 may be executed.
It is clear to those skilled in the art that, for convenience and brevity of description, the specific working processes of the above-described systems, apparatuses and units may refer to the corresponding processes in the foregoing method embodiments, and are not described herein again.
In the several embodiments provided in the present application, it should be understood that the disclosed system, apparatus and method may be implemented in other ways. The above-described embodiments of the apparatus are merely illustrative, and for example, the division of the units is only one logical division, and there may be other divisions when actually implemented, and for example, a plurality of units or components may be combined or integrated into another system, or some features may be omitted, or not executed. In addition, the shown or discussed mutual coupling or direct coupling or communication connection may be an indirect coupling or communication connection of devices or units through some communication interfaces, and may be in an electrical, mechanical or other form.
The units described as separate parts may or may not be physically separate, and parts displayed as units may or may not be physical units, may be located in one place, or may be distributed on a plurality of network units. Some or all of the units can be selected according to actual needs to achieve the purpose of the solution of the embodiment.
In addition, functional units in the embodiments of the present application may be integrated into one processing unit, or each unit may exist alone physically, or two or more units are integrated into one unit.
The functions, if implemented in the form of software functional units and sold or used as a stand-alone product, may be stored in a non-volatile computer-readable storage medium executable by a processor. Based on such understanding, the technical solution of the present application or portions thereof that substantially contribute to the prior art may be embodied in the form of a software product stored in a storage medium and including instructions for causing a computer device (which may be a personal computer, a server, or a network device) to execute all or part of the steps of the method according to the embodiments of the present application. And the aforementioned storage medium includes: various media capable of storing program codes, such as a usb disk, a removable hard disk, a Read-Only Memory (ROM), a Random Access Memory (RAM), a magnetic disk, or an optical disk.
Finally, it should be noted that the above-mentioned embodiments are only specific embodiments of the present application, used to illustrate the technical solutions of the present application rather than to limit them, and the protection scope of the present application is not limited thereto. Although the present application is described in detail with reference to the foregoing embodiments, those skilled in the art should understand that any person skilled in the art may still modify the technical solutions described in the foregoing embodiments, easily conceive of changes, or make equivalent substitutions for some of their technical features within the technical scope disclosed in the present application; such modifications, changes or substitutions do not depart from the spirit and scope of the exemplary embodiments of the present application and are intended to be covered by the protection scope of the present application. Therefore, the protection scope of the present application shall be subject to the protection scope of the claims.

Claims (10)

1. An image processing method, characterized by comprising:
acquiring a first dot matrix corresponding to a target projection channel, wherein the first dot matrix is a known dot matrix and is used for describing a projection effect expected to be achieved by an input source image projected by the target projection channel;
in a virtual simulation scene, rotating a virtual plane corresponding to the target projection channel from an initial position to a target position corresponding to current eyepoint information, and determining the rotated coordinates of a plurality of target vertexes on the virtual plane;
determining the coordinates of each second pixel point in a second dot matrix based on the rotated coordinates of the target vertexes, wherein the second dot matrix is a matrix formed by a plurality of intersection points of a plurality of connecting lines and a virtual plane, and the connecting lines are connecting lines between the pixel points projected on the virtual projection screen by the input source image and the current eye point position;
aiming at each original pixel point in the original dot matrix, determining a third pixel point corresponding to the original pixel point in a rotated virtual plane based on the position relation between a plurality of target first pixel points corresponding to the original pixel point and the original pixel point, and forming the third dot matrix by a plurality of third pixel points, wherein the plurality of target first pixel points are pixel points in the first dot matrix;
and carrying out deformation processing on the input source image by utilizing the third dot matrix to obtain a projection image which is projected on the actual projection screen and corresponds to the first dot matrix.
2. The method of claim 1, wherein the current eyepoint information includes a current eyepoint position, a current viewing direction, and a physical viewing angle parameter, the physical viewing angle parameter being a projection angle of the input source image;
rotating the virtual plane corresponding to the target projection channel from the initial position to the target position corresponding to the current eyepoint information, and determining the rotated coordinates of a plurality of target vertexes on the virtual plane, including:
determining initial coordinates of a plurality of target vertexes when the virtual plane is located at an initial position based on the physical view angle parameters and the current eyepoint position, wherein the initial position is a position corresponding to the projection direction of the target projection channel;
determining the rotation angle of the virtual plane according to the offset angle between the current observation direction and the projection direction of the target projection channel;
rotating the virtual plane according to the rotation angle to obtain a rotated virtual plane;
and determining the rotated coordinates of the target vertexes on the virtual plane based on the initial coordinates of the target vertexes and the rotation angle.
3. The method of claim 2, wherein the plurality of target vertices comprises a first target vertex, a second target vertex, and a third target vertex, and the physical perspective parameters comprise a first horizontal angle, a second horizontal angle, a first vertical angle, and a second vertical angle;
determining initial coordinates of a plurality of target vertices when the virtual plane is located at the initial position based on the physical view angle parameters and the current eyepoint position, including:
respectively taking the sum of the X coordinate of the current eye point position and a first horizontal distance, the sum of the Y coordinate of the current eye point position and a first vertical distance, and the sum of the Z coordinate of the current eye point position and a set distance as the X-axis initial coordinate, the Y-axis initial coordinate and the Z-axis initial coordinate of the first target vertex, wherein the first horizontal distance is the product of the tangent value of the first horizontal included angle and the set distance, and the first vertical distance is the product of the tangent value of the first vertical included angle and the set distance;
respectively taking the sum of the X coordinate of the current eye point position and the first horizontal distance, the sum of the Y coordinate of the current eye point position and the second vertical distance, and the sum of the Z coordinate of the current eye point position and the set distance as the X-axis initial coordinate, the Y-axis initial coordinate and the Z-axis initial coordinate of the second target vertex, wherein the second vertical distance is the product of the tangent value of the second vertical included angle and the set distance;
and respectively taking the sum of the X coordinate of the current eye point position and a second horizontal distance, the sum of the Y coordinate of the current eye point position and the first vertical distance, and the sum of the Z coordinate of the current eye point position and the set distance as the X-axis initial coordinate, the Y-axis initial coordinate and the Z-axis initial coordinate of the third target vertex, wherein the second horizontal distance is the product of the tangent value of the second horizontal included angle and the set distance.
4. The method of claim 2, wherein determining the rotated coordinates of the plurality of target vertices on the virtual plane based on the initial coordinates of the plurality of target vertices and the rotation angle comprises:
sequentially determining an X-axis rotation matrix, a Y-axis rotation matrix and a Z-axis rotation matrix according to a set rotation sequence;
taking the product of the X-axis rotation matrix, the Y-axis rotation matrix and the Z-axis rotation matrix as a target rotation matrix;
and respectively taking the product of the target rotation matrix and the initial coordinate of the first target vertex, the product of the target rotation matrix and the initial coordinate of the second target vertex, and the product of the target rotation matrix and the initial coordinate of the third target vertex as the rotated coordinate of the first target vertex, the rotated coordinate of the second target vertex and the rotated coordinate of the third target vertex.
5. The method of claim 1, wherein determining the coordinates of each second pixel point in the second lattice based on the rotated coordinates of the plurality of target vertices comprises:
acquiring an effective field angle range, wherein the effective field angle range is an observation range corresponding to an actual projection screen;
acquiring coordinates of fourth pixel points in a fourth dot matrix on the virtual simulation screen according to the number of pixel points of the actual projection screen in the horizontal direction and the vertical direction, the pixel intervals in the horizontal direction and the vertical direction and the effective field angle range;
connecting the fourth pixel point with the current eye point position aiming at each fourth pixel point in the fourth dot matrix to obtain a corresponding connecting line of the fourth pixel point;
and determining the coordinates of the intersection point of each connecting line and the virtual plane to obtain the coordinates of each second pixel point in the second dot matrix.
6. The method according to claim 1, wherein determining a third pixel point corresponding to the original pixel point in the rotated virtual plane based on a position relationship between a plurality of target first pixel points corresponding to the original pixel point and the original pixel point comprises:
selecting four first pixel points adjacent to the original pixel point from the first dot matrix as a plurality of target first pixel points;
determining a horizontal proportional relation of the original pixel point and a plurality of target first pixel points in the horizontal direction and a vertical proportional relation of the original pixel point and the plurality of target first pixel points in the vertical direction;
selecting a plurality of target second pixel points corresponding to the plurality of target first pixel points from a second dot matrix;
and applying the horizontal proportional relation and the vertical proportional relation to the plurality of target second pixel points, and determining a third pixel point corresponding to the original pixel point in the rotated virtual plane.
7. The method of claim 1, further comprising:
selecting a plurality of actual projection pixel points on an actual projection screen;
determining actual coordinates of a plurality of actual projection pixel points in a world coordinate system and virtual coordinates in a virtual simulation scene respectively;
establishing an equation set based on the actual coordinates and the virtual coordinates of the plurality of actual projection pixel points to obtain a mapping matrix;
and determining the current eye point information corresponding to the target projection channel in the virtual simulation scene by using the mapping matrix, the current eye point position of the observer in the actual observation scene and the current observation direction.
8. An image processing apparatus characterized by comprising:
the first dot matrix acquisition module is used for acquiring a first dot matrix corresponding to the target projection channel, wherein the first dot matrix is a known dot matrix and is used for describing a projection effect expected to be achieved by an input source image projected by the target projection channel;
the vertex coordinate determining module is used for rotating a virtual plane corresponding to the target projection channel from an initial position to a target position corresponding to the current eyepoint information in a virtual simulation scene, and determining the rotated coordinates of a plurality of target vertexes on the virtual plane;
the second dot matrix determining module is used for determining the coordinates of each second pixel point in the second dot matrix based on the rotated coordinates of the multiple target vertexes, the second dot matrix is a matrix formed by multiple intersection points of multiple connecting lines and a virtual plane, and the multiple connecting lines are connecting lines between the pixel points projected on the virtual projection screen by the input source image and the current eye point position;
the third pixel array determining module is used for determining a third pixel point corresponding to the original pixel point in the rotated virtual plane based on the position relation between a plurality of target first pixel points corresponding to the original pixel point and the original pixel point aiming at each original pixel point in the original dot array, and the plurality of target first pixel points are pixel points in the first dot array;
and the deformation processing module is used for carrying out deformation processing on the input source image by utilizing the third dot matrix to obtain a projection image which is projected on the actual projection screen and corresponds to the first dot matrix.
9. An electronic device, comprising: a processor, a storage medium and a bus, the storage medium storing machine-readable instructions executable by the processor, the processor and the storage medium communicating via the bus when the electronic device is operating, the processor executing the machine-readable instructions to perform the steps of the image processing method according to any one of claims 1 to 7.
10. A computer-readable storage medium, characterized in that a computer program is stored thereon which, when being executed by a processor, carries out the steps of the image processing method according to any one of claims 1 to 7.
CN202210952430.8A 2022-08-09 2022-08-09 Image processing method and device, electronic equipment and storage medium Pending CN115311133A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210952430.8A CN115311133A (en) 2022-08-09 2022-08-09 Image processing method and device, electronic equipment and storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202210952430.8A CN115311133A (en) 2022-08-09 2022-08-09 Image processing method and device, electronic equipment and storage medium

Publications (1)

Publication Number Publication Date
CN115311133A true CN115311133A (en) 2022-11-08

Family

ID=83859965

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210952430.8A Pending CN115311133A (en) 2022-08-09 2022-08-09 Image processing method and device, electronic equipment and storage medium

Country Status (1)

Country Link
CN (1) CN115311133A (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116542847A (en) * 2023-07-05 2023-08-04 海豚乐智科技(成都)有限责任公司 Low-small slow target high-speed image simulation method, storage medium and device
CN116542847B (en) * 2023-07-05 2023-10-10 海豚乐智科技(成都)有限责任公司 Low-small slow target high-speed image simulation method, storage medium and device


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination