CN112132740B - Video image display method, device and system - Google Patents

Video image display method, device and system

Info

Publication number
CN112132740B
Authority
CN
China
Prior art keywords
dimensional
chart
video image
dimensional correction
display
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201910553681.7A
Other languages
Chinese (zh)
Other versions
CN112132740A
Inventor
林耀冬
张欣
辛安民
陈杰
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Hangzhou Hikvision Digital Technology Co Ltd
Original Assignee
Hangzhou Hikvision Digital Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Hangzhou Hikvision Digital Technology Co Ltd filed Critical Hangzhou Hikvision Digital Technology Co Ltd
Priority to CN201910553681.7A priority Critical patent/CN112132740B/en
Publication of CN112132740A publication Critical patent/CN112132740A/en
Application granted granted Critical
Publication of CN112132740B publication Critical patent/CN112132740B/en

Classifications

    • G06T3/08
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T3/00 Geometric image transformation in the plane of the image
    • G06T3/40 Scaling the whole image or part thereof
    • G06T3/4007 Interpolation-based scaling, e.g. bilinear interpolation

Abstract

The application provides a video image display method, device and system. The method comprises: determining a two-dimensional correction chart and a three-dimensional perspective chart corresponding to a video image acquired by a camera; and fusing the two-dimensional correction chart and the three-dimensional perspective chart by using an interpolation variable and displaying the fused image. Fusing and displaying the two charts with an interpolation variable not only displays all image information, but also realizes an animated transition from 3D to 2D or from 2D to 3D, bringing a stronger visual impact to the user.

Description

Video image display method, device and system
Technical Field
The present application relates to the field of image processing technologies, and in particular, to a method, an apparatus, and a system for displaying video images.
Background
A fisheye video is a video image acquired with an ultra-wide-angle lens; its viewing angle ranges from 220 to 230 degrees, and the picture distortion is strong. For two-dimensional (2D) display, the distortion of the video image must be corrected according to the camera imaging principle, and by adjusting the image area the entire picture can be shown on the screen. For three-dimensional (3D) display, the image coordinates of the video image must be mapped to the three-dimensional coordinates of a three-dimensional model, and the user views the video image by setting a three-dimensional viewing angle.
However, while two-dimensional display can show all image information, it lacks a three-dimensional effect; three-dimensional display has a three-dimensional effect but can show only part of the image information. Both display modes therefore have drawbacks.
Disclosure of Invention
In view of the above, the present application provides a method, apparatus and system for displaying video images, so as to solve the technical drawbacks of current display methods.
According to a first aspect of an embodiment of the present application, there is provided a video image display method including:
determining a two-dimensional correction chart and a three-dimensional perspective chart corresponding to a video image acquired by a camera;
and fusing the two-dimensional correction chart and the three-dimensional perspective chart by utilizing interpolation variables and displaying the fused image.
According to a second aspect of an embodiment of the present application, there is provided a video image display apparatus including:
the determining module is used for determining a two-dimensional correction chart and a three-dimensional perspective chart corresponding to the video image acquired by the camera;
and the display module is used for fusing the two-dimensional correction chart and the three-dimensional perspective chart by utilizing interpolation variables and displaying the fused image.
According to a third aspect of embodiments of the present application, there is provided a video image display system, the system comprising:
the camera is used for collecting video images and sending the video images to the electronic equipment;
the electronic equipment is used for determining a two-dimensional correction chart and a three-dimensional perspective chart corresponding to the video image; fusing the two-dimensional correction chart and the three-dimensional perspective chart by utilizing interpolation variables to obtain a fused image;
and the display is used for displaying the fused image.
By applying the embodiment of the application, the two-dimensional correction chart and the three-dimensional perspective chart corresponding to the video image acquired by the camera are determined, then the interpolation variable is utilized to fuse the two-dimensional correction chart and the three-dimensional perspective chart, and the fused image is displayed.
Based on the above description, fusing the two-dimensional correction chart and the three-dimensional perspective chart with interpolation variables and displaying the result not only displays all image information, but also realizes an animated transition from 3D to 2D or from 2D to 3D, bringing a stronger visual impact to the user.
Drawings
FIG. 1A is a flow chart of an embodiment of a video image display method according to an exemplary embodiment of the present application;
FIG. 1B is a raw fisheye image according to the embodiment of FIG. 1A;
FIG. 1C is a schematic diagram of an image coordinate system and a screen coordinate system according to the embodiment of FIG. 1A;
FIG. 1D is a two-dimensional correction chart corresponding to a fisheye image according to the embodiment of FIG. 1A;
FIG. 1E is a schematic diagram of a two-dimensional correction chart displayed to a screen according to the embodiment of FIG. 1A of the present application;
FIG. 1F is a schematic diagram of a three-dimensional model coordinate system and screen coordinate system according to the embodiment of FIG. 1A;
FIG. 1G is a schematic view of a view point according to the embodiment of FIG. 1A;
FIG. 1H is a three-dimensional perspective view of the present application at a different point of view according to the embodiment of FIG. 1A;
FIG. 1J is a schematic diagram illustrating a fusion variant effect transition according to the embodiment of FIG. 1A;
FIG. 2 is a flow chart illustrating another video image display method according to an exemplary embodiment of the present application;
FIG. 3 is a block diagram of a video image display system according to an exemplary embodiment of the present application;
fig. 4 is a block diagram illustrating an embodiment of a video image display apparatus according to an exemplary embodiment of the present application.
Detailed Description
Reference will now be made in detail to exemplary embodiments, examples of which are illustrated in the accompanying drawings. When the following description refers to the accompanying drawings, the same numbers in different drawings refer to the same or similar elements, unless otherwise indicated. The implementations described in the following exemplary examples do not represent all implementations consistent with the application. Rather, they are merely examples of apparatus and methods consistent with aspects of the application as detailed in the accompanying claims.
The terminology used herein is for the purpose of describing particular embodiments only and is not intended to be limiting of the application. As used in this specification and the appended claims, the singular forms "a," "an," and "the" are intended to include the plural forms as well, unless the context clearly indicates otherwise. It should also be understood that the term "and/or" as used herein refers to and encompasses any or all possible combinations of one or more of the associated listed items.
It should be understood that although the terms first, second, third, etc. may be used herein to describe various information, the information should not be limited by these terms. These terms are only used to distinguish one type of information from another. For example, first information may also be referred to as second information, and similarly, second information may also be referred to as first information, without departing from the scope of the application. The word "if" as used herein may be interpreted as "when", "upon", or "in response to determining", depending on the context.
At present, fisheye video is displayed either in a two-dimensional mode, which shows all image information, or in a three-dimensional mode, which shows only part of it. Each display mode has its own defects, so the visual impact brought to the user is weak and the adaptability of the two modes is low.
To solve these problems, the application provides a video image display method: determine a two-dimensional correction chart and a three-dimensional perspective chart corresponding to a video image acquired by a camera, then fuse the two charts using interpolation variables, and display the fused image.
Based on the above description, fusing the two-dimensional correction chart and the three-dimensional perspective chart with interpolation variables and displaying the result not only displays all image information, but also realizes an animated transition from 3D to 2D or from 2D to 3D, bringing a stronger visual impact to the user.
Fig. 1A is a flowchart of an embodiment of a video image display method according to an exemplary embodiment of the present application, where the video image display method may be applied to an electronic device (e.g., a mobile terminal, a PC), and a video image is taken as a fisheye image for illustration.
As shown in fig. 1A, the video display method includes the following steps:
step 101: and determining a two-dimensional correction chart and a three-dimensional perspective chart corresponding to the video image acquired by the camera.
Before two-dimensional correction is performed, a correction function must first be determined, and different camera installation modes (such as top mounting, bottom mounting and side mounting) use different correction function forms. Because the correction function is computationally heavy and must be evaluated for every pixel in the video image, correcting one frame of video involves a large amount of calculation.
Illustratively, assume that pixel coordinates (u, v) in the two-dimensional correction chart correspond to pixel coordinates (u1, v1) in the video image. The correction calculation involves the following formulas:
u' = (u - c_u) / f_u
v' = (v - c_v) / f_v
R = u'^2 + v'^2
d_R = 1 + k_1·R + k_2·R^2 + k_3·R^3
d_TX = 2·p_1·u'·v' + p_2·(R + 2·u'^2)
d_TY = 2·p_2·u'·v' + p_1·(R + 2·v'^2)
u1 = (u'·d_R + d_TX)·f_u + c_u
v1 = (v'·d_R + d_TY)·f_v + c_v
where c_u, c_v, f_u, f_v are the camera intrinsic parameters and k_1, k_2, k_3, p_1, p_2 are the distortion parameters.
On this basis, to reduce the amount of calculation and unify the various correction function forms, a two-dimensional correction chart of the same size as the video image is established. Correction pre-calculation is performed with the correction function on each pixel coordinate of the two-dimensional correction chart to obtain the corresponding pixel coordinate in the video image, and these correspondences are stored in a two-dimensional correction lookup table for subsequent direct lookup during correction.
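As a sketch, the pre-calculation that fills the two-dimensional correction lookup table can be written as follows, using the correction formulas above; the function name and parameter values are illustrative, not taken from the patent:

```python
def build_2d_lut(width, height, cu, cv, fu, fv, k, p):
    """Precompute, for every pixel (u, v) of the two-dimensional correction
    chart, the corresponding pixel (u1, v1) in the distorted video image.
    k = (k1, k2, k3): radial distortion; p = (p1, p2): tangential distortion."""
    lut = {}
    for v in range(height):
        for u in range(width):
            un = (u - cu) / fu                      # u'
            vn = (v - cv) / fv                      # v'
            R = un * un + vn * vn                   # R = u'^2 + v'^2
            dR = 1 + k[0] * R + k[1] * R**2 + k[2] * R**3
            dTX = 2 * p[0] * un * vn + p[1] * (R + 2 * un * un)
            dTY = 2 * p[1] * un * vn + p[0] * (R + 2 * vn * vn)
            lut[(u, v)] = ((un * dR + dTX) * fu + cu,   # u1
                           (vn * dR + dTY) * fv + cv)   # v1
    return lut
```

With all distortion parameters set to zero the table degenerates to the identity mapping, which is a quick sanity check on the formulas.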
A two-dimensional correction lookup table is arranged for each camera installation mode, so that in different installation scenes the two-dimensional correction chart can be converted quickly by switching lookup tables. Those skilled in the art will appreciate that the correction function forms used for the different installation modes can be implemented by related techniques when constructing the two-dimensional correction lookup tables.
In one example, to determine the two-dimensional correction chart corresponding to the video image acquired by the camera, the two-dimensional correction lookup table corresponding to the installation mode of the camera may be selected from the pre-established two-dimensional correction lookup tables; the two-dimensional correction chart corresponding to the video image is then obtained according to that lookup table, and the obtained chart is converted into the screen coordinate system of the display.
For example, fig. 1B shows an original fisheye image whose displayed picture is severely distorted. When fig. 1B is displayed, the display relation adopted is Color(w, h) = GetPixel(u, v), where (w, h) denotes screen coordinates in the screen coordinate system, (u, v) denotes pixel coordinates in the image coordinate system, and GetPixel denotes filling the color value at (u, v) into position (w, h) of the display.
As shown in fig. 1C, (a) is the image coordinate system of the image, with the lower-left vertex of the image as the origin, the horizontal direction as the horizontal axis and the vertical direction as the vertical axis; (b) is the screen coordinate system of the display, with the screen center as the origin, the horizontal direction as the horizontal axis and the vertical direction as the vertical axis. After normalization, pixel coordinates (u, v) range from 0 to 1 and screen coordinates (w, h) range from -1 to 1, and the relationship between the two is:
w=2u-1
h=2v-1
as shown in fig. 1D, which is a two-dimensional correction chart corresponding to a video image, the displayed picture has no distortion problem, when the fig. 1D is displayed, the two-dimensional correction chart needs to be converted into a screen coordinate system of a display to be displayed, as shown in fig. 1E, (a) is a screen of the display, (b) is a two-dimensional correction chart,(c) For the original fish-eye image, the pixel coordinate F point (u, v) in the two-dimensional correction chart and the pixel coordinate Q point (u 1 ,v 1 ) The relation between the two is: (u) 1 ,v 1 ) =lookup2d (u, v), i.e. the pixel coordinate Q point (u 1 ,v 1 ) The correction chart may be converted into a screen coordinate system of the display by the above-described conversion relation (w, h) =2 (u, v) -1 of the image to the screen shown in fig. 1C, and displayed by using a Color (w, h) =getpixel (u, v) display relation.
It should be noted that because users select different correction modes (such as lossless correction, large span, and the like), the correction function forms also differ. The influence of the correction mode must therefore be considered when establishing the two-dimensional correction lookup tables: each pre-established table corresponds to both an installation mode and a correction mode, so the two-dimensional correction chart can be converted quickly under different installation and correction modes by switching lookup tables.
In another example, to determine the two-dimensional correction chart corresponding to the video image acquired by the camera, information on an externally input correction mode is received; a two-dimensional correction lookup table corresponding to both the installation mode of the camera and the correction mode is selected from the pre-established two-dimensional correction lookup tables; the two-dimensional correction chart corresponding to the video image is obtained according to that lookup table; and the obtained two-dimensional correction chart is converted into the screen coordinate system of the display.
In an embodiment, to determine the three-dimensional perspective chart corresponding to the video image acquired by the camera, a three-dimensional model map corresponding to the video image is obtained according to a pre-established three-dimensional mapping lookup table, and the three-dimensional model map is then converted into a three-dimensional perspective chart according to externally input observation viewpoint information and the perspective conversion relation between the screen coordinate system of the display and the three-dimensional model coordinate system.
The three-dimensional model involved in the pre-established three-dimensional mapping lookup table can be a spherical model, a cylindrical model, a conical model, and so on. Different conversion function forms can be adopted for the pre-calculation of different three-dimensional models. The three-dimensional mapping lookup table is established on the same principle as the two-dimensional correction lookup table: it records, for each pixel coordinate in the three-dimensional model map, the corresponding pixel coordinate in the video image, for subsequent direct lookup and conversion.
The process of converting a three-dimensional perspective from a three-dimensional model map is described in detail below:
taking the three-dimensional model as a conical model, as shown in fig. 1F, (a) is a three-dimensional model coordinate system of the conical model, (b) is a screen coordinate system of a display, fov represents a view range angle in a vertical direction in the three-dimensional model, n represents a value of a near clipping surface of the three-dimensional model, F represents a value of a far clipping surface of the three-dimensional model, and a perspective conversion relationship between P (x, y, z) in the three-dimensional model coordinate system and Q (w, h) in the screen coordinate system is as follows:
where aspect represents the screen aspect ratio of the display; aspect, n, f and fov are all known quantities, and depth represents the depth of the display relative to the camera lens.
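The patent's own matrix expression is not reproduced in this text, but a conventional perspective matrix built from the same known quantities fov, aspect, n and f — shown here purely as an assumed, standard form, not the patent's exact formula — looks like:

```python
import math

def perspective_matrix(fov_deg, aspect, n, f):
    """Conventional OpenGL-style perspective matrix from the vertical field
    of view fov (degrees), screen aspect ratio, and near/far clipping
    values n and f. An assumed standard form for illustration."""
    t = 1.0 / math.tan(math.radians(fov_deg) / 2.0)  # cotangent of half the fov
    return [
        [t / aspect, 0.0, 0.0, 0.0],
        [0.0, t, 0.0, 0.0],
        [0.0, 0.0, (f + n) / (n - f), 2.0 * f * n / (n - f)],
        [0.0, 0.0, -1.0, 0.0],
    ]
```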
After perspective transformation, even in the three-dimensional display mode only part of the image information can be seen on the display, so a viewpoint change is needed to obtain a displayable three-dimensional perspective chart. Because the observation viewpoint information input by users differs, the viewpoint matrix differs, so the conversion from the three-dimensional model coordinate system to the screen coordinate system of the display cannot be saved as a lookup table. For the viewpoint change, as shown in fig. 1G, taking the case where the observation viewpoint information includes at least a pitch angle pitch, a heading angle yaw, and a distance Scale between the viewpoint V and the observation target, the calculation process of the viewpoint matrix View is as follows:
right=up×forward
head=forward×right
it can be seen that the conversion relationship between the three-dimensional model coordinate system and the screen coordinate system of the display is:
(w,h)=PerspectiveView(x,y,z)
where Perspective represents the perspective transformation matrix and View represents the viewpoint transformation matrix.
Through the conversion relationship between the three-dimensional model coordinate system and the screen coordinate system of the display, the three-dimensional model map can be converted into a three-dimensional perspective chart in the screen coordinate system for display. The display relation of the three-dimensional perspective chart is:
Color(w,h) = GetPixel(LookUp3D(PerspectiveView^-1(w,h)))
where LookUp3D denotes looking up the three-dimensional mapping lookup table, PerspectiveView^-1 denotes the inverse of the conversion matrix from the three-dimensional model coordinate system to the screen coordinate system, and GetPixel denotes fetching the color value of the pixel coordinate in the acquired video image.
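Structurally, the display relation Color(w,h) = GetPixel(LookUp3D(PerspectiveView^-1(w,h))) is a composition of three mappings per screen pixel. A minimal sketch with caller-supplied stand-ins (all names illustrative, not from the patent):

```python
def render_3d(screen_coords, inv_perspective_view, lookup_3d, get_pixel):
    """Compose the three stages per screen pixel: screen -> model
    coordinates (inverse of PerspectiveView), model coordinates ->
    fisheye pixel (3D mapping lookup table), fisheye pixel -> color."""
    return {(w, h): get_pixel(lookup_3d(inv_perspective_view(w, h)))
            for (w, h) in screen_coords}
```

In a real renderer the three callables would be the inverse perspective-view transform, the pre-established three-dimensional mapping lookup table, and a texture fetch from the video frame.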
Fig. 1H shows, in panels 1 through 6, three-dimensional perspective charts of the original fisheye image of fig. 1B displayed under different observation viewpoint information input by the user.
It should be further noted that the application does not limit the order in which the two-dimensional correction chart and the three-dimensional perspective chart are obtained; they can be obtained sequentially or simultaneously.
Step 102: and fusing the two-dimensional correction chart and the three-dimensional perspective chart by utilizing interpolation variables and displaying the fused image through a display.
Wherein, the value range of the interpolation variable is [0,1].
In an embodiment, as can be seen from the description of step 101, the determined two-dimensional correction chart and three-dimensional perspective chart are both located in the screen coordinate system of the display, so they can be fused directly and the fused chart displayed. The fusion display process can be as follows: traverse each interpolation variable in a specified order, and, for the currently traversed interpolation variable, fuse the pixel value corresponding to each screen coordinate in the two-dimensional correction chart with the corresponding pixel value in the three-dimensional perspective chart, and display the fused pixel value through the display.
During the traversal, an interpolation variable value can be taken at a preset interval (the smaller the interval, the smoother the animation transition). The traversal order can be from 0 to 1 or from 1 to 0; different orders produce different animation transition effects. Assuming the traversal order of the interpolation variable is from 0 to 1, the display effect of the video image is an animated transition from 3D to 2D.
In an embodiment, to fuse the pixel value corresponding to each screen coordinate in the two-dimensional correction chart with the corresponding pixel value in the three-dimensional perspective chart according to the currently traversed interpolation variable and display the fused value through the display, a first interpolation coefficient for the two-dimensional correction chart and a second interpolation coefficient for the three-dimensional perspective chart can be determined from the currently traversed interpolation variable, with the sum of the two coefficients equal to 1; the corresponding pixel values of the two charts are then fused using the first and second interpolation coefficients.
Before determining the first interpolation coefficient of the two-dimensional correction chart and the second interpolation coefficient of the three-dimensional perspective chart, the form of the interpolation function must be determined. The interpolation function needs to satisfy the following conditions: when the interpolation variable is 0, the interpolation coefficient obtained from the interpolation function is 0; when the interpolation variable is 1, the interpolation coefficient obtained from the interpolation function is 1.
Based on the conditions that the interpolation function needs to satisfy, the interpolation function can take the form L(x) = x or L(x) = x^n, where n is a positive integer greater than or equal to 1.
Illustratively, assume that the fusion relationship between the two-dimensional correction chart and the three-dimensional perspective chart is:
Color(w,h)=L(x)Color2D(w,h)+(1-L(x))Color3D(w,h)
color2D (w, h) represents a Color value of a screen coordinate (w, h) in the two-dimensional correction chart, color3D (w, h) represents a Color value of a screen coordinate (w, h) in the three-dimensional perspective chart, L (x) represents a first interpolation coefficient of the two-dimensional correction chart, that is, an interpolation coefficient obtained by an interpolation function, x represents an interpolation variable, and 1-L (x) represents a second interpolation coefficient of the three-dimensional perspective chart. It can be seen that the sum of the first interpolation coefficient and the second interpolation coefficient is 1.
Based on the above fusion relation, when the traversal order of the interpolation variable is 0 to 1, the transition effect of the displayed video image is from 3D to 2D; when the traversal order is 1 to 0, the transition effect is from 2D to 3D.
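The fusion relation and the traversal of the interpolation variable can be sketched together as follows (grayscale values stand in for colors, and L(x) = x^n as above; the function names are illustrative):

```python
def fuse_pixel(c2d, c3d, x, n=1):
    """Color = L(x)*Color2D + (1 - L(x))*Color3D with L(x) = x**n:
    x = 0 shows the pure 3D perspective chart, x = 1 the pure 2D chart."""
    L = x ** n
    return L * c2d + (1 - L) * c3d

def transition_frames(img2d, img3d, steps=5, two_d_to_three_d=False):
    """Traverse the interpolation variable from 0 to 1 (an animated 3D -> 2D
    transition) or from 1 to 0 (2D -> 3D). Smaller steps between successive
    values give a smoother animation. Images are dicts {(w, h): gray value}."""
    xs = [i / (steps - 1) for i in range(steps)]
    if two_d_to_three_d:
        xs.reverse()
    return [{k: fuse_pixel(img2d[k], img3d[k], x) for k in img2d} for x in xs]
```

Note the first and last frames reproduce the 3D perspective chart and the 2D correction chart exactly, matching the boundary conditions on L(x).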
As shown in fig. 1J, the fusion transition between the two-dimensional correction chart and the three-dimensional perspective chart has both the animated effect of dynamic switching and a 3D effect, while all the image information of the original fisheye image is retained.
In the embodiment of the application, the two-dimensional correction chart and the three-dimensional perspective chart corresponding to the video image acquired by the camera are determined, then the interpolation variable is utilized to fuse the two-dimensional correction chart and the three-dimensional perspective chart, and the fused image is displayed through a display.
Based on the above description, fusing the two-dimensional correction chart and the three-dimensional perspective chart with interpolation variables and displaying the result not only displays all image information, but also realizes an animated transition from 3D to 2D or from 2D to 3D, bringing a stronger visual impact to the user.
Fig. 2 is a flowchart of another embodiment of a video image display method according to an exemplary embodiment of the present application, and the video image display method includes the following steps based on the embodiment shown in fig. 1A:
step 201: and establishing a two-dimensional correction lookup table, a three-dimensional mapping lookup table and an interpolation function form.
Step 202: and receiving the video image, and respectively determining a two-dimensional correction chart and a three-dimensional perspective chart corresponding to the video image by utilizing the two-dimensional correction lookup table and the three-dimensional mapping lookup table.
Step 203: and fusing the two-dimensional correction chart and the three-dimensional perspective chart by using an interpolation function form and outputting and displaying.
For the process from step 201 to step 203, the detailed implementation may refer to the related descriptions from step 101 to step 102, which are not repeated.
Fig. 3 is a block diagram of a video image display system according to an exemplary embodiment of the present application, and as shown in fig. 3, the video image display system includes:
a camera 310 for capturing video images and transmitting the video images to an electronic device;
an electronic device 320, configured to determine a two-dimensional rectification chart and a three-dimensional perspective corresponding to the video image; fusing the two-dimensional correction chart and the three-dimensional perspective chart by utilizing interpolation variables to obtain a fused image;
and a display 330 for displaying the fused image.
Fig. 4 is a block diagram showing an embodiment of a video image display apparatus according to an exemplary embodiment of the present application, which can be applied to an electronic device, as shown in fig. 4, the video image display apparatus including:
a determining module 410, configured to determine a two-dimensional rectification chart and a three-dimensional perspective chart corresponding to a video image acquired by a camera;
and the display module 420 is configured to fuse the two-dimensional correction chart and the three-dimensional perspective chart by using interpolation variables and display the fused image.
In an optional implementation manner, the determining module 410 is specifically configured to select, in determining a two-dimensional correction chart corresponding to a video image acquired by a camera, a two-dimensional correction lookup table corresponding to an installation mode of the camera from two-dimensional correction lookup tables established in advance, where pixel coordinates of each pixel coordinate in the two-dimensional correction chart corresponding to the video image are recorded in the two-dimensional correction lookup table; and obtaining a two-dimensional correction chart corresponding to the video image according to the two-dimensional correction lookup table, and converting the obtained two-dimensional correction chart into a screen coordinate system of the display.
In an optional implementation manner, the determining module 410 is specifically configured to receive information of a correction mode input from the outside in a process of determining a two-dimensional correction chart corresponding to a video image acquired by the camera; selecting a two-dimensional correction lookup table corresponding to the installation mode of the camera and the correction mode from a pre-established two-dimensional correction lookup table, wherein the two-dimensional correction lookup table records pixel coordinates of each pixel coordinate in a two-dimensional correction chart in a video image; and obtaining a two-dimensional correction chart corresponding to the video image according to the two-dimensional correction lookup table, and converting the obtained two-dimensional correction chart into a screen coordinate system of the display.
In an optional implementation manner, the determining module 410 is specifically configured to obtain, in determining a three-dimensional perspective view corresponding to a video image acquired by a camera, a three-dimensional model map corresponding to the video image according to a pre-established three-dimensional mapping lookup table, where coordinates of each pixel in the three-dimensional model map correspond to coordinates of the pixel in the video image; and converting the obtained three-dimensional model diagram into a three-dimensional perspective view according to the perspective conversion relation between the screen coordinate system and the three-dimensional model coordinate system of the display and the externally input observation viewpoint information.
In an optional implementation manner, the two-dimensional correction chart and the three-dimensional perspective chart are both located in the screen coordinate system of the display; the display module 420 is specifically configured to traverse each interpolation variable according to a specified sequence, fuse, according to the currently traversed interpolation variable, the pixel value corresponding to each screen coordinate in the two-dimensional correction chart with the pixel value corresponding to the same screen coordinate in the three-dimensional perspective chart, and display the fused pixel values through the display, where the interpolation variable takes values greater than or equal to 0 and less than or equal to 1.
In an optional implementation manner, in fusing, according to the currently traversed interpolation variable, the pixel value corresponding to each screen coordinate in the two-dimensional correction chart with the corresponding pixel value in the three-dimensional perspective chart, the display module 420 is specifically configured to: determine a first interpolation coefficient for the two-dimensional correction chart and a second interpolation coefficient for the three-dimensional perspective chart according to the currently traversed interpolation variable, where the sum of the first interpolation coefficient and the second interpolation coefficient is 1; and fuse the corresponding pixel value in the two-dimensional correction chart with the corresponding pixel value in the three-dimensional perspective chart by using the first interpolation coefficient and the second interpolation coefficient.
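Because the two coefficients sum to 1, the fusion described above is a linear interpolation between the two charts: as the traversed variable moves from 0 to 1, the display transitions smoothly from the pure 2-D correction chart to the pure 3-D perspective chart. A minimal sketch, assuming both charts are same-shaped pixel arrays and the coefficients are taken as (1 − t) and t (one natural choice consistent with the sum-to-1 constraint; the patent does not fix the exact mapping):

```python
import numpy as np

def fuse_frames(chart_2d, persp_3d, steps=5):
    """Yield the blended frames of the 2-D-to-3-D transition.

    For each traversed interpolation variable t in [0, 1], the first
    coefficient is (1 - t) and the second is t, so they sum to 1.
    """
    for t in np.linspace(0.0, 1.0, steps):
        yield (1.0 - t) * chart_2d + t * persp_3d

# Toy charts: all-zero 2-D chart, all-ten 3-D perspective chart.
a = np.zeros((2, 2))
b = np.full((2, 2), 10.0)
frames = list(fuse_frames(a, b, steps=3))
# frames[0] is the pure 2-D chart, frames[-1] the pure 3-D perspective.
```

Displaying the frames in the traversal order produces the gradual 2-D-to-3-D switching effect; reversing the sequence gives the opposite transition.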
The implementation of the functions and roles of each module in the above apparatus is described in detail in the implementation of the corresponding steps in the above method, and is not repeated here.
Since the apparatus embodiments essentially correspond to the method embodiments, reference may be made to the description of the method embodiments for the relevant points. The apparatus embodiments described above are merely illustrative: the units described as separate components may or may not be physically separate, and the components shown as units may or may not be physical units; they may be located in one place or distributed over a plurality of network units. Some or all of the modules may be selected according to actual needs to achieve the purposes of the solution of the present application. Those of ordinary skill in the art can understand and implement the present application without creative effort.
Other embodiments of the application will be apparent to those skilled in the art from consideration of the specification and practice of the application disclosed herein. This application is intended to cover any variations, uses, or adaptations of the application following, in general, the principles of the application and including such departures from the present disclosure as come within known or customary practice within the art to which the application pertains. It is intended that the specification and examples be considered as exemplary only, with a true scope and spirit of the application being indicated by the following claims.
It should also be noted that the terms "comprises," "comprising," or any other variation thereof are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements includes not only those elements but may also include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element defined by the phrase "comprising a …" does not exclude the presence of other like elements in the process, method, article, or apparatus that comprises the element.
The foregoing description is merely of preferred embodiments of the application and is not intended to limit the application; any modification, equivalent replacement, improvement, or the like made within the spirit and principles of the application shall fall within the scope of protection of the application.

Claims (9)

1. A video image display method, the method comprising:
determining a two-dimensional correction chart and a three-dimensional perspective chart corresponding to a video image acquired by a camera; the two-dimensional correction chart and the three-dimensional perspective chart are both positioned in a screen coordinate system of a display;
traversing each interpolation variable according to a specified sequence, and determining a first interpolation coefficient of the two-dimensional correction chart and a second interpolation coefficient of the three-dimensional perspective chart according to the currently traversed interpolation variable, wherein a sum of the first interpolation coefficient and the second interpolation coefficient is 1; fusing the corresponding pixel value in the two-dimensional correction chart with the corresponding pixel value in the three-dimensional perspective chart by using the first interpolation coefficient and the second interpolation coefficient; and displaying the fused pixel values through the display; wherein the interpolation variable takes values greater than or equal to 0 and less than or equal to 1.
2. The method of claim 1, wherein determining the two-dimensional correction chart corresponding to the video image acquired by the camera comprises:
selecting a two-dimensional correction lookup table corresponding to the installation mode of the camera from pre-established two-dimensional correction lookup tables, wherein the two-dimensional correction lookup table records, for each pixel coordinate in the two-dimensional correction chart, the corresponding pixel coordinate in the video image;
and obtaining a two-dimensional correction chart corresponding to the video image according to the two-dimensional correction lookup table, and converting the obtained two-dimensional correction chart into a screen coordinate system of the display.
3. The method of claim 1, wherein determining the two-dimensional correction chart corresponding to the video image acquired by the camera comprises:
receiving information of an externally input correction mode;
selecting a two-dimensional correction lookup table corresponding to both the installation mode of the camera and the correction mode from pre-established two-dimensional correction lookup tables, wherein the two-dimensional correction lookup table records, for each pixel coordinate in the two-dimensional correction chart, the corresponding pixel coordinate in the video image;
and obtaining a two-dimensional correction chart corresponding to the video image according to the two-dimensional correction lookup table, and converting the obtained two-dimensional correction chart into a screen coordinate system of the display.
4. The method of claim 1, wherein determining the three-dimensional perspective chart corresponding to the video image acquired by the camera comprises:
obtaining a three-dimensional model map corresponding to the video image according to a pre-established three-dimensional mapping lookup table, wherein the three-dimensional mapping lookup table records, for each pixel coordinate in the three-dimensional model map, the corresponding pixel coordinate in the video image;
and converting the obtained three-dimensional model map into the three-dimensional perspective chart according to the perspective conversion relation between the screen coordinate system of the display and the three-dimensional model coordinate system, and according to externally input observation viewpoint information.
5. A video image display apparatus, the apparatus comprising:
the determining module is used for determining a two-dimensional correction chart and a three-dimensional perspective chart corresponding to the video image acquired by the camera; the two-dimensional correction chart and the three-dimensional perspective chart are both positioned in a screen coordinate system of a display;
the display module is used for traversing each interpolation variable according to a specified sequence, and determining a first interpolation coefficient of the two-dimensional correction chart and a second interpolation coefficient of the three-dimensional perspective according to the currently traversed interpolation variable, wherein the sum of the first interpolation coefficient and the second interpolation coefficient is 1; fusing the corresponding pixel value in the two-dimensional correction chart with the corresponding pixel value in the three-dimensional perspective chart by utilizing the first interpolation coefficient and the second interpolation coefficient; displaying the fused pixel values through the display; wherein the value range of the interpolation variable is more than or equal to 0 and less than or equal to 1.
6. The apparatus according to claim 5, wherein, in determining the two-dimensional correction chart corresponding to the video image acquired by the camera, the determining module is specifically configured to: select a two-dimensional correction lookup table corresponding to the installation mode of the camera from pre-established two-dimensional correction lookup tables, wherein the two-dimensional correction lookup table records, for each pixel coordinate in the two-dimensional correction chart, the corresponding pixel coordinate in the video image; obtain the two-dimensional correction chart corresponding to the video image according to the two-dimensional correction lookup table; and convert the obtained two-dimensional correction chart into the screen coordinate system of the display.
7. The apparatus according to claim 5, wherein, in determining the two-dimensional correction chart corresponding to the video image acquired by the camera, the determining module is specifically configured to: receive information of an externally input correction mode; select a two-dimensional correction lookup table corresponding to both the installation mode of the camera and the correction mode from pre-established two-dimensional correction lookup tables, wherein the two-dimensional correction lookup table records, for each pixel coordinate in the two-dimensional correction chart, the corresponding pixel coordinate in the video image; obtain the two-dimensional correction chart corresponding to the video image according to the two-dimensional correction lookup table; and convert the obtained two-dimensional correction chart into the screen coordinate system of the display.
8. The apparatus according to claim 5, wherein, in determining the three-dimensional perspective chart corresponding to the video image acquired by the camera, the determining module is specifically configured to: obtain a three-dimensional model map corresponding to the video image according to a pre-established three-dimensional mapping lookup table, wherein the three-dimensional mapping lookup table records, for each pixel coordinate in the three-dimensional model map, the corresponding pixel coordinate in the video image; and convert the obtained three-dimensional model map into the three-dimensional perspective chart according to the perspective conversion relation between the screen coordinate system of the display and the three-dimensional model coordinate system, and according to externally input observation viewpoint information.
9. A video image display system, the system comprising:
the camera is used for collecting video images and sending the video images to the electronic equipment;
the electronic equipment is configured to traverse each interpolation variable according to a specified sequence, and determine a first interpolation coefficient of the two-dimensional correction chart and a second interpolation coefficient of the three-dimensional perspective chart according to the currently traversed interpolation variable, wherein a sum of the first interpolation coefficient and the second interpolation coefficient is 1; and fuse the corresponding pixel value in the two-dimensional correction chart with the corresponding pixel value in the three-dimensional perspective chart by using the first interpolation coefficient and the second interpolation coefficient; wherein the interpolation variable takes values greater than or equal to 0 and less than or equal to 1;
and the display is used for displaying the fused pixel values and displaying the two-dimensional correction chart and the three-dimensional perspective view.
CN201910553681.7A 2019-06-25 2019-06-25 Video image display method, device and system Active CN112132740B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910553681.7A CN112132740B (en) 2019-06-25 2019-06-25 Video image display method, device and system

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201910553681.7A CN112132740B (en) 2019-06-25 2019-06-25 Video image display method, device and system

Publications (2)

Publication Number Publication Date
CN112132740A CN112132740A (en) 2020-12-25
CN112132740B true CN112132740B (en) 2023-08-25

Family

ID=73849377

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910553681.7A Active CN112132740B (en) 2019-06-25 2019-06-25 Video image display method, device and system

Country Status (1)

Country Link
CN (1) CN112132740B (en)

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105894549A (en) * 2015-10-21 2016-08-24 乐卡汽车智能科技(北京)有限公司 Panorama assisted parking system and device and panorama image display method
CN106846410A (en) * 2016-12-20 2017-06-13 北京鑫洋泉电子科技有限公司 Based on three-dimensional environment imaging method and device
CN109308686A (en) * 2018-08-16 2019-02-05 北京市商汤科技开发有限公司 A kind of fish eye images processing method and processing device, equipment and storage medium
WO2019096323A1 (en) * 2017-11-20 2019-05-23 杭州海康威视数字技术股份有限公司 Fisheye image processing method and apparatus, and electronic device

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2003141562A (en) * 2001-10-29 2003-05-16 Sony Corp Image processing apparatus and method for nonplanar image, storage medium, and computer program

Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105894549A (en) * 2015-10-21 2016-08-24 乐卡汽车智能科技(北京)有限公司 Panorama assisted parking system and device and panorama image display method
CN106846410A (en) * 2016-12-20 2017-06-13 北京鑫洋泉电子科技有限公司 Based on three-dimensional environment imaging method and device
WO2019096323A1 (en) * 2017-11-20 2019-05-23 杭州海康威视数字技术股份有限公司 Fisheye image processing method and apparatus, and electronic device
CN109308686A (en) * 2018-08-16 2019-02-05 北京市商汤科技开发有限公司 A kind of fish eye images processing method and processing device, equipment and storage medium

Also Published As

Publication number Publication date
CN112132740A (en) 2020-12-25

Similar Documents

Publication Publication Date Title
CN109348119B (en) Panoramic monitoring system
CN105243637B (en) One kind carrying out full-view image joining method based on three-dimensional laser point cloud
EP0680019B1 (en) Image processing method and apparatus
CN107067447B (en) Integrated video monitoring method for large spatial region
CN109547766A (en) A kind of panorama image generation method and device
US20160134859A1 (en) 3D Photo Creation System and Method
CN104756489B (en) A kind of virtual visual point synthesizing method and system
CN103971352A (en) Rapid image splicing method based on wide-angle lenses
US20100302234A1 (en) Method of establishing dof data of 3d image and system thereof
KR20090078463A (en) Distorted image correction apparatus and method
JP6585938B2 (en) Stereoscopic image depth conversion apparatus and program thereof
CN107317998A (en) Full-view video image fusion method and device
WO2022047701A1 (en) Image processing method and apparatus
JP3032414B2 (en) Image processing method and image processing apparatus
KR101916419B1 (en) Apparatus and method for generating multi-view image from wide angle camera
CN110428361A (en) A kind of multiplex image acquisition method based on artificial intelligence
JP2006119843A (en) Image forming method, and apparatus thereof
CN114449303A (en) Live broadcast picture generation method and device, storage medium and electronic device
CN107743222B (en) Image data processing method based on collector and three-dimensional panorama VR collector
US11043019B2 (en) Method of displaying a wide-format augmented reality object
CN112132740B (en) Video image display method, device and system
JP4908350B2 (en) Image processing method and imaging apparatus using the image processing method
JP3054312B2 (en) Image processing apparatus and method
JPH09200803A (en) Image processing unit and image processing method
CN114022562A (en) Panoramic video stitching method and device capable of keeping integrity of pedestrians

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant