CN110958444A - 720-degree view field environment situation sensing method and situation sensing system - Google Patents


Info

Publication number: CN110958444A
Application number: CN201911342552.XA
Authority: CN (China)
Prior art keywords: curved surface, cameras, camera, images, image
Legal status: Pending (the legal status is an assumption and is not a legal conclusion; Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed)
Original language: Chinese (zh)
Inventors: 梁艳菊, 常嘉义
Current and original assignee: Kunshan Branch Institute of Microelectronics of CAS (the listed assignees may be inaccurate; Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list)
Application CN201911342552.XA filed by Kunshan Branch Institute of Microelectronics of CAS

Classifications

    • H04N 13/243: Image signal generators using stereoscopic image cameras using three or more 2D image sensors
    • H04N 13/122: Improving the 3D impression of stereoscopic images by modifying image signal contents, e.g. by filtering or adding monoscopic depth cues
    • H04N 13/275: Image signal generators from 3D object models, e.g. computer-generated stereoscopic image signals


Abstract

The invention discloses a 720-degree view field environment situation perception method. The method first maps the images shot by a camera module comprising six cameras distributed in a cube onto six corresponding curved surfaces on the surface of a three-dimensional closed model, then acquires a control signal input by a user and displays the image of the target area on a display according to the control signal. The six cameras distributed in a cube can obtain images in all six directions, with the camera module as the center; the images acquired by the six cameras are mapped onto the six areas into which the surface of the three-dimensional closed model is divided and completely cover that surface, thereby forming a 720-degree panoramic three-dimensional image. Finally, the image of the target area is projected on the display according to the control signal, realizing environment situation perception over a 720-degree field of view. The invention also provides an environment situation perception system, which has the same beneficial effects.

Description

720-degree view field environment situation sensing method and situation sensing system
Technical Field
The invention relates to the field of panoramic image display, in particular to a 720-degree view field environment situation perception method and a 720-degree view field environment situation perception system.
Background
With the continuous progress of science and technology in recent years, panoramic image mapping technology has developed greatly. Panoramic image mapping presents images of the surrounding environment in a stereoscopic manner, typically by stitching the images acquired by multiple cameras into a panoramic image.
In the prior art, the images acquired by multiple cameras are usually mapped into an annular model with a certain height, stitched inside the model to form a panoramic image, and a target area of the mapped image is then displayed on a display. However, such systems usually obtain only a 360-degree panorama in the horizontal direction: the user can look around one full circle horizontally but cannot obtain a 360-degree panorama in the vertical direction, and the viewing angle is limited to the height range of the model. How to realize environment situation perception over a 720-degree field of view is therefore an urgent problem for those skilled in the art.
Disclosure of Invention
The invention aims to provide a 720-degree view field environment situation perception method, which can realize 720-degree view field environment situation perception; the invention also provides a 720-degree view field environment situation perception system which can realize 720-degree view field environment situation perception.
In order to solve the technical problem, the invention provides a 720-degree view field environment situation perception method, which comprises the following steps:
acquiring images shot by six cameras in a camera module; the six cameras are distributed in a cube shape;
respectively mapping the images shot by the six cameras to six curved surfaces on the surface of the three-dimensional closed model; the curved surfaces comprise a front curved surface, a rear curved surface, a left curved surface, a right curved surface, an upper curved surface and a lower curved surface; the upper curved surface and the lower curved surface are arranged opposite each other; the front curved surface, the rear curved surface, the left curved surface and the right curved surface are connected into an annular arc surface positioned between the upper curved surface and the lower curved surface; the cameras correspond to the curved surfaces one-to-one;
acquiring a control signal input by a user;
and displaying the image of the target area corresponding to the control signal in the stereo closed model on a display according to the control signal.
Optionally, the three-dimensional closed model is a spherical model, and the upper curved surface and the lower curved surface are both in a spherical crown shape.
Optionally, dual-camera fusion regions, in which the views of two adjacent cameras overlap, are arranged between the adjacent curved surfaces on the surface of the three-dimensional closed model;
after the images taken by the six cameras are respectively mapped to the six curved surfaces of the surface of the stereo closed model, the method further comprises the following steps:
and fusing the images in the double-camera fusion area.
Optionally, the fusing the images located in the dual-camera fusion region includes:
and fusing the images in the fusion area of the two cameras based on an image fusion technology to balance the illumination brightness of the surface mapping image of the three-dimensional closed model after fusion.
Optionally, the cameras are wide-angle cameras with a viewing angle of not less than 120 degrees;
before the images taken by the six cameras are mapped to the six curved surfaces of the surface of the stereo closed model, the method further includes:
And calibrating the wide-angle cameras.
Optionally, the image includes a plurality of frames acquired at different times;
after the images shot by the six cameras in the camera module are obtained, the method further comprises:
And preprocessing the images to eliminate the inter-frame deviation among the frames acquired at the different times.
Optionally, acquiring images shot by six cameras in the camera module comprises:
and images shot by six cameras in the camera module are obtained through the PCI-E bus.
The invention also provides a 720-degree view field environment situation perception system which comprises a camera module, a display, a processor and a memory;
the camera module is internally provided with six cameras which are distributed in a cube shape;
the memory stores a pre-built three-dimensional closed model whose surface is divided into six curved surfaces; the curved surfaces comprise a front curved surface, a rear curved surface, a left curved surface, a right curved surface, an upper curved surface and a lower curved surface; the upper curved surface and the lower curved surface are arranged opposite each other; the front curved surface, the rear curved surface, the left curved surface and the right curved surface are connected into an annular arc surface positioned between the upper curved surface and the lower curved surface; the cameras correspond to the curved surfaces one-to-one;
the processor is configured to:
acquiring images shot by six cameras in the camera module;
calling the three-dimensional closed model stored in the memory, and respectively mapping the images shot by the six cameras to the six curved surfaces on the surface of the three-dimensional closed model;
acquiring a control signal input by a user;
and displaying the image of the target area corresponding to the control signal in the stereo closed model on a display according to the control signal.
Optionally, the three-dimensional closed model is a spherical model, and the upper curved surface and the lower curved surface are both in a spherical crown shape.
Optionally, dual-camera fusion regions, in which the views of two adjacent cameras overlap, are arranged between the adjacent curved surfaces on the surface of the three-dimensional closed model;
the processor is further configured to:
and fusing the images in the double-camera fusion area.
The 720-degree view field environment situation perception method provided by the invention first maps the images shot by a camera module comprising six cameras distributed in a cube onto six corresponding curved surfaces on the surface of a three-dimensional closed model. The six curved surfaces comprise a front curved surface, a rear curved surface, a left curved surface, a right curved surface, an upper curved surface and a lower curved surface; the upper and lower curved surfaces are arranged opposite each other; the front, rear, left and right curved surfaces are connected into an annular curved surface positioned between the upper and lower curved surfaces; and the cameras correspond to the curved surfaces one-to-one. The method then acquires a control signal input by a user and displays, on a display, the image of the target area in the three-dimensional closed model corresponding to the control signal. The six cameras distributed in a cube can obtain images in all six directions (front, rear, left, right, up and down) with the camera module as the center; the images acquired by the six cameras are mapped onto the six areas into which the surface of the three-dimensional closed model is divided and completely cover that surface, forming a 720-degree panoramic three-dimensional image spanning 360 degrees horizontally and 360 degrees vertically. Finally, the image of the target area is projected on the display according to the control signal, realizing environment situation perception over a 720-degree field of view.
The invention also provides a 720-degree view field environment situation perception system, which has the same beneficial effects; details are not repeated herein.
Drawings
In order to more clearly illustrate the embodiments or technical solutions of the present invention, the drawings used in the description of the embodiments or the prior art will be briefly described below, and it is obvious that the drawings in the following description are only some embodiments of the present invention, and it is obvious for those skilled in the art that other drawings can be obtained based on these drawings without creative efforts.
Fig. 1 is a flowchart of a 720-degree view field environment situation awareness method according to an embodiment of the present invention;
fig. 2 is a schematic structural diagram of a camera module according to an embodiment of the present invention;
fig. 3 is a schematic structural diagram of a three-dimensional closed model according to an embodiment of the present invention;
fig. 4 is a flowchart of a specific 720-degree view field environment situation awareness method according to an embodiment of the present invention;
fig. 5 is a schematic structural diagram of an upper curved surface or a lower curved surface in a specific three-dimensional closed model according to an embodiment of the present invention;
fig. 6 is a schematic structural diagram of a front curved surface, a rear curved surface, a left curved surface, or a right curved surface in a specific three-dimensional closed model according to an embodiment of the present invention;
fig. 7 is a block diagram of a 720-degree view field environment situation awareness system according to an embodiment of the present invention.
In the figures: 1. upper curved surface; 2. lower curved surface; 3. front curved surface; 4. left curved surface; 5. right curved surface; 6. dual-camera fusion region; 7. single-camera mapping region; 10. camera module; 11. camera; 12. processor; 13. memory; 14. display.
Detailed Description
The core of the invention is to provide a 720-degree view field environment situation perception method. In the prior art, images acquired by a plurality of cameras are usually mapped into an annular model with a certain height, the images are spliced in the model to form a panoramic image, and then a target area of the images mapped in the model is displayed on a display. However, in the prior art, a 360-degree panoramic image can be usually obtained only in the horizontal direction, a user can only obtain the panoramic image in the horizontal direction around one circle, but cannot obtain the 360-degree panoramic image in the vertical direction, and the view angle of the user is limited within the height range of the model.
The 720-degree view field environment situation perception method provided by the invention first shoots images with a camera module comprising six cameras distributed in a cube and then maps the images onto six corresponding curved surfaces on the surface of a three-dimensional closed model. The six curved surfaces comprise a front curved surface, a rear curved surface, a left curved surface, a right curved surface, an upper curved surface and a lower curved surface; the upper and lower curved surfaces are arranged opposite each other; the front, rear, left and right curved surfaces are connected into an annular curved surface positioned between the upper and lower curved surfaces; and the cameras correspond to the curved surfaces one-to-one. Finally, a control signal input by the user is acquired, and the image of the target area in the three-dimensional closed model corresponding to the control signal is displayed on a display. The six cameras distributed in a cube can obtain images in all six directions (front, rear, left, right, up and down) with the camera module as the center; the images acquired by the six cameras are mapped onto the six areas into which the surface of the three-dimensional closed model is divided and completely cover that surface, forming a 720-degree panoramic three-dimensional image spanning 360 degrees horizontally and 360 degrees vertically. Finally, the image of the target area is projected on the display according to the control signal, realizing environment situation perception over a 720-degree field of view.
In order that those skilled in the art will better understand the disclosure, the invention will be described in further detail with reference to the accompanying drawings and specific embodiments. It is to be understood that the described embodiments are merely exemplary of the invention, and not restrictive of the full scope of the invention. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
Referring to fig. 1, fig. 2 and fig. 3, fig. 1 is a flowchart illustrating a method for sensing an environmental situation of a 720-degree field of view according to an embodiment of the present invention; fig. 2 is a schematic structural diagram of a camera module according to an embodiment of the present invention; fig. 3 is a schematic structural diagram of a three-dimensional closed model according to an embodiment of the present invention.
Referring to fig. 1, in the embodiment of the present invention, a 720-degree field of view environment situation perception method includes:
s101: images shot by six cameras in the camera module are obtained.
Referring to fig. 2, in the embodiment of the present invention, six cameras 11 are distributed in a cube. The six cameras 11 are distributed in a cube, and can acquire images in all six directions, namely front, rear, left, right, up and down, by taking the camera module 10 as a center. Specifically, the six cameras 11 in the camera module 10 generally include a front camera, a rear camera, a left camera, a right camera, an upper camera, and a lower camera. The camera module 10 and the six cameras 11 are generally in a cubic structure, and a front camera is usually located on the front side surface of the camera module 10 and is used for acquiring images of the front side of the camera module 10; the rear camera is positioned on the rear side surface of the camera module 10 and used for acquiring a rear side image of the camera module 10; the left camera is positioned on the left side surface of the camera module 10 and is used for acquiring a left image of the camera module 10; the right camera is positioned on the right side surface of the camera module 10 and used for acquiring a right image of the camera module 10; the upper camera is positioned on the upper side surface of the camera module 10 and used for acquiring an upper side image of the camera module 10; the lower camera is located on the lower side surface of the camera module 10 for obtaining the image of the lower side of the camera module 10.
In this step, images taken by the six cameras 11 are acquired, so that the images taken by the six cameras 11 are spliced into a 720-degree panoramic image in the subsequent step.
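The six-direction layout described in this step can be sketched as a small lookup table; the axis assignment below is an illustrative assumption, not taken from the patent:

```python
# Hypothetical axis assignment: each of the six cameras looks along
# one axis of the cube, so together they cover front/rear/left/right/
# up/down around the camera module.
CAMERA_DIRECTIONS = {
    "front": (0, 1, 0),
    "rear":  (0, -1, 0),
    "left":  (-1, 0, 0),
    "right": (1, 0, 0),
    "up":    (0, 0, 1),
    "down":  (0, 0, -1),
}

def opposing(camera):
    """Return the camera facing the opposite direction."""
    dx, dy, dz = CAMERA_DIRECTIONS[camera]
    target = (-dx, -dy, -dz)
    for name, direction in CAMERA_DIRECTIONS.items():
        if direction == target:
            return name
```

The direction vectors sum to zero, reflecting that the six views are arranged symmetrically about the module's center.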
S102: and respectively mapping the images shot by the six cameras to six curved surfaces on the surface of the three-dimensional closed model.
In the embodiment of the invention, the curved surfaces comprise a front curved surface 3, a rear curved surface, a left curved surface 4, a right curved surface 5, an upper curved surface 1 and a lower curved surface 2; the upper curved surface 1 and the lower curved surface 2 are arranged oppositely; the front curved surface 3, the rear curved surface, the left curved surface 4 and the right curved surface 5 are connected into an annular arc surface and are positioned between the upper curved surface 1 and the lower curved surface 2; the cameras 11 correspond to the curved surfaces one to one.
Referring to fig. 3, the stereo closed model in the embodiment of the present invention is generally a centrally symmetric model, so that in this step the images obtained by the cameras 11 can be mapped onto it uniformly. Specifically, in the embodiment of the present invention, the three-dimensional closed model is preferably a spherical model. The surface of a spherical model transitions uniformly between all areas without distorted edges and corners; mapping the panoramic image onto the spherical surface ensures the uniformity of the stitching and avoids distortion of the panoramic image. Of course, the solid closed model in the embodiment of the present invention may also have other structures, such as a cube model, or a transitional shape between a cube and a sphere.
Before this step, the surface of the three-dimensional closed model is divided into six curved surfaces, namely a front curved surface 3, a rear curved surface, a left curved surface 4, a right curved surface 5, an upper curved surface 1 and a lower curved surface 2. The six curved surfaces cover the whole surface of the three-dimensional closed model. Specifically, the upper curved surface 1 and the lower curved surface 2 are disposed opposite to each other, and the front curved surface 3, the rear curved surface, the left curved surface 4, and the right curved surface 5 enclose an annular curved surface between the upper curved surface 1 and the lower curved surface 2. The front curved surface 3, the rear curved surface, the left curved surface 4, and the right curved surface 5 enclose a structure having a certain height, and the front curved surface 3, the rear curved surface, the left curved surface 4, and the right curved surface 5 are generally in contact with the upper curved surface 1 and the lower curved surface 2.
When the three-dimensional closed model is a spherical model, the upper curved surface 1 and the lower curved surface 2 are both generally in the shape of a spherical crown, and the front curved surface 3, the rear curved surface, the left curved surface 4 and the right curved surface 5 form an annular arc surface with a certain radian. In general, the diameter of the upper curved surface 1 is generally equal to the diameter of the lower curved surface 2, and at this time, the structure of the upper curved surface 1 is the same as that of the lower curved surface 2, which facilitates mapping of the image acquired by the camera 11. In general, the front curved surface 3 and the rear curved surface are also arranged oppositely, and the left curved surface 4 and the right curved surface 5 are also arranged oppositely; the corresponding front curved surface 3 is adjacent to the left curved surface 4 and the right curved surface 5 at the same time, and the back curved surface is adjacent to the left curved surface 4 and the right curved surface 5 at the same time, so as to divide the surface of the three-dimensional closed model.
Specifically, in the embodiment of the present invention, when the three-dimensional closed model is a spherical model, a longitude and latitude coordinate system may be set in the spherical model to represent the range of each divided region in the spherical model. Specifically, the upper curved surface 1 and the lower curved surface 2 may be disposed opposite to each other along a meridian of the spherical model surface, that is, the upper curved surface 1 and the lower curved surface 2 may be disposed opposite to each other along a parallel to a meridian of the spherical model surface. At this time, the lengths of the front curved surface 3, the rear curved surface, the left curved surface 4, and the right curved surface 5 may be all the same in the warp direction, and the lengths of the front curved surface 3, the rear curved surface, the left curved surface 4, and the right curved surface 5 may be all the same in the weft direction. At this time, the front curved surface 3, the back curved surface, the left curved surface 4 and the right curved surface 5 have the same structure, and the front curved surface 3, the back curved surface, the left curved surface 4 and the right curved surface 5 are uniformly distributed, so as to facilitate the mapping of the image in this step.
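The latitude/longitude division described above can be illustrated with a small classifier. The plus/minus 45-degree latitude split and the 90-degree longitude quadrants below are assumed values for illustration; the patent only requires that the six curved surfaces tile the sphere completely:

```python
def surface_for(lat_deg, lon_deg):
    """Map a latitude/longitude on the spherical model to one of
    the six curved surfaces (illustrative boundary choices)."""
    if lat_deg > 45:
        return "upper"      # spherical cap around the north pole
    if lat_deg < -45:
        return "lower"      # spherical cap around the south pole
    # The middle band is split into four equal longitude quadrants
    # forming the annular arc surface.
    lon = lon_deg % 360
    if lon < 45 or lon >= 315:
        return "front"
    if lon < 135:
        return "right"
    if lon < 225:
        return "rear"
    return "left"
```

With these boundaries every point of the sphere falls into exactly one region, which is the property the mapping in this step relies on.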
In the embodiment of the present invention, the cameras 11 correspond to the curved surfaces one to one. Specifically, the front camera corresponds to the front curved surface 3 one to one, the rear camera corresponds to the rear curved surface one to one, the left camera corresponds to the left curved surface 4 one to one, the right camera corresponds to the right curved surface 5 one to one, the upper camera corresponds to the upper curved surface 1 one to one, and the lower camera corresponds to the lower curved surface 2 one to one.
In this step, images obtained by the six cameras 11 arranged in the camera module 10 are respectively mapped to corresponding curved surfaces according to internal parameters and external parameters calibrated in advance by the cameras 11, so as to complete mapping of the panoramic image. For a specific technique for mapping the camera 11 to the corresponding curved surface, reference may be made to the prior art, and details thereof are not repeated herein.
It should be noted that, in the embodiment of the present invention, the stereo closed model is stored in the memory 13, so as to call the stereo closed model in a specific mapping process.
S103: and acquiring a control signal input by a user.
In this step, a control signal input by the user is acquired. The control signal carries the information of the target area that the user wants to display; that is, the control signal corresponds to a target area, which can be understood as a region on the surface of the stereo closed model onto which part of the panoramic image is mapped, namely the image the user wants shown on the display 14. The specific form of the control signal is not limited in the embodiments of the present invention and is determined by the specific situation. Likewise, the manner in which the user inputs the control signal is not specifically limited: the user may input it through touch, gravity sensing, gyroscope sensing, keyboard-and-mouse control, and the like.
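As a sketch of how such a control signal might steer the target area, the following hypothetical helper applies a pan increment to a view direction expressed in latitude/longitude, wrapping the horizontal angle and clamping at the poles so the view can reach the full panorama:

```python
def pan_view(lat, lon, d_lat, d_lon):
    """Apply a user pan control (d_lat, d_lon) to the current view
    direction. Pitch is clamped at the poles; yaw wraps around, so
    the target area can move anywhere on the 720-degree panorama."""
    lat = max(-90.0, min(90.0, lat + d_lat))
    lon = (lon + d_lon) % 360.0
    return lat, lon
```

A touch drag, gyroscope reading, or mouse movement would simply be converted into (d_lat, d_lon) increments before calling this.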
S104: and displaying the image of the target area corresponding to the control signal in the stereo closed model on the display according to the control signal.
In this step, the image in the target area is finally mapped to the display 14 to be displayed on the display 14. For a specific algorithm for mapping the image in the target area to the display 14, reference may be made to the prior art, and details thereof will not be described herein.
In the embodiment of the present invention, the projection model, i.e. the texture data of the stereo closed model, is computed from the pre-established stereo closed model together with the internal and external parameters of the camera 11, according to the mapping relationship between the world coordinate system and the image coordinate system. In homogeneous coordinates, a 3D scene point Qw in the world coordinate system and its projection point q in the computer image coordinate system are written as:
Qw = [Xw Yw Zw 1],  q = [u v 1];
The relationship between the world coordinate system and the camera coordinate system is called the camera's external parameters, expressed as a rotation-translation pair (R, T), where R is a 3x3 rotation matrix and T a 3x1 translation vector. The external parameters convert a point Qw in the world coordinate system to the point Qc in the camera coordinate system for imaging, where:
Qc = [Xc Yc Zc 1];
Treating Qw and Qc as column vectors, the conversion is
Qc = [ R  T ] Qw.
     [ 0  1 ]
The camera's internal parameters then complete the conversion from the camera coordinate system Qc to the image coordinate system.
The 720-degree view field environment situation perception method provided by the embodiment of the present invention first maps the images shot by a camera module 10 comprising six cameras 11 distributed in a cube onto six corresponding curved surfaces on the surface of a three-dimensional closed model. The six curved surfaces comprise a front curved surface 3, a rear curved surface, a left curved surface 4, a right curved surface 5, an upper curved surface 1 and a lower curved surface 2; the upper curved surface 1 and the lower curved surface 2 are arranged opposite each other; the front curved surface 3, the rear curved surface, the left curved surface 4 and the right curved surface 5 are connected into an annular arc surface positioned between the upper curved surface 1 and the lower curved surface 2; and the cameras 11 correspond to the curved surfaces one-to-one. Finally, the control signal input by the user is acquired, and the image of the target area in the stereo closed model corresponding to the control signal is displayed on the display 14. The six cameras 11 distributed in a cube can obtain images in all six directions (front, rear, left, right, up and down) with the camera module 10 as the center; the images acquired by the six cameras 11 are mapped onto the six areas into which the surface of the three-dimensional closed model is divided and completely cover that surface, forming a 720-degree panoramic three-dimensional image spanning 360 degrees horizontally and 360 degrees vertically. Finally, the image of the target area is projected on the display 14 according to the control signal, realizing environment situation perception over a 720-degree field of view.
The details of the method for sensing the environment situation of the 720-degree field of view provided by the present invention will be described in detail in the following embodiments of the invention.
Referring to fig. 4, fig. 5 and fig. 6, fig. 4 is a flowchart illustrating a specific 720-degree view field environment situation awareness method according to an embodiment of the present invention; fig. 5 is a schematic structural diagram of an upper curved surface or a lower curved surface in a specific three-dimensional closed model according to an embodiment of the present invention; fig. 6 is a schematic structural diagram of a front curved surface or a rear curved surface or a left curved surface or a right curved surface in a specific three-dimensional closed model according to an embodiment of the present invention.
Referring to fig. 4, in the embodiment of the present invention, a method for sensing an environmental situation of a 720-degree field of view includes:
s201: and images shot by six cameras in the camera module are obtained through the PCI-E bus.
In the embodiment of the present invention, the camera module 10 is connected to the processor 12 through a PCI-E bus, so in this step the images taken by the six cameras 11 in the camera module 10 can be acquired through the PCI-E bus. PCI-Express is a high-speed serial computer expansion bus standard: PCI-E uses high-speed serial point-to-point dual-channel high-bandwidth transmission, connected devices are allocated dedicated channel bandwidth rather than sharing bus bandwidth, and the standard mainly supports active power management, error reporting, end-to-end reliable transmission, hot plugging, quality of service (QoS) and other functions. Its main advantage is a high data transmission rate; the current highest 16X 2.0 version can reach 10GB/s, and it also has considerable development potential. Of course, in the embodiment of the present invention, the images captured by the six cameras 11 in the camera module 10 may also be obtained through other channels; the specific manner of image transmission is not limited in the embodiment of the present invention.
In the embodiment of the present invention, the processor 12 is generally provided with both a CPU and a GPU, the GPU serving as the main processing component for image processing. GPUs excel at 3D graphics operations and at data-intensive scientific and engineering computation. The method therefore adopts a heterogeneous CPU+GPU cooperative computing model: the serial portion of the application runs on the CPU, while the computationally intensive portion is accelerated by the GPU.
Preferably, in the embodiment of the present invention, the cameras 11 are wide-angle cameras with a field of view of not less than 120°. A field of view of at least 120° ensures that each image covers a sufficiently large viewing angle and that adjacent cameras 11, for example the upper camera and the front camera, share an overlapping field of view, so that in subsequent steps the images acquired by different cameras 11 can be fused based on the images of this overlap region. In a specific embodiment, the cameras 11 used in the embodiment of the present invention are all fisheye cameras.
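The 120° lower bound can be checked with a short geometric calculation (a sketch under the cube arrangement of the six cameras, not part of the original disclosure): adjacent camera axes in the cube layout are 90° apart, so two fields of view of full angle θ share an angular band of width

```latex
\[
\Delta \;=\; \tfrac{\theta}{2} + \tfrac{\theta}{2} - 90^{\circ} \;=\; \theta - 90^{\circ},
\qquad
\theta = 120^{\circ} \;\Rightarrow\; \Delta = 30^{\circ}.
\]
```

A 120° field of view thus leaves a 30°-wide shared band between adjacent cameras for the dual-camera fusion region, while any θ ≤ 90° would leave a blind gap between them.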
Before this step, the cameras 11 in the camera module, typically wide-angle cameras, usually need to be calibrated in advance to determine their intrinsic parameters and distortion parameters. Because the imaging model of a wide-angle camera differs from that of an ordinary pinhole camera, incoming rays are strongly refracted by the lens, producing imaging distortion; distortion is, in fact, a general term for the perspective distortion inherent in optical lenses. In the embodiment of the invention, the wide-angle camera is usually calibrated with Zhang's calibration algorithm to obtain its intrinsic parameters and distortion parameters.
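To make the calibration target concrete, the following is a minimal NumPy sketch of the equidistant fisheye projection model whose parameters (focal length, principal point, radial distortion coefficients) a Zhang-style calibration would estimate. The function name, the two-coefficient distortion polynomial, and the numeric values are illustrative assumptions, not taken from the disclosure:

```python
import numpy as np

def fisheye_project(points_3d, f, cx, cy, k=(0.0, 0.0)):
    """Equidistant fisheye model: r = f * theta_d, where theta is the
    angle of the incoming ray from the optical axis and
    theta_d = theta * (1 + k1*theta^2 + k2*theta^4) adds radial distortion.
    (k1, k2) are the distortion parameters recovered by calibration."""
    pts = np.asarray(points_3d, dtype=float)
    x, y, z = pts[:, 0], pts[:, 1], pts[:, 2]
    theta = np.arctan2(np.hypot(x, y), z)      # angle from the optical axis
    theta_d = theta * (1 + k[0] * theta**2 + k[1] * theta**4)
    phi = np.arctan2(y, x)                     # azimuth around the axis
    u = cx + f * theta_d * np.cos(phi)
    v = cy + f * theta_d * np.sin(phi)
    return np.stack([u, v], axis=1)

# A ray along the optical axis lands at the principal point (cx, cy).
on_axis = fisheye_project([[0.0, 0.0, 1.0]], f=300.0, cx=640.0, cy=480.0)
```

OpenCV's `cv2.fisheye` module fits this model family (with four radial coefficients) from chessboard views, returning the intrinsic matrix and distortion coefficients used to undistort the images before mapping.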
S202: and preprocessing the image to eliminate the inter-field information deviation of the acquired images at multiple moments in the image.
In the embodiment of the present invention, the image acquired in S201 is composed of several fields acquired at different times. That is, each image output by a camera 11 is generated by interlacing two fields captured at different instants, where the picture captured at a single instant is one field. During motion, each field remains internally stable, but the information deviation between the two fields is large, so combining them directly into one frame produces severe combing and ripple artifacts. This phenomenon, which appears as camera shake, becomes more pronounced when the observed target moves quickly or the camera module 10 itself moves too fast. If these burr and ripple artifacts are left unprocessed, they seriously degrade the performance of the later image-enhancement algorithms. Accordingly, this step usually adopts an inter-field median filtering algorithm, which combines the information of the two fields and processes adjacent line positions of the odd and even fields, thereby de-interlacing the image and performing early-stage denoising.
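One way to realize the inter-field median filtering described above is sketched below in NumPy. The specific rule (per-pixel median of the two spatial neighbours from the other field and the pixel's own temporal value) is an illustrative assumption about the approach, not the patent's exact algorithm:

```python
import numpy as np

def median_deinterlace(frame):
    """De-interlace a frame whose even and odd rows were captured at two
    different instants (two 'fields'). Even rows are kept; each interior
    odd row is replaced by the per-pixel median of the row above, the row
    below (both from the other field) and its own value. Static detail
    survives (all three agree), while inter-field 'combing' is suppressed."""
    out = frame.astype(float).copy()
    for r in range(1, frame.shape[0] - 1, 2):   # interior odd-field rows
        above = out[r - 1]
        below = out[r + 1]
        temporal = frame[r].astype(float)
        out[r] = np.median(np.stack([above, below, temporal]), axis=0)
    return out.astype(frame.dtype)
```

On a moving scene the odd-field rows disagree with their even-field neighbours and the median votes them out; on a static scene the frame passes through unchanged.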
S203: and respectively mapping the images shot by the six cameras to six curved surfaces on the surface of the three-dimensional closed model.
This step is substantially the same as S102 in the above embodiment of the invention. In the embodiment of the present invention, overlapping dual-camera fusion regions 6 are provided between adjacent curved surfaces of the surface of the three-dimensional closed model. Specifically, each curved surface is adjacent to four others: for example, the front curved surface 3 is adjacent to the left curved surface 4, the right curved surface 5, the upper curved surface 1 and the lower curved surface 2, while the upper curved surface 1 is adjacent to the front curved surface 3, the rear curved surface, the left curved surface 4 and the right curved surface 5. An overlapping dual-camera fusion region 6 is arranged between any two adjacent curved surfaces, and the images mapped onto the two adjacent surfaces overlap within this dual-camera fusion region 6.
Specifically, in the embodiment of the present invention, each curved surface carries a single-camera mapping region 7 surrounded by the dual-camera fusion region 6. The dual-camera fusion region 6 is the area where two adjacent curved surfaces overlap, and within it the images acquired by at least two cameras 11 are superimposed; within the single-camera mapping region 7, only the image acquired by the camera 11 corresponding to that curved surface is mapped. In other words, in the mapped three-dimensional closed model, the image inside a single-camera mapping region 7 originates from a single camera 11, while the image inside a dual-camera fusion region 6 originates from at least two cameras 11.
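As an illustration of how a view direction can be routed to a single-camera mapping region or a dual-camera fusion region, the sketch below assigns a unit direction to the face whose camera axis is nearest and flags directions near a region border as belonging to the fusion band. The 45° face half-angle follows from the cube layout of the six cameras; the overlap width, function name and region labels are illustrative assumptions:

```python
import numpy as np

def classify_direction(d, overlap_deg=15.0):
    """Assign a view direction to one of the six regions (front/back/
    left/right/up/down) by the nearest camera axis, and report whether it
    lies in the dual-camera fusion band straddling a region border."""
    d = np.asarray(d, dtype=float)
    d = d / np.linalg.norm(d)
    axes = {'right': (0, 1), 'left': (0, -1), 'up': (1, 1),
            'down': (1, -1), 'front': (2, 1), 'back': (2, -1)}
    # Angular distance from d to each camera's optical axis.
    angles = {name: np.degrees(np.arccos(np.clip(s * d[i], -1.0, 1.0)))
              for name, (i, s) in axes.items()}
    ranked = sorted(angles, key=angles.get)
    # Each region spans 45 deg from its axis; directions whose second-best
    # camera is also within 45 + overlap/2 deg sit in the fusion band,
    # where the images of (at least) two cameras are superimposed.
    in_fusion = angles[ranked[1]] <= 45.0 + overlap_deg / 2.0
    return ranked[0], bool(in_fusion)
```

A direction straight down a camera axis falls in that camera's single-camera mapping region; a direction 44° off the front axis (and hence 46° off the right axis) falls inside the shared band.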
S204: and fusing the images in the dual-camera fusion area.
In this step, an image-gradient fusion method based on image fusion technology is usually adopted to eliminate obvious visual deviation in the mapped image. That is, this step is usually embodied as: fusing the images in the dual-camera fusion region 6 based on image fusion technology, so as to equalize the illumination brightness of the mapped image on the surface of the three-dimensional closed model after fusion.
In image stitching, because adjacent cameras 11 are illuminated from different positions, the images they capture differ in brightness characteristics, and mapping the images directly by position therefore produces obvious visual seams in the stitched result; the illumination brightness of the stitched image must consequently be equalized. Image fusion is a common brightness-equalization technique in image stitching: it fuses the overlapping regions of the stitched images so that the result remains visually consistent. In the embodiment of the present invention, the fusion is usually performed by Gaussian-pyramid image weighting, and a gain value is applied to the pixel brightness of each camera 11 before display.
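A minimal sketch of the brightness-equalization idea follows: estimate a per-camera gain from the shared overlap statistics, then blend with a smooth weight ramp. A single weighted blend stands in here for the Gaussian-pyramid weighting named above; the function names and the mean-matching gain rule are illustrative assumptions:

```python
import numpy as np

def overlap_gains(img_a, img_b, mask):
    """Per-camera gain factors that equalize mean brightness inside the
    shared (dual-camera) overlap region: each image is scaled toward the
    joint mean of the overlap, so brightness matches across the seam."""
    mean_a = img_a[mask].mean()
    mean_b = img_b[mask].mean()
    target = 0.5 * (mean_a + mean_b)
    return target / mean_a, target / mean_b

def feather_blend(img_a, img_b, weight_a):
    """Weighted blend in the fusion region; weight_a ramps from 1 to 0
    across the overlap (one pyramid level of the weighting scheme)."""
    return weight_a * img_a + (1.0 - weight_a) * img_b
```

With `img_a` averaging 100 and `img_b` averaging 60 in the overlap, both gains pull the images to a common level of 80 before blending.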
S205: and acquiring a control signal input by a user.
S206: and displaying the image of the target area corresponding to the control signal in the stereo closed model on the display according to the control signal.
S205 to S206 are substantially the same as S103 to S104 in the above embodiment of the invention, and for details, reference is made to the above embodiment of the invention, which is not repeated herein.
In the 720-degree view field environment situation perception method provided by the embodiment of the invention, overlapping dual-camera fusion regions 6 are arranged between adjacent curved surfaces of the surface of the three-dimensional closed model, and the images within the dual-camera fusion regions 6 are fused, so that obvious visual deviation in the mapped image can be eliminated.
In the following, the 720-degree view field environment situation awareness system provided by the embodiment of the present invention is introduced, and the 720-degree view field environment situation awareness system described below and the 720-degree view field environment situation awareness method described above may be referred to correspondingly.
Referring to fig. 7, fig. 7 is a block diagram of a 720-degree view field environment situation awareness system according to an embodiment of the present invention.
Referring to fig. 7, in the embodiment of the present invention, the 720-degree view field environment situation awareness system includes a camera module 10, a display 14, a processor 12, and a memory 13. Six cameras 11 are arranged in the camera module 10 and are distributed in a cube shape. A three-dimensional closed model whose surface is divided into six curved surfaces is stored in the memory 13 in advance; the curved surfaces comprise a front curved surface 3, a rear curved surface, a left curved surface 4, a right curved surface 5, an upper curved surface 1 and a lower curved surface 2; the upper curved surface 1 and the lower curved surface 2 are arranged oppositely; the front curved surface 3, the rear curved surface, the left curved surface 4 and the right curved surface 5 are connected into an annular arc surface positioned between the upper curved surface 1 and the lower curved surface 2; and the cameras 11 correspond to the curved surfaces one to one. The processor 12 is configured to: acquire the images captured by the six cameras 11 in the camera module 10; call the three-dimensional closed model stored in the memory 13 and map the images captured by the six cameras 11 onto the six curved surfaces of its surface, respectively; acquire a control signal input by the user; and display, on the display 14 and according to the control signal, the image of the target area in the three-dimensional closed model corresponding to the control signal.
Preferably, in the embodiment of the present invention, the three-dimensional closed model is a spherical model, and the upper curved surface 1 and the lower curved surface 2 are each shaped as a spherical cap.
Preferably, in the embodiment of the present invention, overlapping dual-camera fusion regions 6 are provided between adjacent curved surfaces of the surface of the three-dimensional closed model; the processor 12 is further configured to: fuse the images located in the dual-camera fusion regions 6.
Preferably, in the embodiment of the present invention, the processor 12 is specifically configured to: fuse the images in the dual-camera fusion region 6 based on image fusion technology, so as to equalize the illumination brightness of the mapped image on the surface of the three-dimensional closed model after fusion.
Preferably, in the embodiment of the present invention, the camera 11 is a wide-angle camera, and an angle of view of the wide-angle camera is not less than 120 °; the processor 12 is further configured to: and calibrating the wide-view camera.
Preferably, in the embodiment of the present invention, the image comprises a plurality of fields acquired at different times; the processor 12 is further configured to: preprocess the image to eliminate the inter-field information deviation among these fields.
Preferably, in the embodiment of the present invention, the processor 12 is connected to the camera module 10 through a PCI-E bus.
The camera module 10 acquires images through the cameras 11; the memory 13 stores the preset three-dimensional closed model; the display 14 displays images; and the processor 12 acquires the images from the camera module 10, maps them into a panoramic image according to the three-dimensional closed model, and finally displays the image according to the control signal input by the user. That is, the processor 12 cooperates with the camera module 10, the display 14 and the memory 13 to implement the 720-degree view field environment situation perception method provided by the embodiment of the present invention; for details, reference may be made to the description of the corresponding method embodiments, which is not repeated herein.
The embodiments are described in a progressive manner, each embodiment focuses on differences from other embodiments, and the same or similar parts among the embodiments are referred to each other. The device disclosed by the embodiment corresponds to the method disclosed by the embodiment, so that the description is simple, and the relevant points can be referred to the method part for description.
Those of skill would further appreciate that the various illustrative elements and algorithm steps described in connection with the embodiments disclosed herein may be implemented as electronic hardware, computer software, or combinations of both, and that the various illustrative components and steps have been described above generally in terms of their functionality in order to clearly illustrate this interchangeability of hardware and software. Whether such functionality is implemented as hardware or software depends upon the particular application and design constraints imposed on the implementation. Skilled artisans may implement the described functionality in varying ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the present invention.
The steps of a method or algorithm described in connection with the embodiments disclosed herein may be embodied directly in hardware, in a software module executed by a processor, or in a combination of the two. A software module may reside in Random Access Memory (RAM), Read Only Memory (ROM), electrically programmable ROM, electrically erasable programmable ROM, registers, a hard disk, a removable disk, a CD-ROM, or any other form of storage medium known in the art.
Finally, it should also be noted that, herein, relational terms such as first and second, and the like may be used solely to distinguish one entity or action from another entity or action without necessarily requiring or implying any actual such relationship or order between such entities or actions. Also, the terms "comprises," "comprising," or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element defined by the phrase "comprising an … …" does not exclude the presence of other identical elements in a process, method, article, or apparatus that comprises the element.
The 720-degree view field environment situation sensing method and the 720-degree view field environment situation sensing system provided by the invention are described in detail above. The principles and embodiments of the present invention are explained herein using specific examples, which are presented only to assist in understanding the method and its core concepts. It should be noted that, for those skilled in the art, it is possible to make various improvements and modifications to the present invention without departing from the principle of the present invention, and those improvements and modifications also fall within the scope of the claims of the present invention.

Claims (10)

1. A720-degree view field environment situation perception method is characterized by comprising the following steps:
acquiring images shot by six cameras in a camera module; the six cameras are distributed in a cube shape;
respectively mapping images shot by the six cameras to six curved surfaces on the surface of the three-dimensional closed model; the curved surfaces comprise a front curved surface, a rear curved surface, a left curved surface, a right curved surface, an upper curved surface and a lower curved surface; the upper curved surface and the lower curved surface are arranged oppositely; the front curved surface, the rear curved surface, the left curved surface and the right curved surface are connected into an annular arc surface which is positioned between the upper curved surface and the lower curved surface; the cameras correspond to the curved surfaces one by one;
acquiring a control signal input by a user;
and displaying, on a display and according to the control signal, the image of the target area in the three-dimensional closed model corresponding to the control signal.
2. The method of claim 1, wherein the three-dimensional closed model is a spherical model, and the upper curved surface and the lower curved surface are each shaped as a spherical cap.
3. The method according to claim 1, wherein the surface of the three-dimensional closed model has overlapping dual-camera fusion regions between adjacent curved surfaces;
after the images captured by the six cameras are respectively mapped onto the six curved surfaces of the surface of the three-dimensional closed model, the method further comprises:
fusing the images located in the dual-camera fusion regions.
4. The method of claim 3, wherein fusing the images located within the dual-camera fusion region comprises:
and fusing the images in the fusion area of the two cameras based on an image fusion technology to balance the illumination brightness of the surface mapping image of the three-dimensional closed model after fusion.
5. The method of claim 1, wherein the camera is a wide view camera having a view angle of not less than 120 °;
before the images captured by the six cameras are mapped onto the six curved surfaces of the surface of the three-dimensional closed model, the method further comprises:
and calibrating the wide-view camera.
6. The method of claim 5, wherein the image comprises fields acquired at a plurality of times;
after the images shot by the six cameras in the camera module are obtained, the method further comprises:
and preprocessing the image to eliminate the inter-field information deviation of the acquired graph at multiple moments in the image.
7. The method of claim 6, wherein the obtaining images captured by six cameras in a camera module comprises:
and images shot by six cameras in the camera module are obtained through the PCI-E bus.
8. A720-degree view field environment situation perception system is characterized by comprising a camera module, a display, a processor and a memory;
the camera module is internally provided with six cameras which are distributed in a cube shape;
the storage is pre-stored with a three-dimensional closed model, and six curved surfaces are divided on the surface of the three-dimensional closed model; the curved surfaces comprise a front curved surface, a rear curved surface, a left curved surface, a right curved surface, an upper curved surface and a lower curved surface; the upper curved surface and the lower curved surface are arranged oppositely; the front curved surface, the rear curved surface, the left curved surface and the right curved surface are connected into an annular arc surface which is positioned between the upper curved surface and the lower curved surface; the cameras correspond to the curved surfaces one by one;
the processor is configured to:
acquiring images shot by six cameras in the camera module;
calling the three-dimensional closed model stored in the memory, and respectively mapping the images shot by the six cameras to the six curved surfaces on the surface of the three-dimensional closed model;
acquiring a control signal input by a user;
and displaying, on the display and according to the control signal, the image of the target area in the three-dimensional closed model corresponding to the control signal.
9. The system of claim 8, wherein the three-dimensional closed model is a spherical model, and the upper curved surface and the lower curved surface are each shaped as a spherical cap.
10. The system according to claim 8, wherein the surface of the three-dimensional closed model has overlapping dual-camera fusion regions between adjacent curved surfaces;
the processor is further configured to:
fuse the images located in the dual-camera fusion regions.
CN201911342552.XA 2019-12-23 2019-12-23 720-degree view field environment situation sensing method and situation sensing system Pending CN110958444A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201911342552.XA CN110958444A (en) 2019-12-23 2019-12-23 720-degree view field environment situation sensing method and situation sensing system


Publications (1)

Publication Number Publication Date
CN110958444A true CN110958444A (en) 2020-04-03

Family

ID=69983663

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201911342552.XA Pending CN110958444A (en) 2019-12-23 2019-12-23 720-degree view field environment situation sensing method and situation sensing system

Country Status (1)

Country Link
CN (1) CN110958444A (en)


Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105611128A (en) * 2015-12-28 2016-05-25 上海集成电路研发中心有限公司 Panorama camera
US20160286138A1 (en) * 2015-03-27 2016-09-29 Electronics And Telecommunications Research Institute Apparatus and method for stitching panoramaic video
CN109819157A (en) * 2017-11-20 2019-05-28 富泰华工业(深圳)有限公司 Panoramic camera and its control method
CN110383843A (en) * 2017-03-22 2019-10-25 高通股份有限公司 The sphere equatorial projection being effectively compressed for 360 degree of videos


Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112788252A (en) * 2020-12-29 2021-05-11 中国科学院长春光学精密机械与物理研究所 720-degree panoramic camera capable of eliminating bottom image shielding
CN112788252B (en) * 2020-12-29 2021-10-22 中国科学院长春光学精密机械与物理研究所 720-degree panoramic camera capable of eliminating bottom image shielding
CN114007017A (en) * 2021-11-18 2022-02-01 浙江博采传媒有限公司 Video generation method and device and storage medium


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
CB02 Change of applicant information

Address after: 215347 7th floor, IIR complex, 1699 Weicheng South Road, Kunshan City, Suzhou City, Jiangsu Province

Applicant after: Kunshan Microelectronics Technology Research Institute

Address before: 215347 7th floor, complex building, No. 1699, Zuchongzhi South Road, Kunshan City, Suzhou City, Jiangsu Province

Applicant before: KUNSHAN BRANCH, INSTITUTE OF MICROELECTRONICS OF CHINESE ACADEMY OF SCIENCES

RJ01 Rejection of invention patent application after publication

Application publication date: 20200403
