CN107948544A - Multi-channel video splicing system and method based on FPGA - Google Patents
- Publication number: CN107948544A (application CN201711212165.5A)
- Authority
- CN
- China
- Prior art keywords
- image
- video
- fpga
- coordinate
- splicing
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Withdrawn
Classifications
- H04N7/181 — Closed-circuit television [CCTV] systems for receiving images from a plurality of remote sources
- H04N23/10 — Cameras or camera modules comprising electronic image sensors, for generating image signals from different wavelengths
- H04N23/88 — Camera processing pipelines; processing colour signals for colour balance, e.g. white-balance circuits or colour temperature control
- H04N5/262 — Studio circuits, e.g. for mixing, switching-over, change of character of image, other special effects
- H04N5/265 — Mixing
- H04N9/64 — Circuits for processing colour signals
Abstract
A multi-channel video splicing system and method based on FPGA. The splicing system includes multiple fisheye cameras, an FPGA chip, and an ASIC video processing chip. All fisheye cameras are fixed on the same horizontal plane at equally spaced angles. The video data of the fisheye cameras is input to the FPGA, where image distortion correction and image fusion are performed; the video data is then transmitted to the ASIC video processing chip, which compresses and encodes the video to form a spliced video that can be transmitted. The invention uses the FPGA as the core of multi-channel video splicing: the FPGA operates in parallel and can perform a large number of operations in a single clock cycle, which makes it particularly suitable for video image processing, so that splicing proceeds synchronously with video capture and the video is processed in real time.
Description
Technical field
The invention belongs to the field of image communication technology, and in particular relates to a multi-channel video splicing system and method based on FPGA.
Background technology
Video splicing technology takes computer technology and image processing as its core. Video images of overlapping regions are acquired synchronously from multiple video capture devices at different positions and angles, and a multi-channel spliced video image is obtained through techniques such as image registration and fusion.
At present, the overwhelming majority of multi-channel video splicing systems are implemented with embedded ARM processors. The processor of an embedded platform executes serially and its processing speed is limited, so it can hardly meet real-time requirements for splicing methods that involve large data volumes and complex computation. The power consumption of an embedded platform is also usually high: video splicing requires processing massive amounts of data, and the large computational load increases processor power consumption, which in turn increases the difficulty of the thermal design of the splicing system. To meet the real-time requirement of video splicing, a high-performance embedded processor must be used, which increases the cost of the design. Multi-channel video splicing based on embedded platforms is therefore difficult to popularize on mobile platforms.
Summary of the invention
In the prior art, the overwhelming majority of multi-channel video splicing systems are implemented with embedded ARM processors; the processor of an embedded platform executes serially, its processing speed is limited, and it can hardly meet real-time requirements for splicing methods with large data volumes and complex computation. To solve this problem, the present invention provides a multi-channel video splicing system based on FPGA. The specific scheme is as follows:
A multi-channel video splicing system based on FPGA, including: multiple fisheye cameras, an FPGA chip, and an ASIC video processing chip. All fisheye cameras are fixed on the same horizontal plane at equally spaced angles. The video data of the fisheye cameras is input to the FPGA, where image distortion correction and image fusion are performed; afterwards the video data is transmitted to the ASIC video processing chip, which compresses and encodes the video to form a spliced video that can be transmitted.
In the above system, the system further includes a real-time clock module and a GPS module, which are used to add the shooting time and shooting location to the video or picture.
In the above system, the system further includes a gyroscope for counteracting shake produced during shooting, so as to improve shooting quality.
In the above system, the system further includes an SD card and a WIFI communication module. An intelligent terminal connects to the system via WIFI, the video data is transmitted to the terminal, and the spliced video can be previewed in real time in the terminal's APP; the terminal can also send a shooting instruction, and after the ASIC video processing chip receives the instruction, the video or picture is stored in the SD card of the device.
With the multi-channel video splicing system based on FPGA provided by the invention, the system can be made into a portable handheld device that is easy to carry: it has its own battery and can shoot panoramic video anywhere, it connects to a mobile phone APP via WIFI for real-time preview, and it also implements video storage, saving the captured video to the SD card.
The present invention also provides a multi-channel video splicing method based on FPGA, which specifically comprises the following steps:
Step S1: initialize the multi-channel video splicing system, and perform initial configuration of each camera of the system simultaneously over the same configuration line; the initial configuration includes the settings of the video processing chip, the FPGA chip, and the image ISP, as well as the initial configuration file of the multiple video splicing modes;
Step S2: capture images, and read the video data of each camera in real time through the FPGA to perform image distortion correction;
Step S3: perform image fusion on the distortion-corrected video data through the FPGA;
Step S4: perform video compression coding on the video data after image fusion, to generate a spliced video that can be transmitted.
In the above method, in step S2 the multi-channel video splicing system performs image acquisition using a full set of independently controllable image ISP processing algorithms, which specifically includes:
First, each camera of the multi-channel video splicing system captures a Raw image in Bayer format;
Then, the Raw image is sent to the ISP and processed by the ISP algorithms, and the image in RGB space is output to the subsequent video acquisition unit;
The independently controllable ISP processing algorithms provided by the method specifically include automatic white balance (AWB) correction, colour correction matrix (CCM), lens shading correction (LSC), and chromatic (purple-fringing) aberration correction (CAC).
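The patent names these ISP stages but does not give their formulas. As a minimal illustrative sketch (not the patent's implementation), the gray-world heuristic is one common way to realize AWB correction, and CCM application is a per-pixel 3 × 3 linear map; all function names here are hypothetical:

```python
import numpy as np

def gray_world_awb(rgb):
    """Gray-world automatic white balance: scale each channel so its
    mean matches the overall mean (a common, simple AWB heuristic)."""
    rgb = rgb.astype(np.float64)
    channel_means = rgb.reshape(-1, 3).mean(axis=0)
    gains = channel_means.mean() / channel_means
    balanced = np.clip(rgb * gains, 0, 255)
    return balanced.astype(np.uint8)

def apply_ccm(rgb, ccm):
    """Apply a 3x3 colour correction matrix to every pixel."""
    out = rgb.astype(np.float64).reshape(-1, 3) @ ccm.T
    return np.clip(out, 0, 255).reshape(rgb.shape).astype(np.uint8)
```

In a real pipeline the CCM coefficients would come from calibration against a colour chart; here the matrix is simply a parameter.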
In the above method, step S2 performs image distortion correction by computing a coordinate mapping table, which specifically comprises the following steps:
Step S21: for the given multi-channel video splicing system, compute the intrinsic parameters and distortion coefficients of each lens and the spatial relationship parameters between adjacent lenses using a chessboard calibration method;
Step S22: using the intrinsic parameters and distortion coefficients of the lenses and the spatial relationship parameters between the lenses, compute the coordinate mapping table between the original images and the target image. The coordinate mapping table stores the one-to-many correspondence between the pixel coordinates of the original image captured by each lens and those of the target image formed by the image splicing method, where the target image is the image with distortion removed;
Step S23: map the original image photographed by each fisheye lens onto the target image according to the coordinate mapping table. Taking the pixels in the target image as target pixel points, use the preset coordinate mapping table to determine the correspondence between the target pixel points and the source pixel points on the original image to be processed;
In the above method, determining in step S2 the correspondence between the target pixel points and the source pixel points on the original image to be processed specifically includes: searching the original image for the source pixel point corresponding to each target pixel point, computed as follows:
Dst (x, y)=Src (Lut_x (x, y), Lut_y (x, y))
where Dst(x, y) denotes the target pixel point at coordinate (x, y); Lut_x(x, y) denotes the coordinate value in the X direction to which the target pixel coordinate (x, y) is mapped in the source image by the preset coordinate mapping table; Lut_y(x, y) denotes the corresponding coordinate value in the Y direction; and Src(Lut_x(x, y), Lut_y(x, y)) denotes the position in the source image to which the target pixel coordinate (x, y) is mapped by the preset coordinate mapping table;
When the coordinate value obtained by mapping a target pixel coordinate into the source image is not an integer, interpolation is performed at the non-integer coordinate to generate the pixel value there from the surrounding integer pixels. The bicubic interpolation algorithm is used, with the following calculation formula:

P(i′, j′) = Σₘ Σₙ P(m, n) · R(m − dx) · R(n − dy),  with m, n ranging over the 4 × 4 sample area
where (i′, j′) denotes the coordinate, containing a fractional part, of the pixel to be computed within the 4 × 4 sample area; P(i′, j′) denotes the new pixel value formed by convolving the 16 pixel values of the 4 × 4 sample area with their respective weights; dx and dy denote the fractional coordinates in the X and Y directions; m and n denote the coordinates in the X and Y directions within the 4 × 4 sample area; P(m, n) denotes the pixel value at coordinate (m, n) in the 4 × 4 sample area; and R( ) denotes the interpolation kernel. Common kernels include the triangle function, the Bell distribution, and the B-spline curve. In this embodiment, interpolation uses the cubic B-spline expression, whose mathematical formula is as follows:

R(x) = (3|x|³ − 6|x|² + 4) / 6,  for 0 ≤ |x| < 1
R(x) = (−|x|³ + 6|x|² − 12|x| + 8) / 6,  for 1 ≤ |x| < 2
R(x) = 0,  otherwise
After projection mapping has been performed for each target pixel point in the target image using bicubic interpolation, the correspondence between the target pixel points and the valid pixels in the original images is saved. The correspondence between target pixel points and source pixel points is one-to-two in the overlapping regions of the target image and one-to-one in the other regions.
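The lookup and interpolation described above can be sketched in software as a behavioural reference (the patent realizes it in FPGA logic); `lut_x` and `lut_y` below stand for the per-target-pixel coordinate mapping tables, and the kernel is the cubic B-spline defined above:

```python
import numpy as np

def bspline_kernel(x):
    """Cubic B-spline interpolation kernel R(x)."""
    x = abs(x)
    if x < 1:
        return (3 * x**3 - 6 * x**2 + 4) / 6.0
    if x < 2:
        return (-x**3 + 6 * x**2 - 12 * x + 8) / 6.0
    return 0.0

def remap_bicubic(src, lut_x, lut_y):
    """Build the target image: each target pixel (x, y) reads the source
    at the (possibly fractional) position given by the coordinate mapping
    table, using 4x4 B-spline bicubic interpolation."""
    h, w = lut_x.shape
    dst = np.zeros((h, w), dtype=np.float64)
    for y in range(h):
        for x in range(w):
            sx, sy = lut_x[y, x], lut_y[y, x]
            ix, iy = int(np.floor(sx)), int(np.floor(sy))
            dx, dy = sx - ix, sy - iy          # fractional parts
            acc = 0.0
            for n in range(-1, 3):            # 4 rows of the sample area
                for m in range(-1, 3):        # 4 columns of the sample area
                    px = min(max(ix + m, 0), src.shape[1] - 1)
                    py = min(max(iy + n, 0), src.shape[0] - 1)
                    acc += src[py, px] * bspline_kernel(m - dx) * bspline_kernel(n - dy)
            dst[y, x] = acc
    return np.clip(np.rint(dst), 0, 255).astype(np.uint8)
```

Because the B-spline weights sum to 1, a constant image is reproduced exactly; on an FPGA the same arithmetic would be pipelined over line buffers rather than looped.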
In the above method, performing image fusion on the distortion-corrected video data through the FPGA in step S3 specifically includes:
According to the coordinate mapping table computed in step S2, there is a certain overlapping region between adjacent pictures in the original images, and for the overlapping region the preset coordinate mapping table stores two groups of correspondences;
Judge whether the number of source pixel points determined for a target pixel point is unique. If the number is unique, the target pixel point is judged not to lie in the overlapping region; otherwise the target pixel point is judged to lie in the overlapping region;
Using the linear weighting method, the pixel values of the two source pixel points in the overlapping region are alpha-blended to obtain the mixed pixel value, computed as follows:
Idst(x, y) = α × Isrc1(x, y) + (1 − α) × Isrc2(x, y)
where Idst(x, y) denotes the pixel value at coordinate (x, y) in the target image, Isrc1(x, y) denotes the pixel value at coordinate (x, y) in original image 1, Isrc2(x, y) denotes the pixel value at coordinate (x, y) in original image 2, and α denotes the weighting coefficient;
The linear weighting method causes the left image to fade out across the overlapping part while the right image fades in, achieving a smooth transition so that the image splice looks natural.
In the above method, in step S4 the video data is compressed by the ASIC video processing chip, and the video is output to the mobile terminal via WIFI.
The multi-channel video splicing system and method based on FPGA provided by the invention can realize high-quality real-time video splicing. Conventional splicing is mostly based on still images; because of the high complexity of the algorithms, image splicing techniques fail to achieve real-time processing when transplanted to video. Using the parallel processing advantage of the FPGA together with improvements to the algorithm, the present invention makes the splicing operation synchronous with video capture and achieves real-time video processing.
Brief description of the drawings
Fig. 1 is a structural diagram of an example of the multi-channel video splicing system based on FPGA of the present invention;
Fig. 2 is a flow chart of an example of the multi-channel video splicing method based on FPGA of the present invention;
Fig. 3 is a schematic diagram of linear weighted image fusion provided by an embodiment of the present invention;
Fig. 4 is a schematic diagram of image interpolation provided by an embodiment of the present invention.
Embodiment
To make the objects, technical solutions, and advantages of the present invention clearer, the present invention is described in more detail below with reference to the embodiments and the accompanying drawings. It should be understood that these descriptions are merely exemplary and are not intended to limit the scope of the invention. In addition, descriptions of well-known structures and technologies are omitted below to avoid unnecessarily obscuring the concepts of the invention.
Video splicing technology takes computer technology and image processing as its core. Video images of overlapping regions are acquired synchronously from multiple video capture devices at different positions and angles, and a multi-channel spliced video image is obtained through techniques such as image registration and fusion. At present, most multi-channel video splicing systems are implemented with embedded ARM processors, which execute serially, have limited processing speed, can hardly meet real-time requirements for splicing methods with large data volumes and complex computation, consume considerable power, and require a costly high-performance processor to reach real-time performance. The present invention instead uses an FPGA as the core of multi-channel video splicing: the FPGA operates in parallel and can perform a large number of operations in a single clock cycle, making it particularly suitable for video image processing.
The multi-channel video splicing system based on FPGA provided by the invention, as shown in Fig. 1, includes: multiple fisheye cameras, an FPGA chip, and an ASIC video processing chip. All fisheye cameras are fixed on the same horizontal plane at equally spaced angles. The video data of the fisheye cameras is input to the FPGA, where image distortion correction and image fusion are performed; afterwards the video data is transmitted to the ASIC video processing chip, which compresses and encodes the video to form a spliced video that can be transmitted.
Preferably, the multi-channel video splicing system of the invention employs 4 fisheye cameras, each with a transverse field of view of 100 degrees and a longitudinal field of view of 200 degrees; each fisheye camera can output video at a resolution of 1920*1080, and the frame rate of the spliced video output in real time is 30 fps.
The 4 cameras are fixed in the capture device at 90-degree intervals and kept on the same horizontal plane, with their relative positions fixed. The quality of the images acquired by the device is therefore high, and situations such as rotated cameras or cameras not on the same horizontal plane, which would prevent the videos from being spliced, do not occur.
Further preferably, the data gathered by the multi-channel video splicing system of the invention is buffered in RAM or FIFOs of the FPGA chip, and the asynchronous signals of the 4 fisheye cameras are synchronized.
In a specific implementation, the system further includes a real-time clock module and a GPS module, which are used to add the shooting time and shooting location to the video or picture.
In a specific implementation, the system further includes a gyroscope for counteracting shake produced during shooting, so as to improve shooting quality.
In a specific implementation, the system further includes an SD card and a WIFI communication module. An intelligent terminal connects via WIFI, the video data is transmitted to the terminal, and the spliced video can be previewed in real time in the terminal's APP; the terminal can also send a shooting instruction, and after the ASIC video processing chip receives the instruction, the video or picture is stored in the SD card of the device. Here the intelligent terminal is a mobile phone, and the spliced video can be previewed through an APP downloaded to the phone.
Further, the multi-channel video splicing system of the invention supports external microphone input, so that audio-video synchronization can be realized; in addition, the HDMI output of the system supports audio output.
Further, the multi-channel video splicing system of the invention supports the RTMP push-stream live protocol; by accessing the Internet through a wired connection, 4K high-definition live broadcasting can be realized.
The present invention also provides a multi-channel video splicing method based on FPGA; as shown in Fig. 2, the method specifically includes the following steps:
Step S1: initialize the multi-channel video splicing system, and perform initial configuration of each camera of the system simultaneously over the same configuration line; the initial configuration includes the settings of the video processing chip, the FPGA chip, and the image ISP, as well as the initial configuration file of the multiple video splicing modes;
Step S2: capture images, and read the video data of each camera in real time through the FPGA to perform image distortion correction;
Step S3: perform image fusion on the distortion-corrected video data through the FPGA;
Step S4: perform video compression coding on the video data after image fusion, to generate a spliced video that can be transmitted.
In a specific implementation, in step S2 the multi-channel video splicing system performs image acquisition using a full set of independently controllable image ISP processing algorithms, which specifically includes:
First, each camera of the multi-channel video splicing system captures a Raw image in Bayer format;
Then, the Raw image is sent to the ISP and processed by the ISP algorithms, and the image in RGB space is output to the subsequent video acquisition unit;
The independently controllable ISP processing algorithms provided by the method specifically include automatic white balance (AWB) correction, colour correction matrix (CCM), lens shading correction (LSC), and chromatic (purple-fringing) aberration correction (CAC).
Based on the independently controllable ISP processing algorithms, the colours of the captured content can be made closer to their true appearance, which improves imaging quality and gives a better user experience.
Further preferably, the multi-channel video splicing system of the invention employs splicing modes for four different scenes, namely macro mode, indoor mode, outdoor mode, and aerial mode.
Different splicing modes set different splicing distances and application ranges, so that the system is suited to video splicing in several kinds of scene and seamless splicing is ensured in different scenes.
Compared with an existing single splicing mode, the multiple video splicing modes used by the system reduce problems such as ghosting and blur that occur during video splicing.
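The patent does not disclose the per-mode parameters; the table below is a purely hypothetical illustration of how a splicing (subject) distance could be associated with each of the four named modes when generating the coordinate mapping tables:

```python
# Hypothetical per-mode stitching parameters (the patent gives no concrete
# values): each scene mode fixes the assumed subject distance in metres.
SPLICE_MODES = {
    "macro":   {"stitch_distance_m": 0.5},
    "indoor":  {"stitch_distance_m": 3.0},
    "outdoor": {"stitch_distance_m": 20.0},
    "aerial":  {"stitch_distance_m": 100.0},
}

def select_mode(scene):
    """Return the splicing parameters for a scene, defaulting to outdoor."""
    return SPLICE_MODES.get(scene, SPLICE_MODES["outdoor"])
```

Selecting the mode before calibration lets the precomputed mapping tables place the zero-parallax surface at the distance most likely to contain the subject.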
In a specific implementation, step S2 performs image distortion correction by computing a coordinate mapping table, which specifically includes the following steps:
Step S21: for the given multi-channel video splicing system, compute the intrinsic parameters and distortion coefficients of each lens and the spatial relationship parameters between adjacent lenses using a chessboard calibration method;
Step S22: using the intrinsic parameters and distortion coefficients of the lenses and the spatial relationship parameters between the lenses, compute the coordinate mapping table between the original images and the target image. The coordinate mapping table stores the one-to-many correspondence between the pixel coordinates of the original image captured by each lens and those of the target image formed by the image splicing method, where the target image is the image with distortion removed;
Step S23: map the original image photographed by each fisheye lens onto the target image according to the coordinate mapping table. Taking the pixels in the target image as target pixel points, use the preset coordinate mapping table to determine the correspondence between the target pixel points and the source pixel points on the original image to be processed;
In the above method, determining in step S2 the correspondence between the target pixel points and the source pixel points on the original image to be processed specifically includes: searching the original image for the source pixel point corresponding to each target pixel point, computed as follows:
Dst (x, y)=Src (Lut_x (x, y), Lut_y (x, y))
where Dst(x, y) denotes the target pixel point at coordinate (x, y); Lut_x(x, y) denotes the coordinate value in the X direction to which the target pixel coordinate (x, y) is mapped in the source image by the preset coordinate mapping table; Lut_y(x, y) denotes the corresponding coordinate value in the Y direction; and Src(Lut_x(x, y), Lut_y(x, y)) denotes the position in the source image to which the target pixel coordinate (x, y) is mapped by the preset coordinate mapping table;
Since the coordinate value obtained by mapping a target pixel coordinate into the source image is not necessarily an integer, the value at a non-integer coordinate falls between integer pixels; considering that image pixel values are represented as integers, the obtained non-integer coordinate must be interpolated to generate the pixel value at that position. The bicubic interpolation algorithm is used to generate the pixel value; a schematic diagram of image interpolation is shown in Fig. 4, and the calculation formula of the bicubic interpolation algorithm is as follows:

P(i′, j′) = Σₘ Σₙ P(m, n) · R(m − dx) · R(n − dy),  with m, n ranging over the 4 × 4 sample area
where (i′, j′) denotes the coordinate, containing a fractional part, of the pixel to be computed within the 4 × 4 sample area; P(i′, j′) denotes the new pixel value formed by convolving the 16 pixel values of the 4 × 4 sample area with their respective weights; dx and dy denote the fractional coordinates in the X and Y directions; m and n denote the coordinates in the X and Y directions within the 4 × 4 sample area; P(m, n) denotes the pixel value at coordinate (m, n) in the 4 × 4 sample area; and R( ) denotes the interpolation kernel. Common kernels include the triangle function, the Bell distribution, and the B-spline curve. In this embodiment, interpolation uses the cubic B-spline expression, whose mathematical formula is as follows:

R(x) = (3|x|³ − 6|x|² + 4) / 6,  for 0 ≤ |x| < 1
R(x) = (−|x|³ + 6|x|² − 12|x| + 8) / 6,  for 1 ≤ |x| < 2
R(x) = 0,  otherwise
After projection mapping has been performed for each target pixel point in the target image using bicubic interpolation, the correspondence between the target pixel points and the valid pixels in the original images is saved. The correspondence between target pixel points and source pixel points is one-to-two in the overlapping regions of the target image and one-to-one in the other regions.
In a specific implementation, performing image fusion on the distortion-corrected video data through the FPGA in step S3 specifically includes:
According to the coordinate mapping table computed in step S2, there is a certain overlapping region between adjacent pictures in the original images, and for the overlapping region the preset coordinate mapping table stores two groups of correspondences;
Judge whether the number of source pixel points determined for a target pixel point is unique. If the number is unique, the target pixel point is judged not to lie in the overlapping region; otherwise the target pixel point is judged to lie in the overlapping region;
Using the linear weighting method, the pixel values of the two source pixel points in the overlapping region are alpha-blended to obtain the mixed pixel value; as shown in Fig. 3, the computation is as follows:
Idst(x, y) = α × Isrc1(x, y) + (1 − α) × Isrc2(x, y)
where Idst(x, y) denotes the pixel value at coordinate (x, y) in the target image, Isrc1(x, y) denotes the pixel value at coordinate (x, y) in original image 1, Isrc2(x, y) denotes the pixel value at coordinate (x, y) in original image 2, and α denotes the weighting coefficient;
The linear weighting method causes the left image to fade out across the overlapping part while the right image fades in, achieving a smooth transition so that the image splice looks natural.
Before image fusion, the linear weighting method needs to determine the fusion region; in the embodiments of the present invention, the chosen fusion width is 128 pixels. A fusion line that is as good as possible is then selected; in the embodiments of the present invention, the fusion line is computed using a dynamic programming method.
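The patent states only that the fusion line is found by dynamic programming within the fusion region, without giving the algorithm. A common realization, sketched here as an assumption rather than the patent's exact method, is a minimum-error vertical seam over the squared difference of the two overlapping images:

```python
import numpy as np

def dp_fusion_line(overlap1, overlap2):
    """Find a vertical fusion line through the overlap by dynamic programming:
    minimise the accumulated squared difference between the two images,
    letting the line move at most one column per row (a minimum-error seam)."""
    cost = (overlap1.astype(np.float64) - overlap2.astype(np.float64)) ** 2
    h, w = cost.shape
    acc = cost.copy()
    for y in range(1, h):
        for x in range(w):
            lo, hi = max(x - 1, 0), min(x + 2, w)
            acc[y, x] += acc[y - 1, lo:hi].min()
    # backtrack from the cheapest cell in the bottom row
    seam = np.empty(h, dtype=int)
    seam[-1] = int(np.argmin(acc[-1]))
    for y in range(h - 2, -1, -1):
        x = seam[y + 1]
        lo, hi = max(x - 1, 0), min(x + 2, w)
        seam[y] = lo + int(np.argmin(acc[y, lo:hi]))
    return seam
```

Blending weights would then ramp across the 128-pixel fusion region centred on this seam, so the transition passes through the columns where the two images already agree.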
In a specific implementation, in step S4 the video data is compressed by the ASIC video processing chip, and the video is output via WIFI to the mobile terminal, where it can be viewed.
Compared with an embedded processor, the FPGA operates in parallel and can perform a large number of operations in a single clock cycle, making it particularly suitable for video image processing; moreover, the FPGA has low power consumption, so using the FPGA as the processing core reduces both power consumption and cost.
The present invention uses the FPGA as the core of multi-channel video splicing; the FPGA operates in parallel and can perform a large number of operations in a single clock cycle, making it particularly suitable for video image processing. The invention designs a low-resource-occupancy video splicing algorithm and independently controllable ISP processing algorithms; neither algorithm uses dedicated mass storage, relying only on the internal storage resources of the FPGA. The algorithm implementation occupies few logic resources while saving the overhead of external memory, and since the FPGA has low power consumption, using the FPGA as the core reduces both power consumption and cost.
The multi-channel video splicing method based on FPGA provided by the present invention can realize high-quality real-time video splicing. Conventional splicing is mostly performed on still images; because of the high algorithmic complexity, image splicing techniques fail to achieve real-time processing when transplanted to video. The present invention exploits the parallel-processing advantage of the FPGA and further improves the algorithm, so that the splicing operation is synchronized with video acquisition, achieving real-time video processing.
It should be appreciated that the above embodiments of the present invention are used only to exemplify or explain the principles of the present invention, and are not to be construed as limiting the present invention. Therefore, any modification, equivalent substitution, improvement, etc. made without departing from the spirit and scope of the present invention shall be included in the protection scope of the present invention. In addition, the appended claims of the present invention are intended to cover all variations and modifications that fall within the scope and boundary of the claims, or the equivalents of such scope and boundary.
Claims (10)
- 1. A multi-channel video splicing system based on FPGA, characterized by comprising a plurality of fisheye cameras, an FPGA chip and an ASIC video processing chip; all the fisheye cameras are fixed on the same horizontal plane at equally spaced angles; the video data of the fisheye cameras is input into the FPGA, where image distortion correction and image fusion are performed; upon completion, the video data is transmitted to the ASIC video processing chip, which compresses and encodes the video to form a spliced video that can be transmitted.
- 2. The system according to claim 1, characterized in that the system further comprises a real-time clock module and a GPS module, the real-time clock module and the GPS module being used to add shooting time and shooting location information to videos or pictures.
- 3. The system according to claim 1, characterized in that the system further comprises a gyroscope for suppressing the shake produced during shooting, so as to improve shooting quality.
- 4. The system according to claim 1, characterized in that the system further comprises an SD card and a WIFI communication module, so that an intelligent terminal can be connected through WIFI and the video data transmitted to the intelligent terminal; the spliced video can be previewed live in the APP of the intelligent terminal, and a shooting instruction can be sent through the intelligent terminal; after the ASIC video processing chip receives the shooting instruction, the video or picture is stored in the SD card of the device.
- 5. A splicing method based on the FPGA-based multi-channel video splicing system according to any one of claims 1 to 4, characterized by comprising the following steps:
Step S1: initializing the multi-channel video splicing system, and simultaneously performing initial configuration of each camera of the multi-channel video splicing system through the same configuration line, the initial configuration including the settings of the video processing chip, the FPGA chip and the image ISP, as well as the initial configuration file of the multi-channel video splicing model;
Step S2: acquiring images, and reading the video data of each camera in real time through the FPGA to perform image distortion correction;
Step S3: performing image fusion on the distortion-corrected video data through the FPGA;
Step S4: performing video compression coding on the fused video data to generate a spliced video that can be transmitted.
- 6. The method according to claim 5, characterized in that in step S2 the multi-channel video splicing system performs image acquisition using a full set of autonomously controllable image ISP processing algorithms, specifically comprising:
first, acquiring Raw images in Bayer format with each camera of the multi-channel video splicing system;
then, sending the Raw images to the ISP and, after ISP algorithm processing, outputting images in the RGB colour space to the subsequent video acquisition unit;
wherein the autonomously controllable ISP processing algorithms specifically include automatic white balance (AWB) correction, colour correction matrix (CCM), lens shading correction (LSC) and purple-fringe chromatic aberration correction (CAC).
- 7. The method according to claim 6, characterized in that step S2 performs image distortion correction using a computed coordinate mapping table, specifically comprising the following steps:
Step S21: for a given multi-channel video splicing system, computing the intrinsic parameters and distortion coefficients of each lens, and the spatial relationship parameters between adjacent lenses, using the checkerboard calibration method;
Step S22: computing the coordinate mapping table between the original images and the target image using the intrinsic parameters and distortion coefficients of the lenses and the spatial relationship parameters between lenses, the coordinate mapping table saving the one-to-many correspondence between pixel coordinates in the original image acquired by each lens and in the target image formed by the image splicing method, wherein the target image is the de-distorted image;
Step S23: mapping the original image captured by each fisheye lens onto the target image according to the coordinate mapping table, taking the pixels in the target image as target pixels and, using the preset coordinate mapping table, determining the correspondence between the target pixels and the source pixels on the original images to be processed.
- 8. The method according to claim 7, characterized in that determining the correspondence between the target pixels and the source pixels on the original images to be processed in step S2 specifically comprises:
searching the original image for the source pixel corresponding to a target pixel, computed as follows:
Dst(x, y) = Src(Lut_x(x, y), Lut_y(x, y))
where Dst(x, y) denotes the target pixel at coordinate (x, y); Lut_x(x, y) denotes the X-direction coordinate in the source image to which the target pixel coordinate (x, y) is mapped by the preset coordinate mapping table; Lut_y(x, y) denotes the corresponding Y-direction coordinate in the source image; and Src(Lut_x(x, y), Lut_y(x, y)) denotes the position in the source image to which the target pixel coordinate (x, y) is mapped by the preset coordinate mapping table;
when the coordinate obtained by mapping the target pixel coordinate into the source image is not an integer, performing an interpolation operation on the non-integer coordinate to generate the pixel value at the non-integer coordinate, using the bicubic interpolation algorithm, whose calculation formula is as follows:
P(i', j') = Σ_m Σ_n P(m, n) · R(m − dx) · R(dy − n)
where (i', j') denotes the pixel coordinate, including its fractional part, of the pixel to be computed within the 4 × 4 sample area; P(i', j') denotes the new pixel value formed by convolving the 16 pixel values of the 4 × 4 sample area with their respective weights; dx denotes the fractional coordinate in the X direction and dy the fractional coordinate in the Y direction; m denotes the X-direction coordinate and n the Y-direction coordinate within the 4 × 4 sample area; P(m, n) denotes the pixel value at coordinate (m, n) in the 4 × 4 sample area; and R(·) denotes the interpolation kernel, common choices being the triangle (linear) kernel, the Bell distribution expression, and the B-spline curve expression; in the present embodiment the B-spline curve expression is used for interpolation, whose mathematical formula is as follows:
R(x) = (3|x|³ − 6|x|² + 4) / 6, for |x| < 1; R(x) = (2 − |x|)³ / 6, for 1 ≤ |x| < 2; R(x) = 0, otherwise;
after projection mapping is performed for each target pixel in the target image using bicubic interpolation, the correspondence between the target pixels and the valid pixels in the original images is saved; the correspondence between target pixels and source pixels is one-to-two in the overlap region of the target image and one-to-one in the other regions.
- 9. The method according to claim 8, characterized in that performing image fusion on the distortion-corrected video data through the FPGA in step S3 specifically comprises:
according to the coordinate mapping table computed in step S2, since a certain overlap region exists between each pair of adjacent pictures in the original images, the preset coordinate mapping table saves two sets of correspondences for the overlap region;
judging whether the number of source pixels determined from a target pixel is unique; if the number is unique, judging that the target pixel is not located in the overlap region; otherwise, judging that the target pixel is located in the overlap region;
performing α blending on the pixel values of the two source pixels of the overlap region using the linear weighting method to obtain the mixed pixel value, computed as follows:
Idst(x, y) = α × Isrc1(x, y) + (1 − α) × Isrc2(x, y)
where Idst(x, y) denotes the pixel value at coordinate (x, y) in the target image; Isrc1(x, y) denotes the pixel value at coordinate (x, y) in original image 1; Isrc2(x, y) denotes the pixel value at coordinate (x, y) in original image 2; and α denotes the weighting coefficient;
the linear weighting method causes the left image of the overlap region to "fade out" and the right image to "fade in", which achieves a smooth transition so that the spliced image looks natural.
- 10. The method according to claim 9, characterized in that in step S4 the video data is compressed by the ASIC video processing chip and the video is output to the mobile terminal through WIFI.
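Of the ISP stages listed in claim 6, the automatic white balance (AWB) step can be illustrated with the common gray-world assumption — a hypothetical minimal sketch, since the claim does not disclose the specific AWB algorithm; the function name and flat pixel-list representation are assumptions:

```python
def gray_world_awb(pixels):
    """Gray-world automatic white balance: assume the scene averages to
    gray, so scale the R and B channels to match the mean of G.
    pixels: a flat list of (r, g, b) tuples."""
    n = len(pixels)
    mean_r = sum(p[0] for p in pixels) / n
    mean_g = sum(p[1] for p in pixels) / n
    mean_b = sum(p[2] for p in pixels) / n
    gain_r = mean_g / mean_r if mean_r else 1.0  # per-channel correction gains
    gain_b = mean_g / mean_b if mean_b else 1.0
    return [(p[0] * gain_r, p[1], p[2] * gain_b) for p in pixels]
```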
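The coordinate-mapping and bicubic resampling of claims 7–8 can be sketched as follows, using the B-spline kernel named in claim 8; this is an illustrative pure-Python version with clamped borders, and the function names and list-of-rows image representation are assumptions rather than the claimed hardware design:

```python
def bspline(x):
    """Cubic B-spline interpolation kernel R(x) used as the bicubic weight."""
    x = abs(x)
    if x < 1:
        return (3 * x**3 - 6 * x**2 + 4) / 6
    if x < 2:
        return (2 - x) ** 3 / 6
    return 0.0

def bicubic_sample(img, fx, fy):
    """Sample `img` (list of rows) at fractional coordinate (fx, fy) by
    convolving a 4x4 neighbourhood with B-spline weights, as in claim 8."""
    ix, iy = int(fx), int(fy)
    dx, dy = fx - ix, fy - iy          # fractional parts
    val = 0.0
    for n in range(-1, 3):             # rows of the 4x4 sample area
        for m in range(-1, 3):         # columns of the 4x4 sample area
            px = min(max(ix + m, 0), len(img[0]) - 1)  # clamp at borders
            py = min(max(iy + n, 0), len(img) - 1)
            val += img[py][px] * bspline(m - dx) * bspline(n - dy)
    return val

def remap(src, lut_x, lut_y):
    """Dst(x, y) = Src(Lut_x(x, y), Lut_y(x, y)) with bicubic resampling."""
    return [[bicubic_sample(src, lut_x[y][x], lut_y[y][x])
             for x in range(len(lut_x[0]))]
            for y in range(len(lut_x))]
```

Because the B-spline kernel is even, R(n − dy) equals R(dy − n), so the sign convention in the weight does not change the result; the key property used here is that the four kernel weights sum to 1 for any fractional offset.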
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201711212165.5A CN107948544A (en) | 2017-11-28 | 2017-11-28 | A kind of multi-channel video splicing system and method based on FPGA |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201711212165.5A CN107948544A (en) | 2017-11-28 | 2017-11-28 | A kind of multi-channel video splicing system and method based on FPGA |
Publications (1)
Publication Number | Publication Date |
---|---|
CN107948544A true CN107948544A (en) | 2018-04-20 |
Family
ID=61950172
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201711212165.5A Withdrawn CN107948544A (en) | 2017-11-28 | 2017-11-28 | A kind of multi-channel video splicing system and method based on FPGA |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN107948544A (en) |
Cited By (12)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN109151341A (en) * | 2018-09-27 | 2019-01-04 | 中国船舶重工集团公司第七0九研究所 | A kind of embedded platform multi-source HD video fusion realization system and method |
CN109726697A (en) * | 2019-01-04 | 2019-05-07 | 北京灵优智学科技有限公司 | Merge the Online Video system and method for AV video communication and the identification of AI material object |
CN110246081A (en) * | 2018-11-07 | 2019-09-17 | 浙江大华技术股份有限公司 | A kind of image split-joint method, device and readable storage medium storing program for executing |
CN110572621A (en) * | 2019-09-26 | 2019-12-13 | 湖州南太湖智能游艇研究院 | Method for splicing panoramic video in real time |
CN110691203A (en) * | 2019-10-21 | 2020-01-14 | 湖南泽天智航电子技术有限公司 | Multi-path panoramic video splicing display method and system based on texture mapping |
CN110910312A (en) * | 2019-11-21 | 2020-03-24 | 北京百度网讯科技有限公司 | Image processing method and device, automatic driving vehicle and electronic equipment |
CN111193877A (en) * | 2019-08-29 | 2020-05-22 | 桂林电子科技大学 | ARM-FPGA (advanced RISC machine-field programmable gate array) cooperative wide area video real-time fusion method and embedded equipment |
CN113409719A (en) * | 2021-08-19 | 2021-09-17 | 南京芯视元电子有限公司 | Video source display method, system, micro display chip and storage medium |
WO2022089083A1 (en) * | 2020-10-29 | 2022-05-05 | 深圳Tcl数字技术有限公司 | Display method for led television wall, and television and computer-readable storage medium |
CN114449245A (en) * | 2022-01-28 | 2022-05-06 | 上海瞳观智能科技有限公司 | Real-time two-way video processing system and method based on programmable chip |
CN114513675A (en) * | 2022-01-04 | 2022-05-17 | 桂林电子科技大学 | Construction method of panoramic video live broadcast system |
WO2023040540A1 (en) * | 2021-09-15 | 2023-03-23 | Oppo广东移动通信有限公司 | Image processing chip, application processing chip, electronic device, and image processing method |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN107948544A (en) | A kind of multi-channel video splicing system and method based on FPGA | |
CN103763479B (en) | The splicing apparatus and its method of real time high-speed high definition panorama video | |
CN105894451B (en) | Panorama Mosaic method and apparatus | |
WO2017016050A1 (en) | Image preview method, apparatus and terminal | |
CN106875339A (en) | A kind of fish eye images joining method based on strip scaling board | |
CN104835118A (en) | Method for acquiring panorama image by using two fish-eye camera lenses | |
EP3674967A1 (en) | Image signal processing method, apparatus and device | |
CN201523430U (en) | Panoramic video monitoring system | |
US20120274738A1 (en) | Method and apparatus for shooting panorama | |
CN107925751A (en) | For multiple views noise reduction and the system and method for high dynamic range | |
CN105205796A (en) | Wide-area image acquisition method and apparatus | |
CN103618881A (en) | Multi-lens panoramic stitching control method and multi-lens panoramic stitching control device | |
CN108200360A (en) | A kind of real-time video joining method of more fish eye lens panoramic cameras | |
CN107302657B (en) | Image capturing system suitable for Internet of Things | |
JP2022524806A (en) | Image fusion method and mobile terminal | |
CN107995421A (en) | A kind of panorama camera and its image generating method, system, equipment, storage medium | |
CN109166076B (en) | Multi-camera splicing brightness adjusting method and device and portable terminal | |
CN107169926A (en) | Image processing method and device | |
CN103544696B (en) | A kind of suture line real-time searching method realized for FPGA | |
CN106157305B (en) | High-dynamics image rapid generation based on local characteristics | |
CN110009567A (en) | For fish-eye image split-joint method and device | |
CN109600556B (en) | High-quality precise panoramic imaging system and method based on single lens reflex | |
CN105635603A (en) | System for mosaicing videos by adopting brightness and color cast between two videos | |
CN106657947A (en) | Image generation method and photographing device | |
WO2022227752A1 (en) | Photographing method and device |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
WW01 | Invention patent application withdrawn after publication |
Application publication date: 20180420 |