CN105761208A - Picture fusing and splicing method - Google Patents


Info

Publication number
CN105761208A
CN105761208A (application CN201610077363.4A)
Authority
CN
China
Prior art keywords
picture
splicing
pictures
data
merged
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN201610077363.4A
Other languages
Chinese (zh)
Other versions
CN105761208B (en)
Inventor
陈利军
王琴萍
季惟婷
吴昊阳
俞琼
李斌
俞蔚
余刚
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Zhejiang Kelan Information Technology Co Ltd
Original Assignee
Zhejiang Kelan Information Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Zhejiang Kelan Information Technology Co Ltd filed Critical Zhejiang Kelan Information Technology Co Ltd
Priority to CN201610077363.4A
Publication of CN105761208A
Application granted
Publication of CN105761208B
Active legal status
Anticipated expiration


Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T3/00: Geometric image transformation in the plane of the image
    • G06T3/40: Scaling the whole image or part thereof
    • G06T3/4038: Scaling the whole image or part thereof for image mosaicing, i.e. plane images composed of plane sub-images
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T5/00: Image enhancement or restoration
    • G06T5/50: Image enhancement or restoration by the use of more than one image, e.g. averaging, subtraction

Abstract

The invention discloses a picture fusing and splicing method and relates to the field of computer technology. The method comprises the following steps: (1) the picture count G*G is calculated with G = int(N/A), and each splicing unit is processed in a loop; (2) within each basic splicing unit, the pictures are first spliced and fused in the horizontal direction, so that the picture data of each row is merged into one picture, row by row; the row strips are then spliced and fused in the vertical direction, so that the A*A pictures of the unit are merged into a single picture; (3) when N is not exactly divisible by A, N%A rows and columns of data remain under the A*A splicing rule; the picture count is kept at G*G = int(N/A)*int(N/A) by absorbing the leftover pictures into the nearest splicing units, so the size of a splicing unit at the edge ranges between A*A and 2A*2A. The method reduces the memory occupied during screenshot capture and improves the stability of the capture.

Description

Picture fusing and splicing method
Technical field
The present invention relates to the field of computer technology, and in particular to a picture fusing and splicing method.
Background technology
Conventional high-definition screenshot capture of massive three-dimensional data requires the data to be loaded completely; memory resources are sometimes exhausted, so the picture is rendered abnormally or cannot be generated at all. In existing high-definition screenshot schemes, it is common practice to cut the screen area into multiple data blocks and capture each block separately. However, this practice presupposes that only a small piece of data is involved, for example a zoomed-in view of a single neighbourhood or of one house or building, and performs no memory management; in such captures excessive memory usage rarely occurs and stability is guaranteed. But when the data of a whole city, or even a whole country, must be captured, a big-data bottleneck is encountered: memory usage becomes high during high-definition capture and the capture becomes unstable.
Summary of the invention
To address the shortcomings of high memory usage and unstable screenshot capture in the prior art, the present invention provides a picture fusing and splicing method.
In order to solve the above technical problem, the present invention is implemented through the following technical solution:
A picture fusing and splicing method, comprising the following steps:
(1) The pictures form an N*N grid of scattered files and the basic splicing unit is A*A. From the row/column count N*N and the basic splicing unit A*A, the picture count G*G is calculated with G = int(N/A); each splicing unit is then processed in a loop.
(2) Within each basic splicing unit, splicing and fusion is first performed in the horizontal direction: the picture data of each row is merged into one picture, row by row. After all rows have been merged, splicing and fusion is performed in the vertical direction, so that the A*A pictures of the unit are finally synthesized into a single picture.
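The row-then-column merge described in step (2) can be sketched in Python, using NumPy arrays as stand-in tile images; the patent does not prescribe any particular image library, so this is an illustrative sketch only:

```python
import numpy as np

def merge_unit(tiles):
    """Merge a 2-D list of equally sized tile images (H x W arrays):
    first concatenate each row horizontally into a strip, then stack
    the strips vertically -- the order described in step (2)."""
    row_strips = [np.hstack(row) for row in tiles]   # horizontal pass
    return np.vstack(row_strips)                     # vertical pass

# three 2x2 tiles per row, two rows -> one 4x6 merged picture
tiles = [[np.full((2, 2), r * 3 + c) for c in range(3)] for r in range(2)]
merged = merge_unit(tiles)
print(merged.shape)  # (4, 6)
```

Merging each row first keeps at most one row strip plus one tile in memory at a time, which is the point of the two-pass order.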
(3) When N is not exactly divisible by A, N%A rows and columns of data are left over under the A*A splicing rule. The final picture count must remain G*G = int(N/A)*int(N/A), so the leftover pictures are grouped into their nearest splicing units; the size of a splicing unit at the edge therefore ranges between A*A and 2A*2A.
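The grouping of leftover rows and columns into the nearest edge units can be illustrated with a small helper; the function name and the choice to absorb the remainder into the last unit along each axis are assumptions for illustration:

```python
def unit_spans(N, A):
    """Partition N tile rows (or columns) into G = N // A splicing
    units, absorbing the N % A leftover tiles into the last unit, so
    an edge unit spans between A and 2A - 1 tiles."""
    G = N // A
    spans = []
    for g in range(G):
        start = g * A
        # the last unit stretches to N to absorb the N % A leftovers
        end = N if g == G - 1 else start + A
        spans.append((start, end))
    return spans

# 18 tiles with unit size 5 -> 3 units, the last covering 8 tiles
print(unit_spans(18, 5))  # [(0, 5), (5, 10), (10, 18)]
```

Applied to both axes this keeps the unit count at G*G while letting edge units grow up to (2A-1)*(2A-1), i.e. between A*A and 2A*2A as the step states.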
By adopting the above technical solution, the present invention achieves a significant technical effect: the picture fusion method effectively balances the final number of result pictures against the resolution of each one, which simplifies later processing and avoids the instability caused by high memory usage during screenshot capture.
Detailed description of the invention
The present invention is described in further detail through the examples below.
Embodiment 1
A picture fusing and splicing method. The screen-area image data is divided into N equal parts horizontally and vertically, producing N*N equally sized screen data region blocks. Each block is called an image unit and is identified by its row and column indices. Because a screenshot captures the scene image data of the whole screen, the screen is divided into N*N blocks, and the resolution of the result picture must be N times the screen size. On each screen data region block, specific screenshot parameters are set so that the data of that block is magnified N times and captured; the generated picture is the same size as the whole screen, i.e. the rendered view is magnified N times, thereby realizing high-definition screenshot capture. The method further comprises the following steps:
Pre-treatment step (1): set the projection relation and screen resolution of the n-row, n-column image unit, the projection relation being an orthographic projection, and calculate the viewport size and view-frustum size of the image unit.
Pre-treatment step (2): within the image unit of a given row and column, perform data-region division.
Pre-treatment step (3): data-region division involves the following control parameters: V denotes the predetermined data-loading threshold, and m denotes the length, in the vertical direction from the viewpoint position, of the starting partition region. First, within the range from the viewpoint out to distance m, a data preload is performed. When the actual loaded-data count C exceeds the preset threshold V, m is reduced and the region is re-divided; if C still exceeds V after re-division, m is reduced again, and so on repeatedly until C stabilizes within the range of V, at which point m settles at a determined value m'. Once C is within the range of V, loading of the three-dimensional data begins; after loading completes, the data is rendered and the rendering result is saved into the image frame buffer.
Pre-treatment step (4): the data region then continues to be divided. For the range from m' to 2m', judge whether C is within the range of V; if so, the division advances to the adjacent range 2m' to 3m'. For the range 2m' to 3m', judge whether C is within the range of V; if so, the division advances to the adjacent range 3m' to 4m', and this operation repeats. If C exceeds V, the operation of step (3) is performed, and the m' value of the next division range is confirmed from the m' value of the previous, adjacent region. Division stops when the range reaches a preset critical distance, at which point the screenshot of the single image unit is complete.
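The threshold-driven settling of the partition length m in pre-treatment steps (3) and (4) can be sketched as follows. The halving policy and the `load_count` probe are assumptions for illustration: the patent only says m is "reduced" and does not specify by how much.

```python
def settle_partition_length(m, load_count, V):
    """Shrink the partition length m (here by halving, an assumed
    reduction policy) until the actual loaded-data count falls within
    the preset threshold V. load_count(m) stands in for the real
    data-loading probe and is a hypothetical callback."""
    C = load_count(m)
    while C > V:
        m = m / 2           # reduce m and re-partition the region
        C = load_count(m)
    return m, C

# toy probe: loaded count proportional to length (800 items at 500 m),
# matching the concrete numbers given later in Embodiment 4
m_final, C_final = settle_partition_length(500, lambda m: int(1.6 * m), 500)
print(m_final, C_final)  # 250.0 400
```

With this toy probe, one halving (500 to 250 metres) brings the count from 800 down to 400, inside the threshold of 500, mirroring the figures of Embodiment 4.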
Pre-treatment step (5): the image information that the image unit saves into the image frame buffer is processed. Each time a frame of image information is received, the image frame buffer keeps the depth-buffer and color-buffer information of the image, compares it against the received image information, and replaces pixels at the same position according to the depth-buffer information: a pixel with a larger depth value cannot replace one with a smaller depth value; that is, the smaller the depth value, the higher the replacement priority.
Pre-treatment step (6): the final result of the frame-buffer processing is the final imaging information of this image unit; it is written into a picture file and saved, and the picture file is marked as the data of that row and column.
Pre-treatment step (7): perform the capture operation of the next image unit, repeating the operations of pre-treatment steps (3) to (6).
Pre-treatment step (8): after all image units have been written into picture files, there are N*N pictures.
Step (1): the pictures form an N*N grid of scattered files and the basic splicing unit is A*A. From the row/column count N*N and the basic splicing unit A*A, the picture count G*G is calculated with G = int(N/A); each splicing unit is then processed in a loop.
Step (2): within each basic splicing unit, splicing and fusion is first performed in the horizontal direction: the picture data of each row is merged into one picture, row by row. After all rows have been merged, splicing and fusion is performed in the vertical direction, so that the A*A pictures of the unit are finally synthesized into a single picture.
Step (3): when N is not exactly divisible by A, N%A rows and columns of data are left over under the A*A splicing rule. The final picture count must remain G*G = int(N/A)*int(N/A), so the leftover pictures are grouped into their nearest splicing units; the size of a splicing unit at the edge therefore ranges between A*A and 2A*2A.
Embodiment 2
Same as Embodiment 1, except that the projection relation is a perspective projection.
Embodiment 3
Same as Embodiment 1, except that after a certain sub-region is divided, when the ratio of C to V equals 0.2, the value of m is increased.
Embodiment 4
A picture fusing and splicing method. The screen-area image data is divided into 18 equal parts horizontally and vertically, producing 18*18 equally sized screen data region blocks. Each block is called an image unit and is identified by its row and column indices. Because a screenshot captures the scene image data of the whole screen, the screen is divided into 18*18 blocks, and the resolution of the result picture must be 18 times the screen size. On each screen data region block, specific screenshot parameters are set so that the data of that block is magnified 18 times and captured; the generated picture is the same size as the whole screen, i.e. the rendered view is magnified 18 times, thereby realizing high-definition screenshot capture. The method further comprises the following steps:
Pre-treatment step (1): set the projection relation and screen resolution of the n-row, n-column image unit, the projection relation being an orthographic projection, and calculate the viewport size and view-frustum size of the image unit.
Pre-treatment step (2): within the image unit of a given row and column, perform data-region division.
Pre-treatment step (3): data-region division involves the following control parameters: V denotes the predetermined data-loading threshold, and m denotes the length, in the vertical direction from the viewpoint position, of the starting partition region; in this embodiment m is 500 metres, V denotes the member count of the predetermined loading-data linked list, and that count is 500. First, within the range from the viewpoint out to 500 metres, a data preload is performed. The actual loaded-data count C obtained is 800, which exceeds the preset threshold of 500, so m is reduced and the region re-divided; as m decreases, C declines accordingly. If C still exceeds V after re-division, m is reduced again, and so on repeatedly until C stabilizes within the range of V; m settles at the determined value m' = 250 metres, with C correspondingly dropping to 400. Once C is within the range of V, loading of the three-dimensional data begins; after loading completes, the data is rendered and the rendering result is saved into the image frame buffer.
Pre-treatment step (4): the data region then continues to be divided. For the range from m' to 2m', judge whether C is within the range of V; if so, the division advances to the adjacent range 2m' to 3m'. For the range 2m' to 3m', judge whether C is within the range of V; if so, the division advances to the adjacent range 3m' to 4m', and this operation repeats. If C exceeds V, the operation of step (3) is performed, and the m' value of the next division range is confirmed from the m' value of the previous, adjacent region. Division stops when the range reaches a preset critical distance, at which point the screenshot of the single image unit is complete.
Pre-treatment step (5): the image information that the image unit saves into the image frame buffer is processed. Each time a frame of image information is received, the image frame buffer keeps the depth-buffer and color-buffer information of the image, compares it against the received image information, and replaces pixels at the same position according to the depth-buffer information: a pixel with a larger depth value cannot replace one with a smaller depth value; that is, the smaller the depth value, the higher the replacement priority.
Pre-treatment step (6): the final result of the frame-buffer processing is the final imaging information of this image unit; it is written into a picture file and saved, and the picture file is marked as the data of that row and column.
Pre-treatment step (7): perform the capture operation of the next image unit, repeating the operations of pre-treatment steps (3) to (6).
Pre-treatment step (8): after all image units have been written into picture files, there are 18*18 pictures.
Step (1): the pictures form an 18*18 grid of scattered files and the basic splicing unit is 5*5. From the row/column count 18*18 and the basic splicing unit 5*5, the picture count G*G is calculated with G = int(18/5) = 3; each splicing unit is then processed in a loop.
Step (2): within each basic splicing unit, splicing and fusion is first performed in the horizontal direction: the picture data of each row is merged into one picture, row by row. After all rows have been merged, splicing and fusion is performed in the vertical direction, so that the 25 pictures of the unit are finally synthesized into a single picture.
Step (3): since 18 is not exactly divisible by 5, 3 rows and 3 columns of data are left over under the 5*5 splicing rule. The final picture count must remain G*G = 3*3 = 9, so the leftover pictures are grouped into their nearest splicing units; the size of a splicing unit at the edge therefore ranges between 5*5 and 10*10.
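The arithmetic of this embodiment can be checked directly; the choice of absorbing the remainder into the last unit along each axis is an assumption for illustration:

```python
# Verify the 18*18 grid with a 5*5 basic splicing unit (Embodiment 4)
N, A = 18, 5
G = N // A            # int(18/5) = 3 basic units per axis
rem = N % A           # 3 leftover rows/columns

sizes = [A] * G
sizes[-1] += rem      # absorb the remainder into the edge unit
assert G * G == 9             # final picture count stays 3*3 = 9
assert sizes == [5, 5, 8]     # edge unit covers 8 tiles per axis
assert A <= sizes[-1] < 2 * A # edge unit size between 5*5 and 10*10
print(sizes)
```

Here the edge unit is 8*8 tiles, which indeed lies between the stated bounds of 5*5 and 10*10.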
In summary, the foregoing are merely preferred embodiments of the present invention; all equivalent changes and modifications made within the scope of the claims of the present application shall fall within the protection scope of this patent.

Claims (1)

1. A picture fusing and splicing method, characterised in that it comprises the following steps:
(1) The pictures form an N*N grid of scattered files and the basic splicing unit is A*A. From the row/column count N*N and the basic splicing unit A*A, the picture count G*G is calculated with G = int(N/A); each splicing unit is then processed in a loop.
(2) Within each basic splicing unit, splicing and fusion is first performed in the horizontal direction: the picture data of each row is merged into one picture, row by row. After all rows have been merged, splicing and fusion is performed in the vertical direction, so that the A*A pictures of the unit are finally synthesized into a single picture.
(3) When N is not exactly divisible by A, N%A rows and columns of data are left over under the A*A splicing rule. The final picture count must remain G*G = int(N/A)*int(N/A), so the leftover pictures are grouped into their nearest splicing units; the size of a splicing unit at the edge therefore ranges between A*A and 2A*2A.
CN201610077363.4A 2016-02-03 2016-02-03 Picture fusing and splicing method Active CN105761208B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201610077363.4A CN105761208B (en) 2016-02-03 2016-02-03 Picture fusing and splicing method


Publications (2)

Publication Number Publication Date
CN105761208A true CN105761208A (en) 2016-07-13
CN105761208B CN105761208B (en) 2019-03-01

Family

ID=56329959

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201610077363.4A Active CN105761208B (en) 2016-02-03 2016-02-03 Picture fusing and splicing method

Country Status (1)

Country Link
CN (1) CN105761208B (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108881873A * 2018-07-31 2018-11-23 杭州隅千象科技有限公司 Method, device and system for high-definition picture fusion
CN110087054A * 2019-06-06 2019-08-02 北京七鑫易维科技有限公司 Image processing method, apparatus and system

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101212674A (en) * 2006-12-30 2008-07-02 上海奇码数字信息有限公司 Image address mapping method in memory
CN101751659A (en) * 2009-12-24 2010-06-23 北京优纳科技有限公司 Large-volume rapid image splicing method
US20100220209A1 (en) * 1999-08-20 2010-09-02 Yissum Research Development Company Of The Hebrew University System and method for rectified mosaicing of images recorded by a moving camera
CN102129702A (en) * 2010-01-12 2011-07-20 北大方正集团有限公司 Image thumbnail making method and system thereof
CN102713937A (en) * 2009-12-23 2012-10-03 印度孟买技术研究院 System and method for fusing images



Also Published As

Publication number Publication date
CN105761208B (en) 2019-03-01


Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
GR01 Patent grant