CN115689886A - Distributed slicing method and distributed slicing device for framing images - Google Patents

Distributed slicing method and distributed slicing device for framing images

Info

Publication number
CN115689886A
CN115689886A CN202211329074.0A
Authority
CN
China
Prior art keywords
slice
target
image
image data
range
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202211329074.0A
Other languages
Chinese (zh)
Inventor
王焰新
李泽波
沈旭明
傅锦荣
樊旭
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Wuhan Dayun Data Technology Co ltd
Original Assignee
Wuhan Dayun Data Technology Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Wuhan Dayun Data Technology Co ltd filed Critical Wuhan Dayun Data Technology Co ltd
Priority to CN202211329074.0A priority Critical patent/CN115689886A/en
Publication of CN115689886A publication Critical patent/CN115689886A/en
Pending legal-status Critical Current

Abstract

The invention relates to the technical field of geographic information science and provides a distributed slicing method and a distributed slicing device for framing images. The method establishes an R-tree index from the spatial range of each framing image, obtains the target slices corresponding to the map range to be displayed, and calculates the slice spatial range of each target slice in a projection coordinate system from its slice information. The slice spatial range is retrieved through the R-tree index to find the framing images intersecting it; the target intersection of each framing-image spatial range with the slice spatial range is obtained, the corresponding image area on the framing image is calculated, and that area is resampled to obtain image data, which is mapped to the corresponding mapping area of the target slice until image reconstruction of the target slice is complete. This achieves real-time presentation of slice images and addresses the long pre-slicing period, delayed updates, and large disk-space requirements of pre-sliced image data.

Description

Distributed slicing method and distributed slicing device for framing images
Technical Field
The invention relates to the technical field of geographic information science, in particular to a distributed slicing method and a distributed slicing device for a framing image.
Background
Image data is one of the main data products of satellite remote sensing and unmanned aerial vehicle aerial photography. Publishing image data as a spatial service is a common way of sharing and applying it. To enable applications to access and display the image data smoothly, the data is usually sliced and converted into image tiles organized in a pyramid structure. Because image data is high-precision and large in volume, this traditional slicing step typically takes anywhere from several days to several weeks or even months. As applications place ever higher demands on the timeliness of image data and require frequent updates, the traditional service publishing process based on pre-slicing increasingly fails to meet practical requirements.
In view of the above, overcoming the drawbacks of the prior art is an urgent problem in the art.
Disclosure of Invention
The technical problem to be solved by the invention is to provide a distributed slicing method and a distributed slicing device for a framing image, so as to solve problems such as the long pre-slicing period of image data, delayed updates, and the large amount of disk space required.
In order to solve the problems, the invention adopts the following technical scheme:
in a first aspect, a method for distributed slicing of a framing image is provided, including:
acquiring a first spatial range of each framing image data, and establishing an R tree index according to each first spatial range;
acquiring at least one target slice corresponding to a map range to be displayed, and calculating a slice space range of the target slice in a projection coordinate system according to slice information of the target slice;
performing spatial retrieval on the slice spatial range through the R tree index to obtain at least one first spatial range which has an intersection relationship with the slice spatial range, and further determining at least one framing image data which has an intersection relationship with the target slice;
acquiring a target intersection of each first space range and the slice space range, and calculating the image area, in the corresponding framing image data, where the target intersection is located;
and resampling the image area to obtain target image data, and mapping the target image data to a corresponding mapping area of the target slice until the image reconstruction of the target slice is completed.
Further, the obtaining at least one target slice corresponding to the map range to be displayed, and calculating the slice space range of the target slice in the projection coordinate system according to the slice information of the target slice includes:
acquiring a central point coordinate of a map displayed by a client, and calculating a map range according to the central point coordinate and the size of a client window;
acquiring a slice level and a slice size according to a current map scale of a client, acquiring a target slice corresponding to the map range according to the map range and the slice level, taking a column number and a row number of the target slice and the slice level corresponding to the target slice as slice coordinates, and taking the slice coordinates and the slice size as slice information of the slice;
and calculating the slice space range of the target slice in the projection coordinate system according to the slice information of the target slice.
Further, the calculating a slice space range of the target slice in the projection coordinate system according to the slice information of the target slice includes:
acquiring a reference starting point coordinate and a reference end point coordinate of a space range of the whole original image on the projection coordinate system, and calculating a reference width and a reference height of the space range of the original image according to the reference starting point coordinate and the reference end point coordinate;
acquiring the slice width, the slice height and the slice level of the target slice, calculating a first spatial resolution of the target slice in the x-axis direction according to the reference width, the slice width and the slice level, and calculating a second spatial resolution of the target slice in the y-axis direction according to the reference height, the slice height and the slice level;
determining a target starting point coordinate of the target slice according to the reference starting point coordinate, the first spatial resolution, the slice width and the slice column number, and the reference starting point coordinate, the second spatial resolution, the slice height and the slice row number;
determining a target end point coordinate of the target slice according to the reference end point coordinate, the first spatial resolution, the slice width, the slice column number, the reference end point coordinate, the second spatial resolution, the slice height, and the slice row number;
and determining the slice space range according to the target starting point coordinate, the target end point coordinate, the slice width and the slice height.
Further, the performing spatial retrieval on the slice spatial range through the R-tree index to obtain at least one first spatial range having an intersection relationship with the slice spatial range, and determining at least one piece of framing image data having an intersection relationship with the target slice includes:
starting from a root node of the R tree index, judging whether the space range of the slice is intersected with the space range of the child node;
if not, stopping the search of the child node;
if they intersect, the next-level nodes of the child node continue to be searched until all the first spatial ranges intersecting the slice spatial range are found, and at least one piece of framing image data intersecting the target slice is thereby determined;
and allocating a framing image data set for each target slice, and adding the retrieved framing image data to the corresponding framing image data set.
Further, the obtaining a target intersection of each first spatial range and the slice spatial range, and calculating an image area of the corresponding framed image data where the target intersection is located, includes:
under the projection coordinate system, acquiring a target intersection of the first space range and the slice space range, and calculating an intersection space range of the target intersection;
establishing a first coordinate system of the framed image data, and carrying out affine transformation on the intersection space range according to the geographic transformation parameters of the framed image data to obtain the image coordinates and the image size of the intersection space range under the first coordinate system;
and determining the image area in the framing image data according to the image coordinates and the image size under the first coordinate system.
Further, the resampling the image area to obtain target image data, and mapping the target image data to a corresponding mapping area of the target slice includes:
establishing a second coordinate system of the target slice, and performing coordinate conversion on the intersection space range based on the second coordinate system to obtain a mapping coordinate and a mapping size of the intersection space range in the second coordinate system;
under a second coordinate system, determining a mapping area in the target slice according to the mapping coordinates and the mapping size;
and establishing association between the image area and the corresponding mapping area, resampling the image area to obtain target image data, and mapping the target image data into the corresponding mapping area.
Further, the resampling the image area to obtain target image data, and mapping the target image data to a corresponding mapping area of the target slice includes:
adding the target image data and the mapping area corresponding to the target image data to a target image data set;
allocating a memory meeting the storage requirement of each target slice, performing a reconstruction operation on the target image data set to obtain a target slice image, and adding the target slice image into the memory;
and reading the data in the memory area to obtain the corresponding slice image.
Furthermore, after the image reconstruction of the current target slice is completed, the distributed slicing method records the number of times the current target slice has been reconstructed; if the number of reconstructions exceeds a specific threshold, the target slice image is cached on the server, and the next time a reconstruction task for that target slice arrives, the cached target slice image is called directly.
Further, after the image reconstruction of the current target slice is completed, the distributed slicing method acquires a next-stage target slice or a previous-stage target slice of the current target slice, or acquires brother slices which are located around the current target slice, have the same level as the target slice, and have no intersection relation with a map range to be displayed;
and sequentially reconstructing images of the next-stage target slice, the previous-stage target slice or the brother slice.
In a second aspect, the present invention further provides a device for distributed slicing of a framed image, which is used to implement the method for distributed slicing of a framed image in the first aspect, and the device includes:
at least one processor; and a memory communicatively coupled to the at least one processor; wherein the memory stores instructions executable by the at least one processor for performing the distributed slicing method for framed images of the first aspect.
In a third aspect, the present invention further provides a non-transitory computer storage medium storing computer-executable instructions for execution by one or more processors to perform the method for distributed slicing of a framed image according to the first aspect.
The method establishes an R-tree index from the spatial range of each framing image, obtains at least one target slice corresponding to the map range to be displayed, and calculates the slice spatial range of each target slice in a projection coordinate system from its slice information. The slice spatial range is spatially retrieved through the R-tree index to obtain the framing images intersecting it; the target intersection of each framing-image spatial range with the slice spatial range is obtained, the corresponding image area on the framing image is calculated, and that area is resampled to obtain target image data, which is mapped to the corresponding mapping area of the target slice until image reconstruction of the target slice is complete. The invention replaces the traditional pre-slicing processing mode with real-time slicing, shortening the cycle from data storage and data processing to data service publishing, achieving rapid publishing of image data with second-level response, improving operating efficiency for business applications, and providing a strong performance basis for subsequent data applications. It effectively solves problems of the traditional image data service publishing mode such as the long pre-slicing period, lagging data updates, and large disk-space requirements.
Drawings
In order to more clearly illustrate the technical solutions of the embodiments of the present invention, the drawings used in the embodiments of the present invention will be briefly described below. It is obvious that the drawings described below are only some embodiments of the invention, and that for a person skilled in the art, other drawings can be derived from them without inventive effort.
Fig. 1 is a schematic flowchart of a distributed slicing method for a framing image according to an embodiment of the present invention;
FIG. 2 is a schematic flow chart illustrating the step 102 in FIG. 1 according to an embodiment of the present invention;
fig. 3 is a schematic diagram of a slicing pyramid of a distributed slicing method for a frame image according to an embodiment of the present invention;
fig. 4 is a schematic diagram of an image pyramid of a distributed slicing method for a frame image according to an embodiment of the present invention;
fig. 5 is a schematic view of slice division of a distributed slicing method for a framing image according to an embodiment of the present invention;
FIG. 6 is a schematic flow chart diagram of step 104 in FIG. 1 according to an embodiment of the present invention;
FIG. 7 is a schematic flow chart diagram of step 105 in FIG. 1 according to an embodiment of the present invention;
fig. 8 is a schematic diagram illustrating a relationship between a frame image and a slice image in a distributed slicing method for frame images according to an embodiment of the present invention;
fig. 9 is a schematic structural diagram of a distributed slicing apparatus for a framing image according to an embodiment of the present invention.
Detailed Description
In order to make the objects, technical solutions and advantages of the present invention more apparent, the present invention is further described in detail below with reference to the accompanying drawings and embodiments. It should be understood that the specific embodiments described herein are merely illustrative of the invention and are not intended to limit the invention.
In the description of the present invention, the terms "inner", "outer", "longitudinal", "lateral", "upper", "lower", "top", "bottom", and the like indicate orientations or positional relationships based on those shown in the drawings, and are for convenience only to describe the present invention without requiring the present invention to be necessarily constructed and operated in a specific orientation, and thus should not be construed as limiting the present invention.
In addition, the technical features involved in the embodiments of the present invention described below may be combined with each other as long as they do not conflict with each other.
Example 1:
referring to fig. 1, an embodiment 1 of the present invention provides a distributed slicing method for a frame image, including:
step 101: and acquiring a first spatial range of each framing image data, and establishing an R tree index according to each first spatial range.
Step 102: and acquiring at least one target slice corresponding to the map range to be displayed, and calculating the slice space range of the target slice in the projection coordinate system according to the slice information of the target slice.
Step 103: and carrying out spatial retrieval on the slice space range through the R tree index to obtain at least one first space range which has an intersection relation with the slice space range, and further determining at least one framing image data which has an intersection relation with the target slice.
Step 104: and acquiring a target intersection of each first space range and the slice space range, and calculating an image area of the corresponding framing image data of the target intersection.
Step 105: and resampling the image area to obtain target image data, and mapping the target image data to a corresponding mapping area of the target slice until the image reconstruction of the target slice is completed.
The projection coordinate system is a plane coordinate system obtained by processing a geographic coordinate system through a projection method, the position of a certain point on the earth can be represented by adopting x-axis and y-axis coordinates in the projection coordinate system, and the unit distance in the projection coordinate system is usually 1 meter; meanwhile, the space range of the image can be described as a circumscribed rectangle of the image under a projection coordinate system; the first space range and the slicing space range are located in the same projection coordinate system.
The framing image data is the data corresponding to a framing image. A framing image is derived from a mosaicked image: the image acquisition device captures a plurality of local images, these local images are mosaicked into a whole image (i.e., the original image), and the whole image is then re-divided according to a certain proportion to obtain the framing images.
The map range to be displayed is specifically determined according to the request of a client. The map range to be displayed may include one target slice or a plurality of target slices, depending on the actual scene.
For the slice organization in the pyramid model, different levels contain different numbers of slices, a single slice covers a different spatial extent at different levels, and the number of framing images corresponding to a single slice also differs; this is explained in detail below.
In the embodiment of the invention, the target image data corresponding to a target slice is acquired in real time according to the client's request and mapped to the corresponding mapping area of the target slice until image reconstruction of the target slice is completed, so that the traditional pre-slicing processing mode is replaced by real-time slicing. On one hand, this shortens the cycle from data warehousing and data processing to data service publishing, achieves rapid publishing of image data with second-level response, improves operating efficiency for business applications, and provides a strong performance basis for subsequent data applications. On the other hand, it effectively solves problems of the traditional image data service publishing mode such as the long pre-slicing period, lagging data updates, and the large amount of disk space required.
It should be noted that the conventional pre-slicing approach can achieve millisecond-level response, but its slicing step typically takes at least several days and often several weeks or even months, so a great deal of time is consumed in pre-processing. The scheme of the embodiment of the invention achieves second-level response; although this is slower than the conventional approach, a second-level response still meets user requirements, and because no slice pre-processing is needed and slicing is performed in real time according to the client's request, the pre-processing time is saved and timeliness is improved.
The following describes specific implementations of embodiments of the present invention. With reference to fig. 2, step 102 specifically includes the following processes:
step 1021: and acquiring the central point coordinate of the map displayed by the client, and calculating the map range according to the central point coordinate and the size of the client window.
The center point coordinate of the map displayed by the client refers to the map coordinate corresponding to the center of the client window displaying the map, not the center point of the whole map; the client window is the window the client uses to display the map; the map range calculated from these is the map range displayed in the client window.
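As an illustrative sketch only: the text does not spell out the formula, but one common way to derive the map range is to scale the window size by the current map resolution (projection units per pixel) around the window center point. The resolution parameter, the method name, and the {minx, miny, maxx, maxy} return layout are assumptions of this sketch, not part of the patent text.

static double[] mapRange(double centerX, double centerY, int windowWidth, int windowHeight, double resolution) {
    double halfWidth = windowWidth * resolution / 2.0;   // half the window width in projection units
    double halfHeight = windowHeight * resolution / 2.0; // half the window height in projection units
    return new double[] { centerX - halfWidth, centerY - halfHeight, centerX + halfWidth, centerY + halfHeight };
}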
Step 1022: the method comprises the steps of obtaining a slice level and a slice size according to a current map scale of a client, obtaining a target slice corresponding to a map range according to the map range and the slice level, taking a slice column number and a slice row number of the target slice and the slice level corresponding to the target slice as slice coordinates, and taking the slice coordinates and the slice size as slice information of the slice.
Wherein the slice size includes a slice width and a slice height.
Step 1023: and calculating the slice space range of the target slice in the projection coordinate system according to the slice information of the target slice.
The pyramid model is shown in fig. 3 and fig. 4. Fig. 3 shows a 4-layer (zoom 0 to zoom 3) slice pyramid model, and fig. 4 shows the image pyramid model obtained after sampling all slices corresponding to fig. 3. The slice level is the same as the slice pyramid level: the level of the slice pyramid at which a slice is located is called the slice level. The slice pyramid mainly consists of slices of different resolutions generated from the original image according to a certain rule; the higher the level, the lower the resolution, and the position of a slice can be represented by its row number and column number within a given level of the slice pyramid. For example, when the slice size is 256 × 256 pixels, at level zoom0 a single slice can display the entire original image, while at level zoom1 four slices are needed to display the entire original image. When the client window is also 256 × 256 pixels, the range displayed in the client window at level zoom1 may involve 1, 2, or 4 slices, and the image data of the slices concerned must be acquired to complete the slices within the displayed range.
In the case of fig. 5, the client window range covers 12 slices at level 3, where the slice coordinate (3,6-4) represents the slice at level 3, column 6, row 4.
As can be seen from the foregoing, slices at different levels have different resolutions and cover different spatial extents, and within the same level the row number and column number distinguish the different slices.
In this embodiment, the resolution of the target slice is first calculated from the reference starting point coordinate and reference end point coordinate of the whole original image, the slice size, and the slice level, and the spatial range of each target slice is then calculated from that spatial resolution together with the slice row number and slice column number. Step 1023 specifically includes:
and acquiring a reference starting point coordinate and a reference end point coordinate of the space range of the whole original image on the projection coordinate system, and calculating the reference width and the reference height of the space range of the original image according to the reference starting point coordinate and the reference end point coordinate.
The whole original image refers to a whole image formed by inlaying a plurality of images.
Wherein, after the original image is cut by a certain rule, a plurality of framing images can be obtained.
The reference starting point coordinate and the reference end point coordinate can be understood as a pair of coordinates which are arranged in a diagonal manner, and the difference value between the X value of the reference starting point coordinate and the X value of the reference end point coordinate is the reference width; the difference between the Y value of the reference start point coordinate and the Y value of the reference end point coordinate is the reference height.
Further, the slice width, the slice height and the slice level of the target slice are obtained, a first spatial resolution of the target slice in the x-axis direction is calculated according to the reference width, the slice width and the slice level, and a second spatial resolution of the target slice in the y-axis direction is calculated according to the reference height, the slice height and the slice level.
The first spatial resolution may be obtained by dividing the reference width by the width, in pixels, of the entire map at the given slice pyramid level. Denoting the pyramid level as zoom, the entire map is divided into 4^zoom slices at that level of the slice pyramid model (2^zoom along each axis), so the width of the map at that level can be expressed as the product of 2 to the power zoom and the slice width. Similarly, the second spatial resolution may be obtained by dividing the reference height by the height of the entire map at that level, which can be expressed as the product of 2 to the power zoom and the slice height.
In an alternative embodiment, the reference starting point coordinate is the lower-left coordinate of the circumscribed rectangle of the original image in the projection coordinate system, denoted (original_x, original_y), and the reference end point coordinate is the upper-right coordinate of that circumscribed rectangle, denoted (final_x, final_y); the reference width is the length of the original image along the x-axis, which can be written as final_x-original_x, and the reference height is the length along the y-axis, which can be written as final_y-original_y. The slice size is expressed in pixels, the slice width is denoted width, the slice height is denoted height, the slice level is denoted zoom, the slice row number is denoted row, and the slice column number is denoted column. With the first spatial resolution denoted resolution_x and the second spatial resolution denoted resolution_y, the calculation formulas can be expressed as:
resolution_x=(final_x-original_x)/(width*Math.pow(2,zoom));
resolution_y=(final_y-original_y)/(height*Math.pow(2,zoom));
in the formulas, Math.pow(2, zoom) computes 2 raised to the power zoom, i.e. the number of slices along the x-axis (or y-axis) direction of the slice pyramid at level zoom.
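For illustration, the two resolution formulas above can be gathered into a small helper; the method name computeResolutions and the double[] return layout are assumptions of this sketch, while the variable names follow the text:

static double[] computeResolutions(double original_x, double original_y, double final_x, double final_y,
                                   int width, int height, int zoom) {
    double tiles = Math.pow(2, zoom);                          // number of slices along one axis at this level
    double resolution_x = (final_x - original_x) / (width * tiles);   // projection units per pixel, x-axis
    double resolution_y = (final_y - original_y) / (height * tiles);  // projection units per pixel, y-axis
    return new double[] { resolution_x, resolution_y };
}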
In this embodiment, after the first spatial resolution and the second spatial resolution are obtained, the spatial range of each target slice is calculated according to the spatial resolution, the slice row number, and the slice column number, which is specifically implemented as follows:
and determining the target starting point coordinate of the target slice according to the reference starting point coordinate, the first spatial resolution, the slice width and the slice column number, the reference starting point coordinate, the second spatial resolution, the slice height and the slice row number.
And determining the target end point coordinate of the target slice according to the reference end point coordinate, the first spatial resolution, the slice width and the slice column number, and the reference end point coordinate, the second spatial resolution, the slice height and the slice row number.
And determining the slice space range according to the target starting point coordinate, the target end point coordinate, the slice width and the slice height.
The target start point coordinate and the target end point coordinate of the slice are diagonal coordinates of a circumscribed rectangle of the slice in the projection coordinate system, for example, the target start point coordinate is a lower left corner coordinate of the circumscribed rectangle, and the target end point coordinate is an upper right corner coordinate.
The calculation principle of the target starting point coordinate is as follows: the slice column number gives the number of columns between the target starting point and the reference starting point of the original image; multiplying the slice width by the first spatial resolution gives the width of one slice in the projection coordinate system; and adding the resulting total column distance to the x-axis coordinate of the reference starting point of the original image yields the x-axis coordinate of the target starting point. The exact calculation depends on the slice numbering scheme and on how the reference starting point, reference end point, target starting point, and target end point coordinates are chosen.
In the case shown in fig. 5, the slice row numbers increase along the y-axis direction and the column numbers increase along the x-axis direction. In the projection coordinate system, the lower-left corner coordinate of the original image is taken as the reference starting point coordinate, expressed as (original_x, original_y), and the upper-right corner coordinate of the original image is taken as the reference end point coordinate, expressed as (final_x, final_y); the lower-left corner coordinate of the slice spatial range is the target starting point coordinate, expressed as (tileMinx, tileMiny), and the upper-right corner coordinate of the slice spatial range is the target end point coordinate, expressed as (tileMaxx, tileMaxy). In this case the specific calculation is:
tileMinx is calculated by multiplying the first spatial resolution resolution_x by the slice width and by the column number column of the slice, and then adding the x-axis coordinate original_x of the reference starting point coordinate.
tileMiny is calculated by subtracting from the y-axis coordinate final_y of the reference end point coordinate the product of the second spatial resolution resolution_y, the slice height, and the slice row number row plus 1; the row number is increased by 1 in order to obtain the total row distance between the y-axis coordinate of the target starting point and the y-axis coordinate of the reference end point.
In the same way, a target end point coordinate (tileMaxx, tileMaxy) can be obtained, and the calculation formula can be expressed as follows:
tileMinx=original_x+resolution_x*width*column;
tileMiny=final_y-resolution_y*height*(row+1);
tileMaxx=original_x+resolution_x*width*(column+1);
tileMaxy=final_y-resolution_y*height*row.
the spatial extent of the slice is finally obtained.
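The four tile-extent formulas above can likewise be grouped into one helper; the method name tileExtent and the double[] return layout {tileMinx, tileMiny, tileMaxx, tileMaxy} are assumptions of this sketch:

static double[] tileExtent(double original_x, double final_y, double resolution_x, double resolution_y,
                           int width, int height, int column, int row) {
    double tileMinx = original_x + resolution_x * width * column;        // lower-left x of the slice
    double tileMiny = final_y - resolution_y * height * (row + 1);       // lower-left y of the slice
    double tileMaxx = original_x + resolution_x * width * (column + 1);  // upper-right x of the slice
    double tileMaxy = final_y - resolution_y * height * row;             // upper-right y of the slice
    return new double[] { tileMinx, tileMiny, tileMaxx, tileMaxy };
}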
The foregoing mainly explains how to determine the slice spatial range of the target slice. The following explains how to retrieve the framing image data corresponding to the target slice according to that slice spatial range. That is, in step 103, spatial retrieval is performed on the slice spatial range through the R-tree index to obtain at least one first spatial range having an intersection relationship with the slice spatial range, and at least one piece of framing image data having an intersection relationship with the target slice is thereby determined. The specific process includes:
starting from the root node of the R-tree index, judging whether the slice spatial range intersects the spatial range of each child node; if not, stopping the search of that child node; if they intersect, continuing the search with the next-level nodes of the child node until all the first spatial ranges intersecting the slice spatial range are found, and thereby determining at least one piece of framing image data intersecting the target slice.
And allocating a framing image data set for each target slice, and adding the retrieved framing image data to the corresponding framing image data set.
The specific way of searching the R-tree is not limited to depth-first search; breadth-first search or other search strategies may also be used.
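As a hedged illustration of the depth-first variant described above, a minimal in-memory sketch could look as follows; the types Envelope, RTreeNode, and FramingImage, and the intersects test, are assumptions of this sketch rather than part of the patent:

class FramingImage { }                           // handle to one framing image's data (assumed)
class Envelope {
    double minx, miny, maxx, maxy;
    boolean intersects(Envelope o) {             // true when the two ranges overlap
        return o.minx <= maxx && o.maxx >= minx && o.miny <= maxy && o.maxy >= miny;
    }
}
class RTreeNode {
    Envelope envelope;                           // bounding range of this node
    java.util.List<RTreeNode> children;
    FramingImage framingImage;                   // non-null only on leaf nodes
    boolean isLeaf() { return framingImage != null; }
}
static void search(RTreeNode node, Envelope sliceRange, java.util.List<FramingImage> result) {
    if (!node.envelope.intersects(sliceRange)) {
        return;                                  // stop searching this child node
    }
    if (node.isLeaf()) {
        result.add(node.framingImage);           // a first spatial range intersecting the slice spatial range
        return;
    }
    for (RTreeNode child : node.children) {
        search(child, sliceRange, result);       // continue with the next-level nodes
    }
}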
As shown in fig. 8, after the framing image data set corresponding to each target slice is obtained, the image data needs to be sampled and the target slice image corresponding to the target slice constructed. This mainly involves two steps: (1) calculating the image area of the target intersection in the corresponding framing image data and sampling the data in that framing image data to obtain the target image data; (2) filling the target image data into the corresponding position of the target slice. For example, the intersections A to D in fig. 8 can be understood as the target intersections in the image areas of the corresponding framing image data; the target intersection areas are resampled, and the resampled data are filled into the corresponding positions of the target slices to form the slice images.
In the process of calculating the target intersection, the intersection is computed only from the spatial ranges, and a spatial range contains coordinate data rather than image data; the real image data is contained in the framing image data. To fill the sampled data into the target slice, coordinate conversion is therefore needed both to obtain, from the framing image data, the image area of the target intersection in the corresponding framing image data, and to know where the target image data should be filled into the target slice.
Referring to fig. 6, in step 104, acquiring a target intersection of each first spatial range and the slice spatial range and calculating the image area of the corresponding framing image data where the target intersection is located includes:
step 1041: and under the projection coordinate system, acquiring a target intersection of the first space range and the slice space range, and calculating an intersection space range of the target intersection.
In the projection coordinate system, the intersection spatial range can be expressed by the lower-left coordinate (minx, miny) and upper-right coordinate (maxx, maxy) of the intersection's circumscribed rectangle.
Step 1042: and establishing a first coordinate system of the framed image data, and carrying out affine transformation on the intersection space range according to the geographic transformation parameters of the framed image data to obtain the image coordinates of the intersection space range in the first coordinate system.
Step 1043: and determining the image area in the framing image data according to the image coordinates and the size under the first coordinate system.
The geographic transformation parameter geoTransform of the framing image data is an array of six parameters, specifically:
geoTransform[0]: x-axis coordinate of the upper-left corner of the framing image in the projection coordinate system;
geoTransform[1]: resolution of the framing image along the x-axis;
geoTransform[2]: rotation parameter;
geoTransform[3]: y-axis coordinate of the upper-left corner of the framing image in the projection coordinate system;
geoTransform[4]: rotation parameter;
geoTransform[5]: resolution of the framing image along the y-axis.
the first coordinate system is a rectangular coordinate system established by taking the upper left corner of the framing image corresponding to the framing image data as an origin, the width direction as an x axis and the height direction as a y axis, and the framing image is located in the fourth quadrant of the first coordinate system at the moment.
After the affine transformation, the framing-image area corresponding to the intersection circumscribed rectangle is obtained. In the first coordinate system, this area can be expressed by an image size, namely width fwidth and height fheight, and image coordinates, namely the coordinates (fx, fy) of the upper-left corner of the image area.
The specific affine principle is as follows: the distance between the left side of the intersection circumscribed rectangle and the left side of the framing image's extent in the projection coordinate system is expressed as the difference between the lower-left x-axis coordinate minx of the intersection circumscribed rectangle and the x-axis coordinate geoTransform[0] of the framing image's geographic range; this distance is divided by the x-axis resolution geoTransform[1] of the framing image and rounded down to obtain the x-axis coordinate fx of the framing-image area corresponding to the intersection circumscribed rectangle.
Dividing the width of the intersection circumscribed rectangle by the x-axis resolution geoTransform[1] of the framing image gives the width fwidth, in the x-axis direction, of the framing-image area corresponding to the intersection circumscribed rectangle.
Similarly, the y-axis coordinate fy of the framing-image area corresponding to the intersection circumscribed rectangle and the height fheight, in the y-axis direction, of that image area can be calculated.
The specific operation formula is embodied as follows:
fx=(int)((minx-geoTransform[0])/geoTransform[1]);
fy=(int)((maxy-geoTransform[3])/geoTransform[5]);
fwidth=(maxx-minx)/geoTransform[1];
fheight=(maxy-miny)/geoTransform[5].
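The four formulas above can be expressed in code as follows; the method name intersectionToImageArea and the int[] result layout {fx, fy, fwidth, fheight} are assumptions of this sketch, and the sign convention of geoTransform[5] follows the text above:

static int[] intersectionToImageArea(double minx, double miny, double maxx, double maxy, double[] geoTransform) {
    int fx = (int) ((minx - geoTransform[0]) / geoTransform[1]);   // x-axis image coordinate of the area
    int fy = (int) ((maxy - geoTransform[3]) / geoTransform[5]);   // y-axis image coordinate of the area
    int fwidth  = (int) ((maxx - minx) / geoTransform[1]);         // width of the area in pixels
    int fheight = (int) ((maxy - miny) / geoTransform[5]);         // height of the area in pixels
    return new int[] { fx, fy, fwidth, fheight };
}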
referring to fig. 7, in step 105, the resampling is performed on the image area to obtain target image data, and the specific process of mapping the target image data to the corresponding mapping area of the target slice includes:
step 1051: and establishing a second coordinate system of the target slice, and performing coordinate conversion on the intersection space range based on the second coordinate system to obtain the mapping coordinate and the size of the intersection space range in the second coordinate system.
Step 1052: and under a second coordinate system, determining a mapping area in the target slice according to the mapping coordinates.
The second coordinate system is a rectangular coordinate system established by taking the upper left corner of the target slice as an origin, the width direction as an x axis and the height direction as a y axis, and the target slice is positioned in the fourth quadrant in the second coordinate system at the moment.
The region on the slice corresponding to the intersection bounding rectangle may be expressed as the coordinates (tx, ty) in the upper left corner of the slice image, and the width twidth and height theight of the region on the slice image.
The specific conversion process is as follows: the distance between the left side of the intersection circumscribed rectangle and the left side of the slice's geographic range is expressed as the difference between the lower-left x-axis coordinate minx of the intersection circumscribed rectangle and the lower-left x-axis coordinate tminx of the slice's geographic range; this distance is divided by the width of the circumscribed rectangle of the slice's geographic range and multiplied by the original slice width in pixels to obtain the x-axis coordinate tx of the upper-left corner of the intersection circumscribed rectangle on the slice image.
The width of the intersection circumscribed rectangle is divided by the width of the circumscribed rectangle of the slice's geographic range and multiplied by the original slice width to obtain the corresponding width twidth of the intersection circumscribed rectangle on the slice image.
In the same way, ty and theight can be calculated. The specific formulas are as follows:
tx=((minx-tminx)/(tmaxx-tminx))*width;
ty=((maxy-tmaxy)/(tmaxy-tminy))*height;
twidth=((maxx-minx)/(tmaxx-tminx))*width;
theight=((maxy-miny)/(tmaxy-tminy))*height.
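Collecting the four expressions into one helper; the method name intersectionToSliceArea and the int[] result layout {tx, ty, twidth, theight} are assumptions of this sketch, width and height are the slice size in pixels, and (tminx, tminy)-(tmaxx, tmaxy) is the slice's geographic range:

static int[] intersectionToSliceArea(double minx, double miny, double maxx, double maxy,
                                     double tminx, double tminy, double tmaxx, double tmaxy,
                                     int width, int height) {
    int tx = (int) (((minx - tminx) / (tmaxx - tminx)) * width);       // x-coordinate on the slice image
    int ty = (int) (((maxy - tmaxy) / (tmaxy - tminy)) * height);      // y-coordinate on the slice image
    int twidth  = (int) (((maxx - minx) / (tmaxx - tminx)) * width);   // width of the mapping area
    int theight = (int) (((maxy - miny) / (tmaxy - tminy)) * height);  // height of the mapping area
    return new int[] { tx, ty, twidth, theight };
}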
step 1053: and establishing association between the image area and the corresponding mapping area, resampling the image area to obtain target image data, and mapping the target image data into the corresponding mapping area.
It should be noted that coordinate conversion of the intersection spatial range is necessary because the intersection spatial range, the first spatial range, and the slice spatial range contain only coordinate data, not image data. On one hand, the image area of the framing image corresponding to the intersection spatial range must be obtained through coordinate conversion so that it can be resampled to obtain the target image data; on the other hand, it must be determined where in the target slice the target image data is to be filled. Two conversions are therefore required: converting the intersection spatial range in the projection coordinate system into a range in the first coordinate system determines the image area in the framing image data, and converting the intersection spatial range in the projection coordinate system into coordinates in the second coordinate system determines the mapping area corresponding to the image area.
In an actual application scenario, the process of reconstructing the target slice includes adding the target image data and the mapping area corresponding to the target image data to a target image data set; allocating, for each target slice, a memory area meeting its storage requirement, and performing the reconstruction operation on the target image data set to write the target image data into that memory; and reading the data in the memory area to obtain the corresponding target slice image.
The memory for one slice is the slice size multiplied by the number of bytes occupied by each pixel, where the slice size is expressed in pixels. When the slice image has 4 channels, i.e. RGBA, one pixel occupies 4 bytes, so the memory corresponding to one slice is width * height * 32 bits (width * height * 4 bytes).
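As a short sketch of the memory calculation above, assuming a 4-channel RGBA slice of width × height pixels:

int bytesPerPixel = 4;                                           // RGBA: one byte per channel
byte[] sliceBuffer = new byte[width * height * bytesPerPixel];   // e.g. 256 * 256 * 4 = 262144 bytes per slice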
The target slice image is stored in memory, and after the client finishes browsing the map, the corresponding memory space is released. Compared with the traditional slicing approach, this saves the large amount of disk space otherwise needed to cache all slice images.
In a preferred embodiment, in order to reduce the response delay of the slicing service, the distributed slicing method further includes, after the image reconstruction of the current target slice is completed, recording the number of times the current target slice has been reconstructed; if the number of reconstructions exceeds a specific threshold, the target slice image is cached on the server, and the next time a reconstruction task for that target slice arrives, the cached target slice image is called directly.
In another preferred embodiment, in order to reduce response delay of the slicing service, the distributed slicing method further includes, after completing the image reconstruction of the current target slice, acquiring a next-level target slice or a previous-level target slice of the current target slice, or acquiring sibling slices located around the current target slice, having the same level as the target slice, and having no intersection relationship with the map range to be displayed; and sequentially reconstructing the images of the next-stage target slice or the previous-stage target slice or the brother slice.
The next-level target slice is 4 slices of the next level corresponding to the target slice in the slice pyramid model; the upper-level target slice is 1 slice of the corresponding upper level of the target slice in the slice pyramid model; the sibling slices are a plurality of slices of the same level corresponding to the target slice in the slice pyramid model.
While browsing the map, the user may zoom in or zoom out, in which case the image data of slices at levels adjacent to the current target slice needs to be displayed; the user may also pan left, right, up, or down, in which case the image data of slices at the same level adjacent to the current target slice needs to be displayed. By pre-fetching in the manner described above, image data that the user may need can be obtained in advance, increasing the response speed.
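As an illustration only, the slices to pre-reconstruct could be enumerated as follows under the (zoom, column, row) addressing used above; the quadtree parent/child index arithmetic (column * 2, column / 2, etc.) and the method name relatedSlices are assumptions of this sketch, and out-of-range row/column values would still need to be filtered against the pyramid bounds:

static java.util.List<int[]> relatedSlices(int zoom, int column, int row) {
    java.util.List<int[]> slices = new java.util.ArrayList<>();
    // the 4 slices of the next level corresponding to the target slice
    for (int dc = 0; dc <= 1; dc++)
        for (int dr = 0; dr <= 1; dr++)
            slices.add(new int[] { zoom + 1, column * 2 + dc, row * 2 + dr });
    // the 1 slice of the previous level corresponding to the target slice
    if (zoom > 0)
        slices.add(new int[] { zoom - 1, column / 2, row / 2 });
    // the sibling slices of the same level surrounding the target slice
    for (int dc = -1; dc <= 1; dc++)
        for (int dr = -1; dr <= 1; dr++)
            if (dc != 0 || dr != 0)
                slices.add(new int[] { zoom, column + dc, row + dr });
    return slices;
}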
Example 2:
based on the distributed slicing method provided in embodiment 1, this embodiment further provides a distributed slicing apparatus for framing an image, which includes at least one processor and a memory, where the at least one processor and the memory are connected through a data bus, and the memory stores instructions executable by the at least one processor, and the instructions are used to complete the distributed slicing method described in the embodiment after being executed by the processor.
Fig. 9 is a schematic structural diagram of a distributed slicing apparatus for framing an image according to an embodiment of the present invention. The distributed slicing apparatus for the frame image of the present embodiment includes one or more processors 21 and a memory 22. In fig. 9, one processor 21 is taken as an example.
The processor 21 and the memory 22 may be connected by a bus or other means, such as the bus shown in fig. 9.
The memory 22 is a non-volatile computer-readable storage medium, and can be used to store a non-volatile software program and a non-volatile computer-executable program, such as the distributed slicing method of the frame image in embodiment 1. The processor 21 executes the distributed slicing method of the framed images by executing non-volatile software programs and instructions stored in the memory 22.
The memory 22 may include high speed random access memory and may also include non-volatile memory, such as at least one magnetic disk storage device, flash memory device, or other non-volatile solid state storage device. In some embodiments, the memory 22 may optionally include memory located remotely from the processor 21, and these remote memories may be connected to the processor 21 via a network. Examples of such networks include, but are not limited to, the internet, intranets, local area networks, mobile communication networks, and combinations thereof.
The program instructions/modules are stored in the memory 22, and when executed by the one or more processors 21, perform the distributed slicing method for framed images in embodiment 1 described above, for example, perform the steps shown in fig. 1 described above.
It should be noted that, for the information interaction, execution process, and other contents between the modules and units in the apparatus and system, the specific contents may refer to the description in the embodiment of the method of the present invention because the same concept is used as the embodiment of the processing method of the present invention, and are not described herein again.
Those of ordinary skill in the art will appreciate that all or a portion of the steps in the various methods of the embodiments may be performed by associated hardware instructed by a program, and the program may be stored in a computer-readable storage medium, which may include: Read-Only Memory (ROM), Random Access Memory (RAM), magnetic or optical disks, and the like.
The above description is only for the purpose of illustrating the preferred embodiments of the present invention and is not to be construed as limiting the invention, and any modifications, equivalents and improvements made within the spirit and principle of the present invention are intended to be included within the scope of the present invention.

Claims (10)

1. A method for distributed slicing of a framing image, comprising:
acquiring a first spatial range of each framing image data, and establishing an R tree index according to each first spatial range;
acquiring at least one target slice corresponding to a map range to be displayed, and calculating a slice space range of the target slice in a projection coordinate system according to slice information of the target slice;
performing spatial retrieval on the slice spatial range through the R tree index to obtain at least one first spatial range having an intersection relation with the slice spatial range, and further determining at least one framing image data having an intersection relation with the target slice;
acquiring a target intersection of each first space range and the slice space range, and calculating the image area of the corresponding framing image data of the target intersection;
and resampling the image area to obtain target image data, and mapping the target image data to a corresponding mapping area of the target slice until the image reconstruction of the target slice is completed.
2. The distributed slicing method of claim 1, wherein the obtaining at least one target slice corresponding to a map range to be displayed, and the calculating a slice space range of the target slice in the projection coordinate system according to slice information of the target slice comprises:
acquiring a central point coordinate of a map displayed by a client, and calculating a map range according to the central point coordinate and the size of a client window;
acquiring a slice level and a slice size according to a current map scale of a client, acquiring a target slice corresponding to the map range according to the map range and the slice level, taking a slice column number and a slice row number of the target slice and the slice level corresponding to the target slice as slice coordinates, and taking the slice coordinates and the slice size as slice information of the slice;
and calculating the slice space range of the target slice in the projection coordinate system according to the slice information of the target slice.
3. The distributed slicing method of claim 2 wherein said calculating a slice spatial extent of said target slice in a projection coordinate system from slice information of said target slice comprises:
acquiring a reference starting point coordinate and a reference end point coordinate of a space range of the whole original image on the projection coordinate system, and calculating a reference width and a reference height of the space range of the original image according to the reference starting point coordinate and the reference end point coordinate;
acquiring the slice width, the slice height and the slice level of the target slice, calculating a first spatial resolution of the target slice in the x-axis direction according to the reference width, the slice width and the slice level, and calculating a second spatial resolution of the target slice in the y-axis direction according to the reference height, the slice height and the slice level;
determining a target starting point coordinate of the target slice according to the reference starting point coordinate, the first spatial resolution, the slice width and the slice column number, and the reference starting point coordinate, the second spatial resolution, the slice height and the slice row number;
determining a target endpoint coordinate of the target slice according to the reference endpoint coordinate, the first spatial resolution, the slice width, and the slice column number, and the reference endpoint coordinate, the second spatial resolution, the slice height, and the slice row number;
and determining the slice space range according to the target starting point coordinate, the target end point coordinate, the slice width and the slice height.
4. The distributed slicing method as claimed in claim 2, wherein said spatially retrieving the slice space range through the R-tree index to obtain at least one first space range intersecting the slice space range, and further determining at least one piece of framed image data intersecting the target slice comprises:
starting from a root node of the R tree index, judging whether the space range of the slice is intersected with the space range of the child node;
if not, stopping the retrieval of the child node;
if they intersect, the next-level nodes of the child node continue to be searched until all the first spatial ranges intersecting the slice spatial range are found, and at least one piece of framing image data intersecting the target slice is thereby determined;
and allocating a framing image data set for each target slice, and adding the retrieved framing image data into the corresponding framing image data set.
5. The distributed slicing method as claimed in claim 1, wherein said obtaining a target intersection of each of said first spatial ranges and said slice spatial range, and said calculating said target intersection in an image area of the corresponding framed image data comprises:
under the projection coordinate system, acquiring a target intersection of the first space range and the slice space range, and calculating an intersection space range of the target intersection;
establishing a first coordinate system of the framed image data, and carrying out affine transformation on the intersection space range according to the geographic transformation parameters of the framed image data to obtain the image coordinates and the image size of the intersection space range under the first coordinate system;
and determining the image area in the framing image data according to the image coordinates and the image size under the first coordinate system.
6. The distributed slicing method of claim 5, wherein the resampling the image area to obtain target image data, and the mapping the target image data to a corresponding mapping area of the target slice comprises:
establishing a second coordinate system of the target slice, and performing coordinate conversion on the intersection space range based on the second coordinate system to obtain a mapping coordinate and a mapping size of the intersection space range in the second coordinate system;
determining a mapping area in the target slice according to the mapping coordinates and the mapping size under a second coordinate system;
and establishing association between the image area and the corresponding mapping area, resampling the image area to obtain target image data, and mapping the target image data into the corresponding mapping area.
7. The distributed slicing method as claimed in any one of claims 1 to 6, wherein the resampling the image area to obtain target image data, and the mapping the target image data to the corresponding mapping area of the target slice comprises:
adding the target image data and the mapping area corresponding to the target image data to a target image data set;
allocating a memory meeting the storage requirement of each target slice, performing reconstruction operation on the target image data set to obtain a target slice image, and adding the target slice image into the memory;
and reading the data in the memory area to obtain the corresponding target slice image.
8. The distributed slicing method of any of claims 1 to 6, wherein the distributed slicing method:
after the reconstruction of the image of the current target slice is finished, recording the reconstruction times of the current target slice, caching the image of the target slice into a server if the reconstruction times of the current target slice is more than a certain specific value, and directly calling the image of the target slice cached in the server when the reconstruction task of the target slice is carried out next time.
9. The distributed slicing method of any of claims 1 to 6, wherein the distributed slicing method:
after the image reconstruction of the current target slice is completed, acquiring a next-stage target slice or a previous-stage target slice of the current target slice, or acquiring brother slices which are located around the current target slice, have the same level as the target slice and do not have an intersection relation with a map range to be displayed;
and sequentially reconstructing images of the next-stage target slice, the previous-stage target slice or the brother slice.
10. A distributed slicing apparatus for framing images, comprising at least one processor and a memory, wherein the at least one processor and the memory are connected by a data bus, and the memory stores instructions executable by the at least one processor, and the instructions are used for performing the distributed slicing method according to any one of claims 1 to 9 after being executed by the processor.
CN202211329074.0A 2022-10-27 2022-10-27 Distributed slicing method and distributed slicing device for framing images Pending CN115689886A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202211329074.0A CN115689886A (en) 2022-10-27 2022-10-27 Distributed slicing method and distributed slicing device for framing images

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202211329074.0A CN115689886A (en) 2022-10-27 2022-10-27 Distributed slicing method and distributed slicing device for framing images

Publications (1)

Publication Number Publication Date
CN115689886A true CN115689886A (en) 2023-02-03

Family

ID=85100076

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202211329074.0A Pending CN115689886A (en) 2022-10-27 2022-10-27 Distributed slicing method and distributed slicing device for framing images

Country Status (1)

Country Link
CN (1) CN115689886A (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN117218309A (en) * 2023-09-21 2023-12-12 中国铁路设计集团有限公司 Quick image map service manufacturing method considering linear band-shaped characteristics of railway
CN117218309B (en) * 2023-09-21 2024-02-20 中国铁路设计集团有限公司 Quick image map service manufacturing method considering linear band-shaped characteristics of railway

Similar Documents

Publication Publication Date Title
CN109977192B (en) Unmanned aerial vehicle tile map rapid loading method, system, equipment and storage medium
CN101388043B (en) OGC high performance remote sensing image map service method based on small picture
CN110070613B (en) Large three-dimensional scene webpage display method based on model compression and asynchronous loading
CN112256897B (en) Vector tile loading method in three-dimensional scene
US20120299920A1 (en) Rendering and Navigating Photographic Panoramas with Depth Information in a Geographic Information System
CN110956673A (en) Map drawing method and device
CN102509022B (en) Method for quickly constructing raster database facing to Virtual Earth
CN105786942A (en) Geographic information storage system based on cloud platform
CN112686997B (en) WebGIS-based three-dimensional model data analysis display platform and method
CN112115534A (en) Method for converting three-dimensional house model into two-dimensional vector plane with height attribute
CN113626550B (en) Image tile map service method based on triple bidirectional index and optimized cache
CN111949817A (en) Crop information display system, method, equipment and medium based on remote sensing image
CN111354084A (en) Network geographic information service system based on three-dimensional model tiles
CN110990612B (en) Method and terminal for rapidly displaying vector big data
CN109859109B (en) Series scale PDF map seamless organization and display method
CN110647596B (en) Map data processing method and device
CN110110248B (en) Computer system for realizing display of panoramic image along electronic map
CN113535867A (en) Vector tile generation method and system adaptive to multiple data sources
CN115689886A (en) Distributed slicing method and distributed slicing device for framing images
CN112085826A (en) Efficient three-dimensional space grid rendering method and device
CN113254559A (en) Equipment site selection method based on geographic information system
US8896601B1 (en) Projecting geographic data from a spherical surface to two-dimensional cartesian space
Antoniou et al. Tiled vectors: A method for vector transmission over the web
CN112487129A (en) Visualization method and device for mass remote sensing vector data
CN110706240A (en) Unmanned aerial vehicle image data batch cropping method based on small pattern spots

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination