TW202008328A - Data processing method and device for map region merging - Google Patents

Data processing method and device for map region merging

Info

Publication number
TW202008328A
TW202008328A (application TW108117080A)
Authority
TW
Taiwan
Prior art keywords
merged
map image
target
transparency
initial
Prior art date
Application number
TW108117080A
Other languages
Chinese (zh)
Other versions
TWI698841B (en)
Inventor
董曉慶
Original Assignee
香港商阿里巴巴集團服務有限公司
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 香港商阿里巴巴集團服務有限公司 filed Critical 香港商阿里巴巴集團服務有限公司
Publication of TW202008328A publication Critical patent/TW202008328A/en
Application granted granted Critical
Publication of TWI698841B publication Critical patent/TWI698841B/en

Links

Images

Classifications

    • G PHYSICS
    • G09 EDUCATION; CRYPTOGRAPHY; DISPLAY; ADVERTISING; SEALS
    • G09B EDUCATIONAL OR DEMONSTRATION APPLIANCES; APPLIANCES FOR TEACHING, OR COMMUNICATING WITH, THE BLIND, DEAF OR MUTE; MODELS; PLANETARIA; GLOBES; MAPS; DIAGRAMS
    • G09B29/00 Maps; Plans; Charts; Diagrams, e.g. route diagram
    • G09B29/003 Maps
    • G09B29/005 Map projections or methods associated specifically therewith

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Mathematical Physics (AREA)
  • Business, Economics & Management (AREA)
  • Educational Administration (AREA)
  • Educational Technology (AREA)
  • General Physics & Mathematics (AREA)
  • Processing Or Creating Images (AREA)
  • Image Generation (AREA)

Abstract

The present specification provides a data processing method and device for map region merging. The method comprises the following steps: drawing a map image of the regions to be merged using lines of a preset transparency to generate an initial merged map image; acquiring the transparency of the pixels in the initial merged map image; and taking pixels whose transparency is the same as the preset transparency as target pixels, and generating a merged target merged map image from the target pixels. The data processing method and device can merge map regions flexibly. The method is simple and fast, needs no complex data processing, and can accurately detect the overlapping parts of the boundaries, so that map regions are merged more accurately and quickly; the merged map region can be interacted with as a whole, which is convenient for subsequent use and widely applicable.
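The filtering described in the abstract rests on one compositing fact: a pixel painted twice with the same semi-transparent line no longer has the preset transparency. A minimal sketch of that arithmetic, assuming standard "source-over" alpha compositing (the patent itself does not mandate a specific compositing rule):

```javascript
// Sketch: why overlap is detectable. Two strokes of the same transparency
// accumulate under source-over compositing, so a doubly painted pixel
// no longer equals the preset value. Pure arithmetic, no canvas needed.
const preset = 0.5;
const over = (src, dst) => src + dst * (1 - src); // source-over alpha rule

const paintedOnce = over(preset, 0);            // non-overlapping boundary
const paintedTwice = over(preset, paintedOnce); // overlapping boundary

console.log(paintedOnce);  // 0.5  -> kept as a target pixel
console.log(paintedTwice); // 0.75 -> filtered out
```

With a preset of 0.5, an overlapped boundary pixel composites to 0.75, which is why testing equality with the preset isolates exactly the non-overlapping boundary.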

Description

Data processing method and device for merging map regions

This specification belongs to the technical field of map data processing, and particularly relates to a data processing method and device for merging map regions.

With the development of computer technology, the emergence of electronic maps has brought great convenience to people's lives. When using an electronic map, there is often a need to merge multiple areas of the map into one, for example merging the three northeastern provinces into a greater Northeast region, merging Zhejiang, Shanghai, and Suzhou into an East China region, or merging many countries into a Middle East region. In the prior art, regions of a map are usually merged by mathematically calculating the coincidence lines between multiple regions; the data processing involved is complicated, inflexible, and poorly applicable. A convenient and fast implementation for merging map regions is therefore badly needed.

The purpose of this specification is to provide a data processing method and device for merging map regions that is simple and fast and meets the technical requirements of map-region merging.

In one aspect, embodiments of this specification provide a data processing method for merging map regions, including: drawing a map image of the regions to be merged using lines of a preset transparency to generate an initial merged map image; obtaining the transparency of the pixels in the initial merged map image; and taking pixels whose transparency is the same as the preset transparency as target pixels and generating a merged target merged map image from the target pixels.

Further, in another embodiment of the method, drawing the map image of the regions to be merged using lines of the preset transparency to generate the initial merged map image includes: obtaining, in a first map-drawing area, the regions to be merged selected by the user; and, according to the regions selected by the user, drawing the map image of those regions in a second map-drawing area using lines of the preset transparency to generate the initial merged map image.

Further, in another embodiment of the method, generating the merged target merged map image from the target pixels includes: removing the pixels other than the target pixels from the initial merged map image; and taking the map image composed of the target pixels in the initial merged map image as the target merged map image.

Further, in another embodiment of the method, after the pixels other than the target pixels are removed from the initial merged map image, the method includes: setting the color at the positions of the removed pixels to be the same as the color of the pixels inside the regions to be merged in the initial merged map image.

Further, in another embodiment of the method, generating the merged target merged map image from the target pixels includes: extracting the coordinate information corresponding to the target pixels and generating the target merged map image from the set of coordinate information corresponding to the target pixels.

Further, in another embodiment of the method, obtaining the transparency of the pixels in the initial merged map image includes: traversing, according to the coordinate information of the regions to be merged, the pixels within those regions in the initial merged map image to obtain the transparency corresponding to each such pixel.

In another aspect, this specification provides a data processing device for merging map regions, including: an initial merged image drawing module, used to draw a map image of the regions to be merged using lines of a preset transparency to generate an initial merged map image; a transparency acquisition module, used to obtain the transparency of the pixels in the initial merged map image; and a target merged map generation module, used to take pixels whose transparency is the same as the preset transparency as target pixels and generate a merged target merged map image from the target pixels.

Further, in another embodiment of the device, the initial merged image drawing module is specifically used to: obtain, in a first map-drawing area, the regions to be merged selected by the user; and, according to the regions selected by the user, draw the map image of those regions in a second map-drawing area using lines of the preset transparency to generate the initial merged map image.

Further, in another embodiment of the device, the target merged map generation module is specifically used to: remove the pixels other than the target pixels from the initial merged map image; and take the map image composed of the target pixels in the initial merged map image as the target merged map image.

Further, in another embodiment of the device, the target merged map generation module is further used to: set the color at the positions of the pixels other than the target pixels to be the same as the color of the pixels inside the regions to be merged in the initial merged map image.

Further, in another embodiment of the device, the target merged map generation module is specifically used to: extract the coordinate information corresponding to the target pixels and generate the target merged map image from the set of coordinate information corresponding to the target pixels.

Further, in another embodiment of the device, the transparency acquisition module is specifically used to: traverse, according to the coordinate information of the regions to be merged, the pixels within those regions in the initial merged map image to obtain the transparency corresponding to each such pixel.

In yet another aspect, embodiments of this specification provide a computer storage medium on which a computer program is stored; when the computer program is executed, the data processing method for merging map regions described above is implemented.

In still another aspect, embodiments of this specification provide a data processing system for merging map regions, including at least one processor and a memory for storing processor-executable instructions; when the processor executes the instructions, the data processing method for merging map regions described above is implemented.

The data processing method, device, and system for merging map regions provided in this specification can detect, through a canvas dyeing technique, whether the transparency of pixels has changed. Because the transparency of the pixels in the overlapping parts differs from that of the pixels in the non-overlapping parts when maps are merged, the pixels with overlapping boundaries are separated from the pixels with non-overlapping boundaries, and the merged map image is then generated from the pixels whose boundaries do not overlap. The method is simple and fast, requires no complex data processing, and can accurately detect the overlapping parts of the boundaries, so that map regions are merged more accurately and quickly; the merged map region can be interacted with as a whole, which is convenient for subsequent use and widely applicable.
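The steps summarized above can be sketched end to end on a tiny raster. This is a hedged illustration rather than the claimed implementation: a plain array stands in for the canvas alpha buffer, source-over compositing is the assumed accumulation rule, and `strokeRect` is a hypothetical stand-in for drawing one region's boundary line with the preset transparency:

```javascript
// Simulate the pipeline: draw two adjacent regions' boundaries with a preset
// transparency, then keep only pixels whose transparency equals the preset.
const W = 8, H = 5;
const alpha = new Float64Array(W * H); // 0 = nothing drawn yet
const PRESET = 0.5;                    // preset line transparency

// Draw one region's boundary (a rectangle outline) with the preset transparency.
function strokeRect(x0, y0, x1, y1) {
  for (let x = x0; x <= x1; x++) for (let y = y0; y <= y1; y++) {
    if (x !== x0 && x !== x1 && y !== y0 && y !== y1) continue; // interior
    const i = y * W + x;
    alpha[i] = PRESET + alpha[i] * (1 - PRESET); // source-over compositing
  }
}

// Step 1: initial merged map image — two regions sharing the edge x = 3.
strokeRect(0, 0, 3, 4);
strokeRect(3, 0, 7, 4);

// Steps 2-3: pixels whose transparency still equals the preset are the
// non-overlapping boundary (target pixels); overlapped edges composite higher.
const target = [];
for (let y = 0; y < H; y++) for (let x = 0; x < W; x++) {
  if (Math.abs(alpha[y * W + x] - PRESET) < 1e-9) target.push([x, y]);
}

console.log(alpha[2 * W + 3]);              // shared-edge pixel: 0.75
console.log(target.some(([x]) => x === 3)); // false: shared edge excluded
```

The collected `target` coordinates play the role of the exported coordinate point set from which the target merged map image is drawn: the shared edge at x = 3 composites to 0.75 and drops out, leaving only the outer contour.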

為了使本技術領域的人員更好地理解本說明書中的技術方案,下面將結合本說明書實施例中的圖式,對本說明書實施例中的技術方案進行清楚、完整地描述,顯然,所描述的實施例僅僅是本說明書一部分實施例,而不是全部的實施例。基於本說明書中的實施例,本領域普通技術人員在沒有作出創造性勞動前提下所獲得的所有其他實施例,都應當屬於本說明書保護的範圍。 隨著電腦網路技術的發展,人們可以透過電子地圖瞭解世界各地的地理位置等資訊。通常情況下地圖或者電子地圖是以省份、國家來進行區域的劃分,但是,有些情況下,可能需要將一些指定的區域在地圖上進行合併,如:將中國地圖中的東北三省合併成東北區,以方便用戶對東北區域的進行整體的瞭解認識。 本說明書實施例提供的地圖區域合併的資料處理方法,透過設置統一的透明度,在合併地圖區域時,重合的部分透明度會發生變化,基於地圖圖像中透明度的變化,將指定的區域進行合併。基於像素點透明度的變化,合併指定的地圖區域,方法簡單快捷,不需要複雜的數學計算,合併後的地圖區域可以作為一個整體進行互動,適用性強。 本發明實施例中合併地圖區域的資料處理過程可以在用戶端上進行如:智慧型手機、平板電腦、智慧型可穿戴設備(智慧型手錶、虛擬實境眼鏡、虛擬實境頭盔等)等電子設備。具體可以在用戶端的瀏覽器端進行,如:PC瀏覽器端、行動瀏覽器端、伺服器端web容器等。 具體地,圖1是本說明書提供的一個實施例中的地圖區域合併的資料處理方法的流程示意圖,如圖1所示,本說明書實施例提供的地圖區域合併的資料處理方法,包括: S2、使用預設透明度的線條繪製待合併區域的地圖圖像,生成初始合併地圖圖像。 待合併區域可以包括多個地圖區域,如:東北三省中的遼寧省、吉林省、黑龍江省三個省份可以表示三個待合併區域,可以在同一個畫布(畫布可以表示用於繪製圖形的組件或區域)或其他地圖繪製區域使用預設透明度的線條繪製出待合併區域的地圖圖像。例如:圖2是本說明書一個實施例中東北地區的初始合併地圖圖像的示意圖,如圖2所示,可以根據東北三省的經緯度資訊,在同一個畫布中按照遼寧省、吉林省、黑龍江省的相對位置分別繪製出遼寧省、吉林省、黑龍江省的地圖圖像,三個省份的地圖圖像共同組成東北地區的初始合併地圖圖像,可以看出初始合併地圖圖像中相鄰的待合併區域之間可能會出現邊界重合。如圖2所示,在繪製待合併區域的地圖圖像時,可以只繪製出各個待合併區域的輪廓線即邊界線,邊界線以內的區域可以表示該待合併區域。 其中,繪製待合併區域的地圖圖像時,本說明書一個實施例中使用預設透明度的線條進行繪製,預設透明度可以根據實際需要進行選取,通常情況下預設透明度可以設置在0-1之間,本說明書一個實施例中預設透明度可以為0.5,這樣設置可以方便後續像素點的檢測。 此外,在繪製各個待合併地圖區域時,不同的待合併地圖區域可以是使用相同顏色的線條繪製,也可以使用不同顏色的線條進行繪製。如:東北三省地圖區域作為待合併區域時,可以在同一個畫布中都使用黑色(或其他顏色如:紅色、藍色等)、透明度為0.5的線條繪製出遼寧省、吉林省、黑龍江省的地圖圖像,將三個省份的地圖圖像的整體作為東北地區的初始合併圖像。也可以使用黑色、透明度為0.5的線條繪製遼寧省的地圖圖像,使用紅色、透明度為0.5的線條繪製吉林省的地圖圖像,使用藍色、透明度為0.5的線條繪製黑龍江省的地圖圖像,將三個省份的地圖圖像的整體作為東北地區的初始合併圖像。即本說明書一個實施例中在繪製待合併區域中各個區域的地圖圖像時,不同區域的地圖圖像可以使用相同的透明度的線條,但是線條顏色可以不進行具體的限定。 在繪製待合併區域的地圖圖像時,可以透過導入GeoJSON資料,接著用程式在畫布上繪製出地圖。GeoJSON是一種對各種地理資料結構進行編碼的格式,是一種地圖資料的組織格式,可以透過解析這種資料繪製出地圖。 S4、獲取所述初始合併地圖圖像中像素點的透明度。 在生成初始合併地圖圖像後,可以獲取初始合併地圖圖像中的各個像素點的透明度。本說明書一個實施例中,在繪製待合併區域的地圖圖像時,可以將待合併區域的經緯度資訊轉化為座標資訊。根據待合併區域的座標資訊,可以遍歷初始合併地圖圖像中待合併區域內的各個像素點,即可以遍歷初始合併地圖圖像中待合併區域內的每個原始資料(包含經緯度資訊的資料點或者座標資訊的資料點)對應的畫布像素點,獲取初始合併地圖圖像中待合併區域內的像素點對應的透明度。可以基於染色技術獲取各個像素點的透明度,具體方法本發明實施例不作具體限定。 透過遍歷待合併區域內部的像素點,獲得各個像素點的透明度變化,方法簡單,並且可以減少待合併區域外部像素點的檢測,提高了資料處理的速度。 S6、將所述像素點的透明度與所述預設透明度相同的像素點作為目標像素點,根據所述目標像素點生成合併後的目標合併地圖圖像。 
在獲取到初始合併地圖圖像中像素點的透明度後,可以將像素點的透明度和在繪製待合併區域的地圖圖像時使用的線條的預設透明度進行對比,將透明度與預設透明度相同的像素點作為目標像素點。例如:在繪製待合併區域的地圖圖像時使用的預設透明度為0.5,則可以將初始合併地圖圖像中透明度為0.5的像素點作為目標像素點。利用目標像素點可以生成合併後的目標合併地圖圖像,完成地圖區域的合併。 本說明書一個實施例中,可以提取出目標像素點的座標資訊,將目標像素點對應的座標資訊導出,利用目標像素點對應的座標資訊的集合生成目標合併地圖圖像。如:若在繪製待合併區域的地圖圖像時使用的預設透明度為0.5,可以將初始合併地圖圖像中透明度為0.5的像素點作為目標像素點,提取目標像素點的座標資訊,可以將目標像素點的座標資訊保存在座標點集合中,可以將所有目標像素點的座標資訊組成的座標點集合導出。根據目標像素點的座標集合繪製出目標合成地圖圖像,目標合成地圖圖像可以由待合併區域的邊界圖像組成,此時生成的待合併區域的邊界圖像中可以不包括待合併區域的邊界重疊的部分。 例如:本說明書一個實施例可以在畫布或者地圖繪製區域使用黑色、透明度為0.5的線條繪製出待合併區域的地圖圖像,待合併區域的地圖圖像中可以包括待合併區域的邊界圖像,待合併區域的地圖圖像可以組成初始合併地圖圖像,具體可以參考圖2中東北地圖的初始合併地圖圖像示意圖。遍歷初始合併地圖圖像中待合併區域內的像素點,待合併區域的邊界處未重疊的部分像素點的透明度為0.5,邊界重疊的部分像素點的透明度通常大於0.5,邊界圖像內部的其他區域因未繪製圖像內容,像素點的透明度為0。可以將透明度為0.5的像素點作為目標像素點,即將邊界圖像中未重疊的部分的像素點作為目標像素點。目標像素點組合在一起,可以表示合併後的待合併區域的邊界圖像,此時,合併後的邊界圖像中不包括各個待合併區域的重疊部分,可以表示各待合併區域整體的邊界輪廓。可以提取並保存目標像素點的座標資訊,將目標像素點對應的座標資訊導出可以生成合併後的目標合併地圖圖像。圖3是本說明書一個實施例中合併後的東北地區的目標合併地圖圖像示意圖,如圖3所示,本說明書一個實施例中合併後的目標地圖圖像可以將各待合併區域的邊界重疊部分去除,只保留邊界未重疊部分,直觀的表示地圖區域合併的效果,方便用戶查看。 圖4(a)-4(b)是本說明書一個實施例中透明度變化檢測示意圖,如圖4(a)所示,圖中將兩個透明度為0.5的圖像進行部分疊加,從圖中可以看出中間疊加部分的圖像的透明度大於其他未疊加部分的圖像的透明度。同樣的,如圖4(b)所示,將兩個沒有邊框的透明度為0.5的圖像的進行部分疊加,可以看出,中間重疊部分的圖像的透明度大於其他未疊加部分的圖像的透明度。本說明書實施例中,透過檢測像素點的透明度的變化,可以準確快速的檢測出待合併區域中哪些部分發生重疊,哪些部分未重疊,以實現快速準確的生成合併後的地圖圖像。 此外,本發明實施例還可以根據待合併區域所處的地理位置,為合併後的目標合併地圖圖像進行命名,例如:圖3中將合併後的東北三省命名為東北區。 本說明書提供的地圖區域合併的資料處理方法,可以透過畫布上染色技術檢測像素點的透明度是否有變化,基於地圖合併時重疊部分的像素點的透明度與未重疊部分的像素點的透明度不同,篩選出邊界重疊的像素點和邊界未重疊的像素點,進一步基於邊界未重疊的像素點生成合併後的地圖圖像。方法簡單快捷,不需要複雜的資料處理,並且能夠準確的檢測出邊界重疊部分,使得地圖區域的合併更加準確快速,合併後的地圖區域可以作為一個整體進行互動,方便後續使用,適用性廣。 在上述實施例的基礎上,本說明書一個實施例中,所述使用預設透明度的線條繪製待合併區域的地圖圖像,生成初始合併地圖圖像,可以包括: 在第一地圖繪製區域獲取用戶選擇的所述待合併區域; 根據用戶選擇的所述待合併區域,在第二地圖繪製區域使用所述預設透明度的線條繪製待合併區域的地圖圖像,生成所述初始合併地圖圖像。 
具體地,用戶在第一地圖繪製區域(如某用戶端的畫布中)中查看地圖時,若需要將部分地圖區域進行合併,如需要將東北三省的地圖區域進行合併,則用戶可以透過點擊或其他操作選擇需要合併的待合併區域。如:用戶在第一地圖繪製區域中透過導入GeoJSON資料繪製出完整的地圖(如繪製出中國地圖),並透過點擊繪製出的中國地圖中的東北三省遼寧省、吉林省、黑龍江省,選擇出待合併區域。識別出用戶選擇的待合併區域後,可以在第二地圖繪製區域(第二地圖繪製區域可以使用隱藏畫布)使用預設透明度的線條繪製出待合併區域的地圖圖像,生成初始合併地圖圖像,具體可以參考上述實施例生成初始合併地圖圖像,此處不再贅述。基於用戶的選擇生成初始合併地圖圖像,用戶可以根據實際需要選擇待合併區域,進行地圖區域的合併,方法簡單,靈活,提高了用戶體驗。 在上述實施例的基礎上,本說明書一個實施例中,所述根據所述目標像素點生成合併後的目標合併地圖圖像,可以包括: 將所述初始合併地圖圖像中的所述目標像素點之外的像素點從所述初始合併地圖圖像中剔除; 將所述初始合併地圖圖像中所述目標像素點組成的地圖圖像作為所述目標合併地圖圖像。 具體地,在確定出目標像素點,基於目標像素點生成目標合併地圖圖像時,可以將初始合併地圖圖像中的非目標像素點(即除去目標像素點之外的像素點)從初始合併地圖圖像中剔除,此時,初始合併地圖圖像中只剩下目標像素點,可以將剩下的目標像素點合併組成合併後的目標合併地圖圖像。如:上述實施例中在第二地圖繪製區域中將非目標像素點剔除,保留目標像素點,則第二地圖繪製區域中剩餘的目標像素點組成的圖像即可以表示合併後的目標合併地圖圖像。 將透明度不符合要求的非目標像素點從初始合併地圖圖像中剔除,剩餘的目標像素點直接生成合併後的目標合併地圖圖像,方法簡單,地圖區域合併準確。 在剔除非目標像素點後,可以將非目標像素點的位置處的顏色設置為與初始合併地圖圖像中邊界內部區域(即待合併區域內部區域)的像素點的顏色相同。這樣可以避免剔除非目標像素點後,非目標像素點的顏色與待合併區域邊界內部其他區域的顏色不同,影響合併後地圖圖像的顯示效果。如:若在繪製待合併區域的地圖圖像生成初始合併地圖圖像時,初始合併地圖圖像具有底色如:待合併區域的邊界內部區域使用紅色像素點填充,則在剔除非目標像素點後,非目標像素點位置處的顏色可能會變成白色或無色,與待合併區域的邊界內部區域其他像素點的顏色不同,影響合併地圖圖像的顯示效果。可以在剔除非目標像素點後,將非目標像素點處的顏色設置為紅色,與待合併區域的邊界內部區域的顏色保持一致,提高合併後地圖圖像的顯示效果。若待合併區域中各個區域邊界內部使用不同的顏色進行填充,即初始合併地圖圖像中非邊界處有多種顏色,則可以取非目標像素點臨近的任意一個像素點的顏色作為該非目標像素點剔除後該位置處的顏色。 本說明書提供的地圖區域合併的資料處理方法,透過設置統一的透明度,在合併地圖區域時,重合的部分透明度會發生變化,基於地圖圖像中透明度的變化,將指定的區域進行合併。基於像素點透明度的變化,合併指定的地圖區域,方法簡單快捷,不需要複雜的數學計算,合併後的地圖區域可以作為一個整體進行互動,適用性強。 本說明書中上述方法的各個實施例均採用遞進的方式描述,各個實施例之間相同相似的部分互相參見即可,每個實施例重點說明的都是與其他實施例的不同之處。相關之處參見方法實施例的部分說明即可。 基於上述所述的地圖區域合併的資料處理方法,本說明書一個或多個實施例還提供一種地圖區域合併的資料處理裝置。所述的裝置可以包括使用了本說明書實施例所述方法的系統(包括分布式系統)、軟體(應用)、模組、組件、伺服器、用戶端等並結合必要的實施硬體的裝置。基於同一創新構思,本說明書實施例提供的一個或多個實施例中的裝置如下面的實施例所述。由於裝置解決問題的實現方案與方法相似,因此本說明書實施例具體的裝置的實施可以參見前述方法的實施,重複之處不再贅述。以下所使用的,術語“單元”或者“模組”可以實現預定功能的軟體和/或硬體的組合。儘管以下實施例所描述的裝置較佳地以軟體來實現,但是硬體,或者軟體和硬體的組合的實現也是可能並被構想的。 具體地,圖5是本說明書提供的地圖區域合併的資料處理裝置一個實施例的模組結構示意圖,如圖5所示,本說明書中提供的地圖區域合併的資料處理裝置包括:初始合併圖像繪製模組51、透明度獲取模組52、目標合併地圖生成模組53,其中: 初始合併圖像繪製模組51,可以用於使用預設透明度的線條繪製待合併區域的地圖圖像,生成初始合併地圖圖像; 透明度獲取模組52,可以用於獲取所述初始合併地圖圖像中像素點的透明度; 
目標合併地圖生成模組53,可以用於將所述像素點的透明度與所述預設透明度相同的像素點作為目標像素點,根據所述目標像素點生成合併後的目標合併地圖圖像。 本說明書實施例提供的地圖區域合併的資料處理裝置,可以透過畫布上染色技術檢測像素點的透明度是否有變化,基於地圖和合併時重疊部分的像素點的透明度與未重疊部分的像素點的透明度不同,篩選出邊界重疊的像素點和邊界未重疊的像素點,進一步基於邊界未重疊的像素點生成合併後的地圖圖像。方法簡單快捷,不需要複雜的資料處理,並且能夠準確的檢測出邊界重疊部分,使得地圖區域的合併更加準確快速,合併後的地圖區域可以作為一個整體進行互動,方便後續使用,適用性廣。 在上述實施例的基礎上,所述初始合併圖像繪製模組具體用於: 在第一地圖繪製區域獲取用戶選擇的所述待合併區域; 根據用戶選擇的所述待合併區域,在第二地圖繪製區域使用所述預設透明度的線條繪製待合併區域的地圖圖像,生成所述初始合併地圖圖像。 本說明書實施例,所述初始合併圖像繪製模組具體用於: 在第一地圖繪製區域獲取用戶選擇的所述待合併區域; 根據用戶選擇的所述待合併區域,在第二地圖繪製區域使用所述預設透明度的線條繪製待合併區域的地圖圖像,生成所述初始合併地圖圖像。 在上述實施例的基礎上,所述目標合併地圖生成模組具體用於: 將所述初始合併地圖圖像中的所述目標像素點之外的像素點從所述初始合併地圖圖像中剔除; 將所述初始合併地圖圖像中所述目標像素點組成的地圖圖像作為所述目標合併地圖圖像。 本說明書實施例,將透明度不符合要求的非目標像素點從初始合併地圖圖像中剔除,剩餘的目標像素點直接生成合併後的目標合併地圖圖像,方法簡單,地圖區域合併準確。 在上述實施例的基礎上,所述目標合併地圖生成模組還用於: 將所述目標像素點之外的像素點的位置處的顏色設置為與所述初始合併地圖圖像中所述待合併區域內部的像素點的顏色相同。 本說明書實施例,在剔除非目標像素點後,將非目標像素點處的顏色設置為紅色,與待合併區域的邊界內部區域的顏色保持一致,提高合併後地圖圖像的顯示效果。 在上述實施例的基礎上,所述目標合併地圖生成模組具體用於: 提取所述目標像素點對應的座標資訊,根據所述目標像素點對應的座標資訊的集合,生成所述目標合併地圖圖像。 本說明書實施例,基於目標像素點的座標資訊的集合生成目標合併地圖圖像,方法快捷,不需要複雜的資料處理,並且能夠準確的檢測出邊界重疊部分,使得地圖區域的合併更加準確快速,合併後的地圖區域可以作為一個整體進行互動,方便後續使用,適用性廣。 在上述實施例的基礎上,所述透明度獲取模組具體用於: 根據所述待合併區域的座標資訊,遍歷所述初始合併地圖圖像中所述待合併區域內的像素點,獲取所述初始合併地圖圖像中所述待合併區域內的像素點對應的透明度。 本說明書實施例,透過遍歷待合併區域內部的像素點,獲得各個像素點的透明度變化,方法簡單,並且可以減少待合併區域外部像素點的檢測,提高了資料處理的速度。 需要說明書的是,上述所述的裝置根據方法實施例的描述還可以包括其他的實施方式。具體的實現方式可以參照相關方法實施例的描述,在此不作一一贅述。 本說明書一個實施例中,還可以提供一種電腦儲存媒體,其上儲存有電腦程式,所述電腦程式被執行時,實現上述實施例中視頻資料的處理方法,例如可以實現如下方法: 使用預設透明度的線條繪製待合併區域的地圖圖像,生成初始合併地圖圖像; 獲取所述初始合併地圖圖像中像素點的透明度; 將所述像素點的透明度與所述預設透明度相同的像素點作為目標像素點,根據所述目標像素點生成合併後的目標合併地圖圖像。 上述對本說明書特定實施例進行了描述。其它實施例在所附申請專利範圍的範圍內。在一些情況下,在申請專利範圍中記載的動作或步驟可以按照不同於實施例中的順序來執行並且仍然可以實現期望的結果。另外,在圖式中描繪的過程不一定要求顯示的特定順序或者連續順序才能實現期望的結果。在某些實施方式中,多任務處理和並行處理也是可以的或者可能是有利的。 本說明書提供的上述實施例所述的方法或裝置可以透過電腦程式實現業務邏輯並記錄在儲存媒體上,所述的儲存媒體可以電腦讀取並執行,實現本說明書實施例所描述方案的效果。 
本說明書實施例提供的上述地圖區域合併的資料處理方法或裝置可以在電腦中由處理器執行相應的程式指令來實現,如使用windows作業系統的c++語言在PC端實現、linux系統實現,或其他例如使用android、iOS系統程式設計語言在智慧型終端實現,以及基於量子電腦的處理邏輯實現等。本說明書提供的一種地圖區域合併的資料處理系統的一個實施例中,圖6是本說明書提供的一種地圖區域合併的資料處理系統實施例的模組結構示意圖,如圖6所示,本說明書實施例提供的地圖區域合併的資料處理系統可以包括處理器61以及用於儲存處理器可執行指令的儲存器62, 處理器61和儲存器62透過匯流排63完成相互間的通訊; 所述處理器61用於呼叫所述儲存器62中的程式指令,以執行上述各地震資料處理方法實施例所提供的方法,例如包括:使用預設透明度的線條繪製待合併區域的地圖圖像,生成初始合併地圖圖像;獲取所述初始合併地圖圖像中像素點的透明度;將所述像素點的透明度與所述預設透明度相同的像素點作為目標像素點,根據所述目標像素點生成合併後的目標合併地圖圖像。 需要說明的是說明書上述所述的裝置、電腦儲存媒體、系統根據相關方法實施例的描述還可以包括其他的實施方式,具體的實現方式可以參照方法實施例的描述,在此不作一一贅述。 本說明書中的各個實施例均採用遞進的方式描述,各個實施例之間相同相似的部分互相參見即可,每個實施例重點說明的都是與其他實施例的不同之處。尤其,對於硬體+程式類實施例而言,由於其基本相似於方法實施例,所以描述的比較簡單,相關之處參見方法實施例的部分說明即可。 本說明書實施例並不局限於必須是符合行業通訊標準、標準電腦資料處理和資料儲存規則或本說明書一個或多個實施例所描述的情況。某些行業標準或者使用自定義方式或實施例描述的實施基礎上略加修改後的實施方案也可以實現上述實施例相同、等同或相近、或變形後可預料的實施效果。應用這些修改或變形後的資料獲取、儲存、判斷、處理方式等獲取的實施例,仍然可以屬於本說明書實施例的可選實施方案範圍之內。 在20世紀90年代,對於一個技術的改進可以很明顯地區分是硬體上的改進(例如,對二極體、電晶體、開關等電路結構的改進)還是軟體上的改進(對於方法流程的改進)。然而,隨著技術的發展,當今的很多方法流程的改進已經可以視為硬體電路結構的直接改進。設計人員幾乎都透過將改進的方法流程程式化到硬體電路中來得到相應的硬體電路結構。因此,不能說一個方法流程的改進就不能用硬體實體模組來實現。例如,可程式化邏輯裝置(Programmable Logic Device, PLD)(例如現場可程式化閘陣列(Field Programmable Gate Array,FPGA))就是這樣一種積體電路,其邏輯功能由用戶對裝置程式化來確定。由設計人員自行程式化來把一個數位系統“整合”在一片PLD上,而不需要請晶片製造廠商來設計和製作專用的積體電路晶片。而且,如今,取代手工地製作積體電路晶片,這種程式化也多半改用“邏輯編譯器(logic compiler)”軟體來實現,它與程式開發撰寫時所用的軟體編譯器相類似,而要編譯之前的原始代碼也得用特定的程式化語言來撰寫,此稱之為硬體描述語言(Hardware Description Language,HDL),而HDL也並非僅有一種,而是有許多種,如ABEL (Advanced Boolean Expression Language)、AHDL(Altera Hardware Description Language)、Confluence、CUPL (Cornell University Programming Language)、HDCal、JHDL(Java Hardware Description Language)、Lava、Lola、MyHDL、PALASM、RHDL(Ruby Hardware Description Language)等,目前最普遍使用的是VHDL(Very-High-Speed Integrated Circuit Hardware Description Language)與Verilog。本領域技術人員也應該清楚,只需要將方法流程用上述幾種硬體描述語言稍作邏輯程式化並程式化到積體電路中,就可以很容易得到實現該邏輯方法流程的硬體電路。 控制器可以按任何適當的方式實現,例如,控制器可以採取例如微處理器或處理器以及儲存可由該(微)處理器執行的電腦可讀程式代碼(例如軟體或韌體)的電腦可讀媒體、邏輯閘、開關、特殊應用積體電路(Application Specific Integrated Circuit,ASIC)、可程式化邏輯控制器和嵌入微控制器的形式,控制器的例子包括但不限於以下微控制器:ARC 
625D、Atmel AT91SAM、Microchip PIC18F26K20以及Silicone Labs C8051F320,儲存器控制器還可以被實現為儲存器的控制邏輯的一部分。本領域技術人員也知道,除了以純電腦可讀程式代碼方式實現控制器以外,完全可以透過將方法步驟進行邏輯程式化來使得控制器以邏輯閘、開關、特殊應用積體電路、可程式化邏輯控制器和嵌入微控制器等的形式來實現相同功能。因此這種控制器可以被認為是一種硬體部件,而對其內包括的用於實現各種功能的裝置也可以視為硬體部件內的結構。或者甚至,可以將用於實現各種功能的裝置視為既可以是實現方法的軟體模組又可以是硬體部件內的結構。 上述實施例闡明的系統、裝置、模組或單元,具體可以由電腦晶片或實體實現,或者由具有某種功能的產品來實現。一種典型的實現設備為電腦。具體的,電腦例如可以為個人電腦、膝上型電腦、車載人機互動設備、蜂巢式電話、相機電話、智慧型手機、個人數位助理、媒體播放器、導航設備、電子郵件設備、遊戲控制台、平板電腦、可穿戴設備或者這些設備中的任何設備的組合。 雖然本說明書一個或多個實施例提供了如實施例或流程圖所述的方法操作步驟,但基於常規或者無創造性的手段可以包括更多或者更少的操作步驟。實施例中列舉的步驟順序僅僅為眾多步驟執行順序中的一種方式,不代表唯一的執行順序。在實際中的裝置或終端產品執行時,可以按照實施例或者圖式所示的方法順序執行或者並行執行(例如平行處理器或者多執行緒處理的環境,甚至為分布式資料處理環境)。術語“包括”、“包含”或者其任何其他變體意在涵蓋非排他性的包含,從而使得包括一系列要素的過程、方法、產品或者設備不僅包括那些要素,而且還包括沒有明確列出的其他要素,或者是還包括為這種過程、方法、產品或者設備所固有的要素。在沒有更多限制的情況下,並不排除在包括所述要素的過程、方法、產品或者設備中還存在另外的相同或等同要素。第一,第二等詞語用來表示名稱,而並不表示任何特定的順序。 為了描述的方便,描述以上裝置時以功能分為各種模組分別描述。當然,在實施本說明書一個或多個時可以把各模組的功能在同一個或多個軟體和/或硬體中實現,也可以將實現同一功能的模組由多個子模組或子單元的組合實現等。以上所描述的裝置實施例僅僅是示意性的,例如,所述單元的劃分,僅僅為一種邏輯功能劃分,實際實現時可以有另外的劃分方式,例如多個單元或組件可以結合或者可以集成到另一個系統,或一些特徵可以忽略,或不執行。另一點,所顯示或討論的相互之間的耦接或直接耦接或通訊連接可以是透過一些介面,裝置或單元的間接耦接或通訊連接,可以是電性,機械或其它的形式。 本發明是參照根據本發明實施例的方法、裝置(系統)、和電腦程式產品的流程圖和/或方塊圖來描述的。應理解可由電腦程式指令實現流程圖和/或方塊圖中的每一流程和/或方塊、以及流程圖和/或方塊圖中的流程和/或方塊的結合。可提供這些電腦程式指令到通用電腦、專用電腦、嵌入式處理機或其他可程式化資料處理設備的處理器以產生一個機器,使得透過電腦或其他可程式化資料處理設備的處理器執行的指令產生用於實現在流程圖一個流程或多個流程和/或方塊圖一個方塊或多個方塊中指定的功能的裝置。 這些電腦程式指令也可儲存在能引導電腦或其他可程式化資料處理設備以特定方式工作的電腦可讀儲存器中,使得儲存在該電腦可讀儲存器中的指令產生包括指令裝置的製造品,該指令裝置實現在流程圖一個流程或多個流程和/或方塊圖一個方塊或多個方塊中指定的功能。 這些電腦程式指令也可裝載到電腦或其他可程式化資料處理設備上,使得在電腦或其他可程式化設備上執行一系列操作步驟以產生電腦實現的處理,從而在電腦或其他可程式化設備上執行的指令提供用於實現在流程圖一個流程或多個流程和/或方塊圖一個方塊或多個方塊中指定的功能的步驟。 在一個典型的配置中,計算設備包括一個或多個處理器(CPU)、輸入/輸出介面、網路介面和記憶體。 記憶體可能包括電腦可讀媒體中的非永久性記憶體,隨機存取記憶體(RAM)和/或非揮發性記憶體等形式,如唯讀記憶體(ROM)或快閃記憶體(flash RAM)。記憶體是電腦可讀媒體的範例。 
電腦可讀媒體包括永久性和非永久性、可移動和非可移動媒體可以由任何方法或技術來實現資訊儲存。資訊可以是電腦可讀指令、資料結構、程式的模組或其他資料。電腦的儲存媒體的例子包括,但不限於相變記憶體(PRAM)、靜態隨機存取記憶體(SRAM)、動態隨機存取記憶體(DRAM)、其他類型的隨機存取記憶體(RAM)、唯讀記憶體(ROM)、電可抹除可程式化唯讀記憶體(EEPROM)、快閃記憶體或其他記憶體技術、唯讀光碟唯讀記憶體(CD-ROM)、數位多功能光碟(DVD)或其他光學儲存、磁盒式磁帶,磁帶磁磁碟儲存、石墨烯儲存或其他磁性儲存設備或任何其他非傳輸媒體,可用於儲存可以被計算設備存取的資訊。按照本文中的界定,電腦可讀媒體不包括暫態電腦可讀媒體(transitory media),如調變的資料訊號和載波。 本領域技術人員應明白,本說明書一個或多個實施例可提供為方法、系統或電腦程式產品。因此,本說明書一個或多個實施例可採用完全硬體實施例、完全軟體實施例或結合軟體和硬體態樣的實施例的形式。而且,本說明書一個或多個實施例可採用在一個或多個其中包含有電腦可用程式代碼的電腦可用儲存媒體(包括但不限於磁碟儲存器、CD-ROM、光學儲存器等)上實施的電腦程式產品的形式。 本說明書一個或多個實施例可以在由電腦執行的電腦可執行指令的一般上下文中描述,例如程式模組。一般地,程式模組包括執行特定任務或實現特定抽象資料類型的例程、程式、物件、組件、資料結構等等。也可以在分布式計算環境中實踐本說明書一個或多個實施例,在這些分布式計算環境中,由透過通訊網路而被連接的遠端處理設備來執行任務。在分布式計算環境中,程式模組可以位於包括儲存設備在內的本地和遠端電腦儲存媒體中。 本說明書中的各個實施例均採用遞進的方式描述,各個實施例之間相同相似的部分互相參見即可,每個實施例重點說明的都是與其他實施例的不同之處。尤其,對於系統實施例而言,由於其基本相似於方法實施例,所以描述的比較簡單,相關之處參見方法實施例的部分說明即可。在本說明書的描述中,參考術語“一個實施例”、“一些實施例”、“範例”、“具體範例”、或“一些範例”等的描述意指結合該實施例或範例描述的具體特徵、結構、材料或者特點包含於本說明書的至少一個實施例或範例中。在本說明書中,對上述術語的示意性表述不必須針對的是相同的實施例或範例。而且,描述的具體特徵、結構、材料或者特點可以在任一個或多個實施例或範例中以合適的方式結合。此外,在不相互矛盾的情況下,本領域的技術人員可以將本說明書中描述的不同實施例或範例以及不同實施例或範例的特徵進行結合和組合。 以上所述僅為本說明書一個或多個實施例的實施例而已,並不用於限制本說明書一個或多個實施例。對於本領域技術人員來說,本說明書一個或多個實施例可以有各種更改和變化。凡在本說明書的精神和原理之內所作的任何修改、等同替換、改進等,均應包含在申請專利範圍的範圍之內。In order to enable those skilled in the art to better understand the technical solutions in this specification, the technical solutions in the embodiments of this specification will be described clearly and completely in conjunction with the drawings in the embodiments of this specification. Obviously, the described The embodiments are only a part of the embodiments of this specification, but not all the embodiments. Based on the embodiments in this specification, all other embodiments obtained by a person of ordinary skill in the art without creative efforts shall fall within the scope of protection of this specification. 
With the development of computer network technology, people can learn about the geographical location of the world and other information through electronic maps. Usually, maps or electronic maps are divided by provinces and countries. However, in some cases, it may be necessary to merge some designated areas on the map, such as: merge the three northeastern provinces in the map of China into the northeast region. , In order to facilitate users' overall understanding of the Northeast region. The data processing method for merging map areas provided by the embodiments of the present specification, by setting a unified transparency, when merging map areas, the transparency of the overlapping parts will change, and the designated areas will be merged based on the change in the transparency of the map image. Based on the change in transparency of pixels, merging designated map areas is simple and fast, and does not require complicated mathematical calculations. The merged map areas can be interacted as a whole and have strong applicability. In the embodiment of the present invention, the data processing process of the merged map area can be performed on the user side, such as: smart phones, tablet computers, smart wearable devices (smart watches, virtual reality glasses, virtual reality helmets, etc.) equipment. Specifically, it can be done on the browser side of the user side, such as: PC browser side, mobile browser side, server side web container, etc. Specifically, FIG. 1 is a schematic flowchart of a data processing method for map area merging in an embodiment provided by this specification. As shown in FIG. 1, the data processing method for map area merging provided by an embodiment of this specification includes: S2. Draw a map image of the area to be merged using lines of preset transparency to generate an initial merged map image. 
The area to be merged can include multiple map areas, such as Liaoning, Jilin, and Heilongjiang among the three northeastern provinces can represent three areas to be merged, which can be on the same canvas (canvas can represent components used to draw graphics Or areas) or other map drawing areas use preset transparency lines to draw a map image of the area to be merged. For example: FIG. 2 is a schematic diagram of the initial merged map image of the northeast region in an embodiment of this specification. As shown in FIG. 2, according to the latitude and longitude information of the three provinces in the northeast, Liaoning Province, Jilin Province, and Heilongjiang Province can be used in the same canvas The relative positions of the maps of Liaoning Province, Jilin Province and Heilongjiang Province are drawn respectively. The map images of the three provinces together form the initial merged map image of the Northeast region. It can be seen that the adjacent merged map images There may be overlapping boundaries between merged areas. As shown in FIG. 2, when drawing the map image of the region to be merged, only the outline of each region to be merged, that is, the boundary line may be drawn, and the area within the boundary line may represent the region to be merged. Among them, when drawing the map image of the area to be merged, in one embodiment of the present specification, lines with preset transparency are used for drawing. The preset transparency can be selected according to actual needs. Generally, the preset transparency can be set at 0-1. At this time, in one embodiment of the present specification, the preset transparency may be 0.5, so that the setting may facilitate subsequent pixel detection. In addition, when each map area to be merged is drawn, different map areas to be merged may be drawn using lines of the same color, or may be drawn using lines of different colors. 
For example, when the map areas of the three northeastern provinces are to be merged, you can use black (or other colors such as red, blue, etc.) and a line with transparency of 0.5 in the same canvas to draw the lines of Liaoning Province, Jilin Province, and Heilongjiang Province. For the map image, the entire map image of the three provinces is used as the initial merged image of the northeast region. You can also use black lines with a transparency of 0.5 to draw map images of Liaoning Province, red lines with a transparency of 0.5 to draw map images of Jilin Province, and blue lines with a transparency of 0.5 to draw map images of Heilongjiang Province , The whole map image of the three provinces is used as the initial merged image of the northeast region. That is, in one embodiment of the present specification, when drawing map images of each area in the area to be merged, the map images of different areas may use lines of the same transparency, but the line color may not be specifically limited. When drawing the map image of the area to be merged, you can import GeoJSON data, and then use the program to draw the map on the canvas. GeoJSON is a format that encodes various geographic data structures. It is an organization format for map data. A map can be drawn by parsing this data. S4. Obtain the transparency of pixels in the initial merged map image. After the initial merged map image is generated, the transparency of each pixel in the initial merged map image can be obtained. In an embodiment of the present specification, when drawing a map image of an area to be merged, the latitude and longitude information of the area to be merged may be converted into coordinate information. 
According to the coordinate information of the area to be merged, each pixel in the area to be merged in the initial merged map image can be traversed, that is, each original data (data point containing latitude and longitude information) in the area to be merged in the initial merged map image can be traversed Or the data points of the coordinate information) corresponding to the pixels of the canvas, to obtain the transparency corresponding to the pixels in the area to be merged in the initial merged map image. The transparency of each pixel can be obtained based on the dyeing technique, and the specific method is not specifically limited in the embodiments of the present invention. By traversing the pixels inside the area to be merged, the transparency change of each pixel is obtained, the method is simple, and the detection of the pixels outside the area to be merged can be reduced, and the speed of data processing can be improved. S6. Use pixel points with the same transparency as the preset transparency as target pixel points, and generate a merged target merged map image according to the target pixel points. After obtaining the transparency of the pixels in the initial merged map image, you can compare the transparency of the pixels with the preset transparency of the lines used when drawing the map image of the area to be merged, and compare the transparency with the preset transparency The pixel is used as the target pixel. For example, when the preset transparency used when drawing the map image of the region to be merged is 0.5, the pixel point with the transparency of 0.5 in the initial merged map image can be used as the target pixel point. The target pixel points can be used to generate the merged target merged map image to complete the merged map area. 
In one embodiment of the present specification, the coordinate information of the target pixels can be extracted and exported, and the target merged map image can be generated from the set of coordinate information corresponding to the target pixels. For example, if the preset transparency used when drawing the map image of the region to be merged is 0.5, the pixels with transparency 0.5 in the initial merged map image can be taken as target pixels, their coordinate information extracted and stored in a coordinate point set, and the coordinate point set composed of the coordinate information of all target pixels exported. The target merged map image is then drawn according to the coordinate set of the target pixels. The target merged map image may consist of the boundary image of the region to be merged, and the boundary image generated at this time does not include the parts where the boundaries of the areas to be merged overlap. For example, an embodiment of this specification can use black lines with a transparency of 0.5 to draw the map image of the region to be merged on the canvas or map drawing area. The map image of the region to be merged may include the boundary image of the region to be merged, and the map images of the areas to be merged may constitute the initial merged map image; for details, refer to the schematic diagram of the initial merged map image of the northeast map in FIG. 2. The pixels inside the region to be merged in the initial merged map image are traversed: the transparency of the pixels on the non-overlapping parts of the boundary is 0.5, the transparency of the pixels where boundaries overlap is usually greater than 0.5, and the areas with no drawn image content have pixels with transparency 0. 
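Step S6 described above can be sketched as a scan over a buffer of per-pixel alpha values: only pixels whose transparency equals the preset value become target pixels, and their coordinates are exported as a set. The tiny buffer below is hypothetical; real values would come from reading the canvas pixels.

```python
PRESET_ALPHA = 0.5

def extract_target_pixels(alpha_buffer, preset=PRESET_ALPHA, eps=1e-6):
    """Return the coordinate set of pixels whose alpha matches the preset.
    Overlapping border pixels (alpha > preset) and empty pixels (alpha == 0)
    are both skipped."""
    targets = set()
    for y, row in enumerate(alpha_buffer):
        for x, alpha in enumerate(row):
            if abs(alpha - preset) < eps:
                targets.add((x, y))
    return targets

# Tiny hypothetical buffer: 0.0 = empty, 0.5 = plain border, 0.75 = overlap.
buffer = [
    [0.5, 0.5,  0.0],
    [0.5, 0.75, 0.5],
    [0.0, 0.75, 0.5],
]
print(sorted(extract_target_pixels(buffer)))
# [(0, 0), (0, 1), (1, 0), (2, 1), (2, 2)]
```

The exported coordinate set contains exactly the non-overlapping border pixels, from which the target merged map image can be redrawn.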
The pixels with a transparency of 0.5 can be taken as target pixels; that is, the pixels of the non-overlapping portions of the boundary image are taken as target pixels. Combined together, the target pixels represent the boundary image of the merged region. This merged boundary image does not include the overlapping parts of the areas to be merged, and represents the overall boundary contour of the areas to be merged. The coordinate information of the target pixels can be extracted and saved, and the coordinate information corresponding to the target pixels can be exported to generate the merged target merged map image. FIG. 3 is a schematic diagram of the target merged map image of the merged northeast region in one embodiment of this specification. As shown in FIG. 3, the merged target map image in one embodiment of this specification removes the overlapping parts of the boundaries of the areas to be merged and keeps only the non-overlapping parts of the boundaries, intuitively displaying the effect of merging the map areas and making it convenient for users to view. FIGS. 4(a)-4(b) are schematic diagrams of the detection of transparency changes in an embodiment of this specification. As shown in FIG. 4(a), two images with a transparency of 0.5 are partially superimposed; it can be seen that the transparency of the superimposed middle part is greater than the transparency of the non-superimposed parts. Similarly, as shown in FIG. 4(b), when two borderless images with a transparency of 0.5 are partially superimposed, the transparency of the overlapping middle part is likewise greater than that of the non-overlapping parts. 
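The transparency change illustrated in FIGS. 4(a)-4(b) follows directly from source-over compositing: stacking n strokes of alpha a over one pixel yields alpha 1 - (1 - a)^n, so any overlap pushes the value strictly above the preset 0.5. The short sketch below (an illustration assuming source-over compositing, which the patent text does not name explicitly) verifies this numerically.

```python
def overlapped_alpha(a, n):
    """Resulting alpha after n strokes of alpha a are superimposed,
    applying the source-over rule out = a + out * (1 - a) each time."""
    out = 0.0
    for _ in range(n):
        out = a + out * (1.0 - a)
    return out

print(overlapped_alpha(0.5, 1))  # 0.5   -> non-overlapping border
print(overlapped_alpha(0.5, 2))  # 0.75  -> two borders coincide
print(overlapped_alpha(0.5, 3))  # 0.875 -> three borders coincide
```

This is why an exact comparison against the preset transparency cleanly separates non-overlapping border pixels (0.5) from overlapping ones (0.75 or more).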
In the embodiments of the present specification, by detecting changes in pixel transparency, it is possible to determine accurately and quickly which parts of the region to be merged overlap and which do not, so as to generate the merged map image quickly and accurately. In addition, in this embodiment, the merged target merged map image may be named according to the geographic location of the region to be merged; for example, in FIG. 3, the merged three northeastern provinces are named the northeast region. The data processing method for map area merging provided in this specification detects, through the dyeing technique on the canvas, whether the transparency of a pixel has changed. Based on the difference between the transparency of pixels in the overlapping parts and that of pixels in the non-overlapping parts when the maps are merged, pixels on overlapping boundaries and pixels on non-overlapping boundaries are filtered out, and the merged map image is then generated from the pixels on non-overlapping boundaries. The method is simple and fast, does not require complex data processing, and can accurately detect the overlapping parts of the boundary, making the merging of map areas more accurate and fast. The merged map areas can be interacted with as a whole, which is convenient for subsequent use and widely applicable. Based on the above embodiment, in one embodiment of the present specification, using lines of a preset transparency to draw the map image of the region to be merged to generate the initial merged map image may include: obtaining the region to be merged selected by the user in a first map drawing area; and, according to the region to be merged selected by the user, drawing the map image of the region to be merged in a second map drawing area using lines of the preset transparency to generate the initial merged map image. 
Specifically, when the user views the map in the first map drawing area (such as a user-side canvas) and some map areas need to be merged, for example the map areas of the three northeastern provinces, the user can select the region to be merged by clicking or by another operation. For example, in the first map drawing area, the user draws a complete map by importing GeoJSON data (such as a map of China) and clicks the maps of the three northeastern provinces, Liaoning, Jilin, and Heilongjiang, to select the region to be merged. After the region to be merged selected by the user is identified, the map image of the region to be merged can be drawn with lines of the preset transparency in the second map drawing area (which may be a hidden canvas) to generate the initial merged map image; for details, reference may be made to the foregoing embodiment on generating the initial merged map image, which is not repeated here. The initial merged map image is generated based on the user's selection: the user can select the region to be merged according to actual needs. The method is simple and flexible, and improves the user experience. Based on the above embodiment, in one embodiment of the present specification, generating the merged target merged map image according to the target pixels may include: removing the pixels other than the target pixels in the initial merged map image from the initial merged map image; and using the map image composed of the target pixels in the initial merged map image as the target merged map image. Specifically, when the target pixels have been determined and the target merged map image is to be generated based on them, the non-target pixels (that is, the pixels other than the target pixels) in the initial merged map image can be removed from the initial merged map image. 
At this point, only the target pixels remain in the initial merged map image, and the remaining target pixels can be combined to form the merged target merged map image. For example, in the above embodiment, if the non-target pixels are removed in the second map drawing area and the target pixels are retained, then the image composed of the remaining target pixels in the second map drawing area represents the merged target merged map image. The non-target pixels whose transparency does not meet the requirement are removed from the initial merged map image, and the remaining target pixels directly form the merged target merged map image; the method is simple, and the map areas are merged accurately. After removing the non-target pixels, the color at the positions of the removed pixels can be set to be the same as the color of the pixels in the area inside the boundary (that is, the interior of the region to be merged) in the initial merged map image. In this way, a mismatch between the color at the removed positions and the color of the other areas inside the boundary of the region to be merged is avoided, which would otherwise affect the display effect of the merged map image. For example, when the initial merged map image is generated while drawing the map image of the region to be merged, the initial merged map image has a background color. For instance, if the inner area of the boundary of the region to be merged is filled with red pixels, then after the non-target pixels are removed, the color at their positions may become white or colorless, differing from the color of the other pixels in the inner area of the boundary and affecting the display effect of the merged map image. 
After removing the non-target pixels, the color at their positions can therefore be set to red, consistent with the color of the inner area of the boundary of the region to be merged, to improve the display effect of the merged map image. If the areas within the region to be merged are filled with different colors, that is, there are multiple colors at non-boundary positions in the initial merged map image, the color of any pixel near a non-target pixel can be used as the color at that position after removal. The data processing method for map area merging provided in this specification sets a uniform transparency so that, when map areas are merged, the transparency of the overlapping parts changes, and the designated areas are merged based on that change in the transparency of the map image. Merging designated map areas based on changes in pixel transparency is simple and fast and does not require complicated mathematical calculations, and the merged map areas can be interacted with as a whole, giving the method strong applicability. The embodiments of the above method in this specification are described in a progressive manner; the same or similar parts of the embodiments can be referred to each other, and each embodiment focuses on its differences from the others. For the relevant parts, refer to the description of the method embodiments. Based on the data processing method for map area merging described above, one or more embodiments of the present specification further provide a data processing apparatus for map area merging. The apparatus may include a system (including a distributed system), software (applications), modules, components, servers, clients, and the like that use the method described in the embodiments of the present specification, in combination with the necessary hardware. 
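The removal-and-recolor step described above can be sketched as follows: non-target pixels are replaced by the interior fill colour so that no seam remains where the overlapping border segment was erased. The tiny image, the colours, and the choice of which segment overlaps are all hypothetical.

```python
def remove_non_targets(image, targets, fill_colour):
    """Replace every pixel not in the target set with the interior fill
    colour; target (non-overlapping border) pixels keep their own colour."""
    return [
        [px if (x, y) in targets else fill_colour
         for x, px in enumerate(row)]
        for y, row in enumerate(image)
    ]

BLACK, RED = "black", "red"
image = [
    [BLACK, BLACK, BLACK],
    [BLACK, RED,   BLACK],   # middle pixel is the red interior fill
    [BLACK, BLACK, BLACK],
]
# Suppose the bottom edge of this 3x3 border ring was an overlapping segment:
# the target set is the ring minus that edge.
ring = {(x, y) for y in range(3) for x in range(3) if (x, y) != (1, 1)}
targets = ring - {(0, 2), (1, 2), (2, 2)}

merged = remove_non_targets(image, targets, RED)
print(merged[2])  # ['red', 'red', 'red']: the removed seam is repainted in the fill colour
```

Painting the removed positions with the interior colour (rather than leaving them blank) is exactly the display fix the text motivates: the erased seam blends into the filled interior instead of showing as a white or colorless gap.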
Based on the same innovative concept, the apparatuses in one or more embodiments provided by this specification are described in the following embodiments. Since the solution by which the apparatus solves the problem is similar to that of the method, the implementation of the specific apparatus in the embodiments of the present specification may refer to the implementation of the foregoing method, and repeated details are omitted. As used below, the term “unit” or “module” may be a combination of software and/or hardware that realizes a predetermined function. Although the apparatuses described in the following embodiments are preferably implemented in software, implementation in hardware, or in a combination of software and hardware, is also possible and conceived. Specifically, FIG. 5 is a schematic diagram of the module structure of an embodiment of the data processing device for map area merging provided in this specification. As shown in FIG. 5, the data processing device for map area merging provided in this specification includes an initial merged image drawing module 51, a transparency acquisition module 52, and a target merged map generation module 53, wherein: the initial merged image drawing module 51 can be used to draw a map image of a region to be merged using lines of a preset transparency to generate an initial merged map image; the transparency acquisition module 52 can be used to obtain the transparency of the pixels in the initial merged map image; and the target merged map generation module 53 can be used to take the pixels whose transparency equals the preset transparency as target pixels and generate the merged target merged map image according to the target pixels. 
The data processing device for map area merging provided in the embodiments of the present specification can detect, through the dyeing technique on the canvas, whether the transparency of a pixel has changed. Based on the difference between the transparency of pixels in the overlapping parts and that of pixels in the non-overlapping parts when the maps are merged, pixels on overlapping boundaries and pixels on non-overlapping boundaries are screened out, and the merged map image is then generated from the pixels on non-overlapping boundaries. The method is simple and fast, does not require complex data processing, and can accurately detect the overlapping parts of the boundary, making the merging of map areas more accurate and fast; the merged map areas can be interacted with as a whole, which is convenient for subsequent use and widely applicable. Based on the above embodiment, the initial merged image drawing module is specifically used to: obtain the region to be merged selected by the user in the first map drawing area; and, according to the region to be merged selected by the user, draw the map image of the region to be merged in the second map drawing area using lines of the preset transparency to generate the initial merged map image. Based on the above embodiment, the target merged map generation module is specifically used to: remove the pixels other than the target pixels in the initial merged map image from the initial merged map image; and use the map image composed of the target pixels in the initial merged map image as the target merged map image. 
In the embodiments of the present specification, the non-target pixels whose transparency does not meet the requirement are removed from the initial merged map image, and the remaining target pixels directly form the merged target merged map image; the method is simple, and the map areas are merged accurately. Based on the above embodiment, the target merged map generation module is further used to: set the color at the positions of the pixels other than the target pixels to be the same as the color of the pixels inside the region to be merged in the initial merged map image. In the embodiments of the present specification, after the non-target pixels are removed, the color at their positions is set to match the color of the inner area of the boundary of the region to be merged (for example, red), improving the display effect of the merged map image. Based on the above embodiment, the target merged map generation module is specifically used to: extract the coordinate information corresponding to the target pixels, and generate the target merged map image according to the set of coordinate information corresponding to the target pixels. In the embodiments of the present specification, the target merged map image is generated based on the collection of the coordinate information of the target pixels. The method is fast, does not require complicated data processing, and can accurately detect the overlapping parts of the boundary, making the merging of map areas more accurate and fast; the merged map area can be interacted with as a whole, which is convenient for subsequent use and widely applicable. 
Based on the above embodiments, the transparency acquisition module is specifically used to: according to the coordinate information of the region to be merged, traverse the pixels inside the region to be merged in the initial merged map image, to obtain the transparency of the pixels inside the region to be merged in the initial merged map image. In the embodiments of the present specification, by traversing only the pixels inside the region to be merged, the transparency of each pixel is obtained with a simple method, the detection of pixels outside the region to be merged is reduced, and the speed of data processing is improved. It should be noted that the above description of the device according to the method embodiments may also include other implementations; for a specific implementation, reference may be made to the description of the related method embodiments, and details are not repeated here. In an embodiment of this specification, a computer storage medium may also be provided, on which a computer program is stored; when the computer program is executed, the data processing method for map area merging in the foregoing embodiments is implemented, for example: drawing a map image of a region to be merged using lines of a preset transparency to generate an initial merged map image; obtaining the transparency of the pixels in the initial merged map image; and taking the pixels whose transparency equals the preset transparency as target pixels and generating a merged target merged map image according to the target pixels. The foregoing describes specific embodiments of the present specification. Other embodiments are within the scope of the appended claims. In some cases, the actions or steps recited in the claims may be performed in a different order than in the embodiments and still achieve the desired result. 
In addition, the processes depicted in the drawings do not necessarily require the particular order shown, or a sequential order, to achieve the desired results; in some embodiments, multitasking and parallel processing are also possible and may be advantageous. The method or device described in the above embodiments of this specification can realize the business logic through a computer program recorded on a storage medium; the storage medium can be read and executed by a computer to achieve the effects of the solutions described in the embodiments of this specification. The data processing method or device for map area merging provided in the embodiments of the present specification can be implemented by a processor executing corresponding program instructions, for example, implemented on a PC using the C++ language on a Windows operating system, implemented on a Linux system, implemented in smart terminals using, for example, Android or iOS programming languages, or implemented using processing logic based on a quantum computer. In an embodiment of a data processing system for map area merging provided in this specification, FIG. 6 is a schematic diagram of the module structure of an embodiment of the data processing system for map area merging provided in this specification. As shown in FIG. 
6, the data processing system for map area merging provided by this embodiment of the specification may include a processor 61 and a memory 62 for storing processor-executable instructions, the processor 61 and the memory 62 communicating with each other through a bus 63. The processor 61 is used to call the program instructions in the memory 62 to execute the methods provided by the above embodiments of the data processing method for map area merging, for example: drawing a map image of a region to be merged using lines of a preset transparency to generate an initial merged map image; obtaining the transparency of the pixels in the initial merged map image; and taking the pixels whose transparency equals the preset transparency as target pixels and generating a merged target merged map image according to the target pixels. It should be noted that the descriptions of the device, computer storage medium, and system above according to the related method embodiments may also include other implementation manners; for specific implementations, reference may be made to the description of the method embodiments, and details are not repeated here. The embodiments in this specification are described in a progressive manner; the same or similar parts of the embodiments can be referred to each other, and each embodiment focuses on its differences from the others. In particular, for the hardware-plus-program embodiments, since they are basically similar to the method embodiments, the description is relatively simple, and for the relevant parts reference may be made to the description of the method embodiments. The embodiments of this specification are not limited to situations that must comply with industry communication standards, standard computer data processing and data storage rules, or the situations described in one or more embodiments of this specification. 
Implementations that slightly modify certain industry standards, or that are described in a custom manner or based on the embodiments with slight modifications, can also achieve implementation effects that are the same as, equivalent to, or similar to those of the above embodiments, or that are predictable after such modifications. Embodiments obtained by applying these modified or varied data acquisition, storage, judgment, and processing methods can still fall within the scope of the optional implementations of the embodiments of this specification. In the 1990s, an improvement to a technology could be clearly distinguished as an improvement in hardware (for example, an improvement to the circuit structure of diodes, transistors, switches, and the like) or an improvement in software (an improvement to a method flow). However, with the development of technology, the improvement of many method flows today can be regarded as a direct improvement of a hardware circuit structure. Designers almost always obtain the corresponding hardware circuit structure by programming the improved method flow into a hardware circuit. Therefore, it cannot be said that the improvement of a method flow cannot be realized with hardware entity modules. For example, a programmable logic device (PLD) (such as a field programmable gate array (FPGA)) is an integrated circuit whose logic functions are determined by the user programming the device. Designers program to “integrate” a digital system onto a single PLD, without needing to ask a chip manufacturer to design and manufacture a dedicated integrated circuit chip. Moreover, instead of manually manufacturing integrated circuit chips, this kind of programming is nowadays mostly implemented with “logic compiler” software, which is similar to the software compiler used in program development and writing. 
The original code before compilation must also be written in a specific programming language, called a hardware description language (HDL). There is not just one HDL but many, such as ABEL (Advanced Boolean Expression Language), AHDL (Altera Hardware Description Language), Confluence, CUPL (Cornell University Programming Language), HDCal, JHDL (Java Hardware Description Language), Lava, Lola, MyHDL, PALASM, and RHDL (Ruby Hardware Description Language); currently the most commonly used are VHDL (Very-High-Speed Integrated Circuit Hardware Description Language) and Verilog. Those skilled in the art will also understand that a hardware circuit implementing a logical method flow can easily be obtained merely by logically programming the method flow in one of the above hardware description languages and programming it into an integrated circuit. The controller can be implemented in any suitable way; for example, the controller can take the form of a microprocessor or processor together with a computer-readable medium storing computer-readable program code (such as software or firmware) executable by the (micro)processor, logic gates, switches, an application-specific integrated circuit (ASIC), a programmable logic controller, or an embedded microcontroller. Examples of controllers include, but are not limited to, the following microcontrollers: ARC 625D, Atmel AT91SAM, Microchip PIC18F26K20, and Silicone Labs C8051F320; a memory controller can also be implemented as part of the control logic of a memory. Those skilled in the art also know that, in addition to implementing the controller in pure computer-readable program code, the method steps can be logically programmed so that the controller achieves the same functions in the form of logic gates, switches, application-specific integrated circuits, programmable logic controllers, embedded microcontrollers, and the like. 
Therefore, such a controller can be regarded as a hardware component, and the means included therein for implementing various functions can also be regarded as structures within the hardware component. Indeed, the means for implementing various functions can be regarded both as software modules implementing the method and as structures within the hardware component. The system, device, module, or unit set forth in the above embodiments may be implemented by a computer chip or entity, or by a product having a certain function. A typical implementation device is a computer. Specifically, the computer may be, for example, a personal computer, a laptop computer, an in-vehicle human-machine interaction device, a cellular phone, a camera phone, a smart phone, a personal digital assistant, a media player, a navigation device, an email device, a game console, a tablet computer, a wearable device, or any combination of these devices. Although one or more embodiments of this specification provide the method operation steps described in the embodiments or flowcharts, more or fewer operation steps may be included based on conventional or non-inventive means. The order of steps listed in an embodiment is only one of many possible execution orders and does not represent the only execution order. When an actual device or terminal product executes, the steps can be executed sequentially or in parallel according to the methods shown in the embodiments or drawings (for example, in a parallel-processor or multi-threaded environment, or even a distributed data processing environment). The terms “comprise”, “include”, and any variants thereof are intended to cover non-exclusive inclusion, so that a process, method, product, or device that includes a series of elements includes not only those elements but also other elements not explicitly listed, or elements inherent to such a process, method, product, or device. 
Without further restriction, the existence of additional identical or equivalent elements in the process, method, product, or device including the stated elements is not excluded. The words “first” and “second” are used to indicate names and do not indicate any particular order. For convenience of description, the above device is described with its functions divided into various modules. Of course, when implementing one or more embodiments of this specification, the functions of the modules may be implemented in one or more pieces of software and/or hardware, or the modules implementing the same function may be implemented by a combination of multiple submodules or subunits. The device embodiments described above are only schematic. For example, the division of the units is only a division by logical function; in actual implementation there may be other divisions, for example, multiple units or components may be combined or integrated into another system, or some features may be ignored or not implemented. In addition, the mutual coupling, direct coupling, or communication connections displayed or discussed may be indirect coupling or communication connections through interfaces, devices, or units, and may be electrical, mechanical, or in other forms. The present invention is described with reference to flowcharts and/or block diagrams of methods, devices (systems), and computer program products according to embodiments of the present invention. It should be understood that each flow and/or block in the flowcharts and/or block diagrams, and combinations of flows and/or blocks in the flowcharts and/or block diagrams, can be implemented by computer program instructions. 
These computer program instructions can be provided to the processor of a general-purpose computer, a special-purpose computer, an embedded processor, or other programmable data processing equipment to produce a machine, so that the instructions executed by the processor of the computer or other programmable data processing equipment produce means for implementing the functions specified in one or more flows of a flowchart and/or one or more blocks of a block diagram. These computer program instructions can also be stored in a computer-readable memory that can direct a computer or other programmable data processing equipment to work in a specific manner, so that the instructions stored in the computer-readable memory produce an article of manufacture including instruction means that implement the functions specified in one or more flows of a flowchart and/or one or more blocks of a block diagram. These computer program instructions can also be loaded onto a computer or other programmable data processing device, so that a series of operation steps are performed on the computer or other programmable device to produce computer-implemented processing, whereby the instructions executed on the computer or other programmable device provide steps for implementing the functions specified in one or more flows of a flowchart and/or one or more blocks of a block diagram. In a typical configuration, a computing device includes one or more processors (CPUs), input/output interfaces, network interfaces, and memory. The memory may include non-permanent memory in computer-readable media, random access memory (RAM), and/or non-volatile memory, such as read-only memory (ROM) or flash memory (flash RAM). Memory is an example of a computer-readable medium. Computer-readable media include permanent and non-permanent, removable and non-removable media, and information storage can be realized by any method or technology. 
The information can be computer-readable instructions, data structures, program modules, or other data. Examples of computer storage media include, but are not limited to, phase-change memory (PRAM), static random access memory (SRAM), dynamic random access memory (DRAM), other types of random access memory (RAM), read-only memory (ROM), electrically erasable programmable read-only memory (EEPROM), flash memory or other memory technologies, compact disc read-only memory (CD-ROM), digital versatile discs (DVD) or other optical storage, magnetic tape cassettes, magnetic tape or magnetic disk storage, graphene storage, or other magnetic storage devices, or any other non-transmission media that can be used to store information accessible by a computing device. According to the definition herein, computer-readable media do not include transitory media, such as modulated data signals and carrier waves. Those skilled in the art should understand that one or more embodiments of this specification may be provided as a method, a system, or a computer program product. Therefore, one or more embodiments of this specification may take the form of an entirely hardware embodiment, an entirely software embodiment, or an embodiment combining software and hardware aspects. Moreover, one or more embodiments of this specification may take the form of a computer program product implemented on one or more computer-usable storage media (including but not limited to disk storage, CD-ROM, and optical storage) containing computer-usable program code. One or more embodiments of this specification may be described in the general context of computer-executable instructions executed by a computer, such as program modules. Generally, program modules include routines, programs, objects, components, data structures, and the like that perform specific tasks or implement specific abstract data types. 
One or more embodiments of this specification may also be practiced in distributed computing environments, in which tasks are performed by remote processing devices connected through a communication network. In a distributed computing environment, program modules may be located in both local and remote computer storage media, including storage devices. The embodiments in this specification are described in a progressive manner; the same or similar parts of the embodiments may be referred to each other, and each embodiment focuses on its differences from the others. In particular, since the system embodiment is basically similar to the method embodiment, its description is relatively brief, and the relevant parts may be found in the description of the method embodiment. In the description of this specification, reference to the terms "one embodiment", "some embodiments", "example", "specific example", or "some examples" means that a specific feature, structure, material, or characteristic described in connection with the embodiment or example is included in at least one embodiment or example of this specification. In this specification, schematic use of these terms does not necessarily refer to the same embodiment or example. Moreover, the specific features, structures, materials, or characteristics described may be combined in any suitable manner in any one or more embodiments or examples. In addition, where no contradiction arises, those skilled in the art may combine different embodiments or examples, and features of different embodiments or examples, described in this specification. The above is merely a description of one or more embodiments of this specification and is not intended to limit them; for those skilled in the art, various modifications and changes can be made to one or more embodiments of this specification.
Any modification, equivalent replacement, improvement, and the like made within the spirit and principles of this specification shall be included in the scope of the patent claims.

S2-S6: Steps; 51: Initial merged image drawing module; 52: Transparency acquisition module; 53: Target merged map generation module; 61: Processor; 62: Storage; 63: Bus

In order to explain the embodiments of this specification or the technical solutions in the prior art more clearly, the drawings required in the description of the embodiments or the prior art are briefly introduced below. Obviously, the drawings described below are only some of the embodiments described in this specification; for those of ordinary skill in the art, other drawings can be derived from these drawings without creative effort. FIG. 1 is a schematic flowchart of a data processing method for merging map regions in an embodiment provided by this specification; FIG. 2 is a schematic diagram of an initial merged map image of the northeast region in an embodiment of this specification; FIG. 3 is a schematic diagram of the target merged map image of the merged northeast region in an embodiment of this specification; FIGS. 4(a)-4(b) are schematic diagrams of transparency change detection in an embodiment of this specification; FIG. 5 is a schematic diagram of the module structure of an embodiment of the data processing device for merging map regions provided by this specification; FIG. 6 is a schematic diagram of the module structure of an embodiment of a data processing system for merging map regions provided by this specification.

Claims (14)

1. A data processing method for merging map regions, comprising: drawing a map image of regions to be merged using lines of a preset transparency to generate an initial merged map image; obtaining the transparency of pixels in the initial merged map image; and taking pixels whose transparency is the same as the preset transparency as target pixels, and generating a merged target merged map image according to the target pixels. 2. The method according to claim 1, wherein drawing the map image of the regions to be merged using lines of the preset transparency to generate the initial merged map image comprises: obtaining, in a first map drawing area, the regions to be merged selected by a user; and drawing, according to the regions to be merged selected by the user, the map image of the regions to be merged in a second map drawing area using the lines of the preset transparency, to generate the initial merged map image. 3. The method according to claim 1, wherein generating the merged target merged map image according to the target pixels comprises: removing pixels other than the target pixels from the initial merged map image; and taking the map image composed of the target pixels in the initial merged map image as the target merged map image.
4. The method according to claim 3, wherein after removing the pixels other than the target pixels from the initial merged map image, the method comprises: setting the color at the positions of the pixels other than the target pixels to be the same as the color of the pixels inside the regions to be merged in the initial merged map image. 5. The method according to claim 1, wherein generating the merged target merged map image according to the target pixels comprises: extracting coordinate information corresponding to the target pixels, and generating the target merged map image according to the set of coordinate information corresponding to the target pixels. 6. The method according to claim 1, wherein obtaining the transparency of the pixels in the initial merged map image comprises: traversing, according to the coordinate information of the regions to be merged, the pixels within the regions to be merged in the initial merged map image, to obtain the transparency corresponding to the pixels within the regions to be merged in the initial merged map image.
7. A data processing device for merging map regions, comprising: an initial merged image drawing module, configured to draw a map image of regions to be merged using lines of a preset transparency to generate an initial merged map image; a transparency acquisition module, configured to obtain the transparency of pixels in the initial merged map image; and a target merged map generation module, configured to take pixels whose transparency is the same as the preset transparency as target pixels, and to generate a merged target merged map image according to the target pixels. 8. The device according to claim 7, wherein the initial merged image drawing module is specifically configured to: obtain, in a first map drawing area, the regions to be merged selected by a user; and draw, according to the regions to be merged selected by the user, the map image of the regions to be merged in a second map drawing area using the lines of the preset transparency, to generate the initial merged map image. 9. The device according to claim 7, wherein the target merged map generation module is specifically configured to: remove pixels other than the target pixels from the initial merged map image; and take the map image composed of the target pixels in the initial merged map image as the target merged map image.
10. The device according to claim 9, wherein the target merged map generation module is further configured to: set the color at the positions of the pixels other than the target pixels to be the same as the color of the pixels inside the regions to be merged in the initial merged map image. 11. The device according to claim 7, wherein the target merged map generation module is specifically configured to: extract coordinate information corresponding to the target pixels, and generate the target merged map image according to the set of coordinate information corresponding to the target pixels. 12. The device according to claim 7, wherein the transparency acquisition module is specifically configured to: traverse, according to the coordinate information of the regions to be merged, the pixels within the regions to be merged in the initial merged map image, to obtain the transparency corresponding to the pixels within the regions to be merged in the initial merged map image. 13. A computer storage medium storing a computer program, wherein the method according to any one of claims 1 to 6 is implemented when the computer program is executed. 14. A data processing system for merging map regions, comprising at least one processor and a storage storing processor-executable instructions, wherein the processor implements the method according to any one of claims 1 to 6 when executing the instructions.
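The claimed method relies on a property of standard source-over alpha compositing: a boundary pixel drawn once keeps the preset line transparency, while a shared boundary segment drawn by two adjacent regions is composited twice and ends up more opaque, so it can be detected and dropped. The sketch below is a minimal, hypothetical illustration of that idea, not the patented implementation; the preset transparency of 0.5, the rectangular regions, and all names are illustrative assumptions.

```python
# Sketch of the transparency-based merge described in claims 1-6.
# The preset alpha of 0.5 and the rectangular regions are assumptions.

PRESET_ALPHA = 0.5  # preset line transparency


def new_canvas(w, h):
    """Alpha channel of an RGBA canvas, initially fully transparent."""
    return [[0.0] * w for _ in range(h)]


def composite(canvas, x, y, alpha=PRESET_ALPHA):
    """Source-over blending: a_out = a_src + a_dst * (1 - a_src)."""
    canvas[y][x] = alpha + canvas[y][x] * (1 - alpha)


def draw_outline(canvas, x0, y0, x1, y1):
    """Draw a rectangular region boundary with the preset transparency."""
    for x in range(x0, x1 + 1):      # top and bottom edges
        composite(canvas, x, y0)
        composite(canvas, x, y1)
    for y in range(y0 + 1, y1):      # left and right edges (corners done above)
        composite(canvas, x0, y)
        composite(canvas, x1, y)


canvas = new_canvas(12, 8)
draw_outline(canvas, 0, 0, 6, 7)   # region A to be merged
draw_outline(canvas, 6, 0, 11, 7)  # region B, sharing the edge x == 6

# Boundary pixels drawn exactly once keep the preset alpha and become
# target pixels forming the merged outline; the shared edge was
# composited twice (0.5 + 0.5 * 0.5 = 0.75), so it is excluded.
target = [(x, y) for y in range(8) for x in range(12)
          if abs(canvas[y][x] - PRESET_ALPHA) < 1e-9]
overlap = [(x, y) for y in range(8) for x in range(12)
           if abs(canvas[y][x] - 0.75) < 1e-9]
```

In this toy run, `target` holds only the outer boundary of the combined shape and `overlap` holds the internal edge at x == 6, which is exactly the overlap the method removes before the regions are used as one merged whole.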
TW108117080A 2018-07-27 2019-05-17 Data processing method and device for merging map areas TWI698841B (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN201810839748.9A CN109192054B (en) 2018-07-27 2018-07-27 Data processing method and device for map region merging
CN201810839748.9 2018-07-27

Publications (2)

Publication Number Publication Date
TW202008328A true TW202008328A (en) 2020-02-16
TWI698841B TWI698841B (en) 2020-07-11

Family

ID=64937165

Family Applications (1)

Application Number Title Priority Date Filing Date
TW108117080A TWI698841B (en) 2018-07-27 2019-05-17 Data processing method and device for merging map areas

Country Status (3)

Country Link
CN (1) CN109192054B (en)
TW (1) TWI698841B (en)
WO (1) WO2020019899A1 (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112395380A (en) * 2020-11-20 2021-02-23 上海莉莉丝网络科技有限公司 Merging method, merging system and computer readable storage medium for dynamic area boundary in game map

Families Citing this family (13)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108573653B (en) * 2017-03-13 2022-01-04 腾讯科技(深圳)有限公司 Electronic map generation method and device
CN109192054B (en) * 2018-07-27 2020-04-28 阿里巴巴集团控股有限公司 Data processing method and device for map region merging
CN109785355A (en) * 2019-01-25 2019-05-21 网易(杭州)网络有限公司 Region merging method and device, computer storage medium, electronic equipment
CN111489411B (en) * 2019-01-29 2023-06-20 北京百度网讯科技有限公司 Line drawing method and device, image processor, display card and vehicle
CN110068344B (en) * 2019-04-08 2021-11-23 丰图科技(深圳)有限公司 Map data production method, map data production device, server, and storage medium
CN112019702B (en) * 2019-05-31 2023-08-25 北京嗨动视觉科技有限公司 Image processing method, device and video processor
CN112179361B (en) 2019-07-02 2022-12-06 华为技术有限公司 Method, device and storage medium for updating work map of mobile robot
CN111080732B (en) * 2019-11-12 2023-09-22 望海康信(北京)科技股份公司 Method and system for forming virtual map
CN111862204A (en) * 2019-12-18 2020-10-30 北京嘀嘀无限科技发展有限公司 Method for extracting visual feature points of image and related device
CN111127543B (en) * 2019-12-23 2024-04-05 北京金山安全软件有限公司 Image processing method, device, electronic equipment and storage medium
CN112269850B (en) * 2020-11-10 2024-05-03 中煤航测遥感集团有限公司 Geographic data processing method and device, electronic equipment and storage medium
CN112652063B (en) * 2020-11-20 2022-09-20 上海莉莉丝网络科技有限公司 Method and system for generating dynamic area boundary in game map and computer readable storage medium
CN114140548A (en) * 2021-11-05 2022-03-04 深圳集智数字科技有限公司 Map-based drawing method and device

Family Cites Families (21)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN1299220C (en) * 2004-05-13 2007-02-07 上海交通大学 Automatic splicing method for digital road map
US7911481B1 (en) * 2006-12-14 2011-03-22 Disney Enterprises, Inc. Method and apparatus of graphical object selection
TWI329825B (en) * 2007-04-23 2010-09-01 Network e-map graphic automatically generating system and method therefor
TWI480809B (en) * 2009-08-31 2015-04-11 Alibaba Group Holding Ltd Image feature extraction method and device
US8872848B1 (en) * 2010-09-29 2014-10-28 Google Inc. Rendering vector data as tiles
TWI479343B (en) * 2011-11-11 2015-04-01 Easymap Digital Technology Inc Theme map generating system and method thereof
US9043150B2 (en) * 2012-06-05 2015-05-26 Apple Inc. Routing applications for navigation
GB2499694B8 (en) * 2012-11-09 2017-06-07 Sony Computer Entertainment Europe Ltd System and method of image reconstruction
KR101459636B1 (en) * 2013-04-08 2014-11-07 현대엠엔소프트 주식회사 Method for displaying map of navigation apparatus and navigation apparatus
WO2015112263A2 (en) * 2013-12-04 2015-07-30 Urthecast Corp. Systems and methods for processing distributing earth observation images
CN103714540B (en) * 2013-12-21 2017-01-11 浙江传媒学院 SVM-based transparency estimation method in digital image matting processing
CN103761094A (en) * 2014-01-22 2014-04-30 上海诚明融鑫科技有限公司 Method for polygon combination in planar drawing
CN104077100B (en) * 2014-06-27 2017-04-12 广东威创视讯科技股份有限公司 Composite buffer area image display method and device
CN104715451B (en) * 2015-03-11 2018-01-05 西安交通大学 A kind of image seamless fusion method unanimously optimized based on color and transparency
CN104867170B (en) * 2015-06-02 2017-11-03 厦门卫星定位应用股份有限公司 Public bus network Density Distribution drawing drawing method and system
CN106128291A (en) * 2016-08-31 2016-11-16 武汉拓普伟域网络有限公司 A kind of method based on the self-defined map layer of electronic third-party mapping
CN107919012B (en) * 2016-10-09 2020-11-27 北京嘀嘀无限科技发展有限公司 Method and system for scheduling transport capacity
CN106530219B (en) * 2016-11-07 2020-03-24 青岛海信移动通信技术股份有限公司 Image splicing method and device
CN106557567A (en) * 2016-11-21 2017-04-05 中国农业银行股份有限公司 A kind of data processing method and system
CN107146201A (en) * 2017-05-08 2017-09-08 重庆邮电大学 A kind of image split-joint method based on improvement image co-registration
CN109192054B (en) * 2018-07-27 2020-04-28 阿里巴巴集团控股有限公司 Data processing method and device for map region merging

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112395380A (en) * 2020-11-20 2021-02-23 上海莉莉丝网络科技有限公司 Merging method, merging system and computer readable storage medium for dynamic area boundary in game map
CN112395380B (en) * 2020-11-20 2022-03-22 上海莉莉丝网络科技有限公司 Merging method, merging system and computer readable storage medium for dynamic area boundary in game map

Also Published As

Publication number Publication date
CN109192054B (en) 2020-04-28
TWI698841B (en) 2020-07-11
CN109192054A (en) 2019-01-11
WO2020019899A1 (en) 2020-01-30

Similar Documents

Publication Publication Date Title
TWI698841B (en) Data processing method and device for merging map areas
US20180246635A1 (en) Generating user interfaces combining foreground and background of an image with user interface elements
CN109272454B (en) Coordinate system calibration method and device of augmented reality equipment
US20210258511A1 (en) Diy effects image modification
US9594493B2 (en) Graphical user interface with dial control for a parameter
CN110738722B (en) Thermodynamic diagram texture generation method, device and equipment
TWI691206B (en) Watermark adding processing method, device and client
US20140002487A1 (en) Animated visualization of alpha channel transparency
JP2022535524A (en) Facial image processing method, device, readable medium and electronic apparatus
US10768799B2 (en) Display control of an image on a display screen
CN116977525B (en) Image rendering method and device, storage medium and electronic equipment
TWI671675B (en) Information display method and device
CN112954441B (en) Video editing and playing method, device, equipment and medium
TW201933083A (en) Page display method, apparatus and device
CN115131260A (en) Image processing method, device, equipment, computer readable storage medium and product
CN113163135B (en) Animation adding method, device, equipment and medium for video
CN107404427A (en) One kind chat background display method and device
AU2020301254B2 (en) Sticker generating method and apparatus, and medium and electronic device
CN107703537B (en) The methods of exhibiting and device of a kind of inspection point in three-dimensional earth's surface
US10649640B2 (en) Personalizing perceivability settings of graphical user interfaces of computers
CN115661318A (en) Method, device, storage medium and electronic device for rendering model
CN117348782A (en) Rotation control method and electronic equipment
CN118069487A (en) Page display method, device, equipment and storage medium
CN107015792A (en) A kind of chart unifies the implementation method and equipment of animation
Murru et al. Augmented Visualization on Handheld Devices for Cultural Heritage