WO2020019899A1 - Data processing method and device for merging map regions - Google Patents

Data processing method and device for merging map regions (一种地图区域合并的数据处理方法及装置)

Info

Publication number
WO2020019899A1
Authority
WO
WIPO (PCT)
Prior art keywords: merged, map image, map, target, initial
Prior art date
Application number
PCT/CN2019/091262
Other languages: English (en), French (fr)
Inventor
董晓庆
Original Assignee
阿里巴巴集团控股有限公司
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 阿里巴巴集团控股有限公司 (Alibaba Group Holding Limited)
Publication of WO2020019899A1


Classifications

    • G: PHYSICS
    • G09: EDUCATION; CRYPTOGRAPHY; DISPLAY; ADVERTISING; SEALS
    • G09B: EDUCATIONAL OR DEMONSTRATION APPLIANCES; APPLIANCES FOR TEACHING, OR COMMUNICATING WITH, THE BLIND, DEAF OR MUTE; MODELS; PLANETARIA; GLOBES; MAPS; DIAGRAMS
    • G09B 29/00: Maps; Plans; Charts; Diagrams, e.g. route diagrams
    • G09B 29/003: Maps
    • G09B 29/005: Map projections or methods associated specifically therewith

Definitions

  • This specification belongs to the technical field of map data processing, and particularly relates to a data processing method and device for merging map regions.
  • the purpose of this specification is to provide a data processing method and device for merging map regions.
  • the method is simple and fast, and meets the technical requirements for merging map regions.
  • embodiments of the present specification provide a data processing method for merging map regions, including:
  • a pixel whose transparency is the same as the preset transparency is used as a target pixel, and the target merged map image is generated according to the target pixels.
  • the drawing of a map image of the region to be merged using lines of preset transparency to generate an initial merged map image includes:
  • a map image of the area to be merged is drawn using lines of the preset transparency in the second map drawing area to generate the initial merged map image.
  • the generating of the target merged map image according to the target pixels includes:
  • a map image composed of the target pixel points in the initial merged map image is used as the target merged map image.
  • after removing pixels other than the target pixels from the initial merged map image, the method includes:
  • generating the target merged map image according to the target pixels includes:
  • the coordinate information corresponding to the target pixels is extracted, and the target merged map image is generated according to the set of coordinate information corresponding to the target pixels.
  • the obtaining the transparency of the pixel points in the initial merged map image includes:
  • the pixels in the region to be merged in the initial merged map image are traversed to obtain the transparency corresponding to the pixels in the region to be merged in the initial merged map image.
  • this specification provides a data processing device for merging map regions, including:
  • the initial merged image drawing module is used to draw a map image of a region to be merged by using a preset transparency line to generate an initial merged map image;
  • a transparency obtaining module configured to obtain the transparency of the pixels in the initial merged map image
  • the target merged map generation module is configured to take pixels whose transparency is the same as the preset transparency as target pixels, and to generate the target merged map image according to the target pixels.
  • the initial merged image drawing module is specifically configured to:
  • a map image of the area to be merged is drawn using lines of the preset transparency in the second map drawing area to generate the initial merged map image.
  • the target merged map generation module is specifically configured to:
  • a map image composed of the target pixel points in the initial merged map image is used as the target merged map image.
  • the target merged map generation module is further configured to:
  • the target merged map generation module is specifically configured to:
  • the coordinate information corresponding to the target pixel is extracted, and the target merged map image is generated according to a set of coordinate information corresponding to the target pixel.
  • the transparency obtaining module is specifically configured to:
  • the pixels in the region to be merged in the initial merged map image are traversed to obtain the transparency corresponding to the pixels in the region to be merged in the initial merged map image.
  • an embodiment of the present specification provides a computer storage medium on which a computer program is stored.
  • the data processing method for merging map areas described above is implemented.
  • an embodiment of the present specification provides a data processing system for merging map regions, including at least one processor and a memory for storing processor-executable instructions; when the processor executes the instructions, the data processing method for merging map regions described above is implemented.
  • the data processing method, device, and system for merging map areas provided in this specification can detect whether the transparency of pixels has changed through a coloring technique on the canvas. Based on the difference in transparency between pixels in the overlapping parts and pixels in the non-overlapping parts during the map merge, pixels on overlapping borders and pixels on non-overlapping borders are filtered apart, and the merged map image is further generated based on the pixels of the non-overlapping borders.
  • the method is simple and fast, does not require complicated data processing, and can accurately detect overlapping borders, making the merge of map areas more accurate and fast.
  • the merged map areas can be interacted with as a whole, which is convenient for subsequent use and has wide applicability.
  • FIG. 1 is a schematic flowchart of a data processing method for merging map regions according to an embodiment provided in this specification
  • FIG. 2 is a schematic diagram of an initial merged map image of the northeast region in one embodiment of the present specification
  • FIG. 3 is a schematic diagram of a target merged map image of a merged Northeast region in one embodiment of the present specification
  • FIG. 5 is a schematic diagram of a module structure of an embodiment of a data processing device for merging map regions provided in this specification;
  • FIG. 6 is a schematic diagram of a module structure of an embodiment of a data processing system for merging map regions provided in this specification.
  • maps and electronic maps are generally divided by province and country. In some cases, however, designated areas may need to be merged on the map; for example, the three northeastern provinces on the map of China may be merged into a Northeast region, so as to facilitate users' overall understanding of that region.
  • in the data processing method for merging map regions provided by the embodiments of the present specification, a uniform transparency is set so that, when map regions are merged, the transparency of the overlapping parts changes, and the specified regions are merged based on that change in transparency in the map image. Because it relies only on changes in pixel transparency, merging the specified map areas is simple and fast and requires no complicated mathematical calculation, and the merged map areas can interact as a whole, so the method is highly applicable.
  • the data processing of merging map areas may be performed on a client, such as a smart phone, a tablet computer, or a smart wearable device (smart watch, virtual reality glasses, virtual reality helmet, etc.), or in a client browser such as a PC browser or a mobile browser, or in a server-side web container.
  • FIG. 1 is a schematic flowchart of a data processing method for merging map regions according to an embodiment provided in this specification.
  • a data processing method for merging map regions according to an embodiment of this specification includes:
  • the area to be merged can include multiple map areas.
  • for example, Liaoning, Jilin, and Heilongjiang, the three northeastern provinces, can represent three areas to be merged. On the same canvas (a canvas here denotes a component or area used to draw graphics) or in another map drawing area, a map image of the areas to be merged can be drawn using lines of a preset transparency.
  • FIG. 2 is a schematic diagram of an initial merged map image of the Northeast region in one embodiment of the present specification. As shown in FIG. 2, according to the latitude and longitude information of the three northeastern provinces, the map images of Liaoning, Jilin, and Heilongjiang can be drawn at their relative positions on the same canvas.
  • the map images of the three provinces together form the initial merged map image of the Northeast region. It can be seen that the boundaries of adjacent regions in the initial merged map image may coincide. As shown in FIG. 2, when drawing the map image of a region to be merged, only the outline of each region to be merged, that is, its boundary line, may be drawn, and the area within the boundary line represents the region to be merged.
  • a line with a preset transparency is used for drawing.
  • the preset transparency can be selected according to actual needs; generally, it can be set between 0 and 1. In one embodiment of the present specification, the preset transparency may be 0.5, a setting that facilitates subsequent pixel detection.
  • map areas to be merged may be drawn using lines of the same color, or may be drawn using lines of different colors.
  • the map images of the three provinces as a whole serve as the initial merged image of the Northeast region. That is, in the embodiments of the present specification, when the map image of each region in the area to be merged is drawn, the map images of the different regions may use lines of the same transparency, while the color of the lines need not be specifically limited.
  • GeoJSON is a format for encoding various geographic data structures; it is an organizational format for map data, and maps can be drawn by parsing this data.
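As an illustration of that data format, the sketch below parses a minimal, hypothetical GeoJSON FeatureCollection and extracts the outer boundary ring of each region, which is the coordinate list the boundary lines would be drawn from. The feature name and coordinates are invented for the example.

```python
# Minimal, illustrative GeoJSON: one polygonal region with an invented name.
region_data = {
    "type": "FeatureCollection",
    "features": [
        {
            "type": "Feature",
            "properties": {"name": "RegionA"},
            "geometry": {
                "type": "Polygon",
                # First ring: the outer boundary, closed on its start point.
                "coordinates": [[[0, 0], [4, 0], [4, 3], [0, 3], [0, 0]]],
            },
        },
    ],
}

def boundary_rings(feature_collection):
    """Map each feature name to its outer boundary ring (a list of [lng, lat])."""
    rings = {}
    for feature in feature_collection["features"]:
        geometry = feature["geometry"]
        if geometry["type"] == "Polygon":
            rings[feature["properties"]["name"]] = geometry["coordinates"][0]
    return rings

rings = boundary_rings(region_data)
print(len(rings["RegionA"]))  # 5 vertices: the ring repeats its start point
```

Drawing then amounts to stroking each ring as a polyline at the preset transparency.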
  • each pixel in the area to be merged in the initial merged map image can be traversed, that is, the canvas pixels corresponding to each original data point (a data point or coordinate containing latitude and longitude information) in the area to be merged can be traversed, so as to obtain the transparency corresponding to the pixels in the area to be merged in the initial merged map image.
  • the transparency of each pixel can be obtained based on the coloring technology, and the specific method is not specifically limited in this embodiment of the present application.
  • by traversing only the pixels inside the region to be merged, the transparency of each pixel is obtained; the method is simple, the detection of pixels outside the region to be merged is reduced, and the speed of data processing is improved.
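A sketch of that traversal, using a flat RGBA byte buffer laid out like the pixel data a canvas returns (four bytes per pixel, alpha last, on a 0 to 255 scale). The buffer size and the pixels written into it are illustrative assumptions, not values from the specification.

```python
WIDTH, HEIGHT = 4, 3
buf = bytearray(WIDTH * HEIGHT * 4)  # all pixels start fully transparent

def write_alpha(x, y, alpha):
    buf[(y * WIDTH + x) * 4 + 3] = alpha  # alpha is the 4th byte of each pixel

def read_alpha(x, y):
    return buf[(y * WIDTH + x) * 4 + 3]

# Simulate two drawn boundary pixels: one at the preset transparency
# (0.5, i.e. 128 on the byte scale) and one where two boundaries overlapped.
write_alpha(1, 1, 128)
write_alpha(2, 1, 191)

# Traverse the region and record the transparency of every drawn pixel.
alphas = {(x, y): read_alpha(x, y)
          for y in range(HEIGHT)
          for x in range(WIDTH)
          if read_alpha(x, y) > 0}
print(alphas)
```

In a browser the same loop would run over the `data` array of `getImageData`, restricted to the bounding box of the region to be merged.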
  • a pixel whose transparency is the same as the preset transparency is used as a target pixel, and the target merged map image is generated according to the target pixels.
  • the transparency of each pixel can be compared with the preset transparency of the lines used when drawing the map image of the area to be merged, and pixels whose transparency is the same as the preset transparency can be taken as target pixels. For example, when the map image of the region to be merged is drawn with a preset transparency of 0.5, the pixels with a transparency of 0.5 in the initial merged map image can be used as target pixels.
  • the target pixel can be used to generate a merged target merged map image to complete the merge of the map area.
  • the coordinate information of the target pixels can be extracted and exported, and the target merged map image can be generated using the set of coordinate information corresponding to the target pixels.
  • for example, if the preset transparency used when drawing the map image of the area to be merged is 0.5, the pixels with a transparency of 0.5 in the initial merged map image can be used as target pixels. The coordinate information of each target pixel can be extracted and stored in a coordinate point set, the coordinate point set composed of the coordinate information of all target pixels can be exported, and the target merged map image can be drawn according to that coordinate set.
  • the target merged map image may be composed of the boundary image of the regions to be merged; the boundary image generated in this way does not include the overlapping portions of the boundaries of the regions to be merged.
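The filtering and export step above can be sketched as follows. The pixel-to-alpha map is a stand-in for values read from the initial merged map image, and 128 stands for the preset transparency of 0.5 on a 0 to 255 alpha scale; both are assumptions made for the example.

```python
PRESET_ALPHA = 128  # 0.5 expressed on the canvas byte scale (illustrative)

# (x, y) -> alpha, as if read from the initial merged map image:
# two non-overlapping boundary pixels, one overlap, one undrawn pixel.
pixel_alphas = {(1, 1): 128, (2, 1): 191, (3, 2): 128, (0, 0): 0}

# Target pixels: transparency equal to the preset value, i.e. the
# non-overlapping parts of the boundary.
target_coordinates = sorted(
    xy for xy, alpha in pixel_alphas.items() if alpha == PRESET_ALPHA
)
print(target_coordinates)  # the coordinate set the merged image is drawn from
```

Exporting `target_coordinates` and stroking or plotting those points reproduces the merged boundary without its overlapping segments.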
  • in one embodiment of the present specification, the map image of the region to be merged may be drawn on a canvas or map drawing area using black lines with a transparency of 0.5. The map image of the region to be merged may include the boundary image of each region to be merged, and these map images together compose the initial merged map image.
  • the transparency of the non-overlapping pixels at the boundary of the area to be merged is 0.5, and the transparency of the overlapping pixels is usually greater than 0.5.
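That behavior follows from standard source-over alpha compositing, which is what a drawing surface such as a canvas applies when one translucent line is drawn over another; the formula below is that general rule, stated here as an assumption since the specification does not spell it out.

```python
def composited_alpha(top, bottom):
    """Resulting alpha when a translucent layer is drawn over another (source-over)."""
    return top + bottom * (1 - top)

non_overlapping = 0.5                      # a single boundary line at the preset alpha
overlapping = composited_alpha(0.5, 0.5)   # two coincident boundary lines
print(overlapping)  # 0.75, strictly greater than the preset 0.5
```

This is why a single transparency comparison cleanly separates coinciding boundary segments from the rest of the boundary.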
  • where no image content is drawn, the transparency of the pixels is 0.
  • a pixel point with a transparency of 0.5 may be used as the target pixel point, that is, a pixel point of a non-overlapping portion in the boundary image is used as the target pixel point.
  • the target pixels are grouped together to represent the boundary image of the merged region to be merged.
  • the merged boundary image does not include the overlapping portion of each region to be merged, and can represent the overall boundary contour of each region to be merged.
  • the coordinate information of the target pixel can be extracted and saved, and the coordinate information corresponding to the target pixel can be exported to generate a merged target merged map image.
  • FIG. 3 is a schematic diagram of the target merged map image of the Northeast region in one embodiment of the present specification. As shown in FIG. 3, the target merged map image removes the overlapping portions of the boundaries of the regions to be merged and retains only the non-overlapping parts of the boundary, intuitively displaying the effect of merging the map areas, which is convenient for users to check.
  • Figures 4 (a) -4 (b) are schematic diagrams of detecting changes in transparency in an embodiment of the present specification.
  • in Figure 4(a), two images with a transparency of 0.5 are partially superimposed; it can be seen that the transparency of the superimposed part in the middle is greater than the transparency of the non-superimposed parts. In Figure 4(b), two borderless images with a transparency of 0.5 are partially superimposed, and likewise the transparency of the middle overlapping portion is greater than that of the non-overlapping portions.
  • by detecting changes in the transparency of the pixels, it is possible to determine accurately and quickly which parts of the areas to be merged overlap and which do not, so that the merged map image can be generated quickly and accurately.
  • the embodiment of the present application may also name the merged target merged map image according to the geographical location of the area to be merged. For example, in FIG. 3, the three northeastern provinces after the merge are named the Northeast District.
  • the data processing method for merging map regions provided in this specification can detect whether the transparency of pixels has changed through a coloring technique on the canvas. Based on the difference between the transparency of pixels in the overlapping parts and the transparency of pixels in the non-overlapping parts during the map merge, pixels on overlapping borders and pixels on non-overlapping borders are filtered apart, and the merged map image is further generated based on the pixels of the non-overlapping borders.
  • the method is simple and fast, does not require complicated data processing, and can accurately detect overlapping borders, making the merge of map areas more accurate and fast.
  • the merged map areas can be interacted with as a whole, which is convenient for subsequent use and has wide applicability.
  • the drawing a map image of a region to be merged using lines with preset transparency, and generating an initial merged map image may include:
  • a map image of the area to be merged is drawn in the second map drawing area using lines of the preset transparency, and the initial merged map image is generated.
  • when the user views the map in the first map drawing area (such as the canvas of a client) and some map areas need to be merged, for example the map areas of the three northeastern provinces, the user can select the areas to be merged by clicking or by another operation.
  • for example, the user draws a complete map (such as a map of China) by importing GeoJSON data into the first map drawing area, and clicks the maps of Liaoning, Jilin, and Heilongjiang in the drawn map of China to select the areas to be merged.
  • then a map image of the areas to be merged can be drawn using lines of the preset transparency in the second map drawing area (which may be a hidden canvas) to generate the initial merged map image.
  • the initial merged map image is generated based on the user's selection. The user can select the area to be merged and merge the map areas according to actual needs. The method is simple, flexible, and improves the user experience.
  • the generating a merged target merged map image based on the target pixel point may include:
  • a map image composed of the target pixel points in the initial merged map image is used as the target merged map image.
  • non-target pixels, that is, pixels other than the target pixels, may be removed from the initial merged map image, and the remaining target pixels can then be composed into the target merged map image.
  • non-target pixels are eliminated in the second map drawing area, and the target pixels are retained. Then, the image composed of the remaining target pixels in the second map drawing area can represent the merged target merged map image.
  • Non-target pixels that do not meet the required transparency are removed from the initial merged map image, and the remaining target pixels are directly used to generate the merged target merged map image.
  • the method is simple and the map regions are merged accurately.
  • the color at the positions of the non-target pixels can be set to be the same as the color of the pixels in the region inside the boundary (that is, the interior of the region to be merged) in the initial merged map image.
  • in this way, after the non-target pixels are removed, their positions are prevented from differing in color from the other areas inside the boundary of the region to be merged, which would affect the display effect of the merged map image.
  • for example, if the initial merged map image has a background color when the map image of the area to be merged is drawn, such as the inner area of the boundary being filled with red pixels, then after the non-target pixels are removed the color at their locations may become white or colorless, which differs from the color of the other pixels inside the boundary and affects the display effect of the merged map image.
  • in this case the color at the positions of the non-target pixels can be set to red, consistent with the color of the area inside the boundary of the area to be merged, so as to improve the display effect of the merged map image. If the interior of each region in the area to be merged is filled with a different color, that is, there are multiple colors away from the boundary in the initial merged map image, the color of any pixel near a non-target pixel can be taken as the color at that position after the non-target pixel is removed.
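That clean-up step can be sketched as follows. The three pixels, the red interior fill, and the alpha values are illustrative assumptions chosen to match the red-fill example above.

```python
INTERIOR_FILL = (255, 0, 0)  # red, matching the example interior fill
PRESET_ALPHA = 128           # preset transparency 0.5 on the byte scale

# (x, y) -> (rgb, alpha): an opaque interior pixel, a target boundary pixel
# at the preset transparency, and a non-target (overlapping) boundary pixel.
pixels = {
    (0, 0): ((255, 0, 0), 255),
    (1, 0): ((0, 0, 0), PRESET_ALPHA),
    (2, 0): ((0, 0, 0), 191),
}

merged = {}
for xy, (rgb, alpha) in pixels.items():
    if alpha not in (0, PRESET_ALPHA, 255):
        # Non-target boundary pixel: repaint with the interior fill so no
        # white or colorless hole is left where the overlap was removed.
        merged[xy] = (INTERIOR_FILL, 255)
    else:
        merged[xy] = (rgb, alpha)
```

Target pixels and interior pixels pass through unchanged; only the removed overlap positions are repainted.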
  • in the data processing method for merging map areas provided in this specification, a uniform transparency is set so that, when map areas are merged, the transparency of the overlapping parts changes, and the specified areas are merged based on that change in transparency in the map image. Because it relies only on changes in pixel transparency, merging the specified map areas is simple and fast and requires no complicated mathematical calculation, and the merged map areas can interact as a whole, so the method is highly applicable.
  • one or more embodiments of the present specification also provide a data processing apparatus for map region merging.
  • the device may include a system (including a distributed system), software (application), module, component, server, client, etc. that uses the method described in the embodiments of the present specification, and a device that implements necessary hardware.
  • the devices in one or more embodiments provided in the embodiments of this specification are as described in the following embodiments. Since the implementation solution of the device to solve the problem is similar to the method, the implementation of the specific device in the embodiment of this specification may refer to the implementation of the foregoing method, and the duplicated details are not described again.
  • a "unit" or "module" may be a combination of software and/or hardware that realizes a predetermined function.
  • although the devices described in the following embodiments are preferably implemented in software, implementation in hardware, or in a combination of software and hardware, is also possible and conceived.
  • FIG. 5 is a schematic diagram of a module structure of an embodiment of a data processing device for merging map regions provided in this specification.
  • as shown in FIG. 5, a data processing device for merging map regions provided in this specification includes an initial merged image drawing module 51, a transparency obtaining module 52, and a target merged map generation module 53.
  • the initial merged image drawing module 51 may be configured to draw a map image of a region to be merged by using a preset transparency line to generate an initial merged map image;
  • a transparency obtaining module 52 which may be configured to obtain the transparency of the pixels in the initial merged map image
  • the target merged map generating module 53 may be configured to use a pixel with the same transparency as the preset transparency as a target pixel, and generate a merged target merged map image according to the target pixel.
  • the data processing device for merging map regions can detect whether the transparency of pixels has changed by using a coloring technique on the canvas. Based on the difference in transparency between pixels of the overlapping parts and pixels of the non-overlapping parts during the merge, pixels on overlapping borders and pixels on non-overlapping borders are filtered apart, and the merged map image is further generated based on the pixels of the non-overlapping borders.
  • the method is simple and fast, does not require complicated data processing, and can accurately detect overlapping borders, making the merge of map areas more accurate and fast.
  • the merged map areas can be interacted with as a whole, which is convenient for subsequent use and has wide applicability.
  • the initial merged image drawing module is specifically configured to:
  • a map image of the area to be merged is drawn using lines of the preset transparency in the second map drawing area to generate the initial merged map image.
  • the target merged map generation module is specifically configured to:
  • a map image composed of the target pixel points in the initial merged map image is used as the target merged map image.
  • non-target pixels that do not meet the requirements of transparency are removed from the initial merged map image, and the remaining target pixels are directly generated into the merged target merged map image.
  • the method is simple and the map regions are merged accurately.
  • the target merged map generation module is further configured to:
  • the color of non-target pixels is set to red, which is consistent with the color of the inner area of the boundary of the area to be merged, thereby improving the display effect of the merged map image.
  • the target merged map generation module is specifically configured to:
  • the coordinate information corresponding to the target pixel is extracted, and the target merged map image is generated according to a set of coordinate information corresponding to the target pixel.
  • the target merged map image is generated based on the set of coordinate information of the target pixel point.
  • the method is fast, does not require complicated data processing, and can accurately detect the overlapping parts of the boundary, which makes the merging of map areas more accurate and faster; the merged map areas can be interacted with as a whole, which is convenient for subsequent use and has wide applicability.
  • the transparency acquisition module is specifically configured to:
  • the pixels in the region to be merged in the initial merged map image are traversed to obtain the transparency corresponding to the pixels in the region to be merged in the initial merged map image.
  • by traversing only the pixels inside the area to be merged, the transparency of each pixel is obtained; the method is simple, the detection of pixels outside the area to be merged is reduced, and the speed of data processing is improved.
  • a computer storage medium may also be provided, in which a computer program is stored. When the program is executed by a processor, the data processing method for merging map areas in the foregoing embodiments is implemented.
  • for example, the following method may be implemented: drawing a map image of the region to be merged using lines of a preset transparency to generate an initial merged map image; obtaining the transparency of the pixels in the initial merged map image; and taking pixels whose transparency is the same as the preset transparency as target pixels and generating the target merged map image according to the target pixels.
  • the method or device described in the foregoing embodiments provided in this specification may implement business logic through a computer program and record it on a storage medium, and the storage medium may be read and executed by a computer to achieve the effect of the solution described in the embodiments of this specification.
  • the data processing method or device for merging map areas provided in the embodiments of this specification may be implemented by a processor executing corresponding program instructions in a computer, for example using the C++ language on a Windows operating system on a PC, on a Linux system, or in other environments; using the programming languages of Android or iOS systems for implementation in an intelligent terminal; or realizing the processing logic on a quantum computer.
  • FIG. 6 is a schematic diagram of the module structure of an embodiment of a data processing system for merging map regions provided in this specification.
  • the data processing system for merging map regions provided in the embodiment may include a processor 61 and a memory 62 for storing processor-executable instructions.
  • the processor 61 and the memory 62 complete communication with each other through a bus 63;
  • the processor 61 is configured to call program instructions in the memory 62 to execute the data processing method for merging map regions provided in the foregoing embodiments. For example, the processor 61 draws a map image of the region to be merged using lines of a preset transparency to generate an initial merged map image; obtains the transparency of the pixels in the initial merged map image; takes pixels whose transparency is the same as the preset transparency as target pixels; and generates the target merged map image according to the target pixels.
  • the embodiments of the present specification are not limited to situations that must conform to industry communication standards, standard computer data processing and data storage rules, or the embodiments described in this specification. Implementations slightly modified from certain industry standards, or from the implementations described in customized methods or embodiments, can also achieve effects that are the same as, equivalent or similar to, or predictable from those of the above embodiments. Embodiments obtained by applying such modified or varied methods of data acquisition, storage, judgment, and processing may still fall within the scope of the optional implementations of the examples in this specification.
  • for example, a Programmable Logic Device (PLD), such as a Field Programmable Gate Array (FPGA), may be programmed using a Hardware Description Language (HDL) such as VHDL (Very-High-Speed Integrated Circuit Hardware Description Language) or Verilog.
  • the controller may be implemented in any suitable manner. For example, the controller may take the form of a microprocessor or processor together with a computer-readable medium storing computer-readable program code (e.g., software or firmware) executable by the (micro)processor, or the form of logic gates, switches, application-specific integrated circuits (ASICs), programmable logic controllers, and embedded microcontrollers. Examples of controllers include, but are not limited to, the following microcontrollers: ARC 625D, Atmel AT91SAM, Microchip PIC18F26K20, and Silicone Labs C8051F320. A memory controller may also be implemented as part of the control logic of the memory.
  • the same functions can also be achieved by logically programming the method steps so that the controller takes the form of logic gates, switches, application-specific integrated circuits, programmable logic controllers, embedded microcontrollers, and the like. Therefore, such a controller can be regarded as a hardware component, and the means included in it for implementing various functions can also be regarded as structures within the hardware component; or even, the means for implementing various functions can be regarded both as software modules implementing the method and as structures within the hardware component.
  • the system, device, module, or unit described in the foregoing embodiments may be specifically implemented by a computer chip or entity, or a product with a certain function.
  • a typical implementation device is a computer.
  • the computer may be, for example, a personal computer, a laptop computer, a vehicle-mounted human-computer interaction device, a cellular phone, a camera phone, a smart phone, a personal digital assistant, a media player, a navigation device, an email device, a game console, a tablet computer, a wearable device, or a combination of any of these devices.
  • each module may be implemented in one or more pieces of software and/or hardware, or a module implementing a given function may be implemented by a combination of multiple submodules or subunits.
  • the device embodiments described above are only schematic.
  • the division of the unit is only a logical function division.
  • multiple units or components may be combined or integrated.
  • the displayed or discussed mutual coupling or direct coupling or communication connection may be indirect coupling or communication connection through some interfaces, devices or units, which may be electrical, mechanical or other forms.
  • These computer program instructions may also be stored in a computer-readable memory capable of directing a computer or other programmable data processing device to work in a particular manner, such that the instructions stored in the computer-readable memory produce an article of manufacture including an instruction device that implements the functions specified in one or more flows of a flowchart and/or one or more blocks of a block diagram.
  • These computer program instructions may also be loaded onto a computer or other programmable data processing device, so that a series of operational steps are performed on the computer or other programmable device to produce a computer-implemented process, whereby the instructions executed on the computer or other programmable device provide steps for implementing the functions specified in one or more flows of a flowchart and/or one or more blocks of a block diagram.
  • a computing device includes one or more processors (CPUs), input / output interfaces, network interfaces, and memory.
  • Memory may include non-persistent memory, random access memory (RAM), and / or non-volatile memory in computer-readable media, such as read-only memory (ROM) or flash memory (flash RAM). Memory is an example of a computer-readable medium.
  • Computer-readable media include permanent and non-persistent, removable and non-removable media, and can store information by any method or technology. The information may be computer-readable instructions, data structures, program modules, or other data.
  • Examples of computer storage media include, but are not limited to, phase-change memory (PRAM), static random access memory (SRAM), dynamic random access memory (DRAM), other types of random access memory (RAM), read-only memory (ROM), electrically erasable programmable read-only memory (EEPROM), flash memory or other memory technologies, compact disc read-only memory (CD-ROM), digital versatile disc (DVD) or other optical storage, magnetic cassettes, magnetic tape or disk storage, graphene storage, or other magnetic storage devices, or any other non-transmission medium that can be used to store information accessible by a computing device.
  • as defined herein, computer-readable media do not include transitory computer-readable media, such as modulated data signals and carrier waves.
  • one or more embodiments of the present specification may be provided as a method, a system, or a computer program product. Therefore, one or more embodiments of the present specification may take the form of an entirely hardware embodiment, an entirely software embodiment, or an embodiment combining software and hardware aspects. Moreover, one or more embodiments of the present specification may take the form of a computer program product implemented on one or more computer-usable storage media (including but not limited to disk storage, CD-ROM, optical storage, and the like) containing computer-usable program code.
  • One or more embodiments of the present specification may be described in the general context of computer-executable instructions executed by a computer, such as program modules.
  • program modules include routines, programs, objects, components, data structures, etc. that perform specific tasks or implement specific abstract data types.
  • One or more embodiments of the present specification may also be practiced in distributed computing environments in which tasks are performed by remote processing devices connected through a communication network.
  • program modules may be located in local and remote computer storage media, including storage devices.

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Mathematical Physics (AREA)
  • Business, Economics & Management (AREA)
  • Educational Administration (AREA)
  • Educational Technology (AREA)
  • General Physics & Mathematics (AREA)
  • Processing Or Creating Images (AREA)
  • Image Generation (AREA)

Abstract

A data processing method and device for merging map regions. The data processing method comprises: drawing a map image of the regions to be merged using lines of a preset transparency, to generate an initial merged map image (S2); obtaining the transparency of pixels in the initial merged map image (S4); taking pixels whose transparency is the same as the preset transparency as target pixels, and generating a merged target merged map image from the target pixels (S6). The data processing method and device require no complex data processing and can accurately detect overlapping boundary parts, making the merging of map regions more accurate and faster.

Description

Data processing method and device for merging map regions. Technical field
This specification belongs to the technical field of map data processing, and in particular relates to a data processing method and device for merging map regions.
Background
With the development of computer technology, the advent of electronic maps has brought great convenience to people's lives. When using an electronic map, one often needs to merge multiple regions of the map into a single region, for example merging the three northeastern provinces of China into a greater Northeast region, merging Zhejiang, Shanghai, and Suzhou into an East China region, or merging many countries into a Middle East region.
In the prior art, when merging regions of a map, the coincident lines between multiple regions are usually computed mathematically; the data processing is complex, inflexible, and poorly applicable. Therefore, a convenient and fast implementation for merging map regions is urgently needed.
Summary of the invention
The purpose of this specification is to provide a data processing method and device for merging map regions. The method is simple and fast, and meets the technical requirements for merging map regions.
In one aspect, embodiments of this specification provide a data processing method for merging map regions, including:
drawing a map image of the regions to be merged using lines of a preset transparency, to generate an initial merged map image;
obtaining the transparency of pixels in the initial merged map image;
taking pixels whose transparency is the same as the preset transparency as target pixels, and generating a merged target merged map image from the target pixels.
Further, in another embodiment of the method, drawing the map image of the regions to be merged using lines of the preset transparency to generate the initial merged map image includes:
obtaining, in a first map drawing area, the regions to be merged selected by a user;
drawing, according to the regions to be merged selected by the user, the map image of the regions to be merged in a second map drawing area using lines of the preset transparency, to generate the initial merged map image.
Further, in another embodiment of the method, generating the merged target merged map image from the target pixels includes:
removing the pixels other than the target pixels from the initial merged map image;
taking the map image composed of the target pixels in the initial merged map image as the target merged map image.
Further, in another embodiment of the method, after removing the pixels other than the target pixels from the initial merged map image, the method includes:
setting the color at the positions of the pixels other than the target pixels to be the same as the color of the pixels inside the regions to be merged in the initial merged map image.
Further, in another embodiment of the method, generating the merged target merged map image from the target pixels includes:
extracting coordinate information corresponding to the target pixels, and generating the target merged map image from the set of coordinate information corresponding to the target pixels.
Further, in another embodiment of the method, obtaining the transparency of pixels in the initial merged map image includes:
traversing, according to the coordinate information of the regions to be merged, the pixels within the regions to be merged in the initial merged map image, to obtain the transparency corresponding to the pixels within the regions to be merged in the initial merged map image.
In another aspect, this specification provides a data processing device for merging map regions, including:
an initial merged image drawing module, configured to draw a map image of the regions to be merged using lines of a preset transparency, to generate an initial merged map image;
a transparency obtaining module, configured to obtain the transparency of pixels in the initial merged map image;
a target merged map generating module, configured to take pixels whose transparency is the same as the preset transparency as target pixels, and generate a merged target merged map image from the target pixels.
Further, in another embodiment of the device, the initial merged image drawing module is specifically configured to:
obtain, in a first map drawing area, the regions to be merged selected by a user;
draw, according to the regions to be merged selected by the user, the map image of the regions to be merged in a second map drawing area using lines of the preset transparency, to generate the initial merged map image.
Further, in another embodiment of the device, the target merged map generating module is specifically configured to:
remove the pixels other than the target pixels from the initial merged map image;
take the map image composed of the target pixels in the initial merged map image as the target merged map image.
Further, in another embodiment of the device, the target merged map generating module is further configured to:
set the color at the positions of the pixels other than the target pixels to be the same as the color of the pixels inside the regions to be merged in the initial merged map image.
Further, in another embodiment of the device, the target merged map generating module is specifically configured to:
extract coordinate information corresponding to the target pixels, and generate the target merged map image from the set of coordinate information corresponding to the target pixels.
Further, in another embodiment of the device, the transparency obtaining module is specifically configured to:
traverse, according to the coordinate information of the regions to be merged, the pixels within the regions to be merged in the initial merged map image, to obtain the transparency corresponding to the pixels within the regions to be merged in the initial merged map image.
In yet another aspect, embodiments of this specification provide a computer storage medium on which a computer program is stored; when the computer program is executed, the above data processing method for merging map regions is implemented.
In still another aspect, embodiments of this specification provide a data processing system for merging map regions, including at least one processor and a memory for storing processor-executable instructions; when the processor executes the instructions, the above data processing method for merging map regions is implemented.
With the data processing method, device, and system for merging map regions provided in this specification, whether the transparency of a pixel has changed can be detected through on-canvas dyeing techniques. Since the transparency of pixels in the overlapping parts during map merging differs from that of pixels in the non-overlapping parts, the pixels on overlapping boundaries and the pixels on non-overlapping boundaries can be distinguished, and the merged map image is then generated from the pixels on non-overlapping boundaries. The method is simple and fast, requires no complex data processing, and can accurately detect overlapping boundary parts, making the merging of map regions more accurate and faster; the merged map region can be interacted with as a whole, which facilitates subsequent use and has wide applicability.
Brief description of the drawings
In order to more clearly explain the technical solutions in the embodiments of this specification or in the prior art, the drawings needed in the description of the embodiments or the prior art are briefly introduced below. Obviously, the drawings described below are only some of the embodiments recorded in this specification; for those of ordinary skill in the art, other drawings can be obtained from these drawings without creative effort.
FIG. 1 is a schematic flowchart of the data processing method for merging map regions in an embodiment provided in this specification;
FIG. 2 is a schematic diagram of the initial merged map image of the Northeast region in an embodiment of this specification;
FIG. 3 is a schematic diagram of the merged target merged map image of the Northeast region in an embodiment of this specification;
FIGS. 4(a)-4(b) are schematic diagrams of transparency change detection in an embodiment of this specification;
FIG. 5 is a schematic diagram of the module structure of an embodiment of the data processing device for merging map regions provided in this specification;
FIG. 6 is a schematic diagram of the module structure of an embodiment of a data processing system for merging map regions provided in this specification.
Detailed description
In order to enable those skilled in the art to better understand the technical solutions in this specification, the technical solutions in the embodiments of this specification will be described clearly and completely below with reference to the drawings in the embodiments of this specification. Obviously, the described embodiments are only a part of the embodiments of this specification, not all of them. Based on the embodiments in this specification, all other embodiments obtained by those of ordinary skill in the art without creative effort shall fall within the scope of protection of this specification.
With the development of computer network technology, people can learn geographic information about places around the world through electronic maps. Usually, a map or electronic map divides regions by province or country; however, in some cases it may be necessary to merge certain designated regions on the map, for example merging the three northeastern provinces on a map of China into a Northeast region, so that users can gain an overall understanding of the Northeast region.
The data processing method for merging map regions provided in the embodiments of this specification sets a uniform transparency; when map regions are merged, the transparency of the coincident parts changes, and the designated regions are merged based on the transparency changes in the map image. Merging designated map regions based on changes in pixel transparency is simple and fast, requires no complex mathematical calculation, and the merged map region can be interacted with as a whole, offering strong applicability.
The data processing process of merging map regions in the embodiments of this application can be performed on a client device, such as a smart phone, a tablet computer, or a smart wearable device (smart watch, virtual reality glasses, virtual reality helmet, etc.). Specifically, it can be performed in the browser of the client, such as a PC browser, a mobile browser, or a server-side web container.
Specifically, FIG. 1 is a schematic flowchart of the data processing method for merging map regions in an embodiment provided in this specification. As shown in FIG. 1, the data processing method for merging map regions provided in the embodiments of this specification includes:
S2: drawing a map image of the regions to be merged using lines of a preset transparency, to generate an initial merged map image.
The regions to be merged may include multiple map regions; for example, the three northeastern provinces Liaoning, Jilin, and Heilongjiang may represent three regions to be merged. The map images of the regions to be merged can be drawn with lines of the preset transparency on the same canvas (a canvas may denote a component or area used for drawing graphics) or in another map drawing area. For example, FIG. 2 is a schematic diagram of the initial merged map image of the Northeast region in an embodiment of this specification. As shown in FIG. 2, the map images of Liaoning, Jilin, and Heilongjiang can be drawn on the same canvas according to the latitude and longitude information of the three provinces and their relative positions; together, the three provincial map images constitute the initial merged map image of the Northeast region. It can be seen that the boundaries of adjacent regions to be merged in the initial merged map image may coincide. As shown in FIG. 2, when drawing the map image of a region to be merged, only the outline of each region, i.e., its boundary line, needs to be drawn; the area inside the boundary line represents that region.
When drawing the map images of the regions to be merged, an embodiment of this specification uses lines of a preset transparency. The preset transparency can be selected according to actual needs and is usually set between 0 and 1; in one embodiment of this specification the preset transparency is 0.5, which facilitates the subsequent pixel detection.
In addition, when drawing the individual regions to be merged, different regions may be drawn with lines of the same color or with lines of different colors. For example, when the three northeastern provinces are the regions to be merged, the map images of Liaoning, Jilin, and Heilongjiang can all be drawn on the same canvas using black (or another color such as red or blue) lines with a transparency of 0.5, and the whole of the three provincial map images is taken as the initial merged image of the Northeast region. Alternatively, the map image of Liaoning may be drawn with black lines of transparency 0.5, that of Jilin with red lines of transparency 0.5, and that of Heilongjiang with blue lines of transparency 0.5, the whole of the three provincial map images again forming the initial merged image of the Northeast region. That is, in an embodiment of this specification, when drawing the map images of the individual regions to be merged, the lines of different regions share the same transparency, but the line color is not specifically limited.
When drawing the map images of the regions to be merged, GeoJSON data can be imported and a program can then draw the map on the canvas. GeoJSON is a format for encoding various geographic data structures, i.e., an organizational format for map data, and a map can be drawn by parsing such data.
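The drawing step above can be sketched in a few lines. The following is a minimal illustration (not the patented implementation itself) of how boundary lines drawn at a shared preset transparency accumulate alpha wherever two regions' boundaries coincide; the sparse dictionary canvas, the source-over blending rule, and the pixel lists are simplified stand-ins for a real canvas API:

```python
PRESET_ALPHA = 0.5  # the preset transparency used for every boundary line

def draw_boundary(canvas, boundary_pixels, alpha=PRESET_ALPHA):
    """Composite one region's boundary onto a shared alpha canvas.

    Uses the standard source-over rule, so a pixel drawn twice
    (a shared boundary) ends up with alpha 0.5 + 0.5 * 0.5 = 0.75."""
    for p in boundary_pixels:
        canvas[p] = alpha + canvas.get(p, 0.0) * (1.0 - alpha)

# Two adjacent regions whose boundaries share the pixels (1, 1) and (2, 1).
canvas = {}                          # sparse canvas: (row, col) -> alpha
region_a = [(0, 1), (1, 1), (2, 1)]  # boundary of region A
region_b = [(1, 1), (2, 1), (3, 1)]  # boundary of region B
draw_boundary(canvas, region_a)
draw_boundary(canvas, region_b)

print(canvas[(0, 1)])   # 0.5  -> non-overlapping boundary pixel
print(canvas[(1, 1)])   # 0.75 -> overlapping (shared) boundary pixel
```

This is exactly why a uniform preset transparency works as an overlap detector: any pixel whose alpha differs from the preset value must have been drawn more than once.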
S4: obtaining the transparency of pixels in the initial merged map image.
After the initial merged map image is generated, the transparency of each pixel in the initial merged map image can be obtained. In an embodiment of this specification, when drawing the map images of the regions to be merged, the latitude and longitude information of the regions to be merged can be converted into coordinate information. According to the coordinate information of the regions to be merged, each pixel within the regions to be merged in the initial merged map image can be traversed; that is, the canvas pixel corresponding to each original data point (a data point containing latitude and longitude information or coordinate information) within the regions to be merged can be traversed, to obtain the transparency corresponding to the pixels within the regions to be merged in the initial merged map image. The transparency of each pixel can be obtained based on a dyeing technique; the specific method is not limited in the embodiments of this application.
By traversing the pixels inside the regions to be merged, the transparency change of each pixel is obtained. The method is simple, reduces the detection of pixels outside the regions to be merged, and improves the speed of data processing.
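A minimal, self-contained sketch of this traversal step follows; the dictionary canvas and the bounding-box form of the region's coordinate range are illustrative assumptions, not the patented data structures:

```python
def alphas_in_region(canvas, bbox):
    """Traverse only the pixels inside a to-be-merged region's coordinate
    range (here a bounding box) and report each pixel's transparency.

    canvas: dict mapping (row, col) -> alpha; unset pixels have alpha 0."""
    (r0, c0), (r1, c1) = bbox
    return {(r, c): canvas.get((r, c), 0.0)
            for r in range(r0, r1 + 1)
            for c in range(c0, c1 + 1)}

# One plain boundary pixel (0.5) and one overlapped boundary pixel (0.75).
canvas = {(0, 1): 0.5, (1, 1): 0.75}
alphas = alphas_in_region(canvas, ((0, 0), (1, 1)))
print(alphas[(0, 1)], alphas[(1, 1)], alphas[(0, 0)])  # 0.5 0.75 0.0
```

Restricting the traversal to the region's own coordinate range is what keeps the pixel scan cheap: pixels outside all regions to be merged are never inspected.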
S6: taking pixels whose transparency is the same as the preset transparency as target pixels, and generating a merged target merged map image from the target pixels.
After the transparency of the pixels in the initial merged map image is obtained, it can be compared with the preset transparency of the lines used when drawing the map images of the regions to be merged, and the pixels whose transparency is the same as the preset transparency are taken as target pixels. For example, if the preset transparency used when drawing the map images of the regions to be merged is 0.5, the pixels with a transparency of 0.5 in the initial merged map image can be taken as target pixels. The merged target merged map image can then be generated from the target pixels, completing the merging of the map regions.
In an embodiment of this specification, the coordinate information of the target pixels can be extracted and exported, and the target merged map image can be generated from the set of coordinate information corresponding to the target pixels. For example, if the preset transparency used when drawing the map images of the regions to be merged is 0.5, the pixels with a transparency of 0.5 in the initial merged map image can be taken as target pixels; their coordinate information is extracted and can be saved in a coordinate point set, and the coordinate point set composed of the coordinate information of all target pixels can be exported. The target merged map image is drawn from the coordinate set of the target pixels; the target merged map image may consist of the boundary image of the regions to be merged, and at this point the generated boundary image of the regions to be merged may exclude the overlapping boundary parts of the regions to be merged.
For example, an embodiment of this specification may draw the map images of the regions to be merged on a canvas or map drawing area using black lines with a transparency of 0.5. The map images of the regions to be merged may include the boundary images of the regions to be merged, and together they may constitute the initial merged map image; see the schematic diagram of the initial merged map image of the Northeast region in FIG. 2. Traversing the pixels within the regions to be merged in the initial merged map image, the transparency of the pixels on the non-overlapping parts of the boundaries is 0.5, the transparency of the pixels on the overlapping boundary parts is usually greater than 0.5, and the transparency of the pixels in the other areas inside the boundary images is 0 because no image content is drawn there. The pixels with a transparency of 0.5, i.e., the pixels of the non-overlapping parts of the boundary images, can be taken as target pixels. Combined together, the target pixels represent the boundary image of the merged regions; at this point, the merged boundary image excludes the overlapping parts of the individual regions to be merged and represents the overall boundary outline of the regions. The coordinate information of the target pixels can be extracted and saved, and exporting the coordinate information corresponding to the target pixels generates the merged target merged map image. FIG. 3 is a schematic diagram of the merged target merged map image of the Northeast region in an embodiment of this specification. As shown in FIG. 3, the merged target map image in this embodiment removes the overlapping boundary parts of the regions to be merged and keeps only the non-overlapping boundary parts, intuitively showing the effect of merging map regions and making it convenient for users to view.
FIGS. 4(a)-4(b) are schematic diagrams of transparency change detection in an embodiment of this specification. As shown in FIG. 4(a), two images with a transparency of 0.5 are partially superimposed, and it can be seen that the transparency of the overlapping middle part is greater than that of the non-overlapping parts. Similarly, as shown in FIG. 4(b), when two borderless images with a transparency of 0.5 are partially superimposed, the transparency of the overlapping middle part is again greater than that of the non-overlapping parts. In the embodiments of this specification, by detecting changes in pixel transparency, which parts of the regions to be merged overlap and which do not can be detected accurately and quickly, so that the merged map image is generated quickly and accurately.
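The selection rule of S6 reduces to a simple transparency filter. The sketch below uses illustrative alpha values only: it keeps pixels whose alpha still equals the preset 0.5, dropping both overlapped boundary pixels (alpha raised to 0.75 by compositing) and interior pixels (alpha 0):

```python
PRESET_ALPHA = 0.5

# Illustrative per-pixel transparencies of an initial merged map image.
canvas = {
    (0, 1): 0.5,    # non-overlapping boundary -> target pixel, keep
    (1, 1): 0.75,   # overlapping boundary     -> drop
    (2, 2): 0.0,    # interior, nothing drawn  -> drop
}

# Target pixels: transparency exactly equal to the preset transparency.
target_pixels = sorted(p for p, a in canvas.items() if a == PRESET_ALPHA)
print(target_pixels)   # [(0, 1)] -> coordinates exported to draw the merged map
```

In a real canvas, alpha values read back from 8-bit image data may be quantized, so a production filter would likely compare against the preset value with a small tolerance rather than exact equality.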
In addition, the embodiments of this application may also name the merged target merged map image according to the geographic location of the regions to be merged; for example, in FIG. 3 the merged three northeastern provinces are named the Northeast region.
With the data processing method for merging map regions provided in this specification, whether the transparency of a pixel has changed can be detected through on-canvas dyeing techniques. Since the transparency of pixels in the overlapping parts during map merging differs from that of pixels in the non-overlapping parts, the pixels on overlapping boundaries and the pixels on non-overlapping boundaries can be distinguished, and the merged map image is then generated from the pixels on non-overlapping boundaries. The method is simple and fast, requires no complex data processing, and can accurately detect overlapping boundary parts, making the merging of map regions more accurate and faster; the merged map region can be interacted with as a whole, which facilitates subsequent use and has wide applicability.
On the basis of the above embodiments, in an embodiment of this specification, drawing the map image of the regions to be merged using lines of the preset transparency to generate the initial merged map image may include:
obtaining, in a first map drawing area, the regions to be merged selected by a user;
drawing, according to the regions to be merged selected by the user, the map image of the regions to be merged in a second map drawing area using lines of the preset transparency, to generate the initial merged map image.
Specifically, when a user views a map in the first map drawing area (such as the canvas of a client) and needs to merge some map regions, for example the three northeastern provinces, the user can select the regions to be merged by clicking or through other operations. For example, the user draws a complete map (such as a map of China) in the first map drawing area by importing GeoJSON data, and selects the regions to be merged by clicking Liaoning, Jilin, and Heilongjiang on the drawn map of China. After the regions to be merged selected by the user are identified, the map images of the regions to be merged can be drawn with lines of the preset transparency in the second map drawing area (which may use a hidden canvas) to generate the initial merged map image; for details, refer to the generation of the initial merged map image in the above embodiments, which is not repeated here. The initial merged map image is generated based on the user's selection, so the user can select the regions to be merged according to actual needs and merge map regions accordingly. The method is simple and flexible, and improves the user experience.
On the basis of the above embodiments, in an embodiment of this specification, generating the merged target merged map image from the target pixels may include:
removing the pixels other than the target pixels from the initial merged map image;
taking the map image composed of the target pixels in the initial merged map image as the target merged map image.
Specifically, after the target pixels are determined and when the target merged map image is generated from the target pixels, the non-target pixels (i.e., the pixels other than the target pixels) can be removed from the initial merged map image. At this point, only the target pixels remain in the initial merged map image, and the remaining target pixels can be combined to form the merged target merged map image. For example, in the above embodiment, after the non-target pixels are removed in the second map drawing area and the target pixels are kept, the image composed of the remaining target pixels in the second map drawing area represents the merged target merged map image.
Removing the non-target pixels whose transparency does not meet the requirement from the initial merged map image and generating the merged target merged map image directly from the remaining target pixels is simple, and the map regions are merged accurately.
After the non-target pixels are removed, the color at the positions of the non-target pixels can be set to the same color as the pixels in the area inside the boundaries (i.e., the interior of the regions to be merged) in the initial merged map image. This prevents the color at the removed positions from differing from the color of the other areas inside the boundaries and affecting the display of the merged map image. For example, if, when the map images of the regions to be merged are drawn to generate the initial merged map image, the initial merged map image has a background color, say the area inside the boundaries of the regions to be merged is filled with red pixels, then after the non-target pixels are removed, the color at their positions may become white or colorless, differing from the other pixels inside the boundaries and affecting the display of the merged map image. After the non-target pixels are removed, the color at their positions can be set to red, consistent with the color of the area inside the boundaries of the regions to be merged, improving the display of the merged map image. If the interiors of the individual regions to be merged are filled with different colors, i.e., there are multiple colors off the boundaries in the initial merged map image, the color of any pixel adjacent to a non-target pixel can be used as the color at that position after the non-target pixel is removed.
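The recoloring step can be sketched as follows, assuming a simple dictionary of pixel colors; the red interior fill and the pixel layout are hypothetical values chosen to mirror the example in the text:

```python
INTERIOR_COLOR = "red"   # hypothetical interior fill of the regions to be merged

def recolor_removed(colors, removed_pixels, interior_color=INTERIOR_COLOR):
    """After non-target pixels are removed, repaint their positions with the
    interior color so the merged image shows no holes along old boundaries."""
    for p in removed_pixels:
        colors[p] = interior_color
    return colors

# (0, 1) is a kept boundary pixel; (1, 1) was an overlap pixel that was removed.
colors = {(0, 1): "black", (1, 1): None}
recolor_removed(colors, [(1, 1)])
print(colors[(1, 1)])   # red
```

With multiple interior fill colors, the same function could instead look up the color of a neighboring pixel per removed position, as the text describes.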
With the data processing method for merging map regions provided in this specification, a uniform transparency is set; when map regions are merged, the transparency of the coincident parts changes, and the designated regions are merged based on the transparency changes in the map image. Merging designated map regions based on changes in pixel transparency is simple and fast, requires no complex mathematical calculation, and the merged map region can be interacted with as a whole, offering strong applicability.
The embodiments of the above method in this specification are described in a progressive manner; for the same or similar parts between the embodiments, reference may be made to each other, and each embodiment focuses on its differences from the other embodiments. For related parts, refer to the description of the method embodiments.
Based on the data processing method for merging map regions described above, one or more embodiments of this specification further provide a data processing device for merging map regions. The device may include systems (including distributed systems), software (applications), modules, components, servers, clients, etc. that use the methods described in the embodiments of this specification, combined with the necessary implementation hardware. Based on the same innovative concept, the devices in one or more embodiments provided in the embodiments of this specification are described in the following embodiments. Since the implementation solutions by which the device solves the problem are similar to the method, the implementation of the specific devices in the embodiments of this specification may refer to the implementation of the foregoing method, and repetitions are not described again. As used below, the term "unit" or "module" may be a combination of software and/or hardware that implements a predetermined function. Although the devices described in the following embodiments are preferably implemented in software, implementation in hardware, or in a combination of software and hardware, is also possible and conceived.
Specifically, FIG. 5 is a schematic diagram of the module structure of an embodiment of the data processing device for merging map regions provided in this specification. As shown in FIG. 5, the data processing device for merging map regions provided in this specification includes an initial merged image drawing module 51, a transparency obtaining module 52, and a target merged map generating module 53, wherein:
the initial merged image drawing module 51 may be configured to draw a map image of the regions to be merged using lines of a preset transparency, to generate an initial merged map image;
the transparency obtaining module 52 may be configured to obtain the transparency of pixels in the initial merged map image;
the target merged map generating module 53 may be configured to take pixels whose transparency is the same as the preset transparency as target pixels, and generate a merged target merged map image from the target pixels.
With the data processing device for merging map regions provided in the embodiments of this specification, whether the transparency of a pixel has changed can be detected through on-canvas dyeing techniques. Since the transparency of pixels in the overlapping parts during map merging differs from that of pixels in the non-overlapping parts, the pixels on overlapping boundaries and the pixels on non-overlapping boundaries can be distinguished, and the merged map image is then generated from the pixels on non-overlapping boundaries. The method is simple and fast, requires no complex data processing, and can accurately detect overlapping boundary parts, making the merging of map regions more accurate and faster; the merged map region can be interacted with as a whole, which facilitates subsequent use and has wide applicability.
On the basis of the above embodiments, the initial merged image drawing module is specifically configured to:
obtain, in a first map drawing area, the regions to be merged selected by a user;
draw, according to the regions to be merged selected by the user, the map image of the regions to be merged in a second map drawing area using lines of the preset transparency, to generate the initial merged map image.
On the basis of the above embodiments, the target merged map generating module is specifically configured to:
remove the pixels other than the target pixels from the initial merged map image;
take the map image composed of the target pixels in the initial merged map image as the target merged map image.
In the embodiments of this specification, the non-target pixels whose transparency does not meet the requirement are removed from the initial merged map image, and the remaining target pixels directly generate the merged target merged map image. The method is simple, and the map regions are merged accurately.
On the basis of the above embodiments, the target merged map generating module is further configured to:
set the color at the positions of the pixels other than the target pixels to be the same as the color of the pixels inside the regions to be merged in the initial merged map image.
In the embodiments of this specification, after the non-target pixels are removed, the color at the non-target pixel positions is set to red, consistent with the color of the area inside the boundaries of the regions to be merged, improving the display of the merged map image.
On the basis of the above embodiments, the target merged map generating module is specifically configured to:
extract coordinate information corresponding to the target pixels, and generate the target merged map image from the set of coordinate information corresponding to the target pixels.
In the embodiments of this specification, the target merged map image is generated based on the set of coordinate information of the target pixels. The method is fast, requires no complex data processing, and can accurately detect overlapping boundary parts, making the merging of map regions more accurate and faster; the merged map region can be interacted with as a whole, which facilitates subsequent use and has wide applicability.
On the basis of the above embodiments, the transparency obtaining module is specifically configured to:
traverse, according to the coordinate information of the regions to be merged, the pixels within the regions to be merged in the initial merged map image, to obtain the transparency corresponding to the pixels within the regions to be merged in the initial merged map image.
In the embodiments of this specification, by traversing the pixels inside the regions to be merged, the transparency change of each pixel is obtained. The method is simple, reduces the detection of pixels outside the regions to be merged, and improves the speed of data processing.
It should be noted that the device described above may also include other implementations according to the description of the method embodiments. For specific implementations, refer to the description of the related method embodiments, which are not described one by one here.
In an embodiment of this specification, a computer storage medium may also be provided, on which a computer program is stored; when the computer program is executed, the data processing method for merging map regions in the above embodiments is implemented, for example:
drawing a map image of the regions to be merged using lines of a preset transparency, to generate an initial merged map image;
obtaining the transparency of pixels in the initial merged map image;
taking pixels whose transparency is the same as the preset transparency as target pixels, and generating a merged target merged map image from the target pixels.
The specific embodiments of this specification have been described above. Other embodiments fall within the scope of the appended claims. In some cases, the actions or steps recited in the claims may be performed in an order different from that in the embodiments and still achieve the desired results. In addition, the processes depicted in the drawings do not necessarily require the particular order shown, or a sequential order, to achieve the desired results. In some implementations, multitasking and parallel processing are also possible or may be advantageous.
The method or device described in the above embodiments provided in this specification can implement business logic through a computer program and record it on a storage medium, which can be read and executed by a computer to achieve the effects of the solutions described in the embodiments of this specification.
The above data processing method or device for merging map regions provided in the embodiments of this specification can be implemented in a computer by a processor executing corresponding program instructions, for example implemented on a PC in the C++ language under a Windows operating system, implemented under a Linux system, implemented on a smart terminal using the Android or iOS system programming languages, or implemented with processing logic based on a quantum computer. In an embodiment of a data processing system for merging map regions provided in this specification, FIG. 6 is a schematic diagram of the module structure of an embodiment of the system. As shown in FIG. 6, the data processing system for merging map regions provided in an embodiment of this specification may include a processor 61 and a memory 62 for storing processor-executable instructions,
the processor 61 and the memory 62 communicating with each other through a bus 63;
the processor 61 is configured to call the program instructions in the memory 62 to execute the methods provided by the above data processing method embodiments, including, for example: drawing a map image of the regions to be merged using lines of a preset transparency, to generate an initial merged map image; obtaining the transparency of pixels in the initial merged map image; and taking pixels whose transparency is the same as the preset transparency as target pixels, and generating a merged target merged map image from the target pixels.
It should be noted that the devices, computer storage media, and systems described above in this specification may also include other implementations according to the descriptions of the related method embodiments. For specific implementations, refer to the descriptions of the method embodiments, which are not described one by one here.
The embodiments in this specification are described in a progressive manner; for the same or similar parts between the embodiments, reference may be made to each other, and each embodiment focuses on its differences from the other embodiments. In particular, for the hardware-plus-program embodiments, since they are basically similar to the method embodiments, the description is relatively simple; for related parts, refer to the description of the method embodiments.
The embodiments of this specification are not limited to what must comply with industry communication standards, standard computer data processing and data storage rules, or the situations described in one or more embodiments of this specification. Implementations slightly modified on the basis of certain industry standards, or of implementations described in a custom manner or in the embodiments, can also achieve implementation effects that are the same as, equivalent or similar to, or predictable after modification from, those of the above embodiments. Embodiments obtained by applying such modified or varied data acquisition, storage, judgment, and processing methods may still fall within the scope of the optional implementations of the embodiments of this specification.
In the 1990s, an improvement to a technology could be clearly distinguished as a hardware improvement (for example, an improvement to circuit structures such as diodes, transistors, and switches) or a software improvement (an improvement to a method flow). However, with the development of technology, many of today's improvements to method flows can be regarded as direct improvements to hardware circuit structures. Designers almost always obtain the corresponding hardware circuit structure by programming the improved method flow into a hardware circuit. Therefore, it cannot be said that an improvement to a method flow cannot be implemented with a hardware entity module. For example, a programmable logic device (PLD) (such as a field programmable gate array (FPGA)) is such an integrated circuit whose logic functions are determined by the user's programming of the device. Designers program by themselves to "integrate" a digital system onto a piece of PLD, without having to ask a chip manufacturer to design and fabricate a dedicated integrated circuit chip. Moreover, nowadays, instead of manually fabricating integrated circuit chips, this programming is mostly implemented with "logic compiler" software, which is similar to the software compiler used in program development; the original code to be compiled must also be written in a specific programming language, called a hardware description language (HDL). There is not just one HDL but many, such as ABEL (Advanced Boolean Expression Language), AHDL (Altera Hardware Description Language), Confluence, CUPL (Cornell University Programming Language), HDCal, JHDL (Java Hardware Description Language), Lava, Lola, MyHDL, PALASM, and RHDL (Ruby Hardware Description Language); the most commonly used at present are VHDL (Very-High-Speed Integrated Circuit Hardware Description Language) and Verilog. Those skilled in the art should also understand that a hardware circuit implementing a logical method flow can easily be obtained simply by logically programming the method flow with the above hardware description languages and programming it into an integrated circuit.
The controller may be implemented in any suitable manner; for example, the controller may take the form of a microprocessor or processor together with a computer-readable medium storing computer-readable program code (e.g., software or firmware) executable by the (micro)processor, logic gates, switches, an Application Specific Integrated Circuit (ASIC), a programmable logic controller, or an embedded microcontroller. Examples of controllers include, but are not limited to, the following microcontrollers: ARC 625D, Atmel AT91SAM, Microchip PIC18F26K20, and Silicon Labs C8051F320; a memory controller may also be implemented as part of the control logic of the memory. Those skilled in the art also know that, in addition to implementing the controller purely as computer-readable program code, the method steps can be logically programmed so that the controller achieves the same functions in the form of logic gates, switches, application-specific integrated circuits, programmable logic controllers, embedded microcontrollers, and the like. Such a controller can therefore be regarded as a hardware component, and the means included within it for implementing various functions can also be regarded as structures within the hardware component. Or even, the means for implementing various functions can be regarded as both software modules implementing the method and structures within the hardware component.
The systems, devices, modules, or units described in the above embodiments may be specifically implemented by a computer chip or entity, or by a product with a certain function. A typical implementation device is a computer. Specifically, the computer may be, for example, a personal computer, a laptop computer, a vehicle-mounted human-computer interaction device, a cellular phone, a camera phone, a smart phone, a personal digital assistant, a media player, a navigation device, an email device, a game console, a tablet computer, a wearable device, or a combination of any of these devices.
Although one or more embodiments of this specification provide method operation steps as described in the embodiments or flowcharts, more or fewer operation steps may be included based on conventional or non-creative means. The order of steps listed in the embodiments is only one of many orders of execution and does not represent the only order of execution. When an actual device or terminal product executes, the steps may be executed sequentially or in parallel according to the method shown in the embodiments or drawings (for example, in a parallel-processor or multi-threaded environment, or even a distributed data processing environment). The terms "comprise", "include", or any other variants thereof are intended to cover non-exclusive inclusion, so that a process, method, product, or device including a series of elements includes not only those elements but also other elements not explicitly listed, or elements inherent to such a process, method, product, or device. Without further limitation, it is not excluded that additional identical or equivalent elements exist in the process, method, product, or device including the elements. Words such as first and second are used to denote names and do not denote any particular order.
For the convenience of description, the above devices are described separately by function in various modules. Of course, when implementing one or more embodiments of this specification, the functions of the modules may be implemented in one or more pieces of software and/or hardware, or the modules implementing the same function may be implemented by a combination of multiple submodules or subunits, etc. The device embodiments described above are only schematic; for example, the division of the units is only a logical function division, and there may be other divisions in actual implementation; for example, multiple units or components may be combined or integrated into another system, or some features may be ignored or not executed. In addition, the mutual coupling or direct coupling or communication connection shown or discussed may be indirect coupling or communication connection through some interfaces, devices, or units, and may be electrical, mechanical, or other forms.
The present invention is described with reference to flowcharts and/or block diagrams of methods, devices (systems), and computer program products according to embodiments of the present invention. It should be understood that each flow and/or block in the flowcharts and/or block diagrams, and combinations of flows and/or blocks in the flowcharts and/or block diagrams, can be implemented by computer program instructions. These computer program instructions can be provided to the processor of a general-purpose computer, a special-purpose computer, an embedded processor, or another programmable data processing device to produce a machine, so that the instructions executed by the processor of the computer or other programmable data processing device produce a device for implementing the functions specified in one or more flows of a flowchart and/or one or more blocks of a block diagram.
These computer program instructions may also be stored in a computer-readable memory capable of directing a computer or other programmable data processing device to work in a particular manner, such that the instructions stored in the computer-readable memory produce an article of manufacture including an instruction device that implements the functions specified in one or more flows of a flowchart and/or one or more blocks of a block diagram.
These computer program instructions may also be loaded onto a computer or other programmable data processing device, so that a series of operational steps are performed on the computer or other programmable device to produce a computer-implemented process, whereby the instructions executed on the computer or other programmable device provide steps for implementing the functions specified in one or more flows of a flowchart and/or one or more blocks of a block diagram.
In a typical configuration, a computing device includes one or more processors (CPUs), input/output interfaces, network interfaces, and memory.
The memory may include non-persistent memory, random access memory (RAM), and/or non-volatile memory in computer-readable media, such as read-only memory (ROM) or flash memory (flash RAM). Memory is an example of a computer-readable medium.
Computer-readable media include permanent and non-persistent, removable and non-removable media, and can store information by any method or technology. The information may be computer-readable instructions, data structures, program modules, or other data. Examples of computer storage media include, but are not limited to, phase-change memory (PRAM), static random access memory (SRAM), dynamic random access memory (DRAM), other types of random access memory (RAM), read-only memory (ROM), electrically erasable programmable read-only memory (EEPROM), flash memory or other memory technologies, compact disc read-only memory (CD-ROM), digital versatile disc (DVD) or other optical storage, magnetic cassettes, magnetic tape or disk storage, graphene storage, or other magnetic storage devices, or any other non-transmission medium that can be used to store information accessible by a computing device. As defined herein, computer-readable media do not include transitory media, such as modulated data signals and carrier waves.
Those skilled in the art should understand that one or more embodiments of this specification may be provided as a method, a system, or a computer program product. Therefore, one or more embodiments of this specification may take the form of an entirely hardware embodiment, an entirely software embodiment, or an embodiment combining software and hardware aspects. Moreover, one or more embodiments of this specification may take the form of a computer program product implemented on one or more computer-usable storage media (including but not limited to disk storage, CD-ROM, optical storage, and the like) containing computer-usable program code.
One or more embodiments of this specification may be described in the general context of computer-executable instructions executed by a computer, such as program modules. Generally, program modules include routines, programs, objects, components, data structures, etc. that perform specific tasks or implement specific abstract data types. One or more embodiments of this specification may also be practiced in distributed computing environments, in which tasks are performed by remote processing devices connected through a communication network. In a distributed computing environment, program modules may be located in local and remote computer storage media, including storage devices.
The embodiments in this specification are described in a progressive manner; for the same or similar parts between the embodiments, reference may be made to each other, and each embodiment focuses on its differences from the other embodiments. In particular, for the system embodiments, since they are basically similar to the method embodiments, the description is relatively simple; for related parts, refer to the description of the method embodiments. In the description of this specification, descriptions with reference to the terms "one embodiment", "some embodiments", "example", "specific example", or "some examples" mean that the specific features, structures, materials, or characteristics described in connection with the embodiment or example are included in at least one embodiment or example of this specification. In this specification, schematic expressions of the above terms do not necessarily refer to the same embodiment or example. Moreover, the specific features, structures, materials, or characteristics described may be combined in a suitable manner in any one or more embodiments or examples. In addition, without mutual contradiction, those skilled in the art may combine different embodiments or examples described in this specification and the features of different embodiments or examples.
The above is merely an example of one or more embodiments of this specification and is not intended to limit one or more embodiments of this specification. For those skilled in the art, one or more embodiments of this specification may have various modifications and changes. Any modification, equivalent replacement, improvement, etc. made within the spirit and principles of this specification shall be included within the scope of the claims.

Claims (14)

  1. A data processing method for merging map regions, comprising:
    drawing a map image of regions to be merged using lines of a preset transparency, to generate an initial merged map image;
    obtaining the transparency of pixels in the initial merged map image;
    taking pixels whose transparency is the same as the preset transparency as target pixels, and generating a merged target merged map image from the target pixels.
  2. The method according to claim 1, wherein drawing the map image of the regions to be merged using lines of the preset transparency to generate the initial merged map image comprises:
    obtaining, in a first map drawing area, the regions to be merged selected by a user;
    drawing, according to the regions to be merged selected by the user, the map image of the regions to be merged in a second map drawing area using lines of the preset transparency, to generate the initial merged map image.
  3. The method according to claim 1, wherein generating the merged target merged map image from the target pixels comprises:
    removing the pixels other than the target pixels from the initial merged map image;
    taking the map image composed of the target pixels in the initial merged map image as the target merged map image.
  4. The method according to claim 3, wherein after removing the pixels other than the target pixels from the initial merged map image, the method comprises:
    setting the color at the positions of the pixels other than the target pixels to be the same as the color of the pixels inside the regions to be merged in the initial merged map image.
  5. The method according to claim 1, wherein generating the merged target merged map image from the target pixels comprises:
    extracting coordinate information corresponding to the target pixels, and generating the target merged map image from the set of coordinate information corresponding to the target pixels.
  6. The method according to claim 1, wherein obtaining the transparency of pixels in the initial merged map image comprises:
    traversing, according to coordinate information of the regions to be merged, the pixels within the regions to be merged in the initial merged map image, to obtain the transparency corresponding to the pixels within the regions to be merged in the initial merged map image.
  7. A data processing device for merging map regions, comprising:
    an initial merged image drawing module, configured to draw a map image of regions to be merged using lines of a preset transparency, to generate an initial merged map image;
    a transparency obtaining module, configured to obtain the transparency of pixels in the initial merged map image;
    a target merged map generating module, configured to take pixels whose transparency is the same as the preset transparency as target pixels, and generate a merged target merged map image from the target pixels.
  8. The device according to claim 7, wherein the initial merged image drawing module is specifically configured to:
    obtain, in a first map drawing area, the regions to be merged selected by a user;
    draw, according to the regions to be merged selected by the user, the map image of the regions to be merged in a second map drawing area using lines of the preset transparency, to generate the initial merged map image.
  9. The device according to claim 7, wherein the target merged map generating module is specifically configured to:
    remove the pixels other than the target pixels from the initial merged map image;
    take the map image composed of the target pixels in the initial merged map image as the target merged map image.
  10. The device according to claim 9, wherein the target merged map generating module is further configured to:
    set the color at the positions of the pixels other than the target pixels to be the same as the color of the pixels inside the regions to be merged in the initial merged map image.
  11. The device according to claim 7, wherein the target merged map generating module is specifically configured to:
    extract coordinate information corresponding to the target pixels, and generate the target merged map image from the set of coordinate information corresponding to the target pixels.
  12. The device according to claim 7, wherein the transparency obtaining module is specifically configured to:
    traverse, according to coordinate information of the regions to be merged, the pixels within the regions to be merged in the initial merged map image, to obtain the transparency corresponding to the pixels within the regions to be merged in the initial merged map image.
  13. A computer storage medium on which a computer program is stored, wherein when the computer program is executed, the method according to any one of claims 1-6 is implemented.
  14. A data processing system for merging map regions, comprising at least one processor and a memory for storing processor-executable instructions, wherein when the processor executes the instructions, the method according to any one of claims 1-6 is implemented.
PCT/CN2019/091262 2018-07-27 2019-06-14 一种地图区域合并的数据处理方法及装置 WO2020019899A1 (zh)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN201810839748.9 2018-07-27
CN201810839748.9A CN109192054B (zh) 2018-07-27 2018-07-27 Data processing method and device for merging map regions

Publications (1)

Publication Number Publication Date
WO2020019899A1 true WO2020019899A1 (zh) 2020-01-30

Family

ID=64937165

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2019/091262 WO2020019899A1 (zh) 2018-07-27 2019-06-14 Data processing method and device for merging map regions

Country Status (3)

Country Link
CN (1) CN109192054B (zh)
TW (1) TWI698841B (zh)
WO (1) WO2020019899A1 (zh)

Families Citing this family (13)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108573653B (zh) * 2017-03-13 2022-01-04 腾讯科技(深圳)有限公司 Electronic map generation method and device
CN109192054B (zh) * 2018-07-27 2020-04-28 阿里巴巴集团控股有限公司 Data processing method and device for merging map regions
CN109785355A (zh) * 2019-01-25 2019-05-21 网易(杭州)网络有限公司 Region merging method and device, computer storage medium, and electronic device
CN111489411B (zh) * 2019-01-29 2023-06-20 北京百度网讯科技有限公司 Line drawing method and device, image processor, graphics card, and vehicle
CN110068344B (zh) * 2019-04-08 2021-11-23 丰图科技(深圳)有限公司 Map data production method and device, server, and storage medium
CN112019702B (zh) * 2019-05-31 2023-08-25 北京嗨动视觉科技有限公司 Image processing method and device, and video processor
CN112179361B (zh) 2019-07-02 2022-12-06 华为技术有限公司 Method and device for updating a working map of a mobile robot, and storage medium
CN111080732B (zh) * 2019-11-12 2023-09-22 望海康信(北京)科技股份公司 Method and system for forming a virtual map
CN111127543B (zh) * 2019-12-23 2024-04-05 北京金山安全软件有限公司 Image processing method and device, electronic device, and storage medium
CN111881817A (zh) * 2020-07-27 2020-11-03 北京三快在线科技有限公司 Method and device for extracting a specific region, storage medium, and electronic device
CN112269850B (zh) * 2020-11-10 2024-05-03 中煤航测遥感集团有限公司 Geographic data processing method and device, electronic device, and storage medium
CN112395380B (zh) * 2020-11-20 2022-03-22 上海莉莉丝网络科技有限公司 Method and system for merging dynamic region boundaries in a game map, and computer-readable storage medium
CN112652063B (zh) * 2020-11-20 2022-09-20 上海莉莉丝网络科技有限公司 Method and system for generating dynamic region boundaries in a game map, and computer-readable storage medium

Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104101348A (zh) * 2013-04-08 2014-10-15 现代Mnsoft公司 Navigation system and method for displaying a map on the navigation system
US8872848B1 (en) * 2010-09-29 2014-10-28 Google Inc. Rendering vector data as tiles
CN106128291A (zh) * 2016-08-31 2016-11-16 武汉拓普伟域网络有限公司 Method for drawing a custom map layer based on a third-party electronic map
CN106530219A (zh) * 2016-11-07 2017-03-22 青岛海信移动通信技术股份有限公司 Image stitching method and device
CN106557567A (zh) * 2016-11-21 2017-04-05 中国农业银行股份有限公司 Data processing method and system
CN107919012A (zh) * 2016-10-09 2018-04-17 北京嘀嘀无限科技发展有限公司 Method and system for transport capacity scheduling
CN109192054A (zh) * 2018-07-27 2019-01-11 阿里巴巴集团控股有限公司 Data processing method and device for merging map regions

Family Cites Families (14)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN1299220C (zh) * 2004-05-13 2007-02-07 上海交通大学 Automatic stitching method for digital road maps
US7911481B1 (en) * 2006-12-14 2011-03-22 Disney Enterprises, Inc. Method and apparatus of graphical object selection
TWI329825B (en) * 2007-04-23 Network e-map graphic automatically generating system and method therefor
TWI480809B (zh) * 2009-08-31 2015-04-11 Alibaba Group Holding Ltd Image feature extraction method and device
TWI479343B (zh) * 2011-11-11 2015-04-01 Easymap Digital Technology Inc Theme map generating system and method thereof
US9043150B2 (en) * 2012-06-05 2015-05-26 Apple Inc. Routing applications for navigation
GB2499694B8 (en) * 2012-11-09 2017-06-07 Sony Computer Entertainment Europe Ltd System and method of image reconstruction
US9684673B2 (en) * 2013-12-04 2017-06-20 Urthecast Corp. Systems and methods for processing and distributing earth observation images
CN103714540B (zh) * 2013-12-21 2017-01-11 浙江传媒学院 SVM-based transparency estimation method in digital matting
CN103761094A (zh) * 2014-01-22 2014-04-30 上海诚明融鑫科技有限公司 Method for merging polygons in planar drawing
CN104077100B (zh) * 2014-06-27 2017-04-12 广东威创视讯科技股份有限公司 Composite buffer image display method and device
CN104715451B (zh) * 2015-03-11 2018-01-05 西安交通大学 Seamless image fusion method based on consistent optimization of color and transparency
CN104867170B (zh) * 2015-06-02 2017-11-03 厦门卫星定位应用股份有限公司 Method and system for drawing bus route density distribution maps
CN107146201A (zh) * 2017-05-08 2017-09-08 重庆邮电大学 Image stitching method based on improved image fusion

Patent Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US8872848B1 (en) * 2010-09-29 2014-10-28 Google Inc. Rendering vector data as tiles
CN104101348A (zh) * 2013-04-08 2014-10-15 现代Mnsoft公司 Navigation system and method for displaying a map on the navigation system
CN106128291A (zh) * 2016-08-31 2016-11-16 武汉拓普伟域网络有限公司 Method for drawing a custom map layer based on a third-party electronic map
CN107919012A (zh) * 2016-10-09 2018-04-17 北京嘀嘀无限科技发展有限公司 Method and system for transport capacity scheduling
CN106530219A (zh) * 2016-11-07 2017-03-22 青岛海信移动通信技术股份有限公司 Image stitching method and device
CN106557567A (zh) * 2016-11-21 2017-04-05 中国农业银行股份有限公司 Data processing method and system
CN109192054A (zh) * 2018-07-27 2019-01-11 阿里巴巴集团控股有限公司 Data processing method and device for merging map regions

Also Published As

Publication number Publication date
CN109192054B (zh) 2020-04-28
CN109192054A (zh) 2019-01-11
TW202008328A (zh) 2020-02-16
TWI698841B (zh) 2020-07-11

Similar Documents

Publication Publication Date Title
WO2020019899A1 (zh) Data processing method and device for merging map regions
CN112184738B (zh) Image segmentation method, device, equipment, and storage medium
US20180246635A1 (en) Generating user interfaces combining foreground and background of an image with user interface elements
US11217020B2 (en) 3D cutout image modification
US20210258511A1 (en) Diy effects image modification
US11494999B2 (en) Procedurally generating augmented reality content generators
CN111095353A (zh) Real-time tracking compensation image effects
WO2019144763A1 (zh) Page display method, device, and equipment
TWI752473B (zh) Image processing method and device, electronic equipment, and computer-readable storage medium
KR20220080007A (ko) Augmented reality-based display method, device, and storage medium
CN112954441B (zh) Video editing and playback method, device, equipment, and medium
TWI691206B (zh) Watermark adding processing method, device, and client
CN115131260A (zh) Image processing method, device, equipment, computer-readable storage medium, and product
EP4222715A1 (en) Ingestion pipeline for augmented reality content generators
CN116310036A (zh) Scene rendering method, device, equipment, computer-readable storage medium, and product
CN113163135B (zh) Method, device, equipment, and medium for adding animation to video
US11494961B2 (en) Sticker generating method and apparatus, and medium and electronic device
CN107613046A (zh) Filter pipeline system, image data processing method and device, and electronic equipment
CN114842120A (zh) Image rendering processing method, device, equipment, and medium
EP4170588A2 (en) Video photographing method and apparatus, and device and storage medium
CN117808955A (zh) Image alignment method, device, equipment, storage medium, and computer program product
CA2931695C (en) Picture fusion method and apparatus
CN118279460A (zh) Ornament rendering method, device, equipment, computer-readable storage medium, and product
KR20220099584A (ko) Image processing method, apparatus, electronic device, and computer-readable storage medium
CN115018749A (zh) Image processing method, device, equipment, computer-readable storage medium, and product

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 19842028

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 19842028

Country of ref document: EP

Kind code of ref document: A1