CN117710388A - Image processing method, device and storage medium - Google Patents

Image processing method, device and storage medium

Info

Publication number
CN117710388A
Authority
CN
China
Prior art keywords
shadow
image
image processing
pixel point
region
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202311642560.2A
Other languages
Chinese (zh)
Inventor
卢杨裔
宋雨伦
李大中
朱润亚
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
China United Network Communications Group Co Ltd
Unicom Digital Technology Co Ltd
Original Assignee
China United Network Communications Group Co Ltd
Unicom Digital Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by China United Network Communications Group Co Ltd, Unicom Digital Technology Co Ltd filed Critical China United Network Communications Group Co Ltd
Priority to CN202311642560.2A priority Critical patent/CN117710388A/en
Publication of CN117710388A publication Critical patent/CN117710388A/en
Pending legal-status Critical Current


Landscapes

  • Image Processing (AREA)

Abstract

The application discloses an image processing method, an image processing device and a storage medium, relates to the field of computer technology, and is used for solving the problem that vector data of buildings and terrain obtained with general techniques has low integrity. The method comprises the following steps: after the feature information of each pixel point in an image is obtained, the shadow angle of each pixel point is determined according to the feature information of that pixel point, and the shadow region in the image is determined according to the shadow angles of the pixel points, so that the shadow region can be vectorized to obtain vector data corresponding to the image. The feature information comprises slope information and light source position information of each pixel point, and the shadow angle is used for representing brightness information of the pixel point.

Description

Image processing method, device and storage medium
Technical Field
The present disclosure relates to the field of computer technologies, and in particular, to an image processing method, an image processing device, and a storage medium.
Background
At present, environmental monitoring, natural disaster analysis and the like can be effectively supported by vector data of buildings and terrain.
To acquire vector data of buildings and terrain, laser radar scanning, stereo photogrammetry and other approaches may be adopted. However, these approaches place high demands on environmental factors such as weather and occlusion, so it is often difficult to obtain complete vector data of buildings and terrain.
Disclosure of Invention
The application provides an image processing method, an image processing device and a storage medium, which are used for solving the problem that general techniques have difficulty acquiring complete vector data of buildings and terrain, and which can improve the integrity of the acquired vector data.
In order to achieve the above purpose, the present application adopts the following technical scheme:
In a first aspect, there is provided an image processing method including: after an image and the feature information of each pixel point in the image are obtained, determining the shadow angle of each pixel point according to the feature information of that pixel point, and determining the shadow region in the image according to the shadow angles of the pixel points, so that the shadow region can be vectorized to obtain vector data corresponding to the image. The shadow angle is used for representing brightness information of the pixel point.
Optionally, in the image processing method, the slope information, the light source position information and the shadow angle of each pixel point satisfy a first formula; the first formula is:
α=β-γ;
where α is the shadow angle, β is the slope information, and γ is the light source position information.
Optionally, the method for determining the shadow area in the image according to the shadow angle of each pixel point specifically includes:
dividing an image into a plurality of regions;
determining a shadow angle threshold value corresponding to each region according to the shadow angles of a plurality of pixel points in each region;
determining a region corresponding to the shadow pixel points in each region as a shadow region; the shadow pixels in each region are used to represent pixels having a shadow angle less than a shadow angle threshold corresponding to each region.
Optionally, the method for dividing the image into a plurality of regions specifically includes: moving a preset window over the image a plurality of times according to a preset step size to obtain the plurality of regions.
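The window-based division described above can be sketched as follows. This is a minimal illustration only; the function name `divide_into_regions` and the 64-pixel window and step values are hypothetical, not taken from the patent.

```python
import numpy as np

def divide_into_regions(image, window=64, step=64):
    """Move a preset window over the image with a preset step size,
    collecting each covered sub-array as one region."""
    h, w = image.shape[:2]
    regions = []
    for y in range(0, h - window + 1, step):
        for x in range(0, w - window + 1, step):
            regions.append(image[y:y + window, x:x + window])
    return regions

# a 128x128 image split with a 64x64 window and step 64 yields four regions
regions = divide_into_regions(np.zeros((128, 128)), window=64, step=64)
```

With a step equal to the window size the regions tile the image without overlap; a smaller step would produce overlapping regions.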
In a second aspect, there is provided an image processing apparatus comprising an acquisition unit, a determining unit and a processing unit. The acquisition unit is configured to acquire the feature information of each pixel point in an image; the feature information includes slope information and light source position information. The determining unit is configured to determine the shadow angle of each pixel point according to the feature information of that pixel point; the shadow angle is used for representing brightness information of the pixel point. The determining unit is further configured to determine the shadow region in the image according to the shadow angles of the pixel points. The processing unit is configured to vectorize the shadow region to obtain vector data corresponding to the image.
Optionally, in the image processing apparatus, the slope information, the light source position information and the shadow angle of each pixel point satisfy a first formula; the first formula is:
α=β-γ;
where α is the shadow angle, β is the slope information, and γ is the light source position information.
Optionally, the determining unit is specifically configured to: dividing an image into a plurality of regions; determining a shadow angle threshold value corresponding to each region according to the shadow angles of a plurality of pixel points in each region; determining a region corresponding to the shadow pixel points in each region as a shadow region; the shadow pixels in each region are used to represent pixels having a shadow angle less than a shadow angle threshold corresponding to each region.
Optionally, the determining unit is specifically configured to: and moving the preset window in the image for a plurality of times according to the preset step length to obtain a plurality of areas.
In a third aspect, an image processing apparatus is provided, comprising a memory and a processor; the memory is used for storing computer execution instructions, and the processor is connected with the memory through a bus; when the image processing apparatus is running, the processor executes computer-executable instructions stored in the memory to cause the image processing apparatus to perform the image processing method according to the first aspect.
The image processing apparatus may be a network device, or may be a part of a network device, for example a chip system in a network device. The chip system is configured to support the network device in implementing the functions involved in the first aspect and any one of its possible implementations, for example obtaining, determining and sending the data and/or information involved in the above image processing method. The chip system includes a chip, and may also include other discrete devices or circuit structures.
In a fourth aspect, there is provided a computer readable storage medium comprising computer executable instructions which, when run on a computer, cause the computer to perform the image processing method of the first aspect.
In a fifth aspect, there is also provided a computer program product comprising computer instructions which, when run on an image processing apparatus, cause the image processing apparatus to perform the image processing method as described in the first aspect above.
It should be noted that the above computer instructions may be stored in whole or in part on a first computer readable storage medium. The first computer readable storage medium may be packaged together with the processor of the image processing apparatus or packaged separately from it; this is not limited in the application.
For the description of the second, third, fourth and fifth aspects of the application, reference may be made to the detailed description of the first aspect; for their advantageous effects, reference may be made to the analysis of the advantageous effects of the first aspect, which will not be repeated here.
In the present application, the names of the above-described image processing apparatuses do not constitute limitations on the devices or function modules themselves, and in actual implementations, these devices or function modules may appear under other names. Insofar as the function of each device or function module is similar to the present application, it is within the scope of the claims of the present application and the equivalents thereof.
These and other aspects of the present application will be more readily apparent from the following description.
The technical scheme provided by the application at least brings the following beneficial effects:
based on any one of the above aspects, the present application provides an image processing method, after an image processing device obtains feature information of each pixel point in an image, a shadow angle of each pixel point may be determined according to the feature information of each pixel point, and a shadow area in the image may be determined according to the shadow angle of each pixel point, so as to further perform vectorization processing on the shadow area, and obtain vector data corresponding to the image. The characteristic information comprises gradient information of each pixel point and light source position information, and the shadow angle is used for representing brightness information of the pixel point.
Based on the method, the image can be processed according to the characteristic information of each pixel point in the image, and the complete vector data corresponding to the image is obtained. Compared with a mode of acquiring vector data by means of laser radar scanning and stereo photogrammetry, the method and the device can support processing of the existing satellite images and the like to obtain the vector data, and the problem that complete vector data are difficult to acquire due to environmental factors is avoided. Therefore, the method and the device can be used for solving the problem that the integrity of the obtained vector data in the general technology is low, and improving the integrity of the obtained vector data.
Drawings
Fig. 1 is a schematic structural diagram of an image processing system according to an embodiment of the present application;
fig. 2 is a schematic hardware structure of an image processing apparatus according to an embodiment of the present application;
fig. 3 is a schematic flow chart of an image processing method according to an embodiment of the present application;
fig. 4 is a flowchart of another image processing method according to an embodiment of the present application;
fig. 5 is a flowchart of another image processing method according to an embodiment of the present application;
fig. 6 is a flowchart of another image processing method according to an embodiment of the present application;
Fig. 7 is a schematic structural diagram of an image processing apparatus according to an embodiment of the present application.
Detailed Description
The technical solutions in the embodiments of the present application will be described clearly and completely below with reference to the drawings in those embodiments; the described embodiments are only some, not all, of the embodiments of the application. All other embodiments obtained by those skilled in the art based on these embodiments without inventive effort fall within the protection scope of the application.
It should be noted that, in the embodiments of the present application, words such as "exemplary" or "such as" are used to mean serving as an example, instance, or illustration. Any embodiment or design described herein as "exemplary" or "for example" should not be construed as preferred or advantageous over other embodiments or designs. Rather, the use of words such as "exemplary" or "such as" is intended to present related concepts in a concrete fashion.
In order to clearly describe the technical solutions of the embodiments of the present application, in the embodiments of the present application, the terms "first", "second", and the like are used to distinguish the same item or similar items having substantially the same function and effect, and those skilled in the art will understand that the terms "first", "second", and the like are not limited in number and execution order.
With the development of image processing technology, building and terrain vector data can be determined by processing shadows in high-resolution remote sensing images or satellite images, so that urban planning, land management, environmental monitoring, natural disaster analysis and the like can conveniently draw on the relevant information in the building and terrain vector data.
Currently, vector data of buildings and terrain may be acquired by laser radar scanning (light detection and ranging, LIDAR), stereo photogrammetry, remote sensing image analysis and the like.
LIDAR can create a three-dimensional map by calculating the return time of the laser pulses it transmits. Laser radar scanning has the advantage of high modeling precision and is suitable for high-precision terrain modeling, building contour extraction and urban planning. However, lidar equipment is expensive and requires specialized airborne or ground measurement equipment, so the cost of laser radar scanning is high; the density of the scanned data points on the ground may be uneven, leaving insufficient detail in some areas; and under severe weather conditions, the integrity of the acquired data is low. In addition, because the scanned data contains a large amount of point cloud data, processing it is complex.
Stereo photogrammetry can generate three-dimensional data through intersection measurements of images from two or more cameras, and is often applied to create digital terrain models (digital terrain model, DTM) and digital surface models (digital surface model, DSM). Because it often requires multiple cameras, stereo photogrammetry is not suitable for acquiring large-scale terrain and requires complex equipment and operation. Where objects such as buildings or trees occlude the ground surface, the stereo images may fail to capture complete surface information. Furthermore, after the stereo images are obtained, the data must be processed through camera calibration, image matching, stereo measurement and other algorithms, so the processing steps are complex.
The remote sensing image analysis method can extract building contours and terrain information from high-resolution remote sensing images or unmanned aerial vehicle images and convert them into three-dimensional vector data. Because it requires high-resolution imagery, the cost of acquiring the images is high, and sensor hardware may limit how far the resolution can be improved. After a high-resolution remote sensing image is obtained, the higher resolution may introduce noise or artifacts into the image, and the storage space occupied by the image also grows as the resolution improves.
To solve these problems in the general techniques, the application uses existing satellite image data, so no additional expensive acquisition equipment is needed; computing vector data from shadows can be applied to large-scale terrain or geographic areas; and because shadow analysis produces no three-dimensional point cloud data, its computational complexity is low. To effectively monitor surface change trends, the method can also use remote sensing image data from multiple points in time and analyze the time series, so that the obtained vector data can be better applied to ecology and resource management.
The embodiment of the application provides an image processing method, after feature information of each pixel point in an original image is obtained, a shadow angle of each pixel point can be determined according to the feature information of each pixel point, and a shadow area in the original image is determined according to the shadow angle of each pixel point, so that vectorization processing is further carried out on the shadow area, and vector data corresponding to the original image is obtained. The characteristic information comprises gradient information of each pixel point and light source position information, and the shadow angle is used for representing brightness information of the pixel point.
Based on the method, the image can be processed according to the characteristic information of each pixel point in the image, and the complete vector data corresponding to the image is obtained. Compared with a mode of acquiring vector data by means of laser radar scanning and stereo photogrammetry, the method and the device can support processing of the existing satellite images and the like to obtain the vector data, and the problem that complete vector data are difficult to acquire due to environmental factors is avoided. Therefore, the method and the device can be used for solving the problem that the integrity of the obtained vector data in the general technology is low, and improving the integrity of the obtained vector data.
The image processing method is suitable for an image processing system. Fig. 1 shows a schematic configuration of an image processing system. As shown in fig. 1, the image processing system 100 includes: an image processing device 101 and an information acquisition device 102.
The image processing device 101 and the information acquisition device 102 may be in communication connection.
Alternatively, the connection between the image processing apparatus 101 and the information acquisition apparatus 102 may be a wired connection or a wireless connection.
In some embodiments, the image processing apparatus 101 may be used to implement an image processing function, which may be a functional module on the information collecting apparatus 102, or may be an entity device that is disposed separately from the information collecting apparatus 102.
It is easy to understand that when the image processing apparatus 101 is a functional module on the information acquisition apparatus 102, the interaction between the image processing apparatus 101 and the information acquisition apparatus 102 is an interaction between internal modules of the information acquisition apparatus 102. In this case, the interaction flow between the two is the same as "in the case where the image processing apparatus 101 is a separately provided entity device".
For ease of understanding, fig. 1 illustrates an example in which the image processing apparatus 101 and the information acquisition apparatus 102 are provided independently of each other.
The image processing apparatus 101 in fig. 1 may receive the image from the information acquisition apparatus 102 and the feature information of each pixel in the image, and may process the feature information of each pixel in the image from the information acquisition apparatus 102 so as to determine vector data of the image.
The information acquisition device 102 in fig. 1 may be configured with a database, or may be connected to a device configured with the database, for acquiring information such as feature information of each pixel in an image, and may send the acquired information to the image processing device 101.
In some embodiments, when the image processing apparatus 101 and the information collecting apparatus 102 are entity devices that are disposed independently of each other, the image processing apparatus 101 and the information collecting apparatus 102 may be a separate server or other physical devices. The physical device may be one server in a server cluster (including a plurality of servers), may be a chip in the physical device, may also be a system on a chip in the physical device, or may be implemented by a virtual machine deployed on the physical machine, which is not limited in this embodiment of the present application.
In some embodiments, the image processing apparatus 101 and the information acquisition apparatus 102 may be terminals for implementing man-machine interaction through the presentation page, where the terminals may be handheld devices with wireless connection functions, or wireless terminals connected to other processing devices of a wireless modem, or wired terminals. For example, smart devices such as cell phones, personal computers (personal computer, PCs), desktop computers, tablet computers, notebook computers, netbooks, personal digital assistants (personal digital assistant, PDAs), and the like, which are not limited in this embodiment.
In connection with fig. 1, an image processing apparatus 101 and an information acquisition apparatus 102 in an image processing system may include elements included in the image processing apparatus shown in fig. 2. The hardware configuration of the image processing apparatus 101 and the information acquisition apparatus 102 will be described below taking the image processing apparatus shown in fig. 2 as an example.
Fig. 2 is a schematic hardware structure of an image processing apparatus according to an embodiment of the present application. The image processing device comprises a processor 21, a memory 22, a communication interface 23, a bus 24. The processor 21, the memory 22 and the communication interface 23 may be connected by a bus 24.
The processor 21 is a control center of the image processing apparatus, and may be one processor or a collective term of a plurality of processing elements. For example, the processor 21 may be a general-purpose central processing unit (central processing unit, CPU), or may be another general-purpose processor. Wherein the general purpose processor may be a microprocessor or any conventional processor or the like.
As one example, processor 21 may include one or more CPUs, such as CPU 0 and CPU 1 shown in fig. 2.
Memory 22 may be, but is not limited to, a read-only memory (ROM) or other type of static storage device that can store static information and instructions, a random access memory (random access memory, RAM) or other type of dynamic storage device that can store information and instructions, or an electrically erasable programmable read-only memory (EEPROM), magnetic disk storage or other magnetic storage device, or any other medium that can be used to carry or store desired program code in the form of instructions or data structures and that can be accessed by a computer.
In a possible implementation, the memory 22 may exist separately from the processor 21, and may be connected to the processor 21 by the bus 24 for storing instructions or program code. When the processor 21 calls and executes the instructions or program code stored in the memory 22, the image processing method provided in the following embodiments of the present application can be implemented.
In another possible implementation, the memory 22 may also be integrated with the processor 21.
The communication interface 23 is used for connecting the image processing apparatus with other devices through a communication network, which may be an ethernet, a radio access network, a wireless local area network (wireless local area networks, WLAN), or the like. The communication interface 23 may include a receiving unit for receiving data, and a transmitting unit for transmitting data.
Bus 24 may be an industry standard architecture (industry standard architecture, ISA) bus, a peripheral component interconnect (peripheral component interconnect, PCI) bus, or an extended industry standard architecture (extended industry standard architecture, EISA) bus, among others. The bus may be classified as an address bus, a data bus, a control bus, etc. For ease of illustration, only one thick line is shown in fig. 2, but this does not mean that there is only one bus or only one type of bus.
The image processing method provided in the embodiment of the present application is described in detail below with reference to the accompanying drawings. As shown in fig. 3, the image processing method includes: S301-S304.
S301, the image processing device acquires characteristic information of each pixel point in the image.
Wherein the feature information includes slope (slope) information and light source position information. The slope information is used to represent the slope of the ground surface at the position point corresponding to the pixel point. The light source position information is used to indicate the angle between the illumination light and the horizontal plane at the position point corresponding to the pixel point, and may also be called the light source elevation angle.
In one possible approach, the image may be a satellite image with corresponding digital elevation model (digital elevation model, DEM) data or terrain elevation data (terrain altitude data).
In one implementation, after the image processing apparatus acquires the image, the image processing apparatus may determine gradient information of the pixel point through digital elevation model data or terrain elevation data corresponding to the image.
In one implementation, the slope information of the pixel point may be determined by a second formula; the second formula is:
β=arctan(√((∂z/∂x)²+(∂z/∂y)²));
where β is the slope information, z is the surface elevation, ∂z/∂x represents the gradient in the x-direction, and ∂z/∂y represents the gradient in the y-direction.
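As a minimal sketch of the second formula, the x- and y-direction gradients of the elevation can be computed with NumPy's `np.gradient`. The DEM here is assumed to be a 2-D array of elevations with unit cell size, and `calculate_slope` (returning the gradient magnitude, i.e. tan β) is an illustrative stand-in for the patent's slope computation:

```python
import numpy as np

def calculate_slope(dem_data, cell_size=1.0):
    """Gradient magnitude of the surface elevation (the tangent of the slope angle)."""
    # gradients of elevation z in the y- and x-directions
    dz_dy, dz_dx = np.gradient(dem_data, cell_size)
    return np.sqrt(dz_dx ** 2 + dz_dy ** 2)

# an inclined plane rising one elevation unit per cell in the x-direction
dem = np.tile(np.arange(4.0), (4, 1))
# slope angle beta per the second formula: arctan of the gradient magnitude
beta = np.arctan(calculate_slope(dem))
```

For this plane the gradient magnitude is 1 everywhere, so β is 45 degrees at every pixel.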
In one implementation, the image processing apparatus may acquire the light source position information of each pixel point from the related information of the image after acquiring the image.
For example, the image processing apparatus may be configured with a first application program. The image processing device may input the image into the first application program for processing, so as to obtain feature information of each pixel point in the image. The pseudocode of the first application may be as follows:
import numpy as np
import cv2
# read the input image
input_image = cv2.imread('input_image.jpg', cv2.IMREAD_GRAYSCALE)
# obtain surface elevation data from the DEM data
dem_data = load_dem_data()
# calculate slope
slope = calculate_slope(dem_data)
# light source position information
light_source_elevation = 30  # light source elevation angle.
S302, the image processing device determines the shadow angle of each pixel point according to the characteristic information of each pixel point.
Wherein, shadow angle (shadow angle) is used to represent the brightness information of the pixel point; when the shadow angle is larger, the brightness of the pixel point is higher, and when the shadow angle is smaller, the brightness of the pixel point is lower.
In one implementation, the image processing apparatus may determine the shadow angle of each pixel point by the first formula described above, where the first formula is:
α=β-γ。
where α is the shadow angle, β is the slope information, and γ is the light source position information.
For example, the image processing apparatus may be configured with a second application program. The image processing device may input the feature information of each pixel point into the second application program to perform processing so as to obtain the shadow angle of each pixel point. The pseudo code of the second application may be as follows:
import numpy as np
# calculate shadow angle
shadow_angle=np.arctan(slope)-np.radians(light_source_elevation)。
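The shadow-angle step can be worked through on small, assumed inputs. The slope map below (gradient magnitude, i.e. tan β) and the 30-degree light source elevation are illustrative values, not data from the patent:

```python
import numpy as np

# assumed inputs: gradient magnitude (tan beta) per pixel, light source elevation in degrees
slope = np.full((2, 2), 1.0)      # tan(beta) = 1, i.e. beta = 45 degrees everywhere
light_source_elevation = 30.0

# first formula: alpha = beta - gamma, all in radians
shadow_angle = np.arctan(slope) - np.radians(light_source_elevation)
```

Here β is 45 degrees and γ is 30 degrees, so α works out to 15 degrees (in radians) at every pixel; a lower light source would enlarge the shadow angles.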
S303, the image processing device determines a shadow area in the image according to the shadow angle of each pixel point.
Alternatively, the shaded region may be used to represent a region of lower brightness in the image.
In one implementation, the image processing apparatus may determine pixel points in the image whose shadow angle is smaller than a first threshold as shadow pixel points, and extract the region corresponding to the shadow pixel points as the shadow region of the image. The first threshold may be a preset shadow angle threshold.
For example, the image processing apparatus may be configured with a third application program. The image processing device may extract pixels in the image having a shadow angle smaller than a preset threshold value to determine a shadow region in the image. The pseudo code of the third application may be as follows:
import numpy as np
# thresholding to extract the shadow region
threshold = 0.2  # preset shadow angle threshold
shadow_region=shadow_image<threshold。
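The thresholding step reduces to a boolean comparison over the shadow-angle image. The 2×2 `shadow_image` values below are hypothetical, chosen only to show which pixels fall below the preset threshold:

```python
import numpy as np

# hypothetical shadow-angle image (radians)
shadow_image = np.array([[0.10, 0.30],
                         [0.50, 0.05]])
threshold = 0.2  # preset shadow angle threshold

# pixels whose shadow angle is below the threshold are shadow pixels
shadow_region = shadow_image < threshold
```

The optional per-region variant described earlier would apply the same comparison within each window, with a threshold derived from that window's shadow angles instead of a single global value.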
S304, the image processing device carries out vectorization processing on the shadow area to obtain vector data corresponding to the image.
Alternatively, vector data corresponding to an image may be used to represent geospatial information in the image. The vector data corresponding to the image may be composed of points, lines, planes, geometric figures, and the like.
In one implementation, after the image processing apparatus acquires the shadow region, the image processing apparatus may convert the shadow region into a vector polygon or a boundary of the polygon, to obtain vector data of the shadow region.
By way of example, the image processing apparatus may call function libraries such as Numerical Python (NumPy) and the open source computer vision library (open source computer vision library, OpenCV) through a configured application program, and vectorize the shadow region to obtain vector data corresponding to the image.
NumPy is a data expansion function library in Python, and library functions in NumPy can be used to implement a large number of array computations. OpenCV is a cross-platform computer vision and machine learning function library, where library functions in OpenCV can be used to implement image processing.
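As an illustrative sketch of the vectorization step (the application does not specify the implementation; a production version would typically trace polygon contours with OpenCV functions such as findContours, while the simplified stand-in below uses NumPy alone), the boundary of a binary shadow mask can be extracted as vertex coordinates:

```python
import numpy as np

def vectorize_shadow_region(shadow_region):
    """Extract the boundary of a boolean shadow mask as vertex coordinates.

    Simplified stand-in for contour tracing: a shadow pixel belongs to the
    boundary if any of its 4-neighbours lies outside the shadow region.
    """
    padded = np.pad(shadow_region, 1, constant_values=False)
    # a pixel is interior only if all four neighbours are also shadow pixels
    interior = (padded[:-2, 1:-1] & padded[2:, 1:-1] &
                padded[1:-1, :-2] & padded[1:-1, 2:])
    boundary = shadow_region & ~interior
    rows, cols = np.nonzero(boundary)
    return [(int(r), int(c)) for r, c in zip(rows, cols)]

# usage: a 10x10 mask containing a 4x5 rectangular shadow
mask = np.zeros((10, 10), dtype=bool)
mask[2:6, 3:8] = True
vertices = vectorize_shadow_region(mask)
print(len(vertices))  # 14 boundary pixels of the 4x5 rectangle
```

The returned coordinate list corresponds to the polygon boundary in raster form; serializing it to a GeoJSON-style file, as in the pseudo code, is a separate step.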
In one implementation manner, after the image processing apparatus acquires the vector data of the image, it may store the vector data in a data storage module configured in the image processing apparatus, so that the image can be analyzed subsequently.
In one implementation manner, the image processing device may configure a corresponding tag for the image, and store vector data of the image to a corresponding file or database in the data storage module according to the tag of the image.
In one implementation manner, after the image processing apparatus acquires and analyzes images of the same geographic location at different moments, it can store the corresponding vector data in the data storage module, so that the image processing apparatus can subsequently analyze the surface change trend of the geographic location according to the vector data at the different moments.
For example, the image processing apparatus may be configured with a fourth application program. The image processing apparatus may vector the shadow region and store vector data of the shadow region in a file or database. The pseudo code of the fourth application may be as follows:
import numpy as np
# vectorize the shadow region
shadow_vector_data = vectorize_shadow_region(shadow_region)
# save the result to a file or database
save_vector_data(shadow_vector_data, "shadow_data.geojson")
The technical scheme provided by the method at least brings the following beneficial effects: as can be seen from S301 to S304, after the image processing apparatus obtains the feature information of each pixel in the image, the shadow angle of each pixel can be determined according to the feature information of each pixel, and the shadow area in the image can be determined according to the shadow angle of each pixel, so as to further vectorize the shadow area to obtain vector data corresponding to the image. The characteristic information comprises gradient information of each pixel point and light source position information, and the shadow angle is used for representing brightness information of the pixel point.
Because the image can be processed according to the feature information of each pixel point, complete vector data corresponding to the image can be obtained. Compared with acquiring vector data by means of lidar scanning or stereo photogrammetry, the present application can process existing satellite images and the like to obtain vector data, which avoids the problem that complete vector data are difficult to acquire due to environmental factors. Therefore, the present application can solve the problem of low integrity of the vector data obtained in the general technology and improve the integrity of the obtained vector data.
In one embodiment, in conjunction with fig. 3, in S303, that is, when determining a shadow area in an image according to a shadow angle of each pixel, as shown in fig. 4, an embodiment of the present application provides an alternative implementation manner, including: S401-S403.
S401, the image processing apparatus divides an image into a plurality of areas.
Optionally, the plurality of regions do not overlap each other.
In one implementation manner, the image processing apparatus may divide the image according to features such as grayscale, color, and texture to obtain a plurality of non-overlapping regions corresponding to the image. Grayscale can be used to represent the brightness information of pixel points in the image, color can be used to represent their color information, and texture can be used to represent their spatial distribution information.
S402, the image processing device determines a shadow angle threshold value corresponding to each region according to the shadow angles of a plurality of pixel points in each region.
Optionally, the image processing device determines the shadow angle threshold value corresponding to each region, including but not limited to the following two modes.
The first way is:
the image processing device may perform statistical analysis on the shadow angles of the plurality of pixel points in each region, and determine an average shadow angle corresponding to each region as a shadow angle threshold of the region.
The second way is:
the image processing device may perform statistical analysis on the shadow angles of the plurality of pixel points in each region, and determine the median of the shadow angles corresponding to each region as the shadow angle threshold of the region.
For example, suppose a region A has pixel points B, C, D and E, where the shadow angle of pixel point B is 20°, that of C is 10°, that of D is 25°, and that of E is 25°. The image processing apparatus may determine the average shadow angle, 20°, as the shadow angle threshold of the region, or determine the median shadow angle, 22.5°, as the shadow angle threshold of the region.
S403, the image processing device determines the area corresponding to the shadow pixel points in each area as a shadow area.
The shadow pixel points are used for representing pixel points with a shadow angle smaller than a shadow angle threshold value corresponding to each region.
In one implementation manner, the image processing apparatus may update the pixel value of each shadow pixel point in each region to a preset pixel value, and determine the region formed by the pixel points having the preset pixel value as the shadow region.
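The region-wise thresholding of S401-S403 can be sketched as follows (an illustrative sketch, not the application's exact implementation; the region size and the choice between mean and median are assumptions):

```python
import numpy as np

def region_shadow_mask(shadow_angles, region_size=5, use_median=False):
    """Mark shadow pixels region by region (sketch of S401-S403).

    The image of shadow angles is split into non-overlapping square regions;
    within each region, pixels whose shadow angle falls below the region's
    mean (or median) are marked as shadow pixels.
    """
    h, w = shadow_angles.shape
    mask = np.zeros((h, w), dtype=bool)
    for r in range(0, h, region_size):
        for c in range(0, w, region_size):
            region = shadow_angles[r:r + region_size, c:c + region_size]
            threshold = np.median(region) if use_median else region.mean()
            mask[r:r + region_size, c:c + region_size] = region < threshold
    return mask

# usage: one 5x5 region whose left two columns are darker (smaller angles)
angles = np.full((5, 5), 25.0)
angles[:, :2] = 10.0  # shadow angles below the region mean of 19.0
mask = region_shadow_mask(angles)
print(mask.sum())  # 10
```

Because each region gets its own threshold, the method adapts to brightness variation across the image, which is the point of the adaptive threshold approach described above.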
In one implementation manner, as shown in fig. 5, the image processing apparatus may determine the shadow area based on an adaptive thresholding method, specifically including: S1-S6.
S1, acquiring a shadow angle of each pixel point in an image.
S2, dividing the pixel points in the image into a plurality of areas.
S3, determining a shadow angle threshold value in each region.
Optionally, the image processing apparatus may determine the average or the median of the shadow angles within each region as the shadow angle threshold of that region.
S4, determining the pixel points with the shadow angles smaller than the shadow angle threshold value corresponding to each region as shadow pixel points.
S5, updating the pixel value of the shadow pixel point in each region to a preset pixel value.
S6, storing the result image.
In one implementation, the image processing device may process the image based on an adaptive thresholding method to convert the image into a binary image, so as to reduce the data throughput in the subsequent vector data extraction.
For example, the image processing apparatus may be configured with a fifth application program. The image processing apparatus may input the image into the fifth application program for processing to acquire a shadow image corresponding to the image. The pseudo code of the fifth application may be as follows:
import cv2
# read the input image in grayscale
input_image = cv2.imread('input_image.jpg', cv2.IMREAD_GRAYSCALE)
# define the analysis window size (blockSize must be odd)
window_size = 25
# apply the adaptive thresholding method
adaptive_threshold_image = cv2.adaptiveThreshold(input_image, 255, cv2.ADAPTIVE_THRESH_MEAN_C, cv2.THRESH_BINARY, window_size, 0)
# save the result image
cv2.imwrite('output_image.jpg', adaptive_threshold_image)
The technical scheme provided by the method at least brings the following beneficial effects: as can be seen from S401 to S403, the image processing apparatus divides the image into a plurality of areas, determines a shadow angle threshold value corresponding to each of the plurality of areas, and then determines an area corresponding to a shadow pixel point in each area as a shadow area. The shadow pixel points in each region are used for representing pixel points with a shadow angle smaller than a shadow angle threshold value corresponding to each region. Therefore, the shadow area can be determined by the adaptive threshold method, and the accuracy of acquiring the shadow area is improved.
In one embodiment, in conjunction with fig. 4, in S401, when dividing the image into a plurality of areas, as shown in fig. 6, an embodiment of the present application provides an alternative implementation manner, including: s601.
S601, the image processing device moves a preset window in the image for a plurality of times according to a preset step length to obtain a plurality of areas.
Optionally, the step length is a moving distance of a preset window.
Optionally, the image processing apparatus may divide the image into a plurality of regions through a preset window, where the size of each region is the same as the size of the preset window.
In one possible implementation, after dividing the current region, the image processing apparatus may move the preset window by the preset step length to divide the next region.
For example, suppose the image has a size of 25×25, i.e. 625 pixel points, the preset window has a size of 5×5, i.e. 25 pixel points, and the preset step length is 5. After the image processing apparatus divides a 5×5 region through the preset window, it can move the window horizontally or vertically by the preset step length of 5, thereby dividing the image into 25 regions, each with a size of 5×5.
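The window movement in the example above can be sketched as follows (an illustrative sketch; the function name and parameters are assumptions, not from the application):

```python
def divide_into_regions(height, width, window=5, step=5):
    """Enumerate region top-left corners by moving a preset window by a
    preset step length (sketch of S601). When step equals the window size,
    the regions tile the image without overlap; a smaller step yields
    overlapping regions."""
    return [(r, c)
            for r in range(0, height - window + 1, step)
            for c in range(0, width - window + 1, step)]

# usage: a 25x25 image with a 5x5 window and step length 5
regions = divide_into_regions(25, 25)
print(len(regions))  # 25
```

Adjusting the step length relative to the window size controls the trade-off between coverage granularity and the number of regions, matching the tunability described in S601.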
The technical scheme provided by the method at least brings the following beneficial effects: as can be seen from S601, the image processing apparatus may move the preset window multiple times according to the preset step length to obtain multiple regions, so that the present application may implement division of images under different conditions by adjusting the preset step length, thereby improving the accuracy of extracting vector data subsequently.
Illustratively, in combination with the descriptions in S301-S304, S401-S403, and S601, the embodiment of the present application provides a pseudo code of an image processing method, where after determining a shadow angle of each pixel point according to feature information of each pixel point in an image, a shadow area may be extracted by analyzing the shadow angle of each pixel point, so as to further obtain corresponding vector data of the image. The pseudocode is as follows:
import numpy as np
import cv2
# obtain surface elevation data from DEM data
dem_data = load_dem_data()
# calculate the slope
slope = calculate_slope(dem_data)
# light source position information
light_source_elevation = 30  # light source elevation angle
# calculate the shadow angle
shadow_angle = np.arctan(slope) - np.radians(light_source_elevation)
# generate the shadow image
shadow_image = create_shadow_image(shadow_angle)
# thresholding to extract shadow regions
threshold = 0.2  # preset shadow angle threshold
shadow_region = shadow_image < threshold
# vectorize the shadow region
shadow_vector_data = vectorize_shadow_region(shadow_region)
# save the results to files or a database
save_image(shadow_image, "shadow_image.png")
save_vector_data(shadow_vector_data, "shadow_data.geojson")
These pseudo codes constitute the flow of a method for calculating vector data from shadow angles: after the shadow angle of each pixel point in the image is calculated from the light source position information and the gradient information, the boundaries between the shadow regions and the illuminated regions in the image can be extracted by the adaptive threshold method.
In combination with the above pseudo codes, the image processing method provided by the application can calculate the shadow angle of each pixel point according to the feature information of each pixel point, and generate a shadow image and the related vector data. This facilitates subsequent visual display of the shadow conditions in the image and extraction of the shadow regions, which in turn supports geographic information analysis and map production.
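The overall flow can be made runnable by supplying stand-in implementations for the helpers left unspecified in the pseudo code; the synthetic DEM, the gradient-based slope, and the min-max normalization below are assumptions for illustration only, not the application's actual helpers:

```python
import numpy as np

def load_dem_data():
    # stand-in: a synthetic 32x32 elevation "hill" instead of real DEM data
    y, x = np.mgrid[0:32, 0:32]
    return 50.0 * np.exp(-((x - 16) ** 2 + (y - 16) ** 2) / 100.0)

def calculate_slope(dem, cell_size=1.0):
    # slope as the gradient magnitude of the elevation surface
    gy, gx = np.gradient(dem, cell_size)
    return np.hypot(gx, gy)

def create_shadow_image(shadow_angle):
    # stand-in: min-max normalize shadow angles to [0, 1] as brightness
    lo, hi = shadow_angle.min(), shadow_angle.max()
    return (shadow_angle - lo) / (hi - lo)

dem_data = load_dem_data()
slope = calculate_slope(dem_data)
light_source_elevation = 30  # light source elevation angle, degrees
shadow_angle = np.arctan(slope) - np.radians(light_source_elevation)
shadow_image = create_shadow_image(shadow_angle)
threshold = 0.2  # preset shadow angle threshold
shadow_region = shadow_image < threshold
print(shadow_image.shape)  # (32, 32)
```

Under these stand-ins, shadow_region is a boolean mask marking the darkest fraction of the normalized shadow image, ready for the vectorization step described above.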
The foregoing description of the solution provided in the embodiments of the present application has been presented mainly in terms of the method. To achieve the above functions, the image processing apparatus includes corresponding hardware structures and/or software modules for performing the respective functions. Those skilled in the art will readily appreciate that the units and algorithm steps of the examples described in connection with the embodiments disclosed herein may be implemented as hardware or as a combination of hardware and computer software. Whether a function is implemented as hardware or as computer-software-driven hardware depends on the particular application and the design constraints of the solution. Skilled artisans may implement the described functionality in varying ways for each particular application, but such implementation decisions should not be interpreted as a departure from the scope of the present application.
The embodiment of the present application may divide the functional modules of the image processing apparatus according to the above method example, for example, each functional module may be divided corresponding to each function, or two or more functions may be integrated in one processing module. The integrated modules may be implemented in hardware or in software functional modules. Optionally, the division of the modules in the embodiments of the present application is schematic, which is merely a logic function division, and other division manners may be actually implemented.
Fig. 7 is a schematic structural diagram of an image processing apparatus according to an embodiment of the present application. The image processing apparatus may be used to perform the image processing methods shown in fig. 3, 4, and 6. The image processing apparatus shown in fig. 7 includes: an acquisition unit 701, a determination unit 702, and a processing unit 703.
An acquiring unit 701, configured to acquire feature information of each pixel point in an image; the characteristic information includes: grade information and light source position information. For example, in connection with fig. 3, the acquisition unit 701 may be used to perform S301.
A determining unit 702, configured to determine a shadow angle of each pixel point according to the feature information of each pixel point; the shadow angle is used to represent the luminance information of the pixel point. For example, in connection with fig. 3, the determining unit 702 may be used to perform S302.
The determining unit 702 is further configured to determine a shadow area in the image according to the shadow angle of each pixel point. For example, in connection with fig. 3, the determining unit 702 may be used to perform S303.
The processing unit 703 is configured to perform vectorization processing on the shadow area, so as to obtain vector data corresponding to the image. For example, in connection with fig. 3, the processing unit 703 may be used to perform S304.
Optionally, the gradient information, the light source position information and the shadow angle of each pixel point meet a first formula; the first formula is:
α=β-γ。
where α is a shadow angle, β is gradient information, and γ is light source position information.
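As an illustrative numeric check of the first formula (the slope and elevation values below are assumptions, not from the application):

```python
import numpy as np

slopes = np.array([5.0, 20.0, 45.0])  # gradient information beta, in degrees
gamma = 30.0                           # light source elevation angle, degrees
alpha = slopes - gamma                 # first formula: alpha = beta - gamma
# alpha is [-25, -10, 15]: the smaller (more negative) the shadow angle,
# the darker the pixel, so the two gentler slopes would be marked as shadow
# by a threshold of 0 while the 45-degree slope would not
```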
Optionally, the determining unit 702 is specifically configured to: the image is divided into a plurality of regions. For example, in connection with fig. 4, the determining unit 702 may be used to perform S401.
And determining a shadow angle threshold value corresponding to each region according to the shadow angles of the plurality of pixel points in each region. For example, in connection with fig. 4, the determining unit 702 may be used to perform S402.
Determining a region corresponding to the shadow pixel points in each region as a shadow region; the shadow pixels in each region are used to represent pixels having a shadow angle less than a shadow angle threshold corresponding to each region. For example, in connection with fig. 4, the determining unit 702 may be used to perform S403.
Optionally, the determining unit 702 is specifically configured to:
and moving the preset window in the image for a plurality of times according to the preset step length to obtain a plurality of areas. For example, in connection with fig. 6, the determining unit 702 may be used to perform S601.
The present application also provides a computer-readable storage medium including computer-executable instructions that, when executed on a computer, cause the computer to perform the image processing method provided in the above embodiments.
The embodiment of the present application also provides a computer program, which can be directly loaded into a memory and contains software codes, and the computer program can implement the image processing method provided in the above embodiment after being loaded and executed by a computer.
Those skilled in the art will appreciate that in one or more of the examples described above, the functions described in the present invention may be implemented in hardware, software, firmware, or any combination thereof. When implemented in software, these functions may be stored on or transmitted over as one or more instructions or code on a computer-readable medium. Computer-readable media includes both computer-readable storage media and communication media including any medium that facilitates transfer of a computer program from one place to another. A storage media may be any available media that can be accessed by a general purpose or special purpose computer.
From the foregoing description of the embodiments, it will be apparent to those skilled in the art that, for convenience and brevity of description, only the above-described division of functional modules is illustrated, and in practical application, the above-described functional allocation may be implemented by different functional modules according to needs, i.e. the internal structure of the apparatus is divided into different functional modules to implement all or part of the functions described above.
In the several embodiments provided in this application, it should be understood that the disclosed apparatus and method may be implemented in other ways. For example, the above-described embodiments of the apparatus are merely illustrative, and the division of modules or units, for example, is merely a logical function division, and other manners of division are possible when actually implemented. For example, multiple units or components may be combined or may be integrated into another device, or some features may be omitted, or not performed. Alternatively, the coupling or direct coupling or communication connection shown or discussed with each other may be an indirect coupling or communication connection via some interfaces, devices or units, which may be in electrical, mechanical or other form. The units described as separate parts may or may not be physically separate, and the parts shown as units may be one physical unit or a plurality of physical units, may be located in one place, or may be distributed in a plurality of different places. Some or all of the units may be selected according to actual needs to achieve the purpose of the solution of this embodiment.
In addition, each functional unit in the embodiments of the present invention may be integrated in one processing unit, or each unit may exist alone physically, or two or more units may be integrated in one unit. The integrated units may be implemented in hardware or in software functional units. The integrated units, if implemented in the form of software functional units and sold or used as stand-alone products, may be stored in a readable storage medium. Based on such understanding, the technical solution of the embodiments of the present application may be essentially or a part contributing to the general technology or all or part of the technical solution may be embodied in the form of a software product stored in a storage medium, where the software product includes several instructions to cause a device (may be a single-chip microcomputer, a chip or the like) or a processor (processor) to execute all or part of the steps of the methods described in the embodiments of the present invention. And the aforementioned storage medium includes: a usb disk, a removable hard disk, a ROM, a RAM, a magnetic disk, or an optical disk, etc.
The foregoing is merely illustrative of the present invention, and the present invention is not limited thereto, and any changes or substitutions easily contemplated by those skilled in the art within the scope of the present invention should be included in the present invention. Therefore, the protection scope of the invention is subject to the protection scope of the claims.

Claims (10)

1. An image processing method, comprising:
acquiring characteristic information of each pixel point in an image; the characteristic information includes: grade information and light source position information;
determining the shadow angle of each pixel point according to the characteristic information of each pixel point; the shadow angle is used for representing brightness information of the pixel points;
determining a shadow area in the image according to the shadow angle of each pixel point;
and carrying out vectorization processing on the shadow area to obtain vector data corresponding to the image.
2. The image processing method according to claim 1, wherein the gradient information, the light source position information, and the shadow angle of each pixel point satisfy a first formula; the first formula is:
α=β-γ;
wherein alpha is the shadow angle, beta is the gradient information, and gamma is the light source position information.
3. The image processing method according to claim 1, wherein the determining a shadow area in the image according to the shadow angle of each pixel point includes:
dividing the image into a plurality of regions;
determining a shadow angle threshold corresponding to each region according to shadow angles of a plurality of pixel points in each region;
Determining a region corresponding to the shadow pixel point in each region as the shadow region; and the shadow pixel points in each region are used for representing the pixel points with the shadow angles smaller than the shadow angle threshold value corresponding to each region.
4. The image processing method according to claim 3, wherein the dividing the image into a plurality of areas includes:
and moving a preset window in the image for a plurality of times according to a preset step length to obtain the plurality of areas.
5. An image processing apparatus, comprising: the device comprises an acquisition unit, a determination unit and a processing unit;
the acquisition unit is used for acquiring the characteristic information of each pixel point in the image; the characteristic information includes: grade information and light source position information;
the determining unit is used for determining the shadow angle of each pixel point according to the characteristic information of each pixel point; the shadow angle is used for representing brightness information of the pixel points;
the determining unit is further configured to determine a shadow area in the image according to the shadow angle of each pixel point;
and the processing unit is used for carrying out vectorization processing on the shadow area to obtain vector data corresponding to the image.
6. The image processing apparatus according to claim 5, wherein the gradient information, the light source position information, and the shadow angle of each pixel point satisfy a first formula; the first formula is:
α=β-γ;
wherein alpha is the shadow angle, beta is the gradient information, and gamma is the light source position information.
7. The image processing apparatus according to claim 5, wherein the determining unit is specifically configured to:
dividing the image into a plurality of regions;
determining a shadow angle threshold corresponding to each region according to shadow angles of a plurality of pixel points in each region;
determining a region corresponding to the shadow pixel point in each region as the shadow region; and the shadow pixel points in each region are used for representing the pixel points with the shadow angles smaller than the shadow angle threshold value corresponding to each region.
8. The image processing device according to claim 7, wherein the determining unit is specifically configured to:
and moving a preset window in the image for a plurality of times according to a preset step length to obtain the plurality of areas.
9. An image processing apparatus comprising a memory and a processor; the memory is used for storing computer execution instructions, and the processor is connected with the memory through a bus; when the image processing apparatus is running, the processor executes the computer-executable instructions stored in the memory to cause the image processing apparatus to perform the image processing method according to any one of claims 1 to 4.
10. A computer-readable storage medium comprising computer-executable instructions that, when run on an image processing apparatus, cause the image processing apparatus to perform the image processing method of any of claims 1-4.
CN202311642560.2A 2023-12-01 2023-12-01 Image processing method, device and storage medium Pending CN117710388A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202311642560.2A CN117710388A (en) 2023-12-01 2023-12-01 Image processing method, device and storage medium


Publications (1)

Publication Number Publication Date
CN117710388A true CN117710388A (en) 2024-03-15


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination