CN111429386B - Image processing method and device and electronic equipment


Info

Publication number: CN111429386B (application CN202010526189.3A)
Authority: CN (China)
Prior art keywords: area, passing area, passing, obstacle, unknown
Legal status: Active (granted)
Application number: CN202010526189.3A
Other languages: Chinese (zh)
Other versions: CN111429386A
Inventors: 张�浩, 支涛, 应甫臣
Current assignee: Beijing Yunji Technology Co., Ltd.
Original assignee: Beijing Yunji Technology Co., Ltd.
Priority and filing date: 2020-06-11 (application CN202010526189.3A, filed by Beijing Yunji Technology Co., Ltd.)
Publication of CN111429386A: 2020-07-17
Publication of CN111429386B (grant): 2020-09-25


Classifications

    • G06T 5/70 - Image enhancement or restoration: denoising; smoothing
    • G01S 17/89 - Lidar systems specially adapted for mapping or imaging
    • G06F 18/241 - Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches
    • G06N 20/00 - Machine learning
    • G06T 5/90 - Dynamic range modification of images or parts thereof
    • G06T 7/13 - Edge detection
    • G06T 7/136 - Segmentation; edge detection involving thresholding
    • G06T 2207/10024 - Color image
    • G06T 2207/20081 - Training; learning
    • G06T 2207/20084 - Artificial neural networks [ANN]

Abstract

The application provides an image processing method, an image processing apparatus, and an electronic device for denoising and enhancing a map without degrading map accuracy. The method comprises the following steps: acquiring an original image; dividing a passing area and a non-passing area according to the pixel values of the original image; setting sampling points in the passing area; acquiring obstacle data of all sampling points in the passing area; and performing noise reduction processing on the original image according to the obstacle data and a preset screening condition to generate a clean image. This avoids the loss of map accuracy, and the resulting degradation of robot localization and navigation, that direct application of the prior art would cause.

Description

Image processing method and device and electronic equipment
Technical Field
The present application relates to the field of image processing, and in particular to an image processing method, an image processing apparatus, and an electronic device.
Background
Many image denoising and enhancement techniques already exist: algorithms such as histogram equalization filtering and linear smoothing filtering can be used for denoising, while the enhancement techniques mainly include gray-scale transformation, histogram enhancement, image smoothing, and image sharpening. These methods selectively highlight the features of interest in an image and attenuate unwanted ones, improving a machine's ability to recognize the image. A robot map, however, is used mainly for localization and navigation and demands high accuracy; applying these prior-art techniques to it directly reduces the map's accuracy and thereby degrades the robot's localization and navigation.
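For reference, the kind of direct application warned against here looks like the following minimal sketch (OpenCV is assumed for illustration; the file name is hypothetical). Applied to an occupancy-grid map, such filters blur obstacle edges and shift the discrete gray levels that encode region semantics, which is exactly the accuracy loss described above:

```python
import cv2

# Prior-art denoising/enhancement applied directly to a map image
# (illustrative only; "map.pgm" is a hypothetical file name).
gray = cv2.imread("map.pgm", cv2.IMREAD_GRAYSCALE)
smoothed = cv2.GaussianBlur(gray, (5, 5), 0)  # linear smoothing filter
equalized = cv2.equalizeHist(gray)            # histogram equalization
```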
Disclosure of Invention
An object of the embodiments of the present application is to provide an image processing method, an image processing apparatus, and an electronic device that realize denoising and enhancement without affecting map accuracy.
In a first aspect, an embodiment of the present invention provides an image processing method, comprising: acquiring an original image; dividing a passing area and a non-passing area according to the pixel values of the original image; setting sampling points in the passing area; acquiring obstacle data of all sampling points in the passing area; and performing noise reduction processing on the original image according to the obstacle data and a preset screening condition to generate a clean image.
In one embodiment, dividing the passing area and the non-passing area according to the pixel values of the original image comprises: generating a gray-scale map from the pixel value of each pixel; dividing the gray-scale map into a plurality of initial passing areas and a plurality of initial non-passing areas according to the gray values in the gray-scale map and a first threshold; and merging the initial passing areas and the initial non-passing areas according to their area parameters and preset conditions to generate the passing area and the non-passing area.
In one embodiment, the non-passing area includes an obstacle area and an unknown area, and merging the initial passing areas and the initial non-passing areas according to their area parameters and preset conditions to generate the passing area comprises: judging, from the edge contour of the obstacle area, whether its length is smaller than a second threshold; if so, adjusting the gray value of the obstacle area to the gray value of the unknown area, and otherwise retaining the obstacle area; judging, from the edge contour of the passing area, whether its area is smaller than a third threshold; if so, adjusting the gray value of the passing area to the gray value of the unknown area, and otherwise retaining the passing area; judging, from the edge contour of the unknown area, whether its area is smaller than a fourth threshold; if so, adjusting the gray value of the unknown area to the gray value of the passing area; and generating the passing area and the non-passing area from the merged result according to the preset condition.
In one embodiment, setting sampling points in the passing area comprises: setting a plurality of reference points in the original image at intervals of a preset step; sampling all reference points uniformly to obtain the pixel value of each reference point; judging, from the pixel value of each reference point, whether it lies in the passing area; and if so, retaining the reference point as a sampling point.
In one embodiment, acquiring the obstacle data of all sampling points in the passing area comprises: generating, at each sampling point, a scanning signal that simulates scanning for obstacles; and acquiring the obstacle data of the obstacle corresponding to each sampling point.
In one embodiment, performing noise reduction processing on the original image according to the obstacle data and a preset screening condition to generate a clean image comprises: judging, from the obstacle data of each sampling point, whether the obstacle data of any sampling point corresponds to the obstacle data of other sampling points; if so, retaining the obstacle data, and otherwise deleting the obstacle data from the passing area.
In a second aspect, an embodiment of the present invention provides an image processing apparatus, comprising: a first acquisition module for acquiring an original image; a first dividing module for dividing a passing area and a non-passing area according to the pixel values of the original image; a first setting module for setting sampling points in the passing area; a second acquisition module for acquiring the obstacle data of all sampling points in the passing area; and a first generation module for performing noise reduction processing on the original image according to the obstacle data and a preset screening condition to generate a clean image.
In an embodiment, the first dividing module is further configured to: generate a gray-scale map from the pixel value of each pixel; divide the gray-scale map into a plurality of initial passing areas and a plurality of initial non-passing areas according to the gray values in the gray-scale map and a first threshold; and merge the initial passing areas and the initial non-passing areas according to their area parameters and preset conditions to generate the passing area and the non-passing area.
In one embodiment, the non-passing area includes an obstacle area and an unknown area, and merging the initial passing areas and the initial non-passing areas according to their area parameters and preset conditions to generate the passing area comprises: judging, from the edge contour of the obstacle area, whether its length is smaller than a second threshold; if so, adjusting the gray value of the obstacle area to the gray value of the unknown area, and otherwise retaining the obstacle area; judging, from the edge contour of the passing area, whether its area is smaller than a third threshold; if so, adjusting the gray value of the passing area to the gray value of the unknown area, and otherwise retaining the passing area; judging, from the edge contour of the unknown area, whether its area is smaller than a fourth threshold; if so, adjusting the gray value of the unknown area to the gray value of the passing area; and generating the passing area and the non-passing area from the merged result according to the preset condition.
In an embodiment, the first setting module is further configured to: set a plurality of reference points in the original image at intervals of a preset step; sample all reference points uniformly to obtain the pixel value of each reference point; judge, from the pixel value of each reference point, whether it lies in the passing area; and if so, retain the reference point as a sampling point.
In an embodiment, the second acquisition module is further configured to: generate, at each sampling point, a scanning signal that simulates scanning for obstacles; and acquire the obstacle data of the obstacle corresponding to each sampling point.
In an embodiment, the first generation module is further configured to: judge, from the obstacle data of each sampling point, whether the obstacle data of any sampling point corresponds to the obstacle data of other sampling points; if so, retain the obstacle data, and otherwise delete the obstacle data from the passing area.
In a third aspect, an embodiment of the present invention provides an electronic device, including: a memory for storing a computer program; a processor for performing the method of any of the preceding embodiments.
Drawings
To illustrate the technical solutions of the embodiments of the present application more clearly, the drawings required by the embodiments are briefly described below. It should be understood that the following drawings illustrate only some embodiments of the present application and therefore should not be considered limiting of its scope; those skilled in the art can derive other related drawings from them without inventive effort.
Fig. 1 is a schematic structural diagram of an electronic device according to an embodiment of the present disclosure;
fig. 2 is a schematic flow chart of an image processing method according to an embodiment of the present application;
FIG. 3 is a schematic flowchart of another image processing method provided in the embodiments of the present application;
FIG. 4 is a schematic flowchart of another image processing method provided in the embodiments of the present application;
FIG. 5 is a schematic flowchart of another image processing method provided in the embodiments of the present application;
fig. 6 is a schematic structural diagram of an image processing apparatus according to an embodiment of the present application.
Detailed Description
The technical solutions in the embodiments of the present application are described below with reference to the drawings. In the description of the present application, the terms "first", "second", and the like are used solely to distinguish one element from another and are not to be construed as indicating or implying relative importance.
As shown in fig. 1, the present embodiment provides an electronic device 1 comprising at least one processor 11 and a memory 12 (one processor is shown in fig. 1 as an example). The processor 11 and the memory 12 are connected by a bus 10; the memory 12 stores instructions executable by the processor 11, and the instructions are executed by the processor 11.
In an embodiment, the electronic device 1 may be, but is not limited to, a data storage server or a data processing server. It is configured to identify the passing area and the non-passing area in the input map data and then, according to each region's contour and area, screen out passing and non-passing regions smaller than a threshold, thereby denoising the image and generating a clean image.
Please refer to fig. 2, which shows an image processing method provided in this embodiment. The method may be executed by the electronic device 1 shown in fig. 1 to identify the passing area and the non-passing area in the input map data and then screen out, according to each region's contour and area, passing and non-passing regions smaller than a threshold, thereby denoising the image and generating a clean image. The method comprises the following steps:
step 201: an original image is acquired.
In this step, the original image is a robot map picture file with a known resolution, where the robot map picture file may be a map created by the SLAM (simultaneous localization and mapping) technology when the robot starts to operate in a certain scene or during the operation. The robot completes the creation of the map based on a sensor carried by the robot, and the creation process is to track the position change of the robot while moving and simultaneously to incrementally superimpose the obstacles scanned by the distance sensor on the map according to the tracked position.
Step 202: and dividing the passing area and the non-passing area according to the pixel value of the original image.
In this step, a passing area and a non-passing area can be distinguished according to the difference of pixel values of each pixel point in the image, the non-passing area can include an obstacle area and an unknown area, when the robot detects a current scene, the sensor can determine the position of an obstacle and mark the obstacle when a map is generated, and the unknown area which cannot be scanned can be generated due to the fact that scanning rays of the sensor are blocked by the obstacle.
In one embodiment, a pixel may represent a distance of 0.02 meters or 0.05 meters. For example, in a SLAM map, if I (x, y) indicates a pixel value at length x and width y. In one embodiment, it may also be the gray value at point I, which represents the unknown region when I (x, y) is equal to 204; when I (x, y) is equal to 0, an obstacle edge is indicated; when I (x, y) is equal to 255, a free-passage area is indicated.
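Under this convention, the division of step 202 reduces to a per-pixel comparison. A minimal sketch, assuming the 0/204/255 gray levels of the embodiment above and NumPy/OpenCV (which the application does not prescribe):

```python
import cv2
import numpy as np

FREE, OBSTACLE = 255, 0  # gray levels from the embodiment above; 204 marks unknown

def split_regions(map_path):
    """Return boolean masks for the passing, obstacle, and unknown regions."""
    gray = cv2.imread(map_path, cv2.IMREAD_GRAYSCALE)
    free_mask = gray == FREE                     # freely passable area
    obstacle_mask = gray == OBSTACLE             # obstacle edges
    unknown_mask = ~(free_mask | obstacle_mask)  # everything else, e.g. 204
    return free_mask, obstacle_mask, unknown_mask
```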
Step 203: sampling points are set in the traffic region.
In this step, since it is necessary to uniformly generate sampling samples in the passage area, virtual sensor scan data is generated based on the sampling samples and the map data. Sensor data used when the map is created is recovered through the static map, and the map is corrected by processing the sensor data, so that the accuracy reduction caused by directly processing the map is avoided.
Step 204: obstacle data of all sampling points in the passing area is acquired.
In this step: the sensor can be a laser radar, the virtual sensor scans data, namely the laser radar is simulated to be placed at any position in a passing area on a map, the sensor data at the position is simulated and collected, and finally a combined data set of actual measurement and virtual simulation of the sensor corresponding to the position is formed. Assuming that the picture is wide k and long l, and is uniformly sampled by taking t as a step, the sample is abandoned when the located pixel value is an unknown area or an obstacle area, and the point is added into a queue of effective positions if the located pixel value is a pass area.
In one embodiment, for each active position, the simulation places the lidar there, simulates the angular resolution of the lidar, and calculates scan data based on a ray-tracing algorithm. For example, starting from the direction of the picture length of 0 degree, acquiring a length every 1 degree, 1/3 degrees or 1/4 degrees, simulating the irradiation of real light, stopping the simulation when an obstacle is encountered, calculating the distance of the laser radar in the direction of the obstacle, and finally forming laser scanning data of 360 degrees. One valid position generates one frame of lidar data, and n valid positions generate n frames of lidar data.
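A compact sketch of this virtual scan, using ray marching as a stand-in for the ray-tracing algorithm described above (the angular resolution and maximum range are illustrative assumptions):

```python
import numpy as np

def simulate_scan(obstacle_mask, x0, y0, angular_res_deg=1.0, max_range=2000):
    """March one ray per angle from the valid position (x0, y0) and record
    the distance (in pixels) at which each ray first hits an obstacle."""
    angles = np.deg2rad(np.arange(0.0, 360.0, angular_res_deg))
    h, w = obstacle_mask.shape
    ranges = np.full(len(angles), np.inf)  # inf = no obstacle within range
    for i, a in enumerate(angles):
        dx, dy = np.cos(a), np.sin(a)
        for r in range(1, max_range):
            x, y = int(round(x0 + r * dx)), int(round(y0 + r * dy))
            if not (0 <= x < w and 0 <= y < h):
                break                 # the beam left the map without a hit
            if obstacle_mask[y, x]:
                ranges[i] = r         # distance to the obstacle in this direction
                break
    return ranges                     # one 360-degree frame of lidar data
```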
Step 205: and performing noise reduction processing on the original image according to the obstacle data and a preset screening condition to generate a clean image.
In the step, judging whether the barrier data of any sampling point corresponds to the barrier data of other sampling points according to the barrier data of each sampling point; if yes, retaining the barrier data; otherwise, the obstacle data is deleted from the passing area.
In one embodiment, after the map is created, noise caused by dynamic obstacles in the current active scene, illumination, and other factors may cause dirty data to be generated on the map. Based on the generated obstacle data, whether the obstacle data are generated by pedestrians or other dynamic objects can be confirmed through a machine learning algorithm, and therefore the obstacle outline can be classified and learned more easily through sensor data. Taking a pedestrian as an example, training is carried out after slicing each frame of generated laser data, then human leg recognition is carried out, and human leg noise points recorded in the data dynamically can be removed.
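The slicing mentioned here can be pictured as splitting each frame at range discontinuities before the segments are classified; the following is a sketch under that assumption (the jump threshold is illustrative, and the leg classifier itself is outside the scope of this sketch):

```python
def slice_scan(ranges, jump=10.0):
    """Split one lidar frame into contiguous segments wherever consecutive
    ranges jump by more than `jump` pixels; each segment is a candidate
    object for the machine learning classifier (e.g. human-leg recognition)."""
    segments, current = [], [0]
    for i in range(1, len(ranges)):
        if abs(ranges[i] - ranges[i - 1]) > jump:
            segments.append(current)
            current = []
        current.append(i)
    segments.append(current)
    return segments  # lists of beam indices, one list per candidate object
```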
Please refer to fig. 3, which shows another image processing method provided in this embodiment. The method may be executed by the electronic device 1 shown in fig. 1 to identify the passing area and the non-passing area in the input map data and then screen out, according to each region's contour and area, passing and non-passing regions smaller than a threshold, thereby denoising the image. The method comprises the following steps:
step 301: an original image is acquired. Please refer to the description of step 201 in the above embodiments.
Step 302: and generating a gray-scale image according to the pixel value of each pixel point.
In this step, the difference of the values in the gray-scale map is used to distinguish different areas.
Step 303: and dividing the gray map into a plurality of initial passing areas and a plurality of initial non-passing areas according to the gray value in the gray map and a first threshold value.
In this step, the first threshold may be a gray level at the point I, and in one embodiment, represents an initial non-passing area when I (x, y) is less than 255, and represents an initial passing area when I (x, y) is equal to 255.
In one embodiment, when I (x, y) is equal to 204, it represents an unknown region. And when the I (x, y) is equal to 0, the edge profile of the obstacle is represented, and the area surrounded by the edge profile is an obstacle area. When I (x, y) is equal to 255, a pass zone is indicated.
Step 304: and combining the area parameters according to the initial passing area and the initial non-passing area and preset conditions to generate a passing area and a non-passing area.
In this step, the area parameter may be an area of the passing area and the non-passing area, or a length of an edge contour enclosing the passing area and the non-passing area. The robot completes the creation of a map based on a sensor carried by the robot, the process of the map creation is that the robot determines the position change of the robot through a positioning sensor when moving, simultaneously associates an obstacle scanned by a distance sensor with the current position according to the positioning position, and incrementally overlays the scanned obstacle data on the map. The passing area and the non-passing area in the map scanned each time can generate deviation due to different positions of the robot and different scanning angles of the sensor. Therefore, multiple acquisitions and overlapping combinations are required to generate the final pass and non-pass regions.
Step 305: sampling points are set in the traffic region. Please refer to the description of step 203 in the above embodiments.
Step 306: obstacle data of all sampling points in the passing area is acquired. Please refer to the description of step 204 in the above embodiments.
Step 307: and performing noise reduction processing on the original image according to the obstacle data and a preset screening condition to generate a clean image. Please refer to the description of step 205 in the above embodiments.
Please refer to fig. 4, which shows another image processing method provided in this embodiment. The method may be executed by the electronic device 1 shown in fig. 1 to identify the passing area and the non-passing area in the input map data and then screen out, according to each region's contour and area, passing and non-passing regions smaller than a threshold, thereby denoising the image and generating a clean image. The method comprises the following steps:
step 401: an original image is acquired. Please refer to the description of step 201 in the above embodiments.
Step 402: and generating a gray-scale image according to the pixel value of each pixel point. Please refer to the description of step 302 in the above embodiment.
Step 403: and dividing the gray map into a plurality of initial passing areas and a plurality of initial non-passing areas according to the gray value in the gray map and a first threshold value. Please refer to the description of step 303 in the above embodiments.
In one embodiment, the non-traffic area may include a barrier area and an unknown area. The first threshold may be a gray value of 0 or a set value, and the blocked area is an area surrounded by a contour line having a gray value of 0 or a set value in the map, where the contour line is an edge contour of the blocked area. The unknown region is a region having a gray value between the gray value of the barrier region and the gray value of the traffic region.
In one embodiment, when the sensor of the robot scans the passing area, once the sensor scans the obstacle, an outline representing the obstacle is generated, and due to the blocking of the obstacle, the sensor cannot scan the area behind the obstacle, namely, an unknown area is generated, so that the outline is an isolation line between the passing area and the unknown area.
Step 404: and judging whether the length of the edge contour of the barrier area is smaller than a second threshold value or not according to the edge contour of the barrier area. If yes, go to step 405. If not, go to step 406.
In this step, the second threshold is the length value of the edge profile. The perimeter of the edge profile of the obstacle represents the size of the obstacle, i.e. the obstructed area enclosed by the edge profile. The influence of the size of the obstacle on the robot passage is different, so a condition for judging the size of the obstacle is set, and the obstacle outline which does not influence the robot passage in the passage area is deleted.
Step 405: and adjusting the gray value of the barrier area to the gray value of the unknown area.
In this step, if the edge contour length of the obstacle area is smaller than the second threshold, it is indicated that the obstacle in the obstacle area does not affect the robot passage, so that the obstacle is temporarily classified as an unknown area, and the new classification is performed by waiting for the map scan data of the next valid position.
Step 406: the barrier area remains.
In this step, if the edge contour length of the obstacle area is not less than the second threshold, it indicates that the obstacle area has an influence on the robot passage, and the obstacle area is retained on the map.
Step 407: and judging whether the area of the passing area is smaller than a third threshold value or not according to the edge profile of the passing area. If yes, go to step 408. If not, go to step 409.
In this step, the third threshold is an area value, and the area of any traffic area can be determined according to the edge contour of the area, so that the screening can be performed according to the size of the area. The edge profile of the passing area may be a boundary line intersecting the blocked area or a boundary line intersecting the unknown area. Because the size of the passing area is also influenced by the size limit of the robot, a condition for judging the size of the passing area can be set, and the areas which cannot pass in the passing area are deleted.
Step 408: and adjusting the gray value of the passing area to the gray value of the unknown area.
In this step, if the area of the passing area is smaller than the third threshold, it is described that the passing area is not an obstacle but actually affects the passage of the robot, and therefore, the passing area is temporarily classified as an unknown area, and the new classification is performed while waiting for the map scan data of the next valid position.
Step 409: the traffic area is reserved.
In this step, if the area of the passing area is not less than the third threshold, it is indicated that the passing area has no influence on the robot passing and remains on the map.
Step 410: and judging whether the area of the unknown area is smaller than a fourth threshold value according to the edge profile of the unknown area. If yes, go to step 411. If not, go to step 412.
In this step, the fourth threshold is an area value, and the area of any unknown region can be determined according to the edge profile of the region, so that the region can be screened according to the size of the area. The edge profile of the unknown area may be a boundary line intersecting the obstructed area or a boundary line intersecting the traffic area.
In an embodiment, in the scanned map data, due to the influence of the scanning angle of the sensor, the obstacle may bring dead angles to the scanning, thereby generating some unknown areas with small areas, and such unknown areas may actually pass through, but the scanning cannot be embodied as a passing area on the map only because the scanning is influenced by the obstacle, so a condition for judging the size of the unknown area may be set, and the areas which do not actually affect the passing in the unknown area may be deleted.
Step 411: and adjusting the gray value of the unknown area to the gray value of the passing area.
In this step, if the area of the unknown area is smaller than the fourth threshold, it is assumed that the unknown area does not actually affect the robot passage, and therefore, the area is classified as a passage area.
Step 412: and generating a passing area and a non-passing area according to the combined result according to the preset condition.
In this step, the sensor scans the map data at a plurality of effective positions, and step 405 and 411 may be repeated a plurality of times to perfect the denoising of the map noise.
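The three reclassification rules of steps 404 to 412 can be applied in one pass over the gray-scale map. A sketch follows; the OpenCV-based contour and connected-component analysis and the threshold values t2, t3, t4 are illustrative assumptions, not values fixed by the application:

```python
import cv2
import numpy as np

FREE, UNKNOWN, OBSTACLE = 255, 204, 0

def reclassify(gray, t2=20, t3=400, t4=100):
    """One pass of steps 404-412 over a gray-scale map (modified in place)."""
    # Steps 404-406: obstacle areas whose edge contour is shorter than the
    # second threshold t2 are reclassified as unknown.
    contours, _ = cv2.findContours(np.uint8(gray == OBSTACLE) * 255,
                                   cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_NONE)
    for c in contours:
        if cv2.arcLength(c, True) < t2:
            cv2.drawContours(gray, [c], -1, UNKNOWN, cv2.FILLED)
    # Steps 407-409: passing regions with area below the third threshold t3
    # are reclassified as unknown.
    n, labels, stats, _ = cv2.connectedComponentsWithStats(np.uint8(gray == FREE))
    for i in range(1, n):
        if stats[i, cv2.CC_STAT_AREA] < t3:
            gray[labels == i] = UNKNOWN
    # Steps 410-412: unknown regions with area below the fourth threshold t4
    # are reclassified as passable.
    n, labels, stats, _ = cv2.connectedComponentsWithStats(np.uint8(gray == UNKNOWN))
    for i in range(1, n):
        if stats[i, cv2.CC_STAT_AREA] < t4:
            gray[labels == i] = FREE
    return gray
```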
Step 413: sampling points are set in the traffic region. Please refer to the description of step 203 in the above embodiments.
Step 414: obstacle data of all sampling points in the passing area is acquired. Please refer to the description of step 204 in the above embodiments.
Step 415: and performing noise reduction processing on the original image according to the obstacle data and a preset screening condition to generate a clean image. Please refer to the description of step 205 in the above embodiments.
Please refer to fig. 5, which shows another image processing method provided in this embodiment. The method may be executed by the electronic device 1 shown in fig. 1 to identify the passing area and the non-passing area in the input map data and then screen out, according to each region's contour and area, passing and non-passing regions smaller than a threshold, thereby denoising the image and generating a clean image. The method comprises the following steps:
step 501: an original image is acquired. Please refer to the description of step 201 in the above embodiments.
Step 502: and dividing the passing area and the non-passing area according to the pixel value of the original image. Please refer to the description of step 202 in the above embodiments.
Step 503: and setting a plurality of reference points in the original image at intervals of a preset step length.
In this step, since it is necessary to uniformly generate sampling samples in the passage area and generate virtual sensor scan data based on the sampling samples and the map data, reference points are uniformly set in the pixel map before sampling.
In one embodiment, assuming that the picture width k and length l can be uniformly sampled with t as a step, the sample is discarded when the pixel value is an unknown region or an obstacle region, and the point is added to the queue of valid positions if the pixel value is a pass region.
Step 504: and uniformly sampling all the reference points to obtain the pixel value of each reference point.
In this step, in order to determine whether the reference point of the sample is in the pass region, it is necessary to acquire the pixel value of each reference point to determine whether the point is in the pass region.
Step 505: and judging whether the reference point is in the passing area or not according to the pixel value of each reference point. If yes, go to step 506, otherwise return to step 505 to continue determining the next reference point.
In the step, whether the point is in the traffic area or not is determined according to the pixel value of each reference point, the sampling process of the non-traffic area is reduced, and the calculation amount in the analog scanning in the subsequent step is reduced.
Step 506: the reference points are retained as sampling points.
In this step, if the reference point is within the pass region, it is said that the scanning signal can be simulated at the reference point, so it is retained.
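Steps 503 to 506 amount to scanning a grid of reference points and keeping those that land in the passing area. A sketch, reusing the free_mask from the earlier region-splitting sketch (the step t is the preset step length):

```python
def sample_valid_positions(free_mask, t=10):
    """Place reference points every t pixels and keep those whose pixel
    value marks the passing area, building the queue of valid positions."""
    h, w = free_mask.shape  # picture length l and width k, in pixels
    valid_positions = []
    for y in range(0, h, t):
        for x in range(0, w, t):
            if free_mask[y, x]:  # reference point lies in the passing area
                valid_positions.append((x, y))
    return valid_positions
```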
Step 507: a scanning signal simulating the scanning of an obstacle is generated at each sampling point.
In this step, for each valid position, the lidar is placed at the sampling point in simulation, its angular resolution is simulated, and the scan data are computed with a ray-tracing algorithm. In one embodiment, starting from the 0-degree direction along the picture length, one range is acquired every 1 degree, 1/3 degree, or 1/4 degree; the propagation of real light is simulated, the simulation stops when an obstacle is hit, and a full 360-degree laser scan is assembled.
Step 508: obstacle data of the obstacle corresponding to each sampling point is acquired.
In this step, the detection distance of the simulated scanning signal in each scanning direction is calculated. In one embodiment, when the simulated lidar scans, the distance to the obstacle in the scanning direction is obtained; the sampling point of one valid position generates one frame of lidar data, and the sampling points of n valid positions generate n frames of lidar data.
Step 509: it is judged, from the obstacle data of each sampling point, whether the obstacle data of any sampling point corresponds to the obstacle data of other sampling points. If so, go to step 510; otherwise, go to step 511.
In this step, for the same objectively existing obstacle, different sampling points lie at different positions relative to it, so in the simulated scans the distance and angle of the obstacle change from frame to frame relative to each frame's sampling point. Moreover, when the robot scanned the scene, disturbances such as moving people caused the absolute position of an obstacle to differ between one frame and the next. The obstacle data of the frames must therefore be compared to determine which obstacles actually exist in the scene.
Step 510: the obstacle data is retained.
In this step, if the obstacle data of a sampling point corresponds to the obstacle data of other sampling points, the data corresponds to a real obstacle that affects the robot's movement, so it is retained.
In one embodiment, the retained obstacle data may additionally be sharpened.
Step 511: the obstacle data is deleted from the passing area.
In this step, if the obstacle data of a sampling point does not correspond to the obstacle data of any other sampling point, the data corresponds to a spurious obstacle that does not affect the robot's movement, so it is deleted from the passing area.
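Steps 509 to 511 can be sketched as a cross-frame consistency vote: an obstacle hit is kept only if other frames confirm a hit at approximately the same map position. The tolerance and the required number of confirming views are illustrative assumptions; the sketch reuses simulate_scan from the earlier sketch and assumes at least two frames:

```python
import numpy as np

def consistent_obstacle_points(frames, positions, angular_res_deg=1.0,
                               min_other_views=1, tol=2.0):
    """Keep obstacle hits confirmed by at least `min_other_views` other
    sampling points within `tol` pixels; unconfirmed hits are treated as
    dynamic-object noise and dropped (deleted from the passing area)."""
    angles = np.deg2rad(np.arange(0.0, 360.0, angular_res_deg))
    clouds = []  # per-frame obstacle hit points in map coordinates
    for (x0, y0), ranges in zip(positions, frames):
        hit = np.isfinite(ranges)
        clouds.append(np.c_[x0 + ranges[hit] * np.cos(angles[hit]),
                            y0 + ranges[hit] * np.sin(angles[hit])])
    kept = []
    for i, cloud in enumerate(clouds):
        others = np.vstack([c for j, c in enumerate(clouds) if j != i])
        for p in cloud:
            if np.sum(np.linalg.norm(others - p, axis=1) < tol) >= min_other_views:
                kept.append(p)
    return np.asarray(kept)
```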
Please refer to fig. 6, which shows an image processing apparatus provided in this embodiment. The apparatus 600 may be applied to the electronic device 1 shown in fig. 1 and is configured to identify the passing area and the non-passing area in the input map data and then screen out, according to each region's contour and area, passing and non-passing regions smaller than a threshold, thereby denoising the image and generating a clean image. The apparatus 600 comprises: a first obtaining module 601, a first dividing module 602, a first setting module 603, a second obtaining module 604, and a first generating module 605. The modules work together as follows:
the first obtaining module 601 is configured to obtain an original image. Please refer to the description of step 201 in the above embodiments.
A first dividing module 602, configured to divide the passing area and the non-passing area according to the pixel values of the original image. Please refer to the description of step 202 in the above embodiments.
In an embodiment, the first dividing module 602 is further configured to: generate a gray-scale map from the pixel value of each pixel; divide the gray-scale map into a plurality of initial passing areas and a plurality of initial non-passing areas according to the gray values in the gray-scale map and a first threshold; and merge the initial passing areas and the initial non-passing areas according to their area parameters and preset conditions to generate the passing area and the non-passing area. Please refer to the description of steps 302-304 in the above embodiments.
In one embodiment, the non-passing area includes an obstacle area and an unknown area, and merging the initial passing areas and the initial non-passing areas according to their area parameters and preset conditions to generate the passing area comprises: judging, from the edge contour of the obstacle area, whether its length is smaller than a second threshold; if so, adjusting the gray value of the obstacle area to the gray value of the unknown area, and otherwise retaining the obstacle area; judging, from the edge contour of the passing area, whether its area is smaller than a third threshold; if so, adjusting the gray value of the passing area to the gray value of the unknown area, and otherwise retaining the passing area; judging, from the edge contour of the unknown area, whether its area is smaller than a fourth threshold; if so, adjusting the gray value of the unknown area to the gray value of the passing area; and generating the passing area and the non-passing area from the merged result according to the preset condition. Please refer to the description of steps 404-412 in the above embodiments.
A first setting module 603, configured to set sampling points in the passing area. Please refer to the description of step 203 in the above embodiments.
In an embodiment, the first setting module 603 is further configured to: set a plurality of reference points in the original image at intervals of a preset step; sample all reference points uniformly to obtain the pixel value of each reference point; judge, from the pixel value of each reference point, whether it lies in the passing area; and if so, retain the reference point as a sampling point. Please refer to the description of steps 503-506 in the above embodiments.
A second obtaining module 604, configured to obtain obstacle data of all sampling points in the passing area. Please refer to the description of step 204 in the above embodiments.
In an embodiment, the second obtaining module 604 is further configured to: generate, at each sampling point, a scanning signal that simulates scanning for obstacles; and acquire the obstacle data of the obstacle corresponding to each sampling point. Please refer to the description of steps 507-508 in the above embodiments.
The first generating module 605 is configured to perform noise reduction processing on the original image according to the obstacle data and a preset screening condition to generate a clean image. Please refer to the description of step 205 in the above embodiments.
In an embodiment, the first generating module 605 is further configured to: judge, from the obstacle data of each sampling point, whether the obstacle data of any sampling point corresponds to the obstacle data of other sampling points; if so, retain the obstacle data, and otherwise delete the obstacle data from the passing area. Please refer to the description of steps 509-511 in the above embodiments.
In the embodiments provided in the present application, it should be understood that the disclosed apparatus and method may be implemented in other ways. The apparatus embodiments described above are merely illustrative; for example, the division into units is only a division by logical function, and other divisions are possible in actual implementation: a plurality of units or components may be combined or integrated into another system, or some features may be omitted or not executed. In addition, the mutual coupling, direct coupling, or communication connection shown or discussed may be an indirect coupling or communication connection of devices or units through communication interfaces, and may be electrical, mechanical, or of another form.
In addition, units described as separate parts may or may not be physically separate, and parts displayed as units may or may not be physical units, may be located in one place, or may be distributed on a plurality of network units. Some or all of the units can be selected according to actual needs to achieve the purpose of the solution of the embodiment.
Furthermore, the functional modules in the embodiments of the present application may be integrated together to form an independent part, or each module may exist separately, or two or more modules may be integrated to form an independent part.
It should be noted that the functions, if implemented in the form of software functional modules and sold or used as independent products, may be stored in a computer-readable storage medium. Based on this understanding, the part of the technical solution of the present application that in essence contributes beyond the prior art may be embodied in the form of a software product, stored in a storage medium and including instructions for causing a computer device (which may be a personal computer, a server, or a network device) to execute all or part of the steps of the methods of the embodiments of the present application. The aforementioned storage media include any medium capable of storing program code, such as a USB disk, a removable hard disk, a Read-Only Memory (ROM), a Random Access Memory (RAM), a magnetic disk, or an optical disk.
In this document, relational terms such as first and second, and the like may be used solely to distinguish one entity or action from another entity or action without necessarily requiring or implying any actual such relationship or order between such entities or actions.
The above embodiments are merely examples of the present application and do not limit its scope; those skilled in the art may make various modifications and variations. Any modification, equivalent replacement, or improvement made within the spirit and principles of the present application shall fall within its protection scope.

Claims (7)

1. An image processing method, comprising:
acquiring an original image;
dividing a passing area and a non-passing area according to the pixel values of the original image;
setting sampling points in the passing area;
acquiring obstacle data of all the sampling points in the passing area;
performing noise reduction processing on the original image according to the obstacle data and a preset screening condition to generate a clean image; wherein
dividing the passing area and the non-passing area according to the pixel values of the original image comprises:
generating a gray scale map according to the pixel value of each pixel point;
dividing the gray map into a plurality of initial passing areas and a plurality of initial non-passing areas according to the gray value and a first threshold value in the gray map;
merging the initial passing areas and the initial non-passing areas according to their area parameters and preset conditions to generate the passing area and the non-passing area; wherein
the non-passing area comprises an obstacle area and an unknown area, and the merging of the initial passing areas and the initial non-passing areas according to their area parameters and preset conditions to generate the passing area comprises:
judging whether the length of the edge contour of the obstacle area is smaller than a second threshold according to the edge contour of the obstacle area;
if so, adjusting the gray value of the obstacle area to the gray value of the unknown area;
otherwise, retaining the obstacle area;
judging whether the area of the passing area is smaller than a third threshold according to the edge contour of the passing area;
if so, adjusting the gray value of the passing area to the gray value of the unknown area;
otherwise, retaining the passing area;
judging whether the area of the unknown region is smaller than a fourth threshold according to the edge contour of the unknown region;
if so, adjusting the gray value of the unknown area to the gray value of the passing area;
and generating the passing area and the non-passing area from the merged result according to the preset condition.
2. The method of claim 1, wherein setting sampling points in the passing area comprises:
setting a plurality of reference points in the original image at intervals of a preset step length;
uniformly sampling all the reference points to obtain the pixel value of each reference point;
judging whether the reference point is in the passing area according to the pixel value of each reference point;
if yes, retaining the reference point as a sampling point.
3. The method of claim 2, wherein acquiring the obstacle data of all the sampling points in the passing area comprises:
generating a scanning signal simulating the scanning of an obstacle at each of the sampling points;
and acquiring the obstacle data of each sampling point corresponding to the obstacle.
4. The method of claim 3, wherein performing the noise reduction processing on the original image according to the obstacle data and a preset screening condition to generate a clean image comprises:
judging whether the obstacle data of any sampling point corresponds to the obstacle data of other sampling points according to the obstacle data of each sampling point;
if yes, retaining the obstacle data;
otherwise, deleting the obstacle data from the passing area.
5. An image processing apparatus characterized by comprising:
the first acquisition module is used for acquiring an original image;
the first dividing module is used for dividing a passing area and a non-passing area according to the pixel values of the original image;
the first setting module is used for setting sampling points in the passing area;
the second acquisition module is used for acquiring the obstacle data of all the sampling points in the passing area;
the first generation module is used for performing noise reduction processing on the original image according to the obstacle data and a preset screening condition to generate a clean image; wherein
the first dividing module is further configured to:
generating a gray scale map according to the pixel value of each pixel point;
dividing the gray map into a plurality of initial passing areas and a plurality of initial non-passing areas according to the gray value and a first threshold value in the gray map;
merging the initial passing areas and the initial non-passing areas according to their area parameters and preset conditions to generate the passing area and the non-passing area; wherein
the non-passing area comprises an obstacle area and an unknown area, and the merging of the initial passing areas and the initial non-passing areas according to their area parameters and preset conditions to generate the passing area comprises:
judging whether the length of the edge contour of the obstacle area is smaller than a second threshold according to the edge contour of the obstacle area;
if so, adjusting the gray value of the obstacle area to the gray value of the unknown area;
otherwise, retaining the obstacle area;
judging whether the area of the passing area is smaller than a third threshold according to the edge contour of the passing area;
if so, adjusting the gray value of the passing area to the gray value of the unknown area;
otherwise, retaining the passing area;
judging whether the area of the unknown region is smaller than a fourth threshold according to the edge contour of the unknown region;
if so, adjusting the gray value of the unknown area to the gray value of the passing area;
and generating the passing area and the non-passing area from the merged result according to the preset condition.
6. The apparatus of claim 5, wherein the first setup module is further configured to:
setting a plurality of reference points in the original image at intervals of a preset step length;
uniformly sampling all the reference points to obtain the pixel value of each reference point;
judging whether the reference point is in the passing area according to the pixel value of each reference point;
if yes, the reference point is reserved as a sampling point.
7. An electronic device, comprising:
a memory for storing a computer program;
a processor for performing the method of any one of claims 1-4.
CN202010526189.3A (filed 2020-06-11) - Image processing method and device and electronic equipment - granted as CN111429386B (Active)

Priority Applications (1)

Application Number | Priority Date | Filing Date | Title
CN202010526189.3A | 2020-06-11 | 2020-06-11 | Image processing method and device and electronic equipment

Publications (2)

Publication Number | Publication Date
CN111429386A | 2020-07-17
CN111429386B | 2020-09-25

Family ID: 71551447

Family Applications (1)

Application Number | Title | Priority Date | Filing Date
CN202010526189.3A | Image processing method and device and electronic equipment | 2020-06-11 | 2020-06-11

Country Status (1)

Country | Link
CN | CN111429386B

Citations (4)

* Cited by examiner, † Cited by third party

Publication number | Priority date | Publication date | Assignee | Title
CN107729878A * | 2017-11-14 | 2018-02-23 | 智车优行科技(北京)有限公司 | Obstacle detection method and device, equipment, vehicle, program and storage medium
CN108256413A * | 2017-11-27 | 2018-07-06 | 科大讯飞股份有限公司 | Passable area detection method and device, storage medium, electronic device
CN110307848A * | 2019-07-04 | 2019-10-08 | 南京大学 | Mobile robot navigation method
CN111026136A * | 2020-03-11 | 2020-04-17 | 广州赛特智能科技有限公司 | Intelligent scheduling method and device for an unmanned port sweeper based on monitoring equipment

Family Cites Families (1)

US10282591B2 * | 2015-08-24 | 2019-05-07 | Qualcomm Incorporated | Systems and methods for depth map sampling

Also Published As

Publication number | Publication date
CN111429386A | 2020-07-17


Legal Events

Code | Title
PB01 | Publication
SE01 | Entry into force of request for substantive examination
GR01 | Patent grant
CP01 | Change in the name or title of a patent holder

Address after: Room 201, building 4, courtyard 8, Dongbeiwang West Road, Haidian District, Beijing
Patentee after: Beijing Yunji Technology Co.,Ltd.
Address before: Room 201, building 4, courtyard 8, Dongbeiwang West Road, Haidian District, Beijing
Patentee before: BEIJING YUNJI TECHNOLOGY Co.,Ltd.