WO2022052032A1 - Image segmentation method and apparatus, and image three-dimensional reconstruction method and apparatus - Google Patents

Image segmentation method and apparatus, and image three-dimensional reconstruction method and apparatus

Info

Publication number
WO2022052032A1
WO2022052032A1 · PCT/CN2020/114752 · CN2020114752W
Authority
WO
WIPO (PCT)
Prior art keywords
pixel
image
sub
color gamut
pixels
Prior art date
Application number
PCT/CN2020/114752
Other languages
English (en)
French (fr)
Inventor
何惠东
张浩
陈丽莉
韩鹏
姜倩文
石娟娟
Original Assignee
京东方科技集团股份有限公司
北京京东方光电科技有限公司
Priority date
Filing date
Publication date
Application filed by 京东方科技集团股份有限公司, 北京京东方光电科技有限公司
Priority to PCT/CN2020/114752 (published as WO2022052032A1)
Priority to CN202080001926.6A (published as CN115053257A)
Priority to US17/312,156 (published as US20220327710A1)
Publication of WO2022052032A1

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/10Segmentation; Edge detection
    • G06T7/11Region-based segmentation
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/20Image preprocessing
    • G06V10/26Segmentation of patterns in the image field; Cutting or merging of image elements to establish the pattern region, e.g. clustering-based techniques; Detection of occlusion
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/20Image preprocessing
    • G06V10/28Quantising the image, e.g. histogram thresholding for discrimination between background and foreground patterns
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/40Extraction of image or video features
    • G06V10/56Extraction of image or video features relating to colour
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/70Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/74Image or video pattern matching; Proximity measures in feature spaces
    • G06V10/75Organisation of the matching processes, e.g. simultaneous or sequential comparisons of image or video features; Coarse-fine approaches, e.g. multi-scale approaches; using context analysis; Selection of dictionaries
    • G06V10/751Comparing pixel values or logical combinations thereof, or feature values having positional relevance, e.g. template matching
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/10Image acquisition modality
    • G06T2207/10024Color image

Definitions

  • the present disclosure relates to the field of computer technology, and in particular, to an image segmentation method, an image segmentation device, an image three-dimensional reconstruction method, an image three-dimensional reconstruction device, an electronic device, a wearable device, and a non-volatile computer-readable storage medium.
  • Image segmentation is a fundamental problem in image processing and computer vision, because it is a key step in the many applications that extract specific regions from an image.
  • an image segmentation method, including: dividing each pixel in an image to be segmented into different pixel sets according to the color gamut range to which its pixel value belongs; determining, according to the pixel values, the matching situation between the pixels in each pixel set; and performing image segmentation on the image to be segmented according to the matching situation.
  • the method further includes: in a coordinate system with the red component, green component and blue component of the pixel value as variables, dividing the color gamut cube formed by the red component, the green component and the blue component into multiple color gamut sub-cubes, each serving as a color gamut range.
  • the method further includes: determining one of the vertex of the color gamut cube contained in each color gamut sub-cube, the center point of each color gamut sub-cube, and the mean point of each color gamut sub-cube as the characteristic pixel value of the corresponding color gamut range; and determining, according to the characteristic pixel values, the color gamut range to which the pixel value of each pixel in the image to be segmented belongs.
  • determining the matching situation between the pixels in each pixel set includes: selecting a pixel in any pixel set as a seed pixel; calculating the difference between the pixel values of the other pixels in the pixel set and the pixel value of the seed pixel; and determining, according to the difference, whether the other pixels match the seed pixel.
  • determining, according to the difference, whether the other pixels match the seed pixel includes: using a membership function to determine, according to the difference, the fuzzy set to which each of the other pixels belongs; and determining, according to the determined fuzzy set, whether the other pixels match the seed pixel.
  • the pixel value includes a red component, a green component and a blue component, and using a membership function to determine the fuzzy set to which each of the other pixels belongs includes: determining, according to the difference of the red component, the difference of the green component and the difference of the blue component, the fuzzy sets to which the red component, the green component and the blue component of the other pixels belong.
  • selecting a pixel in any pixel set as a seed pixel includes: sorting the pixels in the pixel set according to the difference between the pixel value of each pixel and the characteristic pixel value of the color gamut range to which the pixel set belongs, the characteristic pixel value being one of the vertex of the color gamut cube contained in the corresponding color gamut sub-cube, the center point of the corresponding color gamut sub-cube, and the mean point of the corresponding color gamut sub-cube; and selecting each pixel in the pixel set in turn as the seed pixel according to the sorting result.
  • performing image segmentation on the image to be segmented according to the matching situation includes: generating a plurality of sub-images according to the pixels and their matching pixels; merging the plurality of sub-images according to the overlap between the sub-images; and determining the image segmentation result according to the merging result.
  • merging the multiple sub-images according to the overlap between the sub-images includes: calculating the number of pixels in the intersection of the first sub-image and the second sub-image; determining an overlap parameter for judging the overlap situation according to the ratio of the number of pixels in the intersection to the number of pixels in the first sub-image; and merging the first sub-image with the second sub-image when the overlap parameter is greater than a threshold.
  • the method further includes: determining interference pixels according to the pixel value distribution of the pixels in the original image; determining the matching pixels of the interference pixels according to the pixel value of each pixel in the original image; and removing the interference pixels and their matching pixels from the original image to obtain the image to be segmented.
  • the to-be-segmented image is a two-dimensional image generated according to acquired underwater sonar data.
  • a three-dimensional reconstruction method of an image, including: performing segmentation processing on an image to be segmented according to the segmentation method of any one of the above embodiments; and performing three-dimensional reconstruction according to the segmentation processing result to obtain a three-dimensional image.
  • an image segmentation apparatus including at least one processor, the processor configured to perform the following steps: dividing each pixel in the image to be segmented into different pixel sets according to the color gamut range to which its pixel value belongs; determining, according to the pixel values, the matching situation between the pixels in each pixel set; and performing image segmentation on the image to be segmented according to the matching situation.
  • the processor is further configured to perform the step of: in a coordinate system with the red component, green component and blue component of the pixel value as variables, dividing the color gamut cube formed by the red component, the green component and the blue component into multiple color gamut sub-cubes, each serving as a color gamut range.
  • the processor is further configured to perform the steps of: determining one of the vertex of the color gamut cube contained in each color gamut sub-cube, the center point of each color gamut sub-cube, and the mean point of each color gamut sub-cube as the characteristic pixel value of the corresponding color gamut range; and determining, according to the characteristic pixel values, the color gamut range to which the pixel value of each pixel in the image to be segmented belongs.
  • determining the matching situation between the pixels in each pixel set includes: selecting a pixel in any pixel set as a seed pixel; calculating the difference between the pixel values of the other pixels in the pixel set and the pixel value of the seed pixel; and determining, according to the difference, whether the other pixels match the seed pixel.
  • determining, according to the difference, whether the other pixels match the seed pixel includes: using a membership function to determine, according to the difference, the fuzzy set to which each of the other pixels belongs; and determining, according to the determined fuzzy set, whether the other pixels match the seed pixel.
  • the pixel value includes a red component, a green component and a blue component, and using a membership function to determine the fuzzy set to which each of the other pixels belongs includes: determining, according to the difference of the red component, the difference of the green component and the difference of the blue component, the fuzzy sets to which the red component, the green component and the blue component of the other pixels belong.
  • selecting a pixel in any pixel set as a seed pixel includes: sorting the pixels in the pixel set according to the difference between the pixel value of each pixel and the characteristic pixel value of the color gamut range to which the pixel set belongs, the characteristic pixel value being one of the vertex of the color gamut cube contained in the corresponding color gamut sub-cube, the center point of the corresponding color gamut sub-cube, and the mean point of the corresponding color gamut sub-cube; and selecting each pixel in the pixel set in turn as the seed pixel according to the sorting result.
  • performing image segmentation on the image to be segmented according to the matching situation includes: generating a plurality of sub-images according to the pixels and their matching pixels; merging the plurality of sub-images according to the overlap between the sub-images; and determining the image segmentation result according to the merging result.
  • merging the multiple sub-images according to the overlap between the sub-images includes: calculating the number of pixels in the intersection of the first sub-image and the second sub-image; determining an overlap parameter for judging the overlap situation according to the ratio of the number of pixels in the intersection to the number of pixels in the first sub-image; and merging the first sub-image with the second sub-image when the overlap parameter is greater than a threshold.
  • the processor is further configured to perform the steps of: determining interference pixels according to the pixel value distribution of the pixels in the original image; determining the matching pixels of the interference pixels according to the pixel value of each pixel in the original image; and removing the interference pixels and their matching pixels from the original image to obtain the image to be segmented.
  • the to-be-segmented image is a two-dimensional image generated according to acquired underwater sonar data.
  • an apparatus for three-dimensional reconstruction of an image, comprising at least one processor configured to perform the steps of: performing segmentation processing on an image to be segmented according to the segmentation method of any one of the foregoing embodiments; and performing three-dimensional reconstruction according to the segmentation processing result to obtain a three-dimensional image.
  • a wearable device comprising: the three-dimensional reconstruction apparatus for an image of any one of the above embodiments; and a display screen for displaying the three-dimensional image acquired by the three-dimensional reconstruction apparatus.
  • the three-dimensional reconstruction device generates the image to be segmented according to the acquired underwater sonar data, and reconstructs the three-dimensional image according to the segmentation result of the image to be segmented.
  • an electronic device comprising: a memory; and a processor coupled to the memory, the processor configured to execute, based on instructions stored in the memory, the image segmentation method or the three-dimensional reconstruction method of any of the above embodiments.
  • a non-volatile computer-readable storage medium on which a computer program is stored, the program, when executed by a processor, implementing the image segmentation method or the image three-dimensional reconstruction method according to any of the foregoing embodiments.
  • FIG. 1 shows a flowchart of some embodiments of an image segmentation method according to the present disclosure;
  • FIG. 2 shows a flowchart of other embodiments of the image segmentation method according to the present disclosure;
  • FIG. 3 shows a schematic diagram of some embodiments of the image segmentation method according to the present disclosure;
  • FIG. 4 shows a flowchart of some embodiments of step 120 in FIG. 1;
  • FIG. 5 shows a schematic diagram of other embodiments of the image segmentation method according to the present disclosure;
  • FIG. 6 shows a flowchart of some embodiments of step 130 in FIG. 1;
  • FIG. 7 shows a schematic diagram of further embodiments of the image segmentation method according to the present disclosure;
  • FIG. 8 shows a schematic diagram of some embodiments of a wearable device according to the present disclosure;
  • FIG. 9 shows a block diagram of some embodiments of the wearable device according to the present disclosure;
  • FIG. 10 shows a block diagram of some embodiments of the electronic device of the present disclosure;
  • FIG. 11 shows a block diagram of further embodiments of the electronic device of the present disclosure.
  • Image segmentation methods using binarization or based on the RGB (Red, Green, Blue) color space follow the same segmentation principle: each pixel is classified into a unique subset.
  • as a result, the accuracy of image segmentation is not high, which degrades the effect of subsequent processing.
  • the seawater, the seafloor and the target have completely different characteristics.
  • the three also have different opacities and colors in the three-dimensional visualization processing after image segmentation. As a result, the above segmentation methods often cause targets located at the boundaries between regions to be covered, resulting in a poor three-dimensional imaging effect.
  • the present disclosure proposes a technical solution for image segmentation.
  • the technical solution divides pixels whose pixel values belong to the same color gamut range into the same pixel set, and performs pixel value matching within each pixel set to determine the image segmentation result.
  • the division of the color gamut space can be refined, the recognition rate of different image areas can be improved, and the accuracy of image segmentation can be improved.
  • the technical solutions of the present disclosure can be implemented through the following embodiments.
  • FIG. 1 shows a flowchart of some embodiments of a method of segmentation of an image according to the present disclosure.
  • the method includes: step 110, dividing pixels into different sets; step 120, determining the matching situation within each set; and step 130, segmenting the image according to the matching situation.
  • each pixel in the image to be segmented is divided into different pixel sets according to the color gamut range to which the pixel value belongs.
  • the image to be segmented is a two-dimensional image generated according to the acquired underwater sonar data.
  • the pixel set to which each pixel belongs can be determined according to the difference between the pixel value of the pixel and the characteristic pixel value of each color gamut range, so as to classify the pixels.
  • similar pixels can be classified into one category to achieve preliminary image segmentation; further matching of pixel values within each category can improve the accuracy of image segmentation.
  • the entire color gamut may be modeled, and the color gamut may be divided into multiple color gamut ranges based on the modeling. On this basis, the color gamut range to which the pixel value of each pixel in the to-be-segmented image belongs can be determined. For example, color gamut modeling and division can be achieved by the embodiment in FIG. 2 .
  • FIG. 2 shows a flowchart of further embodiments of the image segmentation method according to the present disclosure.
  • the method further includes: step 210 , modeling a color gamut cube; step 220 , dividing the color gamut range; step 230 , determining characteristic pixel values; and step 240 , determining the color gamut range to which the pixel values belong.
  • in step 210, according to the value ranges of the red component, the green component and the blue component of the pixel value, in the coordinate system with the red component, the green component and the blue component as variables, the entire color gamut is modeled as a color gamut cube.
  • the color gamut cube is divided into a plurality of color gamut sub-cubes as each color gamut range.
  • a gamut cube can be modeled by the embodiment in FIG. 3 and divided into gamut sub-cubes.
  • FIG. 3 shows a schematic diagram of some embodiments of a segmentation method of an image according to the present disclosure.
  • the three coordinate values of the coordinate system respectively represent the values of the three components R (red component), G (green component) and B (blue component) of the pixel value.
  • the value range of each component is [0, 255], and the pixel value (0, 0, 0) at the origin P5 represents black.
  • the cube with P1 to P8 as vertices is the color gamut cube corresponding to the entire color gamut, that is, the color gamut space.
  • the color gamut cube may be divided at pixel value 127 along each of the three component directions into 8 color gamut sub-cubes (that is, color gamut subspaces) containing the vertices P1 to P8 respectively.
  • the vertices P1 to P8 represent blue, pink, white, cyan, black, red, yellow, and green, respectively.
  • each color gamut sub-cube represents a different color gamut range, that is, pixel values within the same color gamut range have similar color information.
  • the color gamut range to which the pixel value of each pixel in the to-be-segmented image belongs may be determined by using the embodiment in FIG. 2 .
  • the vertices of the color gamut cubes included in each color gamut sub-cube are determined as characteristic pixel values of the corresponding color gamut range.
  • any pixel value in each color gamut sub-cube that can represent the corresponding color gamut range can be determined as a characteristic pixel value, such as a vertex, a center point, a mean point, and the like.
  • the color gamut range to which the pixel value of each pixel in the image to be segmented belongs is determined according to the characteristic pixel values. For example, the color gamut range to which a pixel belongs can be determined according to the differences between the pixel and the characteristic pixel values.
  • the pixel value of a pixel can be used as a multi-dimensional feature vector, and the difference between pixels can be determined by calculating the similarity between the feature vectors (such as the Euclidean distance, Minkowski distance, Mahalanobis distance, etc.).
  • the distances between the point corresponding to each pixel in the image to be segmented and the 8 vertices of the color gamut cube are calculated respectively.
  • the distance is calculated by the following formula:

    d_i(n, m) = √[(p(n, m)_R − v_i,R)² + (p(n, m)_G − v_i,G)² + (p(n, m)_B − v_i,B)²]

  • where p(n, m)_RGB is the point defined in the RGB coordinate system by pixel (n, m) of the color image to be segmented, v_i^RGB is vertex i of the color gamut cube (i = 1, …, 8), and d_i(n, m) is the distance between the point and the vertex.
  • the pixels in the image to be segmented are spatially classified one by one.
  • the number of pixels contained in each subspace is then known. For example, no post-processing is performed for gamut sub-cubes that contain no pixels. Moreover, after the spatial classification, repeated calculations on the same pixels can be avoided in subsequent processing (such as fuzzy color extraction).
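The spatial classification described above can be sketched as follows. This is an illustrative example rather than the patent's implementation; the vertex list follows the P1–P8 color assignment given earlier, and all function names are hypothetical.

```python
import math

# Hypothetical vertex table; names follow the P1..P8 assignment in the text.
VERTICES = {
    "blue":   (0, 0, 255),
    "pink":   (255, 0, 255),
    "white":  (255, 255, 255),
    "cyan":   (0, 255, 255),
    "black":  (0, 0, 0),
    "red":    (255, 0, 0),
    "yellow": (255, 255, 0),
    "green":  (0, 255, 0),
}

def classify_pixel(rgb):
    """Return the gamut-cube vertex (color gamut range) nearest to `rgb`."""
    r, g, b = rgb
    def dist(v):
        # Euclidean distance d_i(n, m) between the pixel and vertex i
        return math.sqrt((r - v[0])**2 + (g - v[1])**2 + (b - v[2])**2)
    return min(VERTICES, key=lambda name: dist(VERTICES[name]))

def classify_image(image):
    """Group pixel coordinates into pixel sets keyed by gamut range."""
    sets = {}
    for n, row in enumerate(image):
        for m, rgb in enumerate(row):
            sets.setdefault(classify_pixel(rgb), []).append((n, m))
    return sets
```

After classification, empty entries simply never appear in the returned dictionary, which mirrors the remark that sub-cubes containing no pixels need no post-processing.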
  • image segmentation can be continued through the embodiment in FIG. 1 .
  • in step 120, the matching situation between the pixels in each pixel set is determined according to the pixel values.
  • matching only pixels that belong to the same color gamut range (that is, roughly one color type) improves the efficiency and accuracy of the matching, and thereby the efficiency and accuracy of the image segmentation based on the matching results.
  • step 120 may be implemented by the embodiment in FIG. 4 .
  • FIG. 4 shows a flowchart of some embodiments of step 120 in FIG. 1 .
  • step 120 includes: step 1210, selecting seed pixels; step 1220, calculating pixel value differences; step 1230, determining the fuzzy set to which the pixel belongs; and step 1240, determining whether the pixels match.
  • a pixel is selected from any pixel set as a seed pixel.
  • the pixels in the pixel set are sorted; according to the sorting result, each pixel in the pixel set is selected in turn as a seed pixel.
  • the pixels in each pixel set are sorted in ascending order of their distances to the vertex contained in the corresponding gamut sub-cube. Starting from the closest pixel, each pixel is used in turn as a seed pixel for fuzzy color extraction.
  • if a seed pixel cannot find any matching pixel, the seed pixel is discarded, and the next pixel within the corresponding gamut sub-cube is used as the seed pixel.
  • the seed pixels are selected for matching in turn. For example, the selection and matching of seed pixels can be performed simultaneously within the 8 gamut sub-cubes.
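The seed ordering can be sketched as a sort by distance to the sub-cube's vertex. This is a minimal illustration under assumed data structures (coordinate lists and a 2-D list of RGB tuples); the patent describes only the ordering, not an API.

```python
import math

def seed_order(pixel_set, vertex, image):
    """Sort a pixel set in ascending order of distance to the gamut sub-cube's vertex.

    pixel_set: list of (n, m) coordinates belonging to one gamut range;
    vertex: (R, G, B) characteristic pixel value of that range;
    image: 2-D list of (R, G, B) tuples. All names are hypothetical.
    """
    def dist(coord):
        r, g, b = image[coord[0]][coord[1]]
        return math.sqrt((r - vertex[0])**2 + (g - vertex[1])**2 + (b - vertex[2])**2)
    return sorted(pixel_set, key=dist)
```

Each coordinate in the returned order would be tried as a seed; a seed that finds no matching pixel is discarded and the next one taken, as described above.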
  • in step 1220, the differences between the pixel values of the other pixels in the pixel set and the pixel value of the seed pixel are calculated.
  • fuzzy color extraction is performed through steps 1230 and 1240 to determine the matching condition.
  • in step 1230, the fuzzy set to which the difference belongs is determined using membership functions and fuzzy logic.
  • the pixel value includes a red component, a green component, and a blue component; the fuzzy sets to which the red, green, and blue components of the other pixels belong are determined according to the difference of the red component, the difference of the green component, and the difference of the blue component, respectively.
  • an FCE (Fuzzy Color Extractor) is used for the matching. For a pixel p(n, m) of the image, its components in RGB space are p(n, m)_R, p(n, m)_G and p(n, m)_B; the current seed pixel to be processed is seed, and its RGB components are seed_R, seed_G and seed_B. The seed can be selected according to the needs of the algorithm, or determined from the pixels in the image to be processed. M and N are positive integers representing the size of the image.
  • the fuzzy set to which the color component difference belongs can be calculated by using a preset membership function according to the color component difference.
  • the corresponding membership function of each fuzzy set can be determined by the embodiment in FIG. 5 .
  • FIG. 5 shows a schematic diagram of other embodiments of the image segmentation method according to the present disclosure.
  • the fuzzy sets to which the color component differences belong include the Zero set, the Negative set, and the Positive set.
  • the three function curves correspond to the membership functions of the three fuzzy sets respectively.
  • δ1 and δ2 are adjustable fuzzy thresholds set according to the actual situation and prior knowledge.
  • the matching situation of the pixels can be determined through step 1240 in FIG. 4 .
  • in step 1240, whether the other pixels match the seed pixel is determined according to the determined fuzzy sets.
  • the fuzzy logic can be, for example: if the differences of the red component, the green component and the blue component all belong to the Zero set, the other pixel is determined to match the seed pixel; otherwise, it does not match.
  • because a linguistic method is used to configure the fuzzy logic, the input and output functions are relatively simple and no accurate mathematical model is required, which reduces the amount of calculation.
  • the fuzzy matching method is robust and well suited to the nonlinear, strongly coupled, time-varying and lagging problems in the classification process, thereby improving the accuracy of image segmentation.
  • Fuzzy matching method has strong fault tolerance ability and can adapt to the changes of the controlled object's own characteristics and environmental characteristics.
  • the fuzzy color extraction algorithm is suitable for image segmentation processing in complex environments (such as underwater sonar data images), and can improve the accuracy of image segmentation.
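The fuzzy color extraction step can be sketched as follows. The piecewise-linear membership functions with thresholds delta1 < delta2, and the matching rule (all three component differences belonging chiefly to the Zero set), are assumptions consistent with the description but not spelled out in the patent; the default threshold values are purely illustrative.

```python
def memberships(diff, delta1, delta2):
    """Degrees of membership of a component difference in the Negative/Zero/Positive sets."""
    d = abs(diff)
    if d <= delta1:
        zero = 1.0
    elif d >= delta2:
        zero = 0.0
    else:
        zero = (delta2 - d) / (delta2 - delta1)  # linear ramp between the two thresholds
    signed = 1.0 - zero  # remaining degree goes to Negative or Positive by sign
    return {
        "Negative": signed if diff < 0 else 0.0,
        "Zero": zero,
        "Positive": signed if diff > 0 else 0.0,
    }

def matches_seed(pixel, seed, delta1=16, delta2=48):
    """True when the R, G and B differences all belong chiefly to the Zero set."""
    for p, s in zip(pixel, seed):
        m = memberships(p - s, delta1, delta2)
        if m["Zero"] < max(m["Negative"], m["Positive"]):
            return False
    return True
```

Running `matches_seed` over the other pixels of a seed's pixel set yields the seed's matching pixels, i.e. one sub-image per extraction pass.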
  • image segmentation can be performed through step 130 in FIG. 1 .
  • in step 130, image segmentation is performed on the image to be segmented according to the matching situation.
  • image segmentation may be performed by the embodiment in FIG. 6 .
  • FIG. 6 shows a flowchart according to some embodiments of step 130 in FIG. 1 .
  • step 130 may include: step 1310, generating a plurality of sub-images; step 1320, merging the sub-images; and step 1330, determining a segmentation result.
  • a plurality of sub-images are generated according to each pixel and its matching pixel. For example, after each fuzzy color extraction, a sub-image can be obtained based on a seed pixel and its matching pixels.
  • sub-images corresponding to seed pixels with similar pixel values generally have overlapping parts. In cases where multiple sub-images share a common color region in RGB space, it may even occur that one sub-image completely covers another.
  • in step 1320, the multiple sub-images are merged according to the overlap between the sub-images.
  • Two sub-images are considered to share a common area if they have spatial similarity and color similarity, and they can be joined together to form an image partition.
  • the number of pixels in the intersection of the first sub-image and the second sub-image is calculated; an overlap parameter for judging the overlap situation is determined according to the ratio of the number of pixels in the intersection to the number of pixels in the first sub-image; and when the overlap parameter is greater than a threshold, the first sub-image is merged with the second sub-image.
  • the overlap parameter can be determined using the following formula:

    overlap(i, l) = NUM(I_SAMPLE(i) ∩ I_SAMPLE(l)) / NUM(I_SAMPLE(i)) × 100

  • where NUM(·) is the number of pixels in the image in parentheses.
  • using the overlap parameter, the size of the common area of the two sub-images in the RGB space can be detected.
  • the overlap parameter is greater than the threshold, the sub-images I SAMPLE (i) and I SAMPLE (l) are considered to be similar and can be merged.
  • the threshold can be set in the algorithm. To make the image segmentation more accurate, the threshold can be set larger, such as 90 or 100.
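The overlap computation and merging can be sketched as follows. Sub-images are represented as sets of pixel coordinates; the percentage scale of the overlap parameter is an assumption consistent with thresholds such as 90 or 100, and the greedy merge loop is an illustrative choice rather than the patent's procedure.

```python
def overlap_parameter(sub_i, sub_l):
    """Percentage of sub-image i's pixels that are also in sub-image l.

    sub_i, sub_l: sets of (n, m) pixel coordinates.
    """
    if not sub_i:
        return 0.0
    return 100.0 * len(sub_i & sub_l) / len(sub_i)

def merge_if_overlapping(sub_images, threshold=90.0):
    """Greedily merge sub-images whose overlap parameter exceeds the threshold."""
    merged = [set(s) for s in sub_images]
    changed = True
    while changed:
        changed = False
        for i in range(len(merged)):
            for l in range(i + 1, len(merged)):
                if overlap_parameter(merged[i], merged[l]) > threshold:
                    merged[i] |= merged[l]  # union the two pixel sets
                    del merged[l]
                    changed = True
                    break
            if changed:
                break
    return merged
```

The surviving sets after merging correspond to the image partitions of the segmentation result.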
  • in step 1330, the image segmentation result is determined according to the merging result.
  • the extraction and image segmentation of different regions in the image can be realized.
  • in some cases where the imaging environment is complicated, there are many unknown factors in the original image.
  • the seabed environment is complex, the dynamic range of sonar data obtained by scanning is very small, and there will be a lot of interference in underwater sonar images. Therefore, the image can be preprocessed using logarithmic transformation to expand the dynamic range of the data and reduce interference.
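The logarithmic preprocessing mentioned above is the standard transform s = c·log(1 + r); this sketch assumes 8-bit intensities and a normalizing constant, neither of which is specified in the patent.

```python
import math

def log_transform(image, c=None):
    """Logarithmic transform s = c * log(1 + r) on 8-bit intensities.

    Expands the small dynamic range typical of sonar data. `image` is a
    2-D list of intensity values in [0, 255].
    """
    if c is None:
        c = 255.0 / math.log(1 + 255)  # normalize so that 255 maps to 255
    return [[round(c * math.log(1 + v)) for v in row] for row in image]
```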
  • preprocessing such as denoising and contrast enhancement can also be performed on the original image.
  • the interference in the image may be removed by the embodiment in FIG. 7 .
  • FIG. 7 shows a schematic diagram of further embodiments of the image segmentation method according to the present disclosure.
  • the method may further include: step 710 , determining interfering pixels; step 720 , determining matching pixels of the interfering pixels; and step 730 , removing interference.
  • interference pixels are determined according to the pixel value distribution of each pixel in the original image. For example, interference pixels can be selected according to prior knowledge and actual needs.
  • interfering pixels may be selected based on a priori knowledge. If the color range of the interference factor in the image has been determined (e.g., the red domain), the pixels in that color range can be determined as the interference pixels.
  • a matching pixel of the interference pixel is determined according to the pixel value of each pixel in the original image.
  • matching can be performed by the method in any of the above embodiments (eg, fuzzy color extraction).
  • in step 730, the interference pixels and their matching pixels are removed from the original image to obtain the image to be segmented.
  • an interference image I INT composed of interference pixels and matching pixels can be determined.
  • the pixels whose colors are close to the interference pixels (the matching pixels) can be removed from the original image via I_SOURCE − I_INT to obtain the required image to be segmented, I_SAMPLE.
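The I_SOURCE − I_INT removal step can be sketched as masking out the interference coordinates. Representing the interference image as a coordinate set, and masking to black, are illustrative choices; the patent only states that the pixels are removed.

```python
def remove_interference(source, interference_coords, mask_value=(0, 0, 0)):
    """Return I_SAMPLE = I_SOURCE - I_INT by masking interference pixels.

    source: 2-D list of (R, G, B) tuples; interference_coords: iterable of
    (n, m) coordinates of the interference pixels and their matching pixels.
    """
    coords = set(interference_coords)
    return [
        [mask_value if (n, m) in coords else rgb
         for m, rgb in enumerate(row)]
        for n, row in enumerate(source)
    ]
```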
  • segmentation processing is performed on the image to be segmented; 3D reconstruction is performed according to the segmentation processing result to obtain a 3D image.
  • in a two-dimensional image obtained by performing image segmentation on an underwater sonar image, different regions such as oceans, formations, and objects can be identified.
  • the 3D structure can be reconstructed from the segmented 2D images (for example, using Unity 3D tools).
  • volume rendering technology can be used to realize the above three-dimensional visualization effect.
  • volume rendering process there is no need to construct the geometric image of the intermediate process, and only the 3D data volume needs to be processed to reveal its internal details.
  • the operation of such three-dimensional reconstruction processing is simple and the conversion is quick.
  • three-dimensional visualization can be realized by VTK (Visualization Toolkit).
  • In the above embodiments, the water body can be separated from the bottom strata and the target objects (such as underwater buried mines, bombs, etc.) can be extracted more effectively, with a high accuracy rate.
  • In the field of color image segmentation, this can well resolve the uncertainty and ambiguity found in practical applications, and can adapt, in different color spaces, to the different emphases that different observers place on color.
  • In the subsequent three-dimensional visualization, the Unity platform is used to construct the 3D scene. Users can construct very complex 3D images and scenes within a short learning time, greatly improving work efficiency.
  • With a VTK-based volume rendering method, the processed 3D data is represented with appropriate geometry, color, and brightness, and is mapped to the 2D image plane.
  • Finally, the three-dimensional images can be rendered in a VR (Virtual Reality) head-mounted device, allowing users to watch sonar images in a virtual environment with enhanced immersion.
  • the image segmentation apparatus includes at least one processor configured to perform the image segmentation method in any of the above embodiments.
  • The apparatus for three-dimensional reconstruction of an image includes at least one processor, and the processor is configured to: perform segmentation processing on the image to be segmented by the segmentation method of any of the foregoing embodiments; and perform three-dimensional reconstruction according to the segmentation result to obtain a three-dimensional image.
  • FIG. 8 shows a schematic diagram of some embodiments of a wearable device according to the present disclosure.
  • As shown in FIG. 8, the wearable device can adopt a VR split-machine structure, including a PC (Personal Computer) part (e.g., the image reconstruction apparatus) and a VR head-mounted display part (e.g., the display screen).
  • In some embodiments, processing such as image preprocessing, image segmentation, and volume rendering may first be completed in the PC part, and the resulting three-dimensional image is then rendered to the VR head-mounted display part through a DP (DisplayPort) interface.
  • For example, image preprocessing may include denoising, contrast enhancement, and other processing;
  • image segmentation may include the FCE processing of any of the above embodiments;
  • Unity 3D is used to construct the three-dimensional images and scenes, and VTK is used to perform the three-dimensional visualization.
  • After image segmentation is performed on the sonar-data image, it can be displayed in the virtual-reality head-mounted device through volume rendering. In this way, users can observe three-dimensional water-strata-target images of the underwater sonar data in the VR scene.
  • FIG. 9 illustrates a block diagram of some embodiments of a wearable device according to the present disclosure.
  • As shown in FIG. 9, the wearable device 9 includes: the image three-dimensional reconstruction apparatus 91 of any of the foregoing embodiments; and a display screen 92 for displaying the three-dimensional image acquired by the three-dimensional reconstruction apparatus 91.
  • The three-dimensional reconstruction apparatus 91 generates the image to be segmented from the acquired underwater sonar data, and reconstructs the three-dimensional image according to the segmentation result of the image to be segmented.
  • FIG. 10 illustrates a block diagram of some embodiments of the electronic device of the present disclosure.
  • The electronic device 10 of this embodiment includes: a memory U11 and a processor U12 coupled to the memory U11; the processor U12 is configured to execute, based on instructions stored in the memory U11, the image segmentation method or the image three-dimensional reconstruction method of any embodiment of the present disclosure.
  • the memory U11 may include, for example, a system memory, a fixed non-volatile storage medium, and the like.
  • the system memory stores, for example, an operating system, an application program, a boot loader (Boot Loader), a database, and other programs.
  • FIG. 11 shows a block diagram of further embodiments of the electronic device of the present disclosure.
  • The electronic device 11 of this embodiment includes: a memory U10 and a processor U20 coupled to the memory U10; the processor U20 is configured to execute, based on instructions stored in the memory U10, the image segmentation method or the image three-dimensional reconstruction method of any of the foregoing embodiments.
  • the memory U10 may include, for example, a system memory, a fixed non-volatile storage medium, and the like.
  • The system memory stores, for example, an operating system, application programs, a boot loader (Boot Loader), and other programs.
  • The electronic device 11 may further include an input/output interface U30, a network interface U40, a storage interface U50, and the like. These interfaces U30, U40, and U50, the memory U10, and the processor U20 can be connected, for example, through a bus U60.
  • The input/output interface U30 provides a connection interface for input/output devices such as a display, a mouse, a keyboard, a touch screen, a microphone, and speakers.
  • The network interface U40 provides connection interfaces for various networked devices.
  • The storage interface U50 provides a connection interface for external storage devices such as SD cards and USB flash drives.
  • Those skilled in the art should understand that embodiments of the present disclosure may be provided as a method, system, or computer program product. Accordingly, the present disclosure may take the form of an entirely hardware embodiment, an entirely software embodiment, or an embodiment combining software and hardware aspects. Furthermore, the present disclosure may take the form of a computer program product embodied on one or more computer-usable non-transitory storage media (including, but not limited to, disk storage, CD-ROM, optical storage, etc.) having computer-usable program code embodied therein.
  • the methods and systems of the present disclosure may be implemented in many ways.
  • the methods and systems of the present disclosure may be implemented in software, hardware, firmware, or any combination of software, hardware, and firmware.
  • the above-described order of steps of the method is for illustration only, and the steps of the method of the present disclosure are not limited to the order specifically described above unless specifically stated otherwise.
  • the present disclosure can also be implemented as programs recorded in a recording medium, the programs including machine-readable instructions for implementing methods according to the present disclosure.
  • the present disclosure also covers a recording medium storing a program for executing the method according to the present disclosure.

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Multimedia (AREA)
  • Computing Systems (AREA)
  • Artificial Intelligence (AREA)
  • Health & Medical Sciences (AREA)
  • Databases & Information Systems (AREA)
  • Evolutionary Computation (AREA)
  • General Health & Medical Sciences (AREA)
  • Medical Informatics (AREA)
  • Software Systems (AREA)
  • Image Analysis (AREA)
  • Apparatus For Radiation Diagnosis (AREA)
  • Image Processing (AREA)

Abstract

An image segmentation method and apparatus, and an image three-dimensional reconstruction method and apparatus (91), relating to the field of computer technology. The segmentation method includes: dividing the pixels of an image to be segmented into different pixel sets according to the color gamut ranges to which their pixel values belong; determining, according to the pixel values, the matching relationships among the pixels within each pixel set; and performing image segmentation on the image to be segmented according to the matching relationships.

Description

Image segmentation method and apparatus, and image three-dimensional reconstruction method and apparatus — Technical Field
The present disclosure relates to the field of computer technology, and in particular to an image segmentation method, an image segmentation apparatus, an image three-dimensional reconstruction method, an image three-dimensional reconstruction apparatus, an electronic device, a wearable device, and a non-volatile computer-readable storage medium.
Background Art
Image segmentation is a fundamental problem in image processing and computer vision, since it is a key step in many applications that extract specific regions.
In the related art, image segmentation methods based on edge detection are mostly used.
Summary of the Invention
According to some embodiments of the present disclosure, an image segmentation method is provided, including: dividing the pixels of an image to be segmented into different pixel sets according to the color gamut ranges to which their pixel values belong; determining, according to the pixel values, the matching relationships among the pixels within each pixel set; and performing image segmentation on the image to be segmented according to the matching relationships.
In some embodiments, the method further includes: in a coordinate system whose variables are the red, green, and blue components of the pixel value, dividing the gamut cube formed by the red, green, and blue components into a plurality of gamut sub-cubes, which serve as the color gamut ranges.
In some embodiments, the method further includes: determining one of the vertex of the gamut cube contained in each gamut sub-cube, the center point of each gamut sub-cube, and the mean point of each gamut sub-cube as the characteristic pixel value of the corresponding color gamut range; and determining, according to the characteristic pixel values, the color gamut range to which the pixel value of each pixel in the image to be segmented belongs.
In some embodiments, determining, according to the pixel values, the matching relationships among the pixels within each pixel set includes: selecting one pixel in a pixel set as a seed pixel; calculating the differences between the pixel values of the other pixels in the pixel set and the pixel value of the seed pixel; and determining, according to the differences, whether the other pixels match the seed pixel.
In some embodiments, determining, according to the differences, whether the other pixels match the seed pixel includes: determining, according to the differences and using membership functions, the fuzzy sets to which the other pixels belong; and determining, according to the determined fuzzy sets, whether the other pixels match the seed pixel.
In some embodiments, the pixel value includes a red component, a green component, and a blue component, and determining, according to the differences and using membership functions, the fuzzy sets to which the other pixels belong includes: determining the fuzzy sets to which the red, green, and blue components of the other pixels belong according to the differences of the red, green, and blue components, respectively.
In some embodiments, selecting one pixel in a pixel set as a seed pixel includes: sorting the pixels in the pixel set according to the differences between their pixel values and the characteristic pixel value of the color gamut range to which the pixel set belongs, the characteristic pixel value being one of the vertex of the gamut cube contained in the gamut sub-cube corresponding to that color gamut range, the center point of the corresponding gamut sub-cube, and the mean point of the corresponding gamut sub-cube; and selecting the pixels in the pixel set in turn as the seed pixel according to the sorting result.
In some embodiments, performing image segmentation on the image to be segmented according to the matching relationships includes: generating a plurality of sub-images according to the pixels and their matching pixels; merging the plurality of sub-images according to the overlap among the sub-images; and determining the image segmentation result according to the merging result.
In some embodiments, merging the plurality of sub-images according to the overlap among the sub-images includes: calculating the number of pixels contained in the intersection of a first sub-image and a second sub-image; determining, according to the ratio of the number of pixels contained in the intersection to the number of pixels contained in the first sub-image, an overlap parameter for judging the overlap; and merging the first sub-image and the second sub-image when the overlap parameter is greater than a threshold.
In some embodiments, the method further includes: determining interference pixels according to the pixel-value distribution of the pixels in an original image; determining the matching pixels of the interference pixels according to the pixel values of the pixels in the original image; and removing the interference pixels and their matching pixels from the original image to obtain the image to be segmented.
In some embodiments, the image to be segmented is a two-dimensional image generated from acquired underwater sonar data.
According to other embodiments of the present disclosure, an image three-dimensional reconstruction method is provided, including: performing segmentation processing on an image to be segmented according to the segmentation method of any of the above embodiments; and performing three-dimensional reconstruction according to the segmentation result to obtain a three-dimensional image.
According to still other embodiments of the present disclosure, an image segmentation apparatus is provided, including at least one processor configured to perform the following steps: dividing the pixels of an image to be segmented into different pixel sets according to the color gamut ranges to which their pixel values belong; determining, according to the pixel values, the matching relationships among the pixels within each pixel set; and performing image segmentation on the image to be segmented according to the matching relationships.
In some embodiments, the processor is further configured to perform the following steps: in a coordinate system whose variables are the red, green, and blue components of the pixel value, dividing the gamut cube formed by the red, green, and blue components into a plurality of gamut sub-cubes, which serve as the color gamut ranges.
In some embodiments, the processor is further configured to perform the following steps: determining one of the vertex of the gamut cube contained in each gamut sub-cube, the center point of each gamut sub-cube, and the mean point of each gamut sub-cube as the characteristic pixel value of the corresponding color gamut range; and determining, according to the characteristic pixel values, the color gamut range to which the pixel value of each pixel in the image to be segmented belongs.
In some embodiments, determining, according to the pixel values, the matching relationships among the pixels within each pixel set includes: selecting one pixel in a pixel set as a seed pixel; calculating the differences between the pixel values of the other pixels in the pixel set and the pixel value of the seed pixel; and determining, according to the differences, whether the other pixels match the seed pixel.
In some embodiments, determining, according to the differences, whether the other pixels match the seed pixel includes: determining, according to the differences and using membership functions, the fuzzy sets to which the other pixels belong; and determining, according to the determined fuzzy sets, whether the other pixels match the seed pixel.
In some embodiments, the pixel value includes a red component, a green component, and a blue component, and determining, according to the differences and using membership functions, the fuzzy sets to which the other pixels belong includes: determining the fuzzy sets to which the red, green, and blue components of the other pixels belong according to the differences of the red, green, and blue components, respectively.
In some embodiments, selecting one pixel in a pixel set as a seed pixel includes: sorting the pixels in the pixel set according to the differences between their pixel values and the characteristic pixel value of the color gamut range to which the pixel set belongs, the characteristic pixel value being one of the vertex of the gamut cube contained in the gamut sub-cube corresponding to that color gamut range, the center point of the corresponding gamut sub-cube, and the mean point of the corresponding gamut sub-cube; and selecting the pixels in the pixel set in turn as the seed pixel according to the sorting result.
In some embodiments, performing image segmentation on the image to be segmented according to the matching relationships includes: generating a plurality of sub-images according to the pixels and their matching pixels; merging the plurality of sub-images according to the overlap among the sub-images; and determining the image segmentation result according to the merging result.
In some embodiments, merging the plurality of sub-images according to the overlap among the sub-images includes: calculating the number of pixels contained in the intersection of a first sub-image and a second sub-image; determining, according to the ratio of the number of pixels contained in the intersection to the number of pixels contained in the first sub-image, an overlap parameter for judging the overlap; and merging the first sub-image and the second sub-image when the overlap parameter is greater than a threshold.
In some embodiments, the processor is further configured to perform the following steps: determining interference pixels according to the pixel-value distribution of the pixels in an original image; determining the matching pixels of the interference pixels according to the pixel values of the pixels in the original image; and removing the interference pixels and their matching pixels from the original image to obtain the image to be segmented.
In some embodiments, the image to be segmented is a two-dimensional image generated from acquired underwater sonar data.
According to still further embodiments of the present disclosure, an image three-dimensional reconstruction apparatus is provided, including at least one processor configured to perform the following steps: performing segmentation processing on an image to be segmented according to the segmentation method of any of the above embodiments; and performing three-dimensional reconstruction according to the segmentation result to obtain a three-dimensional image.
According to still further embodiments of the present disclosure, a wearable device is provided, including: the image three-dimensional reconstruction apparatus of any of the above embodiments; and a display screen for displaying the three-dimensional image acquired by the three-dimensional reconstruction apparatus.
In some embodiments, the three-dimensional reconstruction apparatus generates the image to be segmented from acquired underwater sonar data, and reconstructs the three-dimensional image according to the segmentation result of the image to be segmented.
According to still further embodiments of the present disclosure, an electronic device is provided, including: a memory; and a processor coupled to the memory, the processor being configured to execute, based on instructions stored in the memory, the image segmentation method or the image three-dimensional reconstruction method of any of the above embodiments.
According to still further embodiments of the present disclosure, a non-volatile computer-readable storage medium is provided, on which a computer program is stored; when executed by a processor, the program implements the image segmentation method or the image three-dimensional reconstruction method of any of the above embodiments.
Other features and advantages of the present disclosure will become clear from the following detailed description of exemplary embodiments of the present disclosure with reference to the accompanying drawings.
Brief Description of the Drawings
The accompanying drawings described here are provided for a further understanding of the present disclosure and constitute a part of this application. The illustrative embodiments of the present disclosure and their descriptions are used to explain the present disclosure and do not constitute an improper limitation of it. In the drawings:
FIG. 1 shows a flowchart of some embodiments of the image segmentation method according to the present disclosure;
FIG. 2 shows a flowchart of other embodiments of the image segmentation method according to the present disclosure;
FIG. 3 shows a schematic diagram of some embodiments of the image segmentation method according to the present disclosure;
FIG. 4 shows a flowchart of some embodiments of step 120 in FIG. 1;
FIG. 5 shows a schematic diagram of other embodiments of the image segmentation method according to the present disclosure;
FIG. 6 shows a flowchart of some embodiments of step 130 in FIG. 1;
FIG. 7 shows a schematic diagram of still other embodiments of the image segmentation method according to the present disclosure;
FIG. 8 shows a schematic diagram of some embodiments of the wearable device according to the present disclosure;
FIG. 9 shows a block diagram of some embodiments of the wearable device according to the present disclosure;
FIG. 10 shows a block diagram of some embodiments of the electronic device of the present disclosure;
FIG. 11 shows a block diagram of other embodiments of the electronic device of the present disclosure.
Detailed Description
The technical solutions in the embodiments of the present disclosure will be described clearly and completely below with reference to the accompanying drawings. Obviously, the described embodiments are only some, not all, of the embodiments of the present disclosure. The following description of at least one exemplary embodiment is merely illustrative and in no way limits the present disclosure or its application or use. All other embodiments obtained by a person of ordinary skill in the art based on the embodiments of the present disclosure without creative effort fall within the protection scope of the present disclosure.
Unless otherwise specified, the relative arrangement of the components and steps, the numerical expressions, and the numerical values set forth in these embodiments do not limit the scope of the present disclosure. Meanwhile, it should be understood that, for ease of description, the dimensions of the parts shown in the drawings are not drawn to actual scale. Techniques, methods, and devices known to those of ordinary skill in the relevant art may not be discussed in detail but, where appropriate, should be regarded as part of the granted specification. In all examples shown and discussed here, any specific value should be interpreted as merely exemplary rather than limiting; other examples of the exemplary embodiments may therefore have different values. It should be noted that similar reference numbers and letters denote similar items in the following drawings, so once an item is defined in one drawing it need not be discussed further in subsequent drawings.
The inventors of the present disclosure found the following problems in the above related art. Binarization methods and image segmentation methods based on the RGB (Red, Green, Blue) color space are both built on one segmentation principle: each pixel is classified into a unique subset. However, such a coarse segmentation principle cannot accurately identify different image regions, resulting in low segmentation accuracy.
Moreover, since different observers understand image colors differently, such segmentation methods cannot well reflect human understanding of images, which also lowers segmentation accuracy.
Low segmentation accuracy can degrade subsequent processing. For example, in the information contained in raw underwater sonar data, the seawater, the seabed, and the target have completely different characteristics. In the three-dimensional visualization after segmentation, the three also have different opacities and colors. Thus, the above segmentation methods often cause targets at the region boundaries to be covered, resulting in poor three-dimensional imaging.
In view of this, the present disclosure proposes a technical solution for image segmentation. This solution divides pixels whose pixel values belong to the same color gamut range into the same pixel set, and performs pixel-value matching within each pixel set to determine the segmentation result. In this way, the partition of the gamut space is refined and the recognition rate of different image regions is improved, thereby improving segmentation accuracy. For example, the technical solution of the present disclosure can be implemented through the following embodiments.
FIG. 1 shows a flowchart of some embodiments of the image segmentation method according to the present disclosure.
As shown in FIG. 1, the method includes: step 110, dividing pixels into different pixel sets; step 120, determining the matching relationships within each set; and step 130, segmenting the image according to the matching relationships.
In step 110, the pixels of the image to be segmented are divided into different pixel sets according to the color gamut ranges to which their pixel values belong. For example, the image to be segmented is a two-dimensional image generated from acquired underwater sonar data.
For example, the pixel set to which each pixel belongs can be determined according to the difference between its pixel value and the characteristic pixel of each color gamut range, thereby classifying the pixels. In this way, similar pixels are grouped into one class, achieving a preliminary segmentation of the image; further pixel-value matching within each class can then improve segmentation accuracy.
In some embodiments, before step 110 is performed, the whole color gamut can be modeled and, based on this model, divided into multiple color gamut ranges. On this basis, the color gamut range to which the pixel value of each pixel in the image to be segmented belongs can be determined. For example, gamut modeling and partitioning can be implemented through the embodiment in FIG. 2.
FIG. 2 shows a flowchart of other embodiments of the image segmentation method according to the present disclosure.
As shown in FIG. 2, the method further includes: step 210, modeling the gamut cube; step 220, partitioning the color gamut ranges; step 230, determining the characteristic pixel values; and step 240, determining the color gamut range to which each pixel value belongs.
In step 210, according to the value ranges of the red, green, and blue components of the pixel value, the whole color gamut is modeled as a gamut cube in a coordinate system whose variables are the red, green, and blue components.
In step 220, the gamut cube is divided into a plurality of gamut sub-cubes, which serve as the color gamut ranges. For example, the gamut cube can be modeled and the gamut sub-cubes divided through the embodiment in FIG. 3.
FIG. 3 shows a schematic diagram of some embodiments of the image segmentation method according to the present disclosure.
As shown in FIG. 3, the three coordinate values of the coordinate system represent the values of the three pixel-value components R (red), G (green), and B (blue). Over the whole gamut, each component takes values in [0, 255], and the pixel value (0, 0, 0) at the origin P5 represents black. The cube with vertices P1 to P8 is the gamut cube corresponding to the whole color gamut, i.e., the gamut space.
In some embodiments, the gamut cube can be divided, at a pixel-value interval of 127 along the three component directions, into 8 gamut sub-cubes (i.e., gamut subspaces) containing the vertices P1 to P8, respectively. For example, the vertices P1 to P8 represent blue, pink, white, cyan, black, red, yellow, and green, respectively. According to the vertex it contains, each gamut sub-cube represents a different color gamut range; that is, pixel values within the same color gamut range carry similar color information.
After the gamut cube and the gamut sub-cubes have been modeled, the color gamut range to which the pixel value of each pixel in the image to be segmented belongs can be determined through the embodiment in FIG. 2.
In step 230, the vertex of the gamut cube contained in each gamut sub-cube is determined as the characteristic pixel value of the corresponding color gamut range. For example, any pixel value of a gamut sub-cube that can characterize the corresponding color gamut range, such as the vertex, the center point, or the mean point, can be determined as the characteristic pixel value.
In step 240, the color gamut range to which the pixel value of each pixel in the image to be segmented belongs is determined according to the characteristic pixel values. For example, the color gamut range of a pixel can be determined according to the differences between the pixel and the characteristic pixels. The pixel value can be treated as a multi-dimensional feature vector, and the differences between pixels can be determined by computing the similarity between feature vectors (e.g., Euclidean distance, Minkowski distance, Mahalanobis distance).
In some embodiments, in the RGB coordinate system, the distance between the point corresponding to each pixel of the image to be segmented and each of the 8 vertices of the gamut cube is computed. For example, the distance is computed by the following formula:
d_i(n, m) = Σ [p(n, m)_RGB − (v_i)_RGB]²,   i = 1, 2, …, 8
In the formula, p(n, m)_RGB is the point defined in the RGB coordinate system by an arbitrary pixel of the color image to be segmented, (v_i)_RGB is vertex i of the gamut cube, d_i(n, m) is the distance between the point and that vertex, and the sum runs over the R, G, and B components.
For example, 8 such distances are obtained for each pixel. By comparing the 8 distances, the pixel is assigned to the gamut sub-cube of the vertex with the shortest distance. In this way, the pixels of the image to be segmented are spatially classified one by one.
Thus, the number of pixels contained in each subspace can be clearly located. For example, gamut sub-cubes that contain no pixels are excluded from subsequent processing. Moreover, after the spatial classification, repeated computation on the same pixels can be avoided in subsequent processing (such as fuzzy color extraction).
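The spatial classification above can be sketched as follows: compute the squared distance d_i(n, m) from a pixel to each of the 8 gamut-cube vertices and assign the pixel to the sub-cube of the nearest vertex. The dict-based image layout and function names are illustrative assumptions:

```python
from itertools import product

# The 8 vertices of the gamut cube: every corner of [0, 255]^3.
VERTICES = [tuple(255 * c for c in corner) for corner in product((0, 1), repeat=3)]

def nearest_vertex(pixel):
    """Index i of the vertex v_i minimizing d_i = sum((p_c - v_c)^2)."""
    dists = [sum((p - v) ** 2 for p, v in zip(pixel, vertex)) for vertex in VERTICES]
    return dists.index(min(dists))

def classify(image):
    """Group the pixels of {pos: (R, G, B)} into the 8 sub-cube pixel sets."""
    sets = {i: [] for i in range(8)}
    for pos, rgb in image.items():
        sets[nearest_vertex(rgb)].append(pos)
    return sets
```

Empty sub-cube sets can simply be skipped in later processing, as the text notes.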
After the color gamut range of each pixel value has been determined, image segmentation can proceed through the embodiment in FIG. 1.
In step 120, the matching relationships among the pixels within each pixel set are determined according to the pixel values. Compared with aimless matching over the whole gamut, purposeful matching only among pixels belonging to the same color gamut range (i.e., of the same color type) improves the efficiency and accuracy of matching, and thus the efficiency and accuracy of the segmentation based on the matching results.
In some embodiments, step 120 can be implemented through the embodiment in FIG. 4.
FIG. 4 shows a flowchart of some embodiments of step 120 in FIG. 1.
As shown in FIG. 4, step 120 includes: step 1210, selecting a seed pixel; step 1220, computing pixel-value differences; step 1230, determining the fuzzy sets to which the pixels belong; and step 1240, determining whether the pixels match.
In step 1210, one pixel in a pixel set is selected as the seed pixel.
In some embodiments, the pixels in a pixel set are sorted according to the differences between their pixel values and the characteristic pixel value of the color gamut range to which the set belongs; according to the sorting result, the pixels in the set are selected in turn as the seed pixel.
In some embodiments, the pixels in each pixel set are sorted from smallest to largest by their distance to the vertex contained in the corresponding gamut sub-cube. Starting from the nearest point, each pixel is taken in turn, in a polling manner, as the seed pixel for fuzzy color extraction.
In some embodiments, if no pixel matching a given seed pixel can be found, that seed pixel is discarded and the next pixel in the corresponding gamut sub-cube is taken as the seed pixel, and so on, selecting seed pixels in turn for matching. For example, seed-pixel selection and matching can proceed simultaneously in the 8 gamut sub-cubes.
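The seed-ordering step can be sketched as below: pixels of one set are sorted near-to-far by squared distance to the sub-cube's vertex, and then tried as seeds in that order. The data layout and names are illustrative assumptions:

```python
def seed_order(pixel_set, vertex):
    """pixel_set: {pos: (R, G, B)}. Return positions sorted from the
    smallest to the largest squared distance to the sub-cube vertex."""
    def dist(pos):
        return sum((p - v) ** 2 for p, v in zip(pixel_set[pos], vertex))
    return sorted(pixel_set, key=dist)

# Polling: each position is tried in order as the seed; a seed that finds
# no matching pixels would simply be skipped in favor of the next one.
pixels = {(0, 0): (250, 250, 250), (0, 1): (200, 200, 200), (1, 0): (255, 255, 255)}
order = seed_order(pixels, (255, 255, 255))  # nearest-to-white first
```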
In step 1220, the differences between the pixel values of the other pixels in the pixel set and the pixel value of the seed pixel are computed.
Based on these differences, it can be determined whether the other pixels match the seed pixel. For example, fuzzy color extraction is performed through steps 1230 and 1240 to determine the matching relationships.
In step 1230, the fuzzy sets to which the differences belong are determined using membership functions and fuzzy logic.
In some embodiments, the pixel value includes red, green, and blue components; the fuzzy sets to which the red, green, and blue components of the other pixels belong are determined according to the differences of the red, green, and blue components, respectively.
In some embodiments, an FCE (Fuzzy Color Extractor) can be created to extract pixels similar to a seed pixel. For example, for a pixel p(n, m), its components in RGB space are p(n, m)_R, p(n, m)_G, and p(n, m)_B. The pixel currently being processed is the seed, with RGB components seed_R, seed_G, and seed_B. The seed can be selected as required by the algorithm, or determined from the pixels of interest in the image.
The color-component differences between any pixel p(n, m) and the seed are computed as follows:
dif(n, m)_R = p(n, m)_R − seed_R,  dif(n, m)_G = p(n, m)_G − seed_G,  dif(n, m)_B = p(n, m)_B − seed_B,   1 ≤ n ≤ N, 1 ≤ m ≤ M
N and M denote the image dimensions (positive integers). According to the color-component differences, the fuzzy sets to which they belong can be computed using preset membership functions.
In some embodiments, the membership functions of the fuzzy sets can be determined through the embodiment in FIG. 5.
FIG. 5 shows a schematic diagram of other embodiments of the image segmentation method according to the present disclosure.
As shown in FIG. 5, the fuzzy sets to which a color-component difference may belong include the Zero set, the Negative set, and the Positive set. The three function curves correspond to the membership functions of the three fuzzy sets. The vertical axis is the value of the membership function, i.e., the degree to which a difference belongs to each fuzzy set; the horizontal axis is the value of the difference. α1 and α2 are adjustable fuzzy thresholds set according to the actual situation and prior knowledge.
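One plausible piecewise-linear realization of the three membership functions is sketched below; the exact curve shapes in FIG. 5 are not given in the text, so the flat regions and the ramps between the thresholds α1 and α2 are assumptions:

```python
def mu_zero(d, a1, a2):
    """Membership of difference d in the Zero set: 1 near 0, ramping to 0."""
    ad = abs(d)
    if ad <= a1:
        return 1.0
    if ad >= a2:
        return 0.0
    return (a2 - ad) / (a2 - a1)

def mu_negative(d, a1, a2):
    """Membership in the Negative set: 1 for large negative differences."""
    if d <= -a2:
        return 1.0
    if d >= -a1:
        return 0.0
    return (-a1 - d) / (a2 - a1)

def mu_positive(d, a1, a2):
    """Membership in the Positive set, mirror image of the Negative set."""
    return mu_negative(-d, a1, a2)
```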
With the membership functions determined, the matching relationship of a pixel can be determined through step 1240 in FIG. 4.
In step 1240, whether the other pixels match the seed pixel is determined according to the determined fuzzy sets.
After the fuzzy computation, matched and unmatched fuzzy sets are obtained; after defuzzification, the finally extracted pixels (i.e., those matching the seed) are obtained.
In some embodiments, the fuzzy logic can be:
when dif(n, m)_R, dif(n, m)_G, and dif(n, m)_B all belong to the Zero set, the pixel p(n, m)_RGB matches the seed;
when dif(n, m)_R, dif(n, m)_G, or dif(n, m)_B belongs to the Negative or Positive set, p(n, m)_RGB does not match the seed.
In the above embodiments, the fuzzy logic is configured with a linguistic method; the input and output functions are relatively simple and require no precise mathematical model, which optimizes the amount of computation. The fuzzy matching method is robust and well suited to the nonlinear, strongly coupled, time-varying, and lagging problems in the classification process, thereby improving segmentation accuracy. It also has strong fault tolerance and can adapt to changes in the characteristics of the controlled object and its environment.
Therefore, the fuzzy color extraction algorithm is suitable for image segmentation in complex environments (such as underwater sonar data images) and can improve segmentation accuracy.
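The linguistic rule above (a pixel matches the seed only when all three component differences defuzzify to Zero) can be sketched as follows. The crisp threshold used for defuzzification and its value are illustrative assumptions:

```python
A2 = 30  # assumed crisp boundary of the Zero set after defuzzification

def fuzzy_set(diff):
    """Classify one color-component difference into a fuzzy-set label."""
    if diff <= -A2:
        return "Negative"
    if diff >= A2:
        return "Positive"
    return "Zero"

def matches(pixel, seed):
    """p(n, m) matches the seed iff dif_R, dif_G and dif_B are all Zero."""
    return all(fuzzy_set(pixel[c] - seed[c]) == "Zero" for c in range(3))
```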
With the matching relationships determined, image segmentation can be performed through step 130 in FIG. 1.
In step 130, image segmentation is performed on the image to be segmented according to the matching relationships.
In some embodiments, image segmentation can be performed through the embodiment in FIG. 6.
FIG. 6 shows a flowchart of some embodiments of step 130 in FIG. 1.
As shown in FIG. 6, step 130 may include: step 1310, generating multiple sub-images; step 1320, merging the sub-images; and step 1330, determining the segmentation result.
In step 1310, multiple sub-images are generated according to the pixels and their matching pixels. For example, each pass of fuzzy color extraction yields one sub-image based on one seed pixel and its matching pixels.
Owing to the spatial similarity of pixels, the sub-images corresponding to seed pixels with close pixel values generally overlap. When multiple sub-images share a common color region in RGB space, one sub-image may even completely cover another.
Therefore, the resulting series of sub-images needs to be merged by a certain method to form the final segmented image.
In step 1320, the multiple sub-images are merged according to the overlap among them. If two sub-images have spatial similarity and color similarity, they are considered to share a common region and can be connected together to form one image partition.
In some embodiments, the number of pixels contained in the intersection of a first sub-image and a second sub-image is computed; an overlap parameter describing the overlap is determined from the ratio of the number of pixels in the intersection to the number of pixels in the first sub-image; and when the overlap parameter is greater than a threshold, the first and second sub-images are merged.
For example, for two sub-images I_SAMPLE(i) and I_SAMPLE(l), the overlap parameter can be determined with the following formula:
NUM(I_SAMPLE(i) ∩ I_SAMPLE(l)) / NUM(I_SAMPLE(i))
NUM() denotes the number of pixels of its argument. This overlap parameter measures the size of the common region of the two sub-images in RGB space. When the overlap parameter is greater than the threshold, the sub-images I_SAMPLE(i) and I_SAMPLE(l) are considered similar and can be merged. For example, the threshold can be set in the algorithm; to make segmentation more accurate, it can be set relatively large, e.g., 90 or 100.
In step 1330, the image segmentation result is determined according to the merging result. Through the merging process, different regions of the image (such as the water body and strata in an underwater image) can be extracted and the image segmented.
In some embodiments, when the imaging environment is complex, the original image contains interference from many unknown factors. For example, the seabed environment is complex, the scanned sonar data has a small dynamic range, and underwater sonar images contain much interference. Therefore, a logarithmic transform can be used to preprocess the image so as to expand the dynamic range of the data and reduce interference. For example, the original image can also be preprocessed by denoising, contrast enhancement, and the like.
In some embodiments, before step 110 is performed, the interference in the image can be removed through the embodiment in FIG. 7.
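The merge criterion NUM(I_SAMPLE(i) ∩ I_SAMPLE(l)) / NUM(I_SAMPLE(i)) can be sketched as below, with sub-images modeled as sets of pixel positions; the fractional threshold 0.9 is an illustrative stand-in for the large threshold the text suggests:

```python
def overlap(sub_i, sub_l):
    """Overlap parameter: |I_i ∩ I_l| / |I_i| for position-set sub-images."""
    return len(sub_i & sub_l) / len(sub_i)

def merge_if_similar(sub_i, sub_l, threshold=0.9):
    """Merge the two sub-images when the overlap exceeds the threshold;
    return the merged set, or None when they are kept separate."""
    if overlap(sub_i, sub_l) > threshold:
        return sub_i | sub_l
    return None
```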
FIG. 7 shows a schematic diagram of still other embodiments of the image segmentation method according to the present disclosure.
As shown in FIG. 7, the method may further include: step 710, determining interference pixels; step 720, determining the matching pixels of the interference pixels; and step 730, removing the interference.
In step 710, interference pixels are determined according to the pixel-value distribution of the pixels in the original image. For example, interference pixels can be selected according to prior knowledge and actual needs.
In some embodiments, interference pixels can be selected based on prior knowledge. If the color range of the interference factor in the image (e.g., the red domain) has been determined, the pixels within that color range can be determined as the interference pixels.
In some embodiments, the RGB-space center point (127, 127, 127) cannot be precisely assigned to any color subspace and causes great interference during color extraction. Therefore, the seed pixel seed = (127, 127, 127) can also be selected as an interference pixel for fuzzy color extraction.
In step 720, the matching pixels of the interference pixels are determined according to the pixel values of the pixels in the original image. For example, matching can be performed by the method of any of the above embodiments (e.g., fuzzy color extraction).
In step 730, the interference pixels and their matching pixels are removed from the original image to obtain the image to be segmented. For example, for the input original image I_SOURCE, an interference image I_INT composed of the interference pixels and their matching pixels can be determined. The pixels whose colors are close to the interference pixels (the matching pixels) can be removed from the original image by I_SOURCE − I_INT, yielding the required image to be segmented, I_SAMPLE.
In some embodiments, segmentation processing is performed on the image to be segmented according to the segmentation method of any of the above embodiments, and three-dimensional reconstruction is performed according to the segmentation result to obtain a three-dimensional image.
In some embodiments, in the two-dimensional image obtained after segmenting an underwater sonar image, different regions such as the ocean, strata, and target objects can be identified. Based on the three-dimensional entities rich in information contained in the raw underwater sonar data, three-dimensional structures can be reconstructed from the segmented two-dimensional images (e.g., with Unity 3D tools). Volume rendering can then be used to achieve the three-dimensional visualization effect.
For example, in the volume rendering process there is no need to construct intermediate geometric images; processing the 3D data volume alone reveals its internal details. Such three-dimensional reconstruction is simple to operate and quick to convert.
In some embodiments, three-dimensional visualization can be realized with VTK (Visualization Toolkit).
In the above embodiments, the water body can be separated from the bottom strata and target objects (such as underwater buried mines and bombs) can be extracted more effectively, with high accuracy. In the field of color image segmentation, the uncertainty and ambiguity found in practical applications can be well resolved, and the different emphases that different observers place on color can be accommodated in different color spaces.
In the subsequent three-dimensional visualization, the Unity platform is used to construct the 3D scene. Users can construct very complex three-dimensional images and scenes within a short learning time, greatly improving work efficiency.
With a VTK-based volume rendering method, the processed three-dimensional data is presented with appropriate geometry, color, and brightness and mapped to the two-dimensional image plane. Finally, the three-dimensional image can be rendered in a VR (Virtual Reality) head-mounted device, so that users can view the sonar image in a virtual environment with enhanced immersion.
In some embodiments, the image segmentation apparatus includes at least one processor configured to perform the image segmentation method of any of the above embodiments.
In some embodiments, the image three-dimensional reconstruction apparatus includes at least one processor configured to: perform segmentation processing on the image to be segmented by the segmentation method of any of the above embodiments; and perform three-dimensional reconstruction according to the segmentation result to obtain a three-dimensional image.
FIG. 8 shows a schematic diagram of some embodiments of the wearable device according to the present disclosure.
As shown in FIG. 8, the wearable device can adopt a VR split-machine structure, including a PC (Personal Computer) part (e.g., the image reconstruction apparatus) and a VR head-mounted display part (e.g., the display screen).
In some embodiments, image preprocessing, image segmentation, volume rendering, and other processing may first be completed in the PC part, and the resulting three-dimensional image is then rendered to the VR head-mounted display part through a DP (DisplayPort) interface.
For example, image preprocessing may include denoising, contrast enhancement, and other processing; image segmentation may include the FCE processing of any of the above embodiments; Unity 3D is used to construct the three-dimensional images and scenes, and VTK is used for the three-dimensional visualization.
In some embodiments, after image segmentation is performed on the sonar-data image, it can be displayed in the virtual-reality head-mounted device through volume rendering. In this way, users can observe three-dimensional water-strata-target images of the underwater sonar data in the VR scene.
FIG. 9 shows a block diagram of some embodiments of the wearable device according to the present disclosure.
As shown in FIG. 9, the wearable device 9 includes: the image three-dimensional reconstruction apparatus 91 of any of the above embodiments; and a display screen 92 for displaying the three-dimensional image acquired by the three-dimensional reconstruction apparatus 91.
In some embodiments, the three-dimensional reconstruction apparatus 91 generates the image to be segmented from acquired underwater sonar data, and reconstructs the three-dimensional image according to the segmentation result of the image to be segmented.
FIG. 10 shows a block diagram of some embodiments of the electronic device of the present disclosure.
As shown in FIG. 10, the electronic device 10 of this embodiment includes: a memory U11 and a processor U12 coupled to the memory U11; the processor U12 is configured to execute, based on instructions stored in the memory U11, the image segmentation method or the image three-dimensional reconstruction method of any embodiment of the present disclosure.
The memory U11 may include, for example, a system memory, a fixed non-volatile storage medium, and the like. The system memory stores, for example, an operating system, application programs, a boot loader (Boot Loader), a database, and other programs.
FIG. 11 shows a block diagram of other embodiments of the electronic device of the present disclosure.
As shown in FIG. 11, the electronic device 11 of this embodiment includes: a memory U10 and a processor U20 coupled to the memory U10; the processor U20 is configured to execute, based on instructions stored in the memory U10, the image segmentation method or the image three-dimensional reconstruction method of any of the foregoing embodiments.
The memory U10 may include, for example, a system memory, a fixed non-volatile storage medium, and the like. The system memory stores, for example, an operating system, application programs, a boot loader (Boot Loader), and other programs.
The electronic device 11 may further include an input/output interface U30, a network interface U40, a storage interface U50, and the like. These interfaces U30, U40, and U50, the memory U10, and the processor U20 can be connected, for example, through a bus U60. The input/output interface U30 provides a connection interface for input/output devices such as a display, a mouse, a keyboard, a touch screen, a microphone, and speakers. The network interface U40 provides connection interfaces for various networked devices. The storage interface U50 provides a connection interface for external storage devices such as SD cards and USB flash drives.
Those skilled in the art should understand that the embodiments of the present disclosure may be provided as a method, a system, or a computer program product. Accordingly, the present disclosure may take the form of an entirely hardware embodiment, an entirely software embodiment, or an embodiment combining software and hardware aspects. Furthermore, the present disclosure may take the form of a computer program product embodied on one or more computer-usable non-transitory storage media (including, but not limited to, disk storage, CD-ROM, optical storage, etc.) containing computer-usable program code.
The embodiments according to the present disclosure have thus been described in detail. Some details well known in the art have not been described in order to avoid obscuring the concept of the present disclosure. From the above description, those skilled in the art can fully understand how to implement the technical solutions disclosed here.
The methods and systems of the present disclosure may be implemented in many ways, for example, by software, hardware, firmware, or any combination of software, hardware, and firmware. The above order of the steps of the methods is for illustration only, and the steps of the methods of the present disclosure are not limited to the order specifically described above unless otherwise specified. Furthermore, in some embodiments, the present disclosure may also be implemented as programs recorded in a recording medium, the programs including machine-readable instructions for implementing the methods according to the present disclosure. Thus, the present disclosure also covers a recording medium storing a program for executing the methods according to the present disclosure.
Although some specific embodiments of the present disclosure have been described in detail by way of example, those skilled in the art should understand that the above examples are for illustration only and are not intended to limit the scope of the present disclosure. Those skilled in the art should understand that the above embodiments may be modified without departing from the scope and spirit of the present disclosure. The scope of the present disclosure is defined by the appended claims.

Claims (18)

  1. An image segmentation method, comprising:
    dividing the pixels of an image to be segmented into different pixel sets according to the color gamut ranges to which their pixel values belong;
    determining, according to the pixel values, the matching relationships among the pixels within each pixel set; and
    performing image segmentation on the image to be segmented according to the matching relationships.
  2. The segmentation method according to claim 1, further comprising:
    in a coordinate system whose variables are the red, green, and blue components of the pixel value, dividing the gamut cube formed by the red, green, and blue components into a plurality of gamut sub-cubes as the color gamut ranges.
  3. The segmentation method according to claim 2, further comprising:
    determining one of the vertex of the gamut cube contained in each gamut sub-cube, the center point of each gamut sub-cube, and the mean point of each gamut sub-cube as the characteristic pixel value of the corresponding color gamut range; and
    determining, according to the characteristic pixel values, the color gamut range to which the pixel value of each pixel in the image to be segmented belongs.
  4. The segmentation method according to claim 1, wherein determining, according to the pixel values, the matching relationships among the pixels within each pixel set comprises:
    selecting one pixel in a pixel set as a seed pixel;
    calculating the differences between the pixel values of the other pixels in the pixel set and the pixel value of the seed pixel; and
    determining, according to the differences, whether the other pixels match the seed pixel.
  5. The segmentation method according to claim 4, wherein determining, according to the differences, whether the other pixels match the seed pixel comprises:
    determining, using membership functions, the fuzzy sets to which the differences belong; and
    determining, according to the determined fuzzy sets and fuzzy logic, whether the other pixels match the seed pixel.
  6. The segmentation method according to claim 5, wherein the pixel value comprises a red component, a green component, and a blue component, and determining, according to the differences and using membership functions, the fuzzy sets to which the other pixels belong comprises:
    determining the fuzzy sets to which the red, green, and blue components of the other pixels belong according to the differences of the red, green, and blue components, respectively.
  7. The segmentation method according to claim 4, wherein selecting one pixel in a pixel set as a seed pixel comprises:
    sorting the pixels in the pixel set according to the differences between their pixel values and the characteristic pixel value of the color gamut range to which the pixel set belongs, the characteristic pixel value being one of the vertex of the gamut cube contained in the gamut sub-cube corresponding to that color gamut range, the center point of the corresponding gamut sub-cube, and the mean point of the corresponding gamut sub-cube; and
    selecting the pixels in the pixel set in turn as the seed pixel according to the sorting result.
  8. The segmentation method according to claim 1, wherein performing image segmentation on the image to be segmented according to the matching relationships comprises:
    generating a plurality of sub-images according to the pixels and their matching pixels;
    merging the plurality of sub-images according to the overlap among the sub-images; and
    determining the image segmentation result according to the merging result.
  9. The segmentation method according to claim 8, wherein merging the plurality of sub-images according to the overlap among the sub-images comprises:
    calculating the number of pixels contained in the intersection of a first sub-image and a second sub-image;
    determining, according to the ratio of the number of pixels contained in the intersection to the number of pixels contained in the first sub-image, an overlap parameter for judging the overlap; and
    merging the first sub-image and the second sub-image when the overlap parameter is greater than a threshold.
  10. The segmentation method according to any one of claims 1-9, further comprising:
    determining interference pixels according to the pixel-value distribution of the pixels in an original image;
    determining the matching pixels of the interference pixels according to the pixel values of the pixels in the original image; and
    removing the interference pixels and their matching pixels from the original image to obtain the image to be segmented.
  11. The segmentation method according to any one of claims 1-9, wherein
    the image to be segmented is a two-dimensional image generated from acquired underwater sonar data.
  12. An image three-dimensional reconstruction method, comprising:
    performing segmentation processing on an image to be segmented according to the segmentation method of any one of claims 1-11; and
    performing three-dimensional reconstruction according to the segmentation result to obtain a three-dimensional image.
  13. An image segmentation apparatus, comprising at least one processor configured to perform the following steps:
    dividing the pixels of an image to be segmented into different pixel sets according to the color gamut ranges to which their pixel values belong;
    determining, according to the pixel values, the matching relationships among the pixels within each pixel set; and
    performing image segmentation on the image to be segmented according to the matching relationships.
  14. An image three-dimensional reconstruction apparatus, comprising at least one processor configured to perform the following steps:
    performing segmentation processing on an image to be segmented according to the segmentation method of any one of claims 1-11; and
    performing three-dimensional reconstruction according to the segmentation result to obtain a three-dimensional image.
  15. An electronic device, comprising:
    a memory; and
    a processor coupled to the memory, the processor being configured to execute, based on instructions stored in the memory, the image segmentation method of any one of claims 1-11, or the image three-dimensional reconstruction method of claim 12.
  16. A non-volatile computer-readable storage medium on which a computer program is stored, the program, when executed by a processor, implementing the image segmentation method of any one of claims 1-11, or the image three-dimensional reconstruction method of claim 12.
  17. A wearable device, comprising:
    the image three-dimensional reconstruction apparatus of claim 14; and
    a display screen for displaying the three-dimensional image acquired by the three-dimensional reconstruction apparatus.
  18. The wearable device according to claim 17, wherein
    the three-dimensional reconstruction apparatus generates the image to be segmented from acquired underwater sonar data, and reconstructs the three-dimensional image according to the segmentation result of the image to be segmented.
PCT/CN2020/114752 2020-09-11 2020-09-11 Image segmentation method and apparatus, and image three-dimensional reconstruction method and apparatus WO2022052032A1 (zh)

Priority Applications (3)

Application Number Priority Date Filing Date Title
PCT/CN2020/114752 WO2022052032A1 (zh) 2020-09-11 2020-09-11 Image segmentation method and apparatus, and image three-dimensional reconstruction method and apparatus
CN202080001926.6A CN115053257A (zh) 2020-09-11 2020-09-11 Image segmentation method and apparatus, and image three-dimensional reconstruction method and apparatus
US17/312,156 US20220327710A1 (en) 2020-09-11 2020-09-11 Image Segmentation Method and Apparatus and Image Three-Dimensional Reconstruction Method and Apparatus

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
PCT/CN2020/114752 WO2022052032A1 (zh) 2020-09-11 2020-09-11 Image segmentation method and apparatus, and image three-dimensional reconstruction method and apparatus

Publications (1)

Publication Number Publication Date
WO2022052032A1 true WO2022052032A1 (zh) 2022-03-17

Family

ID=80630212

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2020/114752 WO2022052032A1 (zh) 2020-09-11 2020-09-11 Image segmentation method and apparatus, and image three-dimensional reconstruction method and apparatus

Country Status (3)

Country Link
US (1) US20220327710A1 (zh)
CN (1) CN115053257A (zh)
WO (1) WO2022052032A1 (zh)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116434065A (zh) * 2023-04-19 2023-07-14 Beijing Institute of Satellite Information Engineering Water body segmentation method for panchromatic geometrically corrected remote sensing images
CN116704152A (zh) * 2022-12-09 2023-09-05 Honor Device Co., Ltd. Image processing method and electronic device

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN117911631B (zh) * 2024-03-19 2024-05-28 Guangdong University of Petrochemical Technology Three-dimensional reconstruction method based on heterogeneous image matching

Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103337064A (zh) * 2013-04-28 2013-10-02 Sichuan University Method for eliminating mismatched points in image stereo matching
CN104967761A (zh) * 2015-06-26 2015-10-07 Shenzhen China Star Optoelectronics Technology Co., Ltd. Color gamut matching method
CN105122306A (zh) * 2013-03-29 2015-12-02 Omron Corporation Region segmentation method and inspection apparatus
US9299009B1 (en) * 2013-05-13 2016-03-29 A9.Com, Inc. Utilizing color descriptors to determine color content of images
CN105869177A (zh) * 2016-04-20 2016-08-17 Inner Mongolia Agricultural University Image segmentation method and apparatus
CN109377490A (zh) * 2018-10-31 2019-02-22 Shenzhen Changlong Technology Co., Ltd. Water quality detection method and apparatus, and computer terminal
CN110751660A (zh) * 2019-10-18 2020-02-04 Nanjing Forestry University Color image segmentation method

Family Cites Families (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108681994B (zh) * 2018-05-11 2023-01-10 BOE Technology Group Co., Ltd. Image processing method and apparatus, electronic device, and readable storage medium
US10957049B2 (en) * 2019-07-31 2021-03-23 Intel Corporation Unsupervised image segmentation based on a background likelihood estimation
US11455485B2 (en) * 2020-06-29 2022-09-27 Adobe Inc. Content prediction based on pixel-based vectors

Patent Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105122306A (zh) * 2013-03-29 2015-12-02 Omron Corporation Region segmentation method and inspection apparatus
CN103337064A (zh) * 2013-04-28 2013-10-02 Sichuan University Method for eliminating mismatched points in image stereo matching
US9299009B1 (en) * 2013-05-13 2016-03-29 A9.Com, Inc. Utilizing color descriptors to determine color content of images
CN104967761A (zh) * 2015-06-26 2015-10-07 Shenzhen China Star Optoelectronics Technology Co., Ltd. Color gamut matching method
CN105869177A (zh) * 2016-04-20 2016-08-17 Inner Mongolia Agricultural University Image segmentation method and apparatus
CN109377490A (zh) * 2018-10-31 2019-02-22 Shenzhen Changlong Technology Co., Ltd. Water quality detection method and apparatus, and computer terminal
CN110751660A (zh) * 2019-10-18 2020-02-04 Nanjing Forestry University Color image segmentation method

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116704152A (zh) * 2022-12-09 2023-09-05 Honor Device Co., Ltd. Image processing method and electronic device
CN116704152B (zh) * 2022-12-09 2024-04-19 Honor Device Co., Ltd. Image processing method and electronic device
CN116434065A (zh) * 2023-04-19 2023-07-14 Beijing Institute of Satellite Information Engineering Water body segmentation method for panchromatic geometrically corrected remote sensing images
CN116434065B (zh) * 2023-04-19 2023-12-19 Beijing Institute of Satellite Information Engineering Water body segmentation method for panchromatic geometrically corrected remote sensing images

Also Published As

Publication number Publication date
CN115053257A (zh) 2022-09-13
US20220327710A1 (en) 2022-10-13

Similar Documents

Publication Publication Date Title
US11983893B2 (en) Systems and methods for hybrid depth regularization
WO2022052032A1 (zh) 图像的分割方法、装置和图像的三维重建方法、装置
Li et al. Multi-angle head pose classification when wearing the mask for face recognition under the COVID-19 coronavirus epidemic
EP2720171B1 (en) Recognition and pose determination of 3D objects in multimodal scenes
TWI395145B (zh) 手勢辨識系統及其方法
CN108921895B (zh) 一种传感器相对位姿估计方法
CN108876723B (zh) 一种灰度目标图像的彩色背景的构建方法
CN107437246B (zh) 一种基于端到端全卷积神经网络的共同显著性检测方法
WO2019169884A1 (zh) 基于深度信息的图像显著性检测方法和装置
CN108388901B (zh) 基于空间-语义通道的协同显著目标检测方法
CN115937552A (zh) 一种基于融合手工特征与深度特征的图像匹配方法
CN112767478A (zh) 一种基于表观指导的六自由度位姿估计方法
Peng et al. OCM3D: Object-centric monocular 3D object detection
KR101191319B1 (ko) 객체의 움직임 정보에 기반한 회화적 렌더링 장치 및 방법
Akizuki et al. ASM-Net: Category-level Pose and Shape Estimation Using Parametric Deformation.
Chochia Image segmentation based on the analysis of distances in an attribute space
Kowdle et al. Scribble based interactive 3d reconstruction via scene co-segmentation
Abello et al. A graph-based method for joint instance segmentation of point clouds and image sequences
CN113129351A (zh) 一种基于光场傅里叶视差分层的特征检测方法
Wang et al. Geodesic-HOF: 3D reconstruction without cutting corners
Garces et al. Light-field surface color segmentation with an application to intrinsic decomposition
Manke et al. Salient region detection using fusion of image contrast and boundary information
CN117593618B (zh) 基于神经辐射场和深度图的点云生成方法
Nguyen et al. Automatic image colorization based on feature lines
Bullinger et al. Moving object reconstruction in monocular video data using boundary generation

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 20952820

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 20952820

Country of ref document: EP

Kind code of ref document: A1

32PN Ep: public notification in the ep bulletin as address of the adressee cannot be established

Free format text: NOTING OF LOSS OF RIGHTS PURSUANT TO RULE 112(1) EPC (EPO FORM 1205A DATED 20.11.2023)