KR20110019117A - Semantic based image retrieval method - Google Patents

Semantic based image retrieval method Download PDF

Info

Publication number
KR20110019117A
Authority
KR
South Korea
Prior art keywords
semantic
image
images
concept
query image
Prior art date
Application number
KR1020090076704A
Other languages
Korean (ko)
Inventor
강상길
Original Assignee
인하대학교 산학협력단
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 인하대학교 산학협력단 filed Critical 인하대학교 산학협력단
Priority to KR1020090076704A priority Critical patent/KR20110019117A/en
Publication of KR20110019117A publication Critical patent/KR20110019117A/en

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 17/00 Digital computing or data processing equipment or methods, specially adapted for specific functions
    • G06F 17/10 Complex mathematical operations
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00 Image analysis
    • G06T 7/10 Segmentation; Edge detection
    • G06T 7/11 Region-based segmentation

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Data Mining & Analysis (AREA)
  • Mathematical Physics (AREA)
  • Mathematical Analysis (AREA)
  • Computational Mathematics (AREA)
  • Algebra (AREA)
  • Mathematical Optimization (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Pure & Applied Mathematics (AREA)
  • Databases & Information Systems (AREA)
  • Software Systems (AREA)
  • General Engineering & Computer Science (AREA)
  • Information Retrieval, Db Structures And Fs Structures Therefor (AREA)

Abstract

PURPOSE: A semantic-based image retrieval method is provided to improve the accuracy of image retrieval results by reducing the semantic gap between a query image and the retrieved images. CONSTITUTION: When a query image is input, the method reads a plurality of stored images and divides the query image and each of the plurality of images to form at least four sub-regions per image (S110~S130). A feature vector is detected for each sub-region, and a semantic concept is assigned to the images using the detected feature vectors (S140). A concept occurrence vector (COV) is calculated to determine the proportion each semantic concept occupies in the query image and in each of the plurality of images (S150). Using the COV, a semantic graph is formed linking each image to semantic vertices expressing one or more of its semantic concepts (S160).

Description

Semantic based image retrieval method

The present invention relates to a semantic-based image retrieval method, and more particularly, to an image retrieval method that can retrieve an image according to the semantic concept of the image.

Recently, with the development of multimedia technology and the spread of high-speed communication networks, image information has proliferated alongside text information. These images are stored in large databases, from which computer users can search for the images they need.

Meanwhile, text-based image retrieval using annotations is commonly used to search a plurality of images stored in a large database. In this method, each image is annotated with a description of its characteristics. This works for media storing a small number of images, but as the number of images grows, search accuracy deteriorates. Content-based image retrieval was developed to address this problem.

Content-based image retrieval relies on the visual features of an image, that is, low-level feature information such as color, texture, shape, and spatial distribution. However, when a user's query is expressed in terms of these low-level image features, a significant semantic gap remains between the query and the content the user actually intends. To overcome this gap, a method is needed that searches for images using the semantics they contain (for example, the sky or the sea depicted in a natural scene image).

SUMMARY OF THE INVENTION: The present invention has been made to solve the above-mentioned problems. An object of the present invention is to provide an image retrieval method that divides a query image and a plurality of images into at least four sub-regions each, assigns a semantic concept to each sub-region, and retrieves images according to the similarity of the semantic concepts contained in the query image and the plurality of images.

According to an embodiment of the present invention, an image retrieval method comprises: a first step of reading a plurality of images stored in a database when at least one query image is input, and dividing the query image and each of the plurality of images to form at least four sub-regions per image; a second step of detecting a feature vector for each sub-region of the query image and of the plurality of images, and using the detected feature vectors to assign semantic concepts representing the meanings contained in each sub-region; a third step of calculating a concept occurrence vector (COV) that determines the proportion each assigned semantic concept occupies in the query image and in each of the plurality of images as a whole; a fourth step of using the calculated concept occurrence vectors to form a semantic graph connecting the query image and the plurality of images to semantic vertices, each expressing at least one semantic concept; and a fifth step of comparing the semantic graph of the query image with the semantic graph of each search target image to determine similarity, and retrieving images similar to the query image.

In the present invention, the semantic concepts may include sky, water, grass, sand, rock, trunk, flower, and foliage. In this case, each of the query image and the plurality of images may be a natural scene image containing at least one of the semantic concepts of the sky, the water, the grass, the sand, the rock, the trunk, the flower, and the foliage.

Meanwhile, the first step may include a first process of dividing the query image into 2×2 blocks to form four sub-regions, a second process of determining whether edges are detected in the four sub-regions of the query image, and, when an edge is detected in any one of the sub-regions, a third process of re-dividing the sub-region in which the edge was detected and repeating the determination until no edge is detected. Here, the edge detection for the sub-regions may use the Canny edge detection algorithm.

The second step may include a first process of detecting color information of each of the sub-regions, a second process of detecting texture information of each of the sub-regions, and a third process of assigning a semantic concept to each sub-region using the detected color information and texture information.

In the third step, it is preferable to calculate the concept occurrence vector using the following equation.

[Equation for the concept occurrence vector: rendered as an image in the original publication]

Here, N is the semantic concept and n is the number of sub-regions.

The fourth step may include a first process of calculating weights for the semantic concepts contained in each of the query image and the plurality of images using the calculated concept occurrence vectors, and a second process of connecting each of the query image and the plurality of images to the semantic vertices using edge lines and matching each calculated weight to the corresponding edge line. In this case, the weight of a semantic concept contained in each of the query image and the plurality of images may be calculated using the following equation.

[Equation for the semantic concept weight: rendered as an image in the original publication]

Here, wv_i^y is the weight for semantic concept i of image y (the query image or one of the plurality of images), v_i^y is the concept occurrence vector entry for semantic concept i of image y, x_y is the number of edge lines connected to image y, and E is the total number of the query image and the plurality of images.

In the fifth step, the similarity between the semantic graph of the query image and the semantic graphs of the plurality of images may be determined using the following equation.

[Equation for the similarity: rendered as an image in the original publication]

Here, p is one of the plurality of images, q is the query image, wv_i^y is the weight for semantic concept i of image y, p_i is semantic concept i of the image p, and q_i is semantic concept i of the query image.

According to the present invention, at least one semantic concept is assigned to the query image and to each of the plurality of images, the similarity between the query image and each image is determined using the semantic concepts they contain, and images with high similarity to the query image can be retrieved. The semantic gap between the query image input by the user and the retrieved images is thereby reduced, improving search accuracy.

Hereinafter, the present invention will be described in detail with reference to the accompanying drawings.

FIG. 1 is a flowchart illustrating a semantic-based image retrieval method according to an embodiment of the present invention. FIG. 2 is a view for explaining an edge detection method according to an embodiment of the present invention, and FIG. 3 is a view for explaining a method of calculating a concept occurrence vector according to an embodiment of the present invention. FIG. 4 is a diagram illustrating a method of generating a semantic graph according to an embodiment of the present invention, and FIG. 5 is a diagram illustrating semantic-based image search results according to an embodiment of the present invention.

The semantic-based image retrieval method illustrated in FIG. 1 relates to a method of retrieving a plurality of images stored in a large database in a multimedia environment using a user terminal device (for example, a PC or a mobile phone).

In the semantic-based image retrieval method illustrated in FIG. 1, when at least one query image is input (S110), a plurality of images stored in a database are read as search target images (S120). Here, the query image is a natural scene image to be searched for, and may be input by scanning a photograph or picture containing the natural scene the user wants to find, or by uploading an image stored on the terminal device.

Thereafter, the query image and the plurality of read images are each divided to form at least four sub-regions (S130). Specifically, the query image and the plurality of images are first divided into 2×2 blocks, so that each of the query image and the plurality of images comprises four sub-regions of equal size.

Edges are then detected in each sub-region of the query image and of the plurality of images, and whether a sub-region is subdivided is determined by whether an edge is detected in it. Specifically, when an edge is detected in a sub-region, the sub-region in which the edge was detected is re-divided into 2×2 blocks to form subdivided sub-regions.

Then, edges are detected in each of the subdivided sub-regions, and whether a subdivided sub-region is divided further is again determined by whether an edge is detected. This process may be repeated until no edges are detected in the re-divided sub-regions of the query image and of each of the plurality of images, but preferably only until the query image and the plurality of images are divided into 32×32 blocks. The edge detection uses the Canny edge detection algorithm presented in Canny, J., "A Computational Approach to Edge Detection," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. PAMI-8, no. 6, pp. 679-698, 1986, as shown in FIG. 2.

FIG. 2 is a view for explaining an edge detection method according to an embodiment of the present invention, using the Canny edge detection algorithm described above.

Referring to FIG. 2, the Canny edge detection algorithm may be applied to each of the query image and the plurality of images, and proceeds in two stages. In the first stage, to detect edges in a sub-region N formed by dividing a given image I into 2×2 blocks, a threshold range from a first threshold value T1 to a second threshold value T2 is set. The first threshold value T1 is the minimum of the threshold range and the second threshold value T2 is its maximum, and both may be expressed as gray levels.

Thereafter, in the second stage, edges are detected in each sub-region N contained in the image I. Specifically, the gray level E of each pixel is calculated, and if the pixel's gray level E falls within the threshold range T1 to T2 set in the first stage, an edge is determined to have been detected. The sub-region N in which the edge was detected is then subdivided into 2×2 blocks to form subdivided sub-regions x, in which edges are detected again.

If the gray level E of each pixel of a sub-region N or subdivided sub-region x falls outside the threshold range T1 to T2 set in the first stage, it is determined that no edge is detected. This two-stage edge detection process is repeated until no edge is detected in any sub-region N or subdivided sub-region x, but preferably only until the entire image I has been divided into 32×32 blocks.
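
As an illustration of this two-stage procedure, the following Python sketch combines the recursive 2×2 split with OpenCV's Canny detector. It is a minimal sketch, not the patent's implementation: the function name, the concrete threshold values T1 = 100 and T2 = 200, and the use of cv2.Canny as the edge test are illustrative assumptions; only the 2×2 split rule, the T1 to T2 threshold range, and the 32×32 stopping condition come from the text above.

```python
import cv2

def quadtree_split(gray, t1=100, t2=200, max_blocks=32):
    """Recursively divide a uint8 grayscale image into 2x2 blocks until a
    block contains no Canny edges or the image reaches a max_blocks x
    max_blocks grid. Returns (y0, y1, x0, x1) bounds of the sub-regions.
    t1/t2 stand for the T1/T2 threshold range; the values are illustrative."""
    h, w = gray.shape
    min_h, min_w = h // max_blocks, w // max_blocks
    regions = []

    def split(y0, y1, x0, x1):
        block = gray[y0:y1, x0:x1]
        has_edges = cv2.Canny(block, t1, t2).any()
        too_small = (y1 - y0) <= min_h or (x1 - x0) <= min_w
        if not has_edges or too_small:
            regions.append((y0, y1, x0, x1))  # stop: no edge, or 32x32 limit
            return
        ym, xm = (y0 + y1) // 2, (x0 + x1) // 2  # re-divide into 2x2 blocks
        for bounds in [(y0, ym, x0, xm), (y0, ym, xm, x1),
                       (ym, y1, x0, xm), (ym, y1, xm, x1)]:
            split(*bounds)

    ym, xm = h // 2, w // 2  # initial 2x2 split: always >= 4 sub-regions
    for bounds in [(0, ym, 0, xm), (0, ym, xm, w),
                   (ym, h, 0, xm), (ym, h, xm, w)]:
        split(*bounds)
    return regions
```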

Meanwhile, once the query image and the plurality of images have been divided into blocks to form sub-regions, a feature vector is detected in each sub-region of the query image and the plurality of images, and a semantic concept is assigned (S140). Here, the feature vector comprises the color information and texture information of the pixels contained in the sub-region: the color and texture information of each sub-region of the query image and of the plurality of images is detected, and a semantic concept is assigned accordingly. A semantic concept is the meaning that a sub-region expresses. For example, the semantic concepts in natural scene images such as the query image and the plurality of images of the present invention include sky, water, grass, sand, rock, trunk, flower, and foliage, that is, the visual meanings expressed in a natural scene image.

The color information of the sub-regions of the query image and of each of the plurality of images may be detected by analyzing the colors of the pixels in each sub-region and generating a color histogram from them. The color may be defined in a three-dimensional color space such as RGB (Red, Green, Blue), HSV (Hue, Saturation, Value), or HSB (Hue, Saturation, Brightness); in particular, images stored as JPEG, BMP, or GIF can be expressed in the RGB color space. The colors of the pixels in each sub-region can therefore be collected into a color histogram and analyzed.
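
For example, a per-channel color histogram of a sub-region could be computed as below (a sketch assuming OpenCV; the bin count and the normalization are illustrative choices not fixed by the text):

```python
import cv2
import numpy as np

def color_histogram(region, bins=8):
    """Concatenated per-channel histogram of a uint8 color sub-region.
    OpenCV stores channels in BGR order; 8 bins per channel is illustrative."""
    hists = [cv2.calcHist([region], [ch], None, [bins], [0, 256])
             for ch in range(3)]
    vec = np.concatenate(hists).ravel()
    return vec / max(vec.sum(), 1e-9)  # normalize out the sub-region size
```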

The texture information of each sub-region may be detected by analyzing the pattern of the pixels contained in the sub-region to capture the structural arrangement of the natural scene. The texture information for each sub-region may be extracted by the method of Tamura, H., Mori, S., and Yamawaki, T., "Texture Features Corresponding to Visual Perception," IEEE Transactions on Systems, Man, and Cybernetics 8(6), 1978.

As described above, a semantic concept can be assigned to a sub-region using the detected color information and texture information. For example, if the color information of a sub-region is in the blue range, the sub-region is a candidate for the semantic concept of either water (such as sea or lake) or sky; if its texture information indicates the sea, the semantic concept "water" is finally assigned to that sub-region. Such semantic concepts can be assigned through machine learning. In particular, the semantic concept for each sub-region may be assigned by classification with a k-NN (k-Nearest Neighbor) classifier, as proposed in Duda, R. O., Hart, P. E., and Stork, D. G., Pattern Classification, 2nd Edition, Wiley-Interscience, 2000.
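
A k-NN assignment along these lines could look like the following sketch (scikit-learn assumed; the labeled training regions, the feature layout, and k = 5 are assumptions for illustration):

```python
from sklearn.neighbors import KNeighborsClassifier

# The eight semantic concepts named in the text.
CONCEPTS = ["sky", "water", "grass", "sand", "rock", "trunk", "flower", "foliage"]

def train_concept_classifier(features, labels, k=5):
    """Fit k-NN on labeled example sub-regions.
    features: (n_samples, n_dims) color+texture vectors;
    labels: index into CONCEPTS for each labeled example."""
    clf = KNeighborsClassifier(n_neighbors=k)
    clf.fit(features, labels)
    return clf

def assign_concepts(clf, region_features):
    """Assign one semantic concept to each sub-region feature vector."""
    return [CONCEPTS[i] for i in clf.predict(region_features)]
```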

Next, a concept occurrence vector (COV) is calculated using the semantic concepts assigned to the sub-regions of the query image and of each of the plurality of images (S150). The concept occurrence vector records the proportion that each semantic concept assigned to the sub-regions occupies in the query image and in each of the plurality of images as a whole; that is, it shows how much of each image is covered by the semantic concepts of sky, water, grass, sand, rock, trunk, flower, foliage, and so on.

Such a concept occurrence vector may be calculated using Equation 1 below.

[Equation 1: rendered as an image in the original publication]

In Equation 1, N is the semantic concept and n is the number of sub-regions. The method of calculating the concept occurrence vector using Equation 1 will be described in more detail with reference to FIG. 3.

FIGS. 3(a) and 3(b) are diagrams for explaining a method of calculating a concept occurrence vector according to an embodiment of the present invention. As shown in FIG. 3(b), the image shown in FIG. 3(a) is divided into sub-regions, a semantic concept is assigned to each sub-region, and the concept occurrence vector is calculated.

As shown in FIG. 3(b), the concept occurrence vector of the image shown in FIG. 3(a), over the eight semantic concepts (sky, water, grass, sand, rock, trunk, flower, and foliage), is [24.5, 40.2, 0, 28.5, 6.8, 0, 0, 0]. The unit of the concept occurrence vector may be percent, and the percentages may also be converted to decimal fractions.
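
Because the quadtree split produces sub-regions of different sizes, the percentages above are most naturally read as area proportions. The sketch below computes a COV under that area-weighted reading; since Equation 1 itself appears only as an image, this interpretation is an assumption, not the patent's exact formula.

```python
import numpy as np

CONCEPTS = ["sky", "water", "grass", "sand", "rock", "trunk", "flower", "foliage"]

def concept_occurrence_vector(regions, labels, concepts=CONCEPTS):
    """COV as the share of total image area covered by each concept, in %.
    regions: (y0, y1, x0, x1) bounds from the quadtree split;
    labels: the concept assigned to each sub-region."""
    cov = np.zeros(len(concepts))
    for (y0, y1, x0, x1), label in zip(regions, labels):
        cov[concepts.index(label)] += (y1 - y0) * (x1 - x0)
    return 100.0 * cov / cov.sum()
```

Applied to the FIG. 3 example, such a function would return a vector of the form [24.5, 40.2, 0, 28.5, 6.8, 0, 0, 0].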

The image illustrated in FIG. 3 may be either the query image or any one of the plurality of images, and the method illustrated in FIG. 3 is applied to both the query image and the plurality of images.

Meanwhile, a semantic graph is formed for the query image and for each of the plurality of images using the concept occurrence vectors calculated in S150 (S160). The semantic graph shows how much of each semantic concept an image contains: it connects an image, via edge lines, to semantic vertices representing the semantic concepts contained in that image. The weight of each semantic concept contained in the query image or in one of the plurality of images is calculated and matched to the corresponding edge line connecting the image to its semantic vertex. Such a semantic graph may be expressed as Equation 2 below.

Equation 2: G = (V, E)

In Equation 2, G denotes the semantic graph, V denotes the set of semantic vertices, and E denotes the set of edge lines; the semantic graph G connects the plurality of images and the query image to the semantic vertices V by the edge lines E.

Referring to the semantic graph illustrated in FIG. 4, the first to nth images are connected by edge lines to those of the first to eighth semantic vertices (1 to 8) that represent the semantic concepts contained in each image. The first semantic vertex (1) represents the semantic concept of sky, the second (2) water, the third (3) grass, the fourth (4) sand, the fifth (5) rock, the sixth (6) trunk, the seventh (7) flower, and the eighth (8) foliage. The order of the first to eighth semantic vertices illustrated in FIG. 4 may be changed, and their number may increase with the semantic concepts contained in natural landscape images.
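
The fixed vertex numbering described above can be written down directly; this tiny snippet just restates FIG. 4's assignment:

```python
# Semantic vertices 1-8 as described for FIG. 4.
SEMANTIC_VERTICES = {1: "sky", 2: "water", 3: "grass", 4: "sand",
                     5: "rock", 6: "trunk", 7: "flower", 8: "foliage"}
```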

In addition, each edge line in the semantic graph is matched with a weight for the semantic concept represented by the corresponding semantic vertex in the image.

Referring to FIG. 4, the first image contains the semantic concepts of sky, water, and sand, and is therefore connected by edge lines to the first semantic vertex (1), the second semantic vertex (2), and the fourth semantic vertex (4). Each of these three edge lines is matched with the weight wv_1^1, wv_2^1, or wv_4^1 of the corresponding semantic concept (sky, water, or sand) in the first image. The weights are obtained from the concept occurrence vectors calculated through Equation 1, and may be calculated using Equation 3 below.

[Equation 3: rendered as an image in the original publication]

In Equation 3, wv_i^y is the weight for semantic concept i in image y, v_i^y is the concept occurrence vector entry for semantic concept i in image y, x_y is the number of edge lines connected to image y, and E is the total number of images, including the query image. In Equation 3, image y may be the query image or any one of the plurality of images, and the equation is applied to the query image and to each of the plurality of images.
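
Since Equation 3 is reproduced only as an image, the sketch below adopts one plausible reading of the variables defined above: each nonzero COV entry v_i^y is scaled by an inverse-frequency factor log(E / x_y). This weighting is an assumption for illustration, not the patent's confirmed formula.

```python
import math

def build_semantic_graph(covs):
    """Build {image_key: {concept: weight}} from per-image COV dictionaries.
    covs: {image_key: {concept: occurrence value}}.
    Assumed reading of Equation 3: weight = v_i^y * log(E / x_y), where
    x_y is the number of edge lines (nonzero concepts) of image y and
    E is the total number of images including the query."""
    E = len(covs)
    graph = {}
    for key, cov in covs.items():
        edges = {c: v for c, v in cov.items() if v > 0}  # one edge line per concept
        x_y = max(len(edges), 1)
        # max() guards the log against the degenerate case E < x_y.
        graph[key] = {c: v * math.log(max(E, x_y) / x_y) for c, v in edges.items()}
    return graph
```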

Thereafter, once the semantic graphs are formed, images are retrieved by comparing the semantic graph of the query image with the semantic graph of each of the plurality of images to determine similarity (S170). The similarity between the semantic graph of the query image and that of each of the plurality of images may be determined using Equation 4 below.

[Equation 4: rendered as an image in the original publication]

In Equation 4, p is any one of the plurality of images, q is the query image, wv_i^y is the weight for semantic concept i of image y, p_i is semantic concept i of image p, and q_i is semantic concept i of the query image.

In Equation 4, wv_i^y · p_i multiplies the weight for semantic concept i of one of the plurality of images by semantic concept i of that image; wv_i^y and p_i should refer to the same image.

Similarly, wv_i^y · q_i multiplies the weight for semantic concept i of the query image by semantic concept i of the query image; wv_i^y and q_i should refer to the same query image.

Using Equation 4, the similarity between the query image and each of the plurality of images can be calculated as the dot product of the resulting vectors wv_i^y · p_i and wv_i^y · q_i.
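
Following that dot-product description, a sketch of the similarity computation (here p_i and q_i are taken to be the COV entries, and the normalization that makes scores comparable across images is an added assumption):

```python
import numpy as np

def graph_similarity(wv_p, p_vec, wv_q, q_vec):
    """Similarity of image p to query q as the dot product of the two
    weighted concept vectors (wv_i^y * p_i) and (wv_i^y * q_i).
    The cosine-style normalization is an assumption, not from the text."""
    a = np.asarray(wv_p, dtype=float) * np.asarray(p_vec, dtype=float)
    b = np.asarray(wv_q, dtype=float) * np.asarray(q_vec, dtype=float)
    denom = np.linalg.norm(a) * np.linalg.norm(b)
    return float(a @ b) / denom if denom else 0.0
```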

FIG. 5 is a diagram illustrating semantic-based image search results. When the first to fourth query images are input to the search apparatus, the plurality of images stored in the database are read, and images having high similarity to the first to fourth query images are retrieved using the method provided in FIG. 1.

As shown in FIG. 5, the first query image q1 contains the semantic concepts of water (sea) and sky; a concept occurrence vector is calculated for these semantic concepts and a semantic graph is formed. The search apparatus likewise calculates a concept occurrence vector and forms a semantic graph for each of the plurality of read images. Images containing semantic concepts similar to those of the first query image q1, with weights similar to those of each semantic concept contained in q1, are then retrieved, and the retrieved images may be presented in order of decreasing similarity. Images similar to the second to fourth query images q2, q3, and q4 are retrieved and presented in the same manner.
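
Putting the pieces together, ranking the database against a query could look like the sketch below, which reuses graph_similarity from the previous sketch and assumes the dict-of-dicts layout of build_semantic_graph; it returns images in order of decreasing similarity, as in FIG. 5.

```python
def rank_by_similarity(query_key, graph, covs, top_k=5):
    """Order database images by similarity to the query image, best first.
    graph: {image_key: {concept: weight}}; covs: {image_key: {concept: value}}."""
    concepts = sorted({c for edges in graph.values() for c in edges})
    def vec(key, table):
        # Dense vector over the shared concept order; missing concepts -> 0.
        return [table[key].get(c, 0.0) for c in concepts]
    scored = [(key, graph_similarity(vec(key, graph), vec(key, covs),
                                     vec(query_key, graph), vec(query_key, covs)))
              for key in graph if key != query_key]
    return sorted(scored, key=lambda kv: kv[1], reverse=True)[:top_k]
```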

According to the present invention, by identifying the semantic concepts contained in the query image and expressed through it using the method described above, images with high semantic similarity can be retrieved, thereby improving search accuracy.

Although the preferred embodiments of the present invention have been illustrated and described above, the present invention is not limited to these specific embodiments. Various modifications can be made by those of ordinary skill in the art to which the invention belongs without departing from the spirit of the invention claimed in the claims, and such modifications should not be understood apart from the technical spirit or scope of the present invention.

FIG. 1 is a flowchart illustrating a semantic-based image retrieval method according to an embodiment of the present invention;

FIG. 2 is a view for explaining an edge detection method according to an embodiment of the present invention;

FIGS. 3(a) and 3(b) are views for explaining a method of calculating a concept occurrence vector according to an embodiment of the present invention;

FIG. 4 is a view for explaining a method of generating a semantic graph according to an embodiment of the present invention; and

FIG. 5 is a diagram illustrating semantic-based image search results according to an embodiment of the present invention.

Claims (10)

1. A semantic-based image retrieval method comprising: a first step of reading a plurality of images stored in a database when at least one query image is input, and dividing the query image and the plurality of images to form at least four sub-regions in each; a second step of detecting a feature vector for each sub-region of the query image and of the plurality of images, and assigning, using the detected feature vectors, semantic concepts indicating the meanings contained in the sub-regions; a third step of calculating a concept occurrence vector (COV) for determining the proportion of each assigned semantic concept in the query image and in each of the plurality of images as a whole; a fourth step of forming, using the calculated concept occurrence vectors, a semantic graph connecting the query image and the plurality of images to semantic vertices each expressing at least one semantic concept; and a fifth step of comparing the semantic graph of the query image with the semantic graph of each search target image to determine similarity, and retrieving images similar to the query image.

2. The method of claim 1, wherein the semantic concepts include sky, water, grass, sand, rock, trunk, flower, and foliage.

3. The method of claim 2, wherein each of the query image and the plurality of images is a natural scene image containing the semantic concept of at least one of the sky, the water, the grass, the sand, the rock, the trunk, the flower, and the foliage.

4. The method of claim 1, wherein the first step comprises: a first process of dividing the query image into 2×2 blocks to form four sub-regions; a second process of determining whether edges are detected in the four sub-regions of the query image; and a third process of, when an edge is detected in any one of the four sub-regions, re-dividing the sub-region in which the edge was detected and repeating the edge detection until no edge is detected.

5. The method of claim 4, wherein the edge detection in the sub-regions uses a Canny edge detection algorithm.

6. The method of claim 1, wherein the second step comprises: a first process of detecting color information of each of the sub-regions; a second process of detecting texture information of each of the sub-regions; and a third process of assigning a semantic concept to each of the sub-regions using the detected color information and texture information.

7. The method of claim 1, wherein in the third step the concept occurrence vector is calculated using the following equation:
[Equation: rendered as an image in the original publication; corresponds to Equation 1 of the description]
(N is the semantic concept, n is the number of sub-regions)
8. The method of claim 1, wherein the fourth step comprises: a first process of calculating weights for the semantic concepts contained in each of the query image and the plurality of images using the calculated concept occurrence vectors; and a second process of connecting each of the query image and the plurality of images to the semantic vertices using edge lines, and matching each calculated weight to the corresponding edge line.

9. The method of claim 8, wherein the weight for the semantic concept contained in each of the query image and the plurality of images is calculated using the following equation:
[Equation: rendered as an image in the original publication; corresponds to Equation 3 of the description]
(wv_i^y is the weight for semantic concept i of image y among the query image and the plurality of images, v_i^y is the concept occurrence vector entry for semantic concept i of image y, x_y is the number of edge lines connected to image y, and E is the total number of the query image and the plurality of images)
10. The method of claim 9, wherein in the fifth step the similarity between the semantic graph of the query image and the semantic graphs of the plurality of images is determined using the following equation:
[Equation: rendered as an image in the original publication; corresponds to Equation 4 of the description]
(p is one of the plurality of images, q is the query image, wv_i^y is the weight for semantic concept i of image y, p_i is semantic concept i of the image p, and q_i is semantic concept i of the query image)
KR1020090076704A 2009-08-19 2009-08-19 Semantic based image retrieval method KR20110019117A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
KR1020090076704A KR20110019117A (en) 2009-08-19 2009-08-19 Semantic based image retrieval method

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
KR1020090076704A KR20110019117A (en) 2009-08-19 2009-08-19 Semantic based image retrieval method

Related Child Applications (1)

Application Number Title Priority Date Filing Date
KR1020110055512A Division KR101142163B1 (en) 2011-06-09 2011-06-09 Semantic based image retrieval method

Publications (1)

Publication Number Publication Date
KR20110019117A true KR20110019117A (en) 2011-02-25

Family

ID=43776539

Family Applications (1)

Application Number Title Priority Date Filing Date
KR1020090076704A KR20110019117A (en) 2009-08-19 2009-08-19 Semantic based image retrieval method

Country Status (1)

Country Link
KR (1) KR20110019117A (en)

Cited By (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR101142163B1 (en) * 2011-06-09 2012-05-03 인하대학교 산학협력단 Semantic based image retrieval method
US10820213B2 (en) 2016-11-17 2020-10-27 Samsung Electronics Co., Ltd. Method and apparatus for analyzing communication environment based on property information of an object
US10887029B2 (en) 2016-11-17 2021-01-05 Samsung Electronics Co., Ltd. Method and apparatus for analysing communication channel in consideration of material and contours of objects
US11153763B2 (en) 2016-11-17 2021-10-19 Samsung Electronics Co., Ltd. Method and device for analyzing communication channels and designing wireless networks, in consideration of information relating to real environments
US11277755B2 (en) 2016-11-17 2022-03-15 Samsung Electronics Co., Ltd. Method and apparatus for analyzing communication environment based on property information of an object

Similar Documents

Publication Publication Date Title
US7925650B2 (en) Image management methods, image management systems, and articles of manufacture
CN101551823B (en) Comprehensive multi-feature image retrieval method
Xiao et al. Efficient shadow removal using subregion matching illumination transfer
CN111325271B (en) Image classification method and device
CN105069042A (en) Content-based data retrieval methods for unmanned aerial vehicle spying images
CN108334642A (en) A kind of similar head portrait searching system
CN101930461A (en) Digital image visualized management and retrieval for communication network
CN103971113A (en) Image recognition method, electronic device and computer program product
CN114332889A (en) Text box ordering method and text box ordering device for text image
CN104732534B (en) Well-marked target takes method and system in a kind of image
KR20110019117A (en) Semantic based image retrieval method
CN115861409B (en) Soybean leaf area measuring and calculating method, system, computer equipment and storage medium
CN111091129A (en) Image salient region extraction method based on multi-color characteristic manifold sorting
CN113988147A (en) Multi-label classification method and device for remote sensing image scene based on graph network, and multi-label retrieval method and device
CN111563462A (en) Image element detection method and device
KR101142163B1 (en) Semantic based image retrieval method
JP6387026B2 (en) Book searching apparatus, method and program
Yu et al. Mean shift based clustering of neutrosophic domain for unsupervised constructions detection
CN108304588B (en) Image retrieval method and system based on k neighbor and fuzzy pattern recognition
Kaur et al. An edge detection technique with image segmentation using ant colony optimization: A review
CN109299295A (en) Indigo printing fabric image database search method
Losson et al. CFA local binary patterns for fast illuminant-invariant color texture classification
CN110019898A (en) A kind of animation image processing system
Chigateri et al. CBIR algorithm development using RGB histogram-based block contour method to improve the retrieval performance
Chen et al. User tailored colorization using automatic scribbles and hierarchical features

Legal Events

Date Code Title Description
A201 Request for examination
A107 Divisional application of patent
WITB Written withdrawal of application