WO2017215194A1 - Image processing method and device, and associated storage medium - Google Patents


Info

Publication number
WO2017215194A1
WO2017215194A1 (PCT/CN2016/107004; CN2016107004W)
Authority
WO
WIPO (PCT)
Prior art keywords
image
grayscale
black
scaled
primary
Prior art date
Application number
PCT/CN2016/107004
Other languages
English (en)
Chinese (zh)
Inventor
刘冬梅
刘凤鹏
Original Assignee
中兴通讯股份有限公司
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 中兴通讯股份有限公司
Publication of WO2017215194A1

Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00 - Information retrieval; Database structures therefor; File system structures therefor
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T5/00 - Image enhancement or restoration
    • G06T5/50 - Image enhancement or restoration using two or more images, e.g. averaging or subtraction
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00 - Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/50 - Information retrieval of still image data
    • G06F16/58 - Retrieval characterised by using metadata, e.g. metadata not derived from the content or metadata generated manually
    • G06F16/583 - Retrieval characterised by using metadata automatically derived from the content
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T3/00 - Geometric image transformations in the plane of the image
    • G06T3/40 - Scaling of whole images or parts thereof, e.g. expanding or contracting
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 - Indexing scheme for image analysis or image enhancement
    • G06T2207/10 - Image acquisition modality
    • G06T2207/10004 - Still image; Photographic image
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 - Indexing scheme for image analysis or image enhancement
    • G06T2207/20 - Special algorithmic details
    • G06T2207/20212 - Image combination
    • G06T2207/20221 - Image fusion; Image merging

Definitions

  • the present invention relates to the field of communications technologies, and in particular, to an image processing method, an apparatus, and a storage medium.
  • the photographing effect of a smart terminal device can be improved by upgrading the existing optical device; however, this approach is constrained by factors such as device volume, is costly, and yields only limited improvement.
  • a commonly used method of improving the image effect of a smart terminal device by optimizing software algorithms generally improves only a single effect and processes only the image currently captured by the user, without exploiting other available resources to improve the overall quality of the image.
  • embodiments of the present invention provide an image processing method, an apparatus, and a storage medium, which can improve the photographing effect of a terminal device by using resources such as a network, without increasing hardware cost.
  • an embodiment of the present invention provides an image processing method, where the method includes:
  • the original image is image-fused with the final image to obtain an improved original image.
  • the original image and the primary selected image are scaled in size to obtain a scaled original image and a scaled primary selected image;
  • the gray value of each pixel in the grayscale image of the scaled original image is compared with the grayscale average of all pixels in that grayscale image, thereby converting the grayscale image of the scaled original image into a first black and white image;
  • when a pixel gray value in the grayscale image of the scaled original image is greater than or equal to the grayscale average of all pixels in that grayscale image, the pixel is marked as the first state;
  • when a pixel gray value in the grayscale image of the scaled original image is smaller than the grayscale average of all pixels in that grayscale image, the pixel is marked as the second state.
  • when a pixel gray value in the grayscale image of the scaled primary selected image is greater than or equal to the grayscale average of all pixels in that grayscale image, the pixel is marked as the first state;
  • when a pixel gray value in the grayscale image of the scaled primary selected image is smaller than the grayscale average of all pixels in that grayscale image, the pixel is marked as the second state.
  • when the number of bits by which the first comparison value of the first black and white image differs from the second comparison value of the second black and white image is less than or equal to a first preset threshold, the scaled image corresponding to the second black and white image is a final image that satisfies the preset condition;
  • otherwise, the scaled image corresponding to the second black and white image does not satisfy the preset condition.
  • an embodiment of the present invention further provides an image processing apparatus, where the apparatus includes:
  • the acquiring module is configured to acquire an original image and location information of the original image;
  • the searching module is configured to search, in the pre-stored image library, for primary selected images whose location information is the same as that of the original image;
  • the screening module is configured to compare an image characteristic of the original image with an image characteristic of the primary selected image, and select a finalized image that meets a preset condition from the primary selected image;
  • the acquiring module is further configured to perform image fusion on the original image and the final image to obtain an improved original image.
  • the acquiring module is configured to scale the original image and the primary selected image in size to obtain a scaled original image and a scaled primary selected image;
  • the conversion module is configured to convert the grayscale image of the scaled original image and the grayscale image of the scaled primary selected image into corresponding black and white images according to a preset policy; wherein the black and white image corresponding to the grayscale image of the scaled original image is a first black and white image, and the black and white image corresponding to the grayscale image of the scaled primary selected image is a second black and white image;
  • the acquiring module is further configured to compare a first comparison value of the first black and white image with a second comparison value of the second black and white image, and to obtain, from the scaled primary selected images, a final image that satisfies a preset condition.
  • the conversion module is configured to compare the gray value of each pixel in the grayscale image of the scaled original image with the grayscale average of all pixels in that grayscale image, thereby converting the grayscale image of the scaled original image into a first black and white image;
  • similarly, the grayscale image of the scaled primary selected image is converted into a second black and white image.
  • the marking module is configured to: when a pixel gray value in the grayscale image of the scaled original image is greater than or equal to the grayscale average of all pixels in that grayscale image, mark the pixel as the first state;
  • when a pixel gray value in the grayscale image of the scaled original image is smaller than the grayscale average of all pixels in that grayscale image, mark the pixel as the second state.
  • the marking module is further configured to: when a pixel gray value in the grayscale image of the scaled primary selected image is greater than or equal to the grayscale average of all pixels in that grayscale image, mark the pixel as the first state;
  • when a pixel gray value in the grayscale image of the scaled primary selected image is smaller than the grayscale average of all pixels in that grayscale image, mark the pixel as the second state.
  • the acquiring module is configured to: when the number of bits by which the first comparison value of the first black and white image differs from the second comparison value of the second black and white image is less than or equal to a first preset threshold, determine that the scaled image corresponding to the second black and white image is a final image that meets the preset condition;
  • otherwise, determine that the scaled image corresponding to the second black and white image does not meet the preset condition.
  • with the image processing method, device, and storage medium of the embodiments of the present invention, the terminal device obtains the original image and its location information, images having the same location information are found in an image library such as the network, the found images are further screened to obtain the image most similar to the image acquired by the terminal device, and image fusion is finally performed to obtain the best-effect image, which is stored in the terminal device. Resources such as the network are fully utilized without increasing hardware cost, so that the image captured by the terminal device combines the advantages of multiple images and the photographing effect of the terminal device is improved.
  • FIG. 1 is a schematic flow chart of an image processing method according to Embodiment 1 of the present invention.
  • FIG. 2 is a schematic flowchart of a method for obtaining a final image according to Embodiment 1 of the present invention
  • FIG. 3 is a grayscale image obtained after gradation transformation of a color image according to Embodiment 1 of the present invention;
  • FIG. 4 is a black and white image obtained by converting the grayscale image of FIG. 3 according to Embodiment 1 of the present invention;
  • FIG. 5 is a schematic flow chart of a method for converting a grayscale image into a black and white image according to Embodiment 1 of the present invention;
  • FIG. 6 is a schematic structural diagram of an image processing apparatus according to Embodiment 2 of the present invention.
  • the basic idea of the embodiments of the present invention is as follows: the positioning function of the terminal device is first enabled through the device's own configuration, and a photograph is then taken with the terminal device. Candidate photographs are initially screened from the network according to the positioning information to reduce the subsequent workload; the desired high-quality images are then further selected from the initially screened images, and the selected high-quality images are finally fused with the original image to obtain better image data, thereby substantially improving the photographing effect.
  • an embodiment of the present invention provides an image processing method, which may include:
  • the terminal device having the photographing function is used to take a photograph to obtain an original image.
  • the positioning function of the terminal device, such as the GPS positioning function, is enabled before photographing, and the location information of the original image is acquired while the photograph is taken.
  • the location information of the original image may be latitude and longitude coordinates or a specific place name or the like.
  • the positioning function is used to acquire the location information of the place where the original image is captured, which is then used to find images having the same location information as the original image.
  • the pre-stored image library may be an image that is found from the network.
  • the user can connect to the network through the terminal device and search the network for images whose location information is the same as that of the original image; alternatively, the original image's location information can be used directly in the background to search the network for images of the same location. The primary selected images are obtained in either way.
  • the pre-stored image library may also be an image with location information stored in a storage function device such as a server, a cloud disk, a personal computer, or a mobile hard disk.
  • the acquired primary selected images can be stored in a specific server. For example, when primary selected images are obtained from the network, considering the limited storage space of the terminal device, they can be stored in a specific server for further screening and comparison with the original image.
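As a concrete illustration of this retrieval step, the sketch below filters a hypothetical pre-stored library by the distance between each entry's stored coordinates and the original image's coordinates. The library structure, the field names, and the 0.5 km radius are assumptions for illustration only; the embodiment requires merely that images with the same location information be found.

```python
from math import radians, sin, cos, asin, sqrt

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance in kilometres between two latitude/longitude points."""
    lat1, lon1, lat2, lon2 = map(radians, (lat1, lon1, lat2, lon2))
    a = sin((lat2 - lat1) / 2) ** 2 + cos(lat1) * cos(lat2) * sin((lon2 - lon1) / 2) ** 2
    return 2 * 6371 * asin(sqrt(a))

def select_primary_images(library, origin, radius_km=0.5):
    """Keep library entries whose stored location is close to the original image's location."""
    lat0, lon0 = origin
    return [entry for entry in library
            if haversine_km(lat0, lon0, entry["lat"], entry["lon"]) <= radius_km]

# Hypothetical pre-stored image library with per-image location metadata.
library = [
    {"name": "a.jpg", "lat": 39.9087, "lon": 116.3975},   # close to the shooting location
    {"name": "b.jpg", "lat": 31.2304, "lon": 121.4737},   # a different city entirely
]
primary = select_primary_images(library, (39.9090, 116.3970))
```

Matching by latitude/longitude proximity rather than exact equality reflects the text's note that location information may be latitude and longitude coordinates or a place name.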
  • the image characteristics of the original image are compared with the image characteristics of the primary image to obtain a final image that satisfies a preset condition.
  • the original image and the primary selected images are both color images. To utilize their image characteristics, the original image and the primary selected images are first scaled, the scaled images are then converted into grayscale images, the grayscale images are converted into corresponding black and white images, and finally the black and white images are analyzed to obtain an image satisfying the preset condition.
  • the image that satisfies the preset condition refers to an image that is further filtered from the primary image and that is closest to the original image.
  • step S103 may include: S1031 to S1034:
  • S1031 Scaling the original image and the primary selected image to obtain a scaled original image and a scaled primary selected image.
  • since most current images have a 4:3 aspect ratio, and weighing the amount of information retained against search speed, the original image captured by the terminal device and the primary selected images obtained from the pre-stored image library are uniformly scaled to 16 × 12; a 16 × 12 image consists of 192 pixels.
  • note that the scaled original image captured by the terminal device and the scaled primary selected images obtained from the pre-stored image library are still color images at this point.
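The uniform 16 × 12 scaling described above can be sketched as a simple nearest-neighbour resize. The embodiment does not prescribe a particular resampling method, so nearest-neighbour is an assumption made here for brevity:

```python
def scale_image(pixels, out_w=16, out_h=12):
    """Nearest-neighbour resize of a 2-D pixel grid to out_w x out_h.

    `pixels` is a list of rows; each entry may be an (R, G, B) tuple,
    so the scaled result remains a color image, as the text notes.
    """
    in_h, in_w = len(pixels), len(pixels[0])
    return [[pixels[y * in_h // out_h][x * in_w // out_w]
             for x in range(out_w)]
            for y in range(out_h)]

# A dummy 64x48 (4:3) "image" whose pixel values encode their coordinates.
src = [[(x % 256, y % 256, 0) for x in range(64)] for y in range(48)]
dst = scale_image(src)
# dst is 12 rows of 16 pixels: 192 pixels in total, matching the 16 x 12 grid in the text.
```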
  • S1032 Perform gradation transformation on the scaled original image and the scaled primary image to obtain a grayscale image of the scaled original image and a grayscale image of the scaled primary image.
  • each image will obtain 192 gray values.
  • since each pixel in a color image is composed of the three primary colors red, green, and blue, directly processing the color image increases computational complexity; the color image is therefore first converted into a grayscale image.
  • the color image can be converted into a grayscale image by several methods; for example, the integer method:
  • Gray = (R × 30 + G × 59 + B × 11) / 100
  • the operator >> denotes a right shift: shifting right by one bit corresponds to division by 2, and shifting right by n bits corresponds to division by 2^n. Since the shift algorithm avoids an explicit division, it is faster than the integer algorithm. Shifting right by 8 bits is equivalent to dividing by 2^8, so the coefficients of the floating-point algorithm must be multiplied by 2^8 to preserve the result of the original floating-point operation.
  • the above example uses coefficients of 8-bit precision, multiplying the floating-point coefficients by 2^8, that is, 256.
  • a shift algorithm can thus be derived from the floating-point algorithm by this transformation; precisions from 2-bit up to 20-bit are possible, with 8-bit precision generally preferable.
  • R, G, and B in the original RGB (R, G, B) are uniformly replaced by Gray to form a new color space RGB (Gray, Gray, Gray).
  • FIG. 3 shows the grayscale image obtained after gradation transformation of the color image.
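A minimal sketch of the two conversion variants discussed above: the integer method with the document's 30/59/11 coefficients, and an 8-bit-precision shift method. The shift coefficients 77/151/28 are an assumption, obtained by multiplying the floating-point coefficients 0.30/0.59/0.11 by 256 and rounding, as the text describes:

```python
def gray_int(r, g, b):
    """Integer method: Gray = (R*30 + G*59 + B*11) / 100."""
    return (r * 30 + g * 59 + b * 11) // 100

def gray_shift(r, g, b):
    """Shift method with 8-bit precision.

    Coefficients are the floating-point weights scaled by 2**8 = 256 and rounded
    (0.30 -> 77, 0.59 -> 151, 0.11 -> 28); the sum is then shifted right by 8,
    i.e. divided by 256, avoiding an explicit division.
    """
    return (r * 77 + g * 151 + b * 28) >> 8

g1 = gray_int(200, 100, 50)
g2 = gray_shift(200, 100, 50)
# Replacing R, G and B by the computed Gray yields the RGB(Gray, Gray, Gray) pixel.
pixel = (g1, g1, g1)
```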
  • the method of converting the above color image into a grayscale image is applied to the scaled original image and the scaled primary image to obtain a grayscale image corresponding to the scaled original image and the scaled primary image.
  • the black and white image corresponding to the grayscale image of the scaled original image is a first black and white image
  • the black and white image corresponding to the grayscale image of the zoomed primary image is a second black and white image
  • FIG. 4 is an image obtained by converting the grayscale image shown in FIG. 3 into a black and white image.
  • step S1033 includes S10331 and S10332:
  • step S10331 includes:
  • when a pixel gray value in the grayscale image of the scaled original image is greater than or equal to the grayscale average of all pixels in that grayscale image, the pixel is marked as the first state.
  • when a pixel gray value in the grayscale image of the scaled original image is smaller than the grayscale average of all pixels in that grayscale image, the pixel is marked as the second state.
  • the first state is represented by "1";
  • the second state is represented by "0";
  • black is represented by "0";
  • white is represented by "1". Thereby, the grayscale image of the scaled original image is converted into a binary-coded first black and white image.
  • step S10332 includes:
  • when a pixel gray value in the grayscale image of the scaled primary selected image is greater than or equal to the grayscale average of all pixels in that grayscale image, the pixel is marked as the first state.
  • when a pixel gray value in the grayscale image of the scaled primary selected image is smaller than the grayscale average of all pixels in that grayscale image, the pixel is marked as the second state.
  • the grayscale image of the zoomed primary image is converted into the binary second black and white image.
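The per-pixel marking in S10331/S10332 amounts to comparing every grey value with the image-wide grayscale average, as in this sketch:

```python
def binarize(gray):
    """Mark each pixel 1 (first state, >= mean) or 0 (second state, < mean).

    `gray` is a grid of grey values; the result is a flat list of 0/1 states,
    i.e. the binary-coded black and white image described in the text.
    """
    flat = [v for row in gray for v in row]
    mean = sum(flat) / len(flat)
    return [1 if v >= mean else 0 for v in flat]

gray = [[10, 200], [30, 220]]   # toy 2x2 "grayscale image"; mean = 115
bits = binarize(gray)
```

For the 16 × 12 grayscale images of the embodiment the same function yields a 192-entry list of states.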
  • the black and white outlines of the original image and a matching primary selected image should be similar; by comparing the first comparison value of the first black and white image of the scaled original image with the second comparison value of the second black and white image of a scaled primary selected image, a final image that meets the preset condition can be obtained.
  • after the grayscale image of the scaled original image is converted into the binary-coded first black and white image, it is recorded as a hexadecimal first comparison value.
  • after the grayscale image of the scaled primary selected image is converted into the binary-coded second black and white image, it is recorded as a hexadecimal second comparison value.
  • the first comparison value and the second comparison value are used to compare the first black and white image of the scaled original image with the second black and white image of the scaled primary selected image, so as to determine the similarity of the two black and white images.
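Recording the 192 pixel states as a hexadecimal comparison value can be sketched as packing the bits into a single integer (48 hex digits for a 16 × 12 image). The text does not specify the bit order, so most-significant-bit-first is an assumption here:

```python
def comparison_value(bits):
    """Pack a list of 0/1 pixel states into a zero-padded hexadecimal string.

    For the 16x12 grid this is a 192-bit value, i.e. 48 hex digits.
    """
    n = 0
    for b in bits:
        n = (n << 1) | b          # append each state as the next lower bit
    width = (len(bits) + 3) // 4  # four bits per hex digit
    return format(n, "0{}x".format(width))

value = comparison_value([1, 0, 1, 1, 0, 0, 0, 1])   # bit pattern 10110001
```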
  • step S1034 includes:
  • when the number of bits by which the first comparison value of the first black and white image differs from the second comparison value of the second black and white image is less than or equal to a first preset threshold, the scaled image corresponding to the second black and white image is a final image that meets the preset condition.
  • otherwise, the scaled image corresponding to the second black and white image does not meet the preset condition.
  • the first preset threshold may be set to 10.
  • when the first comparison value of the first black and white image of the scaled original image differs from the second comparison value of the second black and white image of a scaled primary selected image by 10 bits or fewer, the original image and that primary selected image are similar, and the primary selected image is selected as a final image; when the two comparison values differ by more than 10 bits,
  • the primary selected image is not selected as a final image.
  • the number of acquired final images that satisfy the preset condition may be one or several.
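The screening test above, at most a first preset threshold (here 10) of differing bits between the two hexadecimal comparison values, is a Hamming-distance check on the underlying bit strings:

```python
def hamming(hex_a, hex_b):
    """Number of differing bits between two equal-width hexadecimal comparison values."""
    return bin(int(hex_a, 16) ^ int(hex_b, 16)).count("1")

def is_final(hex_original, hex_primary, threshold=10):
    """A primary selected image qualifies as a final image if at most
    `threshold` bits differ between the two comparison values."""
    return hamming(hex_original, hex_primary) <= threshold

d = hamming("ff", "f0")            # 11111111 vs 11110000: 4 differing bits
ok = is_final("ff", "f0")          # 4 <= 10, so this image would qualify
bad = is_final("ffff" * 12, "0000" * 12)   # all 192 bits differ, so it would not
```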
  • S104 Image fusion of the original image and the final image that meets a preset condition to obtain an improved original image.
  • the original image captured by the terminal device is fused with the one or several final images selected from the pre-stored image library to obtain an improved original image, which is stored in the terminal device.
  • the purpose of image fusion is to synthesize the information of multiple images of the same scene into a single result that is more suitable than any individual input image.
  • commonly used image fusion algorithms include the pixel-level fusion method, the weighted average method, and the wavelet transform method.
  • pixel-level image fusion is the lowest level of image fusion, but it achieves the highest fusion accuracy and can provide details that other levels of fusion cannot. Since these algorithms are mature, they are not detailed here.
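As one concrete example of the weighted average method mentioned above, a per-pixel weighted combination of the original image with the selected final images might look like the following. The 0.6 weight kept by the original image is an arbitrary illustrative choice, not a value from the text:

```python
def fuse(original, finals, w_original=0.6):
    """Weighted-average fusion: the original image keeps weight w_original and
    the selected final images share the remaining weight equally (per pixel)."""
    w_final = (1.0 - w_original) / len(finals)
    fused = []
    for i, px in enumerate(original):
        value = w_original * px + w_final * sum(img[i] for img in finals)
        fused.append(round(value))
    return fused

# Toy single-channel "images" of four pixels each; real images would be
# fused per channel (R, G, B) in the same way.
original = [100, 100, 100, 100]
finals = [[120, 80, 100, 140], [80, 120, 100, 60]]
result = fuse(original, finals)
```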
  • the embodiments of the present invention provide an image processing method in which a terminal device acquires an original image and its location information, images having the same location information are found in an image library such as the network, and the found images are further screened to obtain the image most similar to the image acquired by the terminal device; the images are finally fused and the result is stored in the terminal device. Resources such as the network are fully utilized without increasing hardware cost, so that the image captured by the terminal device combines the advantages of multiple images and the photographing effect of the terminal device is improved.
  • the embodiment of the present invention further provides a computer storage medium, wherein the computer storage medium stores a computer program, and the computer program is used to execute the image processing method according to the first embodiment of the present invention.
  • an embodiment of the present invention provides an image processing apparatus, where the apparatus includes: an obtaining module 601, a searching module 602, and a screening module 603;
  • the acquiring module 601 is configured to acquire an original image and location information of the original image;
  • the searching module 602 is configured to search, in the pre-stored image library, for primary selected images whose location information is the same as that of the original image;
  • the screening module 603 is configured to compare an image characteristic of the original image with an image characteristic of the primary selected image, and select a finalized image that meets a preset condition from the primary selected image;
  • the acquiring module 601 is further configured to perform image fusion on the original image and the final image to obtain an improved original image.
  • the device further includes: a conversion module 604;
  • the acquiring module 601 is configured to scale the original image and the primary selected image in size to obtain a scaled original image and a scaled primary selected image;
  • the conversion module 604 is configured to convert the grayscale image of the scaled original image and the grayscale image of the scaled primary selected image into corresponding black and white images according to a preset policy; wherein the black and white image corresponding to the grayscale image of the scaled original image is a first black and white image, and the black and white image corresponding to the grayscale image of the scaled primary selected image is a second black and white image;
  • the obtaining module 601 is further configured to compare a first comparison value of the first black and white image with a second comparison value of the second black and white image, and to obtain, from the scaled primary selected images, a final image that satisfies a preset condition.
  • the conversion module 604 is configured to compare the gray value of each pixel in the grayscale image of the scaled original image with the grayscale average of all pixels in that grayscale image, thereby converting the grayscale image of the scaled original image into a first black and white image;
  • similarly, the grayscale image of the scaled primary selected image is converted into a second black and white image.
  • the device further includes: a marking module 605;
  • the marking module 605 is configured to: when a pixel gray value in the grayscale image of the scaled original image is greater than or equal to the grayscale average of all pixels in that grayscale image, mark the pixel as the first state;
  • when a pixel gray value in the grayscale image of the scaled original image is smaller than the grayscale average of all pixels in that grayscale image, mark the pixel as the second state.
  • the marking module 605 is further configured to: when a pixel gray value in the grayscale image of the scaled primary selected image is greater than or equal to the grayscale average of all pixels in that grayscale image, mark the pixel as the first state;
  • when a pixel gray value in the grayscale image of the scaled primary selected image is smaller than the grayscale average of all pixels in that grayscale image, mark the pixel as the second state.
  • the acquiring module 601 is configured to: when the number of bits by which the first comparison value of the first black and white image differs from the second comparison value of the second black and white image is less than or equal to a first preset threshold, determine that the scaled image corresponding to the second black and white image is a final image that meets the preset condition;
  • otherwise, determine that the scaled image corresponding to the second black and white image does not meet the preset condition.
  • the obtaining module 601, the searching module 602, the screening module 603, the converting module 604, and the marking module 605 may each be implemented by a Central Processing Unit (CPU), a Micro Processor Unit (MPU), a Digital Signal Processor (DSP), or a Field Programmable Gate Array (FPGA) located in the image processing device 6.
  • the embodiments of the present invention provide an image processing apparatus in which a terminal device acquires an original image and its location information, images having the same location information are found in an image library such as the network, and the found images are further screened to obtain the image most similar to the image acquired by the terminal device; the images are finally fused and the result is stored in the terminal device. Resources such as the network are fully utilized without increasing hardware cost, so that the image captured by the terminal device combines the advantages of multiple images and the photographing effect of the terminal device is improved.
  • embodiments of the present invention can be provided as a method, system, or computer program product. Accordingly, the present invention can take the form of a hardware embodiment, a software embodiment, or a combination of software and hardware. Moreover, the invention can take the form of a computer program product embodied on one or more computer-usable storage media (including but not limited to disk storage and optical storage, etc.) including computer usable program code.
  • the computer program instructions may also be stored in a computer-readable memory capable of directing a computer or other programmable data processing device to operate in a particular manner, such that the instructions stored in the computer-readable memory produce an article of manufacture comprising an instruction device that implements the functions specified in one or more flows of a flowchart and/or one or more blocks of a block diagram.
  • these computer program instructions may also be loaded onto a computer or other programmable data processing device, such that a series of operational steps are performed on the computer or other programmable device to produce computer-implemented processing, whereby the instructions executed on the computer or other programmable device provide steps for implementing the functions specified in one or more flows of a flowchart and/or one or more blocks of a block diagram.
  • in summary, the terminal device obtains the original image and its location information, images with the same location information are searched for in an image library such as the network, the found images are further screened to obtain the image most similar to the image acquired by the terminal device, and image fusion is finally performed to obtain the best-effect image, which is stored in the terminal device. Resources such as the network are fully utilized without increasing hardware cost, so that the image captured by the terminal device combines the advantages of multiple images and the photographing effect of the terminal device is improved.

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Data Mining & Analysis (AREA)
  • Databases & Information Systems (AREA)
  • General Engineering & Computer Science (AREA)
  • Library & Information Science (AREA)
  • Image Processing (AREA)

Abstract

An embodiment of the present invention provides an image processing method, the method comprising: acquiring an original image and location information of the original image; searching, in a pre-stored image library, for primary selected images whose location information is identical to that of the original image; comparing image characteristics of the original image with those of the primary selected images and selecting, from the primary selected images, a final image that meets a preset condition; and fusing the original image with the final image to obtain an improved original image. Embodiments of the present invention also provide an image processing device and a storage medium.
PCT/CN2016/107004 2016-06-14 2016-11-23 Procédé et dispositif de traitement d'image et support de stockage associé WO2017215194A1 (fr)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN201610414797.9 2016-06-14
CN201610414797.9A CN107507158A (zh) 2016-06-14 2016-06-14 一种图像处理方法和装置

Publications (1)

Publication Number Publication Date
WO2017215194A1 true WO2017215194A1 (fr) 2017-12-21

Family

ID=60663436

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2016/107004 WO2017215194A1 (fr) 2016-06-14 2016-11-23 Procédé et dispositif de traitement d'image et support de stockage associé

Country Status (2)

Country Link
CN (1) CN107507158A (fr)
WO (1) WO2017215194A1 (fr)

Families Citing this family (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112532856B (zh) * 2019-09-17 2023-10-17 ZTE Corporation Photographing method, apparatus, and system
US20230056332A1 (en) * 2019-12-13 2023-02-23 Huawei Technologies Co., Ltd. Image Processing Method and Related Apparatus
CN114697543B (zh) * 2020-12-31 2023-05-19 Huawei Technologies Co., Ltd. Image reconstruction method, related apparatus, and system

Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20070271256A1 (en) * 2006-05-16 2007-11-22 Shih-Fu Chang Active context-based concept fusion
CN101566994A * 2008-04-22 2009-10-28 Wang Lei Image and video retrieval method
CN102915326A * 2012-08-30 2013-02-06 Hangzhou Ougen Technology Co., Ltd. Mobile terminal scene recognition system based on GPS and image search technology
CN103440318A * 2013-08-29 2013-12-11 Wang Jingzhou Landscape recognition system for a mobile terminal
CN103514265A * 2013-09-04 2014-01-15 Yipai Vision (Beijing) Digital Technology Co., Ltd. Panoramic interactive digitally-enhanced experiential smart tourism system and method
CN104537608A * 2014-12-31 2015-04-22 Shenzhen ZTE Mobile Telecom Co., Ltd. Image processing method and device
CN105046676A * 2015-08-27 2015-11-11 Shanghai Feixun Data Communication Technology Co., Ltd. Image fusion method and device based on an intelligent terminal

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110473249A * 2019-07-12 2019-11-19 Ping An Puhui Enterprise Management Co., Ltd. Method, apparatus and terminal device for comparing a web user interface with a design draft
CN110648340A * 2019-09-29 2020-01-03 Huizhou University Image processing method and apparatus based on binarization and level sets
CN110648340B * 2019-09-29 2023-03-17 Huizhou University Image processing method and apparatus based on binarization and level sets

Also Published As

Publication number Publication date
CN107507158A (zh) 2017-12-22

Similar Documents

Publication Publication Date Title
WO2017215194A1 (fr) Image processing method and device, and related storage medium
US20220222786A1 (en) Image processing method, smart device, and computer readable storage medium
TWI624182B (zh) Encoding, decoding and representation of high dynamic range images
JP4706415B2 (ja) Imaging device, image recording device, and program
WO2018082185A1 (fr) Image processing method and device
TW202008313A (zh) Image lighting method and apparatus
US20220189029A1 (en) Semantic refinement of image regions
US20130222645A1 (en) Multi frame image processing apparatus
CN111131688B (zh) Image processing method and apparatus, and mobile terminal
CN113810641B (zh) Video processing method and apparatus, electronic device, and storage medium
WO2014187265A1 (fr) Computer storage medium, and photographing processing method and device
US8995784B2 (en) Structure descriptors for image processing
Nam et al. Modelling the scene dependent imaging in cameras with a deep neural network
CN107038695A (zh) Image fusion method and mobile device
WO2023151511A1 (fr) Model training method and apparatus, image demoiréing method and apparatus, and electronic device
CN113822830A (zh) Multi-exposure image fusion method based on enhanced depth perception
Liu et al. Progressive complex illumination image appearance transfer based on CNN
CN114723610A (zh) Intelligent image processing method, apparatus, device, and storage medium
CN114449199B (zh) Video processing method and apparatus, electronic device, and storage medium
JP6977483B2 (ja) Image processing apparatus, image processing method, image processing system, and program
WO2024067461A1 (fr) Image processing method and apparatus, computer device, and storage medium
CN111383289A (zh) Image processing method and apparatus, terminal device, and computer-readable storage medium
WO2017101570A1 (fr) Photo processing method and photo processing system
US10026201B2 (en) Image classifying method and image displaying method
CN115706870A (zh) Video processing method and apparatus, electronic device, and storage medium

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 16905306

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 16905306

Country of ref document: EP

Kind code of ref document: A1