WO2019037038A1 - Image processing method and device, and server - Google Patents

Image processing method and device, and server

Info

Publication number
WO2019037038A1
WO2019037038A1 · PCT/CN2017/098854 · CN2017098854W
Authority
WO
WIPO (PCT)
Prior art keywords
image
target
dimensional model
location
information
Prior art date
Application number
PCT/CN2017/098854
Other languages
English (en)
Chinese (zh)
Inventor
骆磊
Original Assignee
深圳前海达闼云端智能科技有限公司
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 深圳前海达闼云端智能科技有限公司
Priority to PCT/CN2017/098854 (WO2019037038A1)
Priority to CN201780001596.9A (CN107690673B)
Publication of WO2019037038A1

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T5/00 Image enhancement or restoration
    • G06T5/77 Retouching; Inpainting; Scratch removal
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T17/00 Three dimensional [3D] modelling, e.g. data description of 3D objects
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00 Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/50 Information retrieval; Database structures therefor; File system structures therefor of still image data
    • G06F16/58 Retrieval characterised by using metadata, e.g. metadata not derived from the content or metadata generated manually
    • G06F16/587 Retrieval characterised by using metadata, e.g. metadata not derived from the content or metadata generated manually using geographical or spatial information, e.g. location
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T5/00 Image enhancement or restoration
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00 Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/60 Control of cameras or camera modules
    • H04N23/68 Control of cameras or camera modules for stable pick-up of the scene, e.g. compensating for camera body vibrations
    • H04N23/682 Vibration or motion blur correction
    • H04N23/683 Vibration or motion blur correction performed by a processor, e.g. controlling the readout of an image memory
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00 Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/80 Camera processing pipelines; Components thereof

Definitions

  • the present disclosure relates to the field of image processing, and in particular, to an image processing method, apparatus, and server.
  • the present disclosure provides an image processing method, apparatus, and server for improving the sharpness of an image.
  • an image processing method applied to a server, the method comprising:
  • the image after the image replacement is transmitted to the terminal.
  • an image processing method which is applied to a terminal, the method comprising:
  • an image processing apparatus which is applied to a server, the apparatus comprising:
  • a first determining module configured to determine a target sub-image included in a target image sent by the terminal
  • a second determining module configured to determine, from the three-dimensional model library, a target three-dimensional model that matches a target object corresponding to the target sub-image
  • a replacement module configured to perform image replacement on the target sub-image in the target image based on the target three-dimensional model
  • the first sending module is configured to send the image after the image replacement to the terminal.
  • an image processing apparatus which is applied to a terminal, the apparatus comprising:
  • a first determining module configured to determine a target sub-image included in the target image
  • a second determining module configured to determine, from the three-dimensional model library, a target three-dimensional model that matches a target object corresponding to the target sub-image
  • the replacement module is configured to perform image replacement on the target sub-image in the target image based on the target three-dimensional model.
  • a computer program product comprising a computer program executable by a programmable device, the computer program having code for performing the method of any one of the first or second aspects when executed by the programmable device.
  • a non-transitory computer readable storage medium comprising one or more programs for performing the method of any one of the first or second aspects.
  • a server including:
  • Non-transitory computer readable storage medium
  • a terminal including:
  • Non-transitory computer readable storage medium
  • the server may find a matching target three-dimensional model from the three-dimensional model library, and then perform image replacement on the target sub-image in the target image based on the target three-dimensional model, so that a target sub-image that may be blurred is replaced with a clear image. The target image after the replacement is sent to the terminal, so the terminal obtains a clear image, and the image processing capability of the server is strong.
  • FIG. 1 is a flowchart of an image processing method according to an exemplary embodiment
  • FIG. 2 is a schematic diagram of a location range, according to an exemplary embodiment
  • FIG. 3 is a schematic diagram of a shooting target image, according to an exemplary embodiment
  • FIG. 4 is a flowchart of an image processing method according to an exemplary embodiment
  • FIG. 5 is a block diagram of an image processing apparatus according to an exemplary embodiment
  • FIG. 6 is a block diagram of an image processing apparatus according to an exemplary embodiment.
  • FIG. 1 is a flowchart of an image processing method according to an exemplary embodiment. As shown in FIG. 1, the image processing method can be applied to a server, and includes the following steps.
  • Step S11 Determine a target sub-image included in the target image transmitted by the terminal.
  • Step S12 Determine a target three-dimensional model that matches the target object corresponding to the target sub-image from the three-dimensional model library.
  • Step S13 Perform image replacement on the target sub-image in the target image based on the target three-dimensional model.
  • Step S14 transmitting the image after the image replacement to the terminal.
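  The four steps S11–S14 amount to a simple request/response pipeline on the server. The sketch below illustrates the control flow only; every function and data-structure name is a hypothetical stand-in for a real detector, model matcher, and renderer:

```python
# Sketch of the server-side flow in steps S11-S14.
# All names are hypothetical; real matching and rendering would use
# an object detector and a 3D renderer against the model library.

def find_target_sub_image(target_image):
    # S11: locate the region of the target image that shows the subject.
    return target_image.get("sub_image")

def match_model(sub_image, model_library):
    # S12: look up a 3D model matching the object in the sub-image.
    return model_library.get(sub_image)

def replace_sub_image(target_image, model):
    # S13: swap the (possibly blurred) region for a clean render.
    result = dict(target_image)
    result["sub_image"] = "render-of-" + model
    return result

def process_image(target_image, model_library):
    sub = find_target_sub_image(target_image)
    model = match_model(sub, model_library)
    if model is None:
        return None  # matching failed; the terminal would be notified
    return replace_sub_image(target_image, model)  # S14: send back

library = {"zhaozhou-bridge": "bridge-model"}
photo = {"sub_image": "zhaozhou-bridge"}
print(process_image(photo, library)["sub_image"])  # render-of-bridge-model
```

  Returning `None` models the match-failure notice described later in the text, where the terminal simply displays the unprocessed image.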
  • the server may be a cloud device, with which the terminal exchanges images over the network; or the server may be another device distinct from the terminal, to which the terminal connects by wire or wirelessly to transfer images. Any device that can receive the target image transmitted by the terminal and process it may serve as the server in the embodiments of the present disclosure.
  • the target image may be a frame of the preview stream acquired by the terminal when shooting with the camera, or it may be an image already stored in the terminal; this is not limited here.
  • the target sub-image may be the partial image corresponding to the photographed subject within the target image.
  • the 3D model library can be a pre-built database, and the 3D model library can contain three-dimensional models of landmark buildings, places of interest, objects, and the like.
  • the 3D model library can be stored in the server's own memory or it can be stored in another device that can communicate with the server.
  • the server may parse the target image to determine the target sub-image included in the target image.
  • the same target image may include one or more target sub-images. This is not limited.
  • a target three-dimensional model matching the target object corresponding to the target sub-image can be found from the three-dimensional model library.
  • for example, if the target image includes the sub-image corresponding to the target object “Zhaozhou Bridge”, then the matching three-dimensional model of “Zhaozhou Bridge” can be obtained from the three-dimensional model library.
  • the location information of the target image may also be obtained, where the location information indicates the geographic location at which the target image was collected. A location range containing that location is then determined from the location information, and all identification objects within that range are determined, an identification object being an object that has a corresponding three-dimensional model in the three-dimensional model library. The target three-dimensional model may then be determined within the set of three-dimensional models corresponding to those identification objects.
  • the location information may be used to indicate the geographic location when the target image is acquired.
  • the manner of acquiring the location information of the target image is not limited in the embodiment of the present disclosure.
  • the manner of acquiring the location information may be different according to the target image.
  • the target image may be a frame of the continuous preview stream acquired by the terminal at shooting time, in which case the location information may be the terminal's current position sent by the terminal (obtained, for example, through the terminal's GPS (Global Positioning System) module); or the target image may be an image stored in the terminal, in which case the image information carried by the target image may include the location at shooting time, and the server may read the location information directly from it.
  • the location range may be determined according to the location information.
  • the manner of determining the location range is not limited in this embodiment, as long as the determined location range contains the geographic location corresponding to the location information.
  • for example, the location range may simply be the circular area centered on the geographic location corresponding to the location information, with a set value as radius (for example, 500 meters).
  • the location information may be combined with other parameters to determine the location range. The manner in which the location range is determined will be described below.
  • the height information of the target image may also be acquired, and the height information may be used to indicate the altitude when the target image is collected, and then the location range may be determined according to the location information and the height information.
  • the target image may be a frame of the continuous preview stream acquired by the terminal at shooting time, in which case the height information may be the terminal's current altitude sent by the terminal (obtained, for example, by the terminal's altimeter).
  • the target image may be an image stored in the terminal, and the image information of the target image may include height information at the time of shooting, and the server may directly obtain the height information from the information carried by the target image.
  • the position range determined from the position information can be adjusted using the height information. For example, at an altitude of 0 the position range is a circular area of radius R centered on the position indicated by the position information, and for every 10 meters of elevation the radius is increased by R1, and so on. In this way the determined position range is more accurate, objects the target image may capture are not missed, and the image processing capability of the server is strong.
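  The altitude adjustment described here can be written out as follows. R and R1 are the symbols from the text; the concrete defaults (500 m base, 50 m per 10 m step) are illustrative assumptions only:

```python
def search_radius(altitude_m, base_radius_m=500.0, step_gain_m=50.0):
    """Radius of the circular location range.

    At altitude 0 the range is a circle of radius R (base_radius_m);
    every full 10 m of altitude adds R1 (step_gain_m) to the radius.
    The concrete default values are illustrative assumptions.
    """
    steps = int(altitude_m // 10)
    return base_radius_m + max(0, steps) * step_gain_m

print(search_radius(0))   # 500.0
print(search_radius(25))  # 600.0 (two full 10 m steps)
```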
  • the focus distance of the target image can also be obtained, and the position range can then be determined from the location information together with the focus distance.
  • the target image may be a frame of the continuous preview stream acquired by the terminal at shooting time, in which case the focus distance may be the currently set focus distance sent by the terminal; or the target image may be an image stored in the terminal, in which case its image information may include the focus distance at shooting time, and the server may read it directly from the information carried by the target image.
  • for example, the location range may be the circular area centered on the geographic location corresponding to the location information with the focus distance f as radius; or a margin x can be set, and the position range can be the annulus centered on that geographic location between the circle of radius f - x and the circle of radius f + x, and so on.
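  The two variants described here (a disc of radius f, or an annulus between f - x and f + x once a margin x is set) reduce to a small membership test. A sketch with hypothetical names, distances in meters:

```python
def in_location_range(distance_m, focus_distance_m, margin_m=0.0):
    """True if a candidate object lies inside the position range.

    With margin_m == 0 the range is the disc of radius f around the
    shooting location; with margin_m > 0 it is the annulus between the
    circles of radius f - x and f + x described in the text.
    """
    if margin_m <= 0:
        return distance_m <= focus_distance_m
    return focus_distance_m - margin_m <= distance_m <= focus_distance_m + margin_m

print(in_location_range(80, focus_distance_m=100))               # True
print(in_location_range(80, focus_distance_m=100, margin_m=10))  # False
print(in_location_range(95, focus_distance_m=100, margin_m=10))  # True
```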
  • the position range can also be determined jointly from the position information, the height information, and the focus distance: for example, a position range can first be determined from the position information and the focus distance as above, and then adjusted using the height information, and so on.
  • all the identification objects within the location range that have a corresponding three-dimensional model can then be found on the map. For example, if the determined location range is as shown in FIG. 2, the identification objects within it that correspond to models in the three-dimensional model library might be “Leshan Dafoshan Gate”, “Big Buddha”, “Haishidong”, “Tianwangdian”, “Daxiongdian” and “Moruotang”, and the three-dimensional model set may include the three-dimensional models of all of these identification objects.
  • the target sub-image may then be compared against each three-dimensional model corresponding to those identification objects to obtain the matching target three-dimensional model. The position information thus narrows the comparison scope accurately, so the three-dimensional model matching the target object corresponding to the target sub-image can be found quickly, improving the response speed and the image processing capability of the server.
  • the target sub-image in the target image may then be replaced based on the target three-dimensional model, and the target image after the replacement is sent back to the terminal, so the user of the terminal can directly see the replaced target image processed by the server.
  • if no match is found, information indicating that the matching failed may be sent to the terminal, and the terminal may directly display the image that could not be processed.
  • the image replacement may be performed by first intercepting the target three-dimensional model to obtain a two-dimensional image that matches the target sub-image, and then replacing the target sub-image with the two-dimensional image in the target image.
  • the target sub-image can be compared with the target three-dimensional model, and a two-dimensional image can be intercepted from the target three-dimensional model whose shooting angle, shooting position of the target object, and size all match those of the target sub-image; the intercepted two-dimensional image then completely replaces the target sub-image in the target image.
  • for example, if the finder frame was aimed at the middle part of the “Eiffel Tower” when the target image was photographed, the target sub-image in the target image is the image formed by that middle portion of the “Eiffel Tower”. After the target three-dimensional model (i.e., the three-dimensional model of the Eiffel Tower) is found, a two-dimensional image matching the target sub-image in shooting angle, imaged portion, and size can be intercepted from the “Eiffel Tower” three-dimensional model, and the target sub-image in the target image is then replaced with that intercepted two-dimensional image.
  • since the target three-dimensional model can be a three-dimensional model previously constructed on a computer, the two-dimensional image intercepted from it is a high-definition image: no matter how far the user of the terminal zooms in, as long as the target three-dimensional model is fine enough, the target object in the processed target image remains extremely clear, and zoom is no longer limited by the terminal's capability. Image blurring caused by hand shake or a slow shutter speed is also addressed: replacing the possibly blurred target sub-image in the original target image with the intercepted high-resolution two-dimensional image improves the clarity of the subject in the target image, and the image processing capability of the server is strong.
  • the image parameters of the two-dimensional image may also be set according to the image parameters of the target image.
  • Image parameters may include brightness, contrast, color temperature, color values, and the like of the image.
  • the image parameters of the target image may be directly sent by the terminal, or may be obtained from the image information carried by the target image, which is not limited by the embodiment of the present disclosure.
  • the image parameters of the two-dimensional image intercepted from the target three-dimensional model may be set to the same or similar values as those of the target image, so that the replaced region blends better with the target image after the image replacement, and the image processing capability of the server is stronger.
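  As one possible reading of this parameter-matching step, aligning a single parameter such as brightness could look like the toy sketch below. It operates on plain lists of pixel values; a real implementation would adjust brightness, contrast, color temperature, and color values with an image-processing library:

```python
def match_brightness(rendered, target):
    """Shift rendered pixel values so their mean matches the target's,
    clamping the result to the 0-255 range."""
    offset = sum(target) / len(target) - sum(rendered) / len(rendered)
    return [min(255.0, max(0.0, p + offset)) for p in rendered]

rendered = [200, 210, 220]  # crisp render of the model, too bright
target = [100, 110, 120]    # pixels of the dimly lit original photo
print(match_brightness(rendered, target))  # [100.0, 110.0, 120.0]
```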
  • when the server stores another location range determined from another image, it may determine whether the distance between the geographic location corresponding to the location information of the target image and the geographic location corresponding to the location information of that other image exceeds a preset distance threshold. When the distance exceeds the preset distance threshold, the location range corresponding to the target image is determined from the target image's location information and the location range stored by the server is updated; when the distance does not exceed the preset distance threshold, the other location range is used as the location range.
  • the preset distance threshold may be a preset value for determining whether to re-determine the location range.
  • the embodiment of the present disclosure does not limit its value; for example, the preset distance threshold may be set to 10 meters, 15 meters, and so on.
  • for example, when the target image is a frame of the preview stream acquired by the terminal when shooting through the camera, the terminal can send each collected frame to the server in real time, and the server processes each frame and sends it back to the terminal.
  • in that case the position information of many consecutive frames may be identical or change very little, so the server does not have to compute the location range anew for every frame. The server can therefore record the location range it last determined from another image, and only when the position has moved beyond the threshold does it recalculate the location range and update the recorded one. In this way the accuracy of the location range is preserved, the computation load of the server is reduced, and its image processing speed is improved.
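  The caching behavior described in these paragraphs can be sketched as follows. The haversine formula supplies the great-circle distance between the two geographic locations, and the 10 m threshold is the example value given above; the range structure itself is a placeholder:

```python
import math

def haversine_m(lat1, lon1, lat2, lon2):
    """Great-circle distance in meters between two lat/lon points."""
    r = 6371000.0
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp = math.radians(lat2 - lat1)
    dl = math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

class RangeCache:
    """Reuse the last location range while the camera barely moves."""

    def __init__(self, threshold_m=10.0):
        self.threshold_m = threshold_m
        self.last_pos = None
        self.last_range = None

    def get_range(self, lat, lon, compute_range):
        if self.last_pos is not None:
            if haversine_m(lat, lon, *self.last_pos) <= self.threshold_m:
                return self.last_range  # within threshold: reuse the cache
        self.last_pos = (lat, lon)
        self.last_range = compute_range(lat, lon)  # recompute and store
        return self.last_range

computed = []
def compute_range(lat, lon):
    computed.append((lat, lon))
    return (lat, lon, 500.0)  # placeholder "range": center and radius

cache = RangeCache()
cache.get_range(48.8584, 2.2945, compute_range)
cache.get_range(48.85841, 2.29451, compute_range)  # ~1 m away: cache hit
print(len(computed))  # 1
```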
  • the feature information of the target object may be acquired, where the feature information includes at least one of historical information, geographic information, and travel information; and the feature information is sent to the terminal.
  • for example, the server can obtain information about the “Eiffel Tower” from the network, such as historical information, fare information, best photo location information, and nearby restaurant information, and send this information to the terminal, which can display it directly on the screen; this improves the user experience and the information processing capability of the server.
  • FIG. 4 is a flowchart of an image processing method according to an exemplary embodiment. As shown in FIG. 4, the image processing method may be applied to a terminal, including the following steps.
  • Step S41 Determine a target sub-image included in the target image.
  • Step S42 Determine a target three-dimensional model that matches the target object corresponding to the target sub-image from the three-dimensional model library.
  • Step S43 Perform image replacement on the target sub-image in the target image based on the target three-dimensional model.
  • the image processing method executed on the server side described above may also be executed by the terminal, in which case the three-dimensional model library may be stored in the terminal or in another device connected to the terminal; the terminal then does not need to send the image to be processed to the server, and directly performs the steps of searching for the target three-dimensional model and replacing the sub-image.
  • for the image processing method on the terminal side, refer to the description of the corresponding parts on the server side; details are not repeated here.
  • FIG. 5 is a block diagram of an image processing apparatus 500 according to an exemplary embodiment, where the apparatus 500 can be applied to a server. The device 500 can include:
  • the first determining module 501 is configured to determine a target sub-image included in the target image sent by the terminal;
  • the second determining module 502 is configured to determine, from the three-dimensional model library, a target three-dimensional model that matches the target object corresponding to the target sub-image;
  • the replacement module 503 is configured to perform image replacement on the target sub-image in the target image based on the target three-dimensional model
  • the first sending module 504 is configured to send the image after the image replacement to the terminal.
  • the apparatus 500 further includes:
  • a first acquiring module configured to acquire location information of the target image, where the location information is used to indicate a geographic location when the target image is collected;
  • a third determining module configured to determine a location range according to the location information, where the location range includes a location indicated by the location information;
  • a fourth determining module configured to determine all the identification objects included in the location range, where the identification object is an object corresponding to the three-dimensional model in the three-dimensional model library;
  • the second determining module 502 includes:
  • the first determining submodule is configured to determine the target three-dimensional model in the three-dimensional model set corresponding to all the identification objects.
  • the apparatus 500 further includes:
  • a second acquiring module configured to acquire height information of the target image, where the height information is used to indicate an altitude when the target image is collected;
  • the third determining module includes:
  • the second determining submodule is configured to determine the location range according to the location information and the altitude information.
  • the apparatus 500 further includes:
  • a third obtaining module configured to acquire a focusing distance of the target image
  • the third determining module further includes:
  • the third determining submodule is configured to determine the location range according to the location information and the focus distance.
  • the server stores another location range determined according to another image
  • the apparatus 500 further includes:
  • a fifth determining module configured to determine whether a geographic location corresponding to the location information of the target image, and a distance between the geographic location corresponding to the location information of the other image exceeds a preset distance threshold
  • the third determining module further includes:
  • a fourth determining submodule configured to determine a location range corresponding to the target image according to the location information of the target image when the distance exceeds the preset distance threshold
  • An update module configured to update a range of locations stored by the server
  • the fifth determining submodule is configured to determine the other location range as the location range when the distance does not exceed the preset distance threshold.
  • the replacement module 503 includes:
  • An intercepting module configured to intercept the target three-dimensional model to obtain a two-dimensional image that matches the target sub-image
  • the replacement sub-module is configured to replace the target sub-image with the two-dimensional image in the target image.
  • the apparatus 500 further includes:
  • the setting module is configured to set the image parameters of the two-dimensional image according to the image parameters of the target image after the target three-dimensional model is intercepted to obtain a two-dimensional image that matches the target sub-image.
  • the apparatus 500 further includes:
  • a fourth acquiring module configured to acquire feature information of the target object, where the feature information includes at least one of historical information, geographic information, and travel information;
  • the second sending module is configured to send the feature information to the terminal.
  • FIG. 6 is a block diagram of an image processing apparatus 600 according to an exemplary embodiment, where the apparatus 600 can be applied to a terminal. The device 600 can include:
  • a first determining module 601 configured to determine a target sub-image included in the target image
  • the second determining module 602 is configured to determine, from the three-dimensional model library, a target three-dimensional model that matches the target object corresponding to the target sub-image;
  • the replacement module 603 is configured to perform image replacement on the target sub-image in the target image based on the target three-dimensional model.
  • the disclosed apparatus and method may be implemented in other manners.
  • the device embodiments described above are merely illustrative.
  • the division into modules or units is only a logical functional division; in actual implementation there may be another division manner. For example, multiple units or components may be combined or integrated into another system, or some features may be ignored or not executed.
  • the functional modules in the various embodiments of the present application may be integrated into one processing unit, or each module may exist physically separately, or two or more modules may be integrated into one unit.
  • the above integrated unit can be implemented in the form of hardware or in the form of a software functional unit.
  • the integrated unit if implemented in the form of a software functional unit and sold or used as a standalone product, may be stored in a computer readable storage medium.
  • a computer readable storage medium includes a number of instructions for causing a computer device (which may be a personal computer, a server, or a network device, or the like) or a processor to perform all or part of the steps of the methods described in the various embodiments of the present application.
  • the foregoing storage medium includes any medium that can store program code, such as a USB flash drive, a removable hard disk, a ROM (Read-Only Memory), a RAM (Random Access Memory), a magnetic disk, or an optical disc.

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Software Systems (AREA)
  • Geometry (AREA)
  • Computer Graphics (AREA)
  • Library & Information Science (AREA)
  • Data Mining & Analysis (AREA)
  • Databases & Information Systems (AREA)
  • General Engineering & Computer Science (AREA)
  • Processing Or Creating Images (AREA)

Abstract

The present disclosure relates to an image processing method and device, and a server, used to improve the definition of an image. The method comprises: determining a target sub-image comprised in a target image sent by a terminal; determining, from a three-dimensional model library, a target three-dimensional model that matches a target object corresponding to the target sub-image; replacing the target sub-image in the target image according to the target three-dimensional model; and sending the target image to the terminal after the image replacement.
PCT/CN2017/098854 2017-08-24 2017-08-24 Image processing method and device, and server WO2019037038A1 (fr)

Priority Applications (2)

Application Number Priority Date Filing Date Title
PCT/CN2017/098854 WO2019037038A1 (fr) 2017-08-24 2017-08-24 Image processing method and device, and server
CN201780001596.9A CN107690673B (zh) 2017-08-24 2017-08-24 Image processing method, device and server

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
PCT/CN2017/098854 WO2019037038A1 (fr) 2017-08-24 2017-08-24 Image processing method and device, and server

Publications (1)

Publication Number Publication Date
WO2019037038A1 (fr) 2019-02-28

Family

ID=61154076

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2017/098854 WO2019037038A1 (fr) 2017-08-24 2017-08-24 Image processing method and device, and server

Country Status (2)

Country Link
CN (1) CN107690673B (fr)
WO (1) WO2019037038A1 (fr)

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110648396A (zh) * 2019-09-17 2020-01-03 西安万像电子科技有限公司 图像处理方法、装置和系统
CN113998344A (zh) * 2020-07-28 2022-02-01 北京四维图新科技股份有限公司 快递盒回收方法、系统、服务器、终端及存储介质
CN115457202A (zh) * 2022-09-07 2022-12-09 北京四维远见信息技术有限公司 一种三维模型更新的方法、装置及存储介质

Families Citing this family (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109005382A (zh) * 2018-06-27 2018-12-14 深圳市轱辘汽车维修技术有限公司 Video capture management method and server
CN109492607B (zh) * 2018-11-27 2021-07-09 Oppo广东移动通信有限公司 Information pushing method, information pushing device and terminal device
CN112784621B (zh) * 2019-10-22 2024-06-18 华为技术有限公司 Image display method and device
CN110913140B (zh) * 2019-11-28 2021-05-28 维沃移动通信有限公司 Shooting information prompting method and electronic device
CN111556278B (zh) * 2020-05-21 2022-02-01 腾讯科技(深圳)有限公司 Video processing method, video display method, device and storage medium
CN115002333B (zh) * 2021-03-02 2023-09-26 华为技术有限公司 Image processing method and related device
CN114677468B (zh) * 2022-05-27 2022-09-20 深圳思谋信息科技有限公司 Model correction method, device, equipment and storage medium based on reverse modeling

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2007280046A (ja) * 2006-04-06 2007-10-25 Canon Inc Image processing apparatus, control method therefor, and program
CN102547090A (zh) * 2010-11-24 2012-07-04 三星电子株式会社 Digital photographing apparatus and method of providing photographs thereof
CN103561264A (zh) * 2013-11-07 2014-02-05 北京大学 Cloud computing-based media decoding method and decoder
CN104618627A (zh) * 2014-12-31 2015-05-13 小米科技有限责任公司 Video processing method and device
CN106060249A (zh) * 2016-05-19 2016-10-26 维沃移动通信有限公司 Photographing anti-shake method and mobile terminal
CN106096043A (zh) * 2016-06-24 2016-11-09 维沃移动通信有限公司 Photographing method and mobile terminal

Family Cites Families (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JPH11306318A (ja) * 1998-04-16 1999-11-05 Image Joho Kagaku Kenkyusho Face replacement editing device
CN101482968B (zh) * 2008-01-07 2013-01-23 日电(中国)有限公司 Image processing method and apparatus
CN102831580B (zh) * 2012-07-17 2015-04-08 西安电子科技大学 Motion detection-based restoration method for images captured by mobile phones

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110648396A (zh) * 2019-09-17 2020-01-03 西安万像电子科技有限公司 Image processing method, device and system
CN113998344A (zh) * 2020-07-28 2022-02-01 北京四维图新科技股份有限公司 Express box recycling method, system, server, terminal and storage medium
CN113998344B (zh) * 2020-07-28 2023-06-27 北京四维图新科技股份有限公司 Express box recycling method, system, server, terminal and storage medium
CN115457202A (zh) * 2022-09-07 2022-12-09 北京四维远见信息技术有限公司 Three-dimensional model updating method, device and storage medium

Also Published As

Publication number Publication date
CN107690673B (zh) 2021-04-02
CN107690673A (zh) 2018-02-13

Similar Documents

Publication Publication Date Title
WO2019037038A1 (fr) Image processing method and device, and server
US10540806B2 (en) Systems and methods for depth-assisted perspective distortion correction
WO2018201809A1 (fr) Image processing device and method based on dual cameras
US9313419B2 (en) Image processing apparatus and image pickup apparatus where image processing is applied using an acquired depth map
US9591237B2 (en) Automated generation of panning shots
US9159169B2 (en) Image display apparatus, imaging apparatus, image display method, control method for imaging apparatus, and program
CN110536057B (zh) Image processing method and device, electronic device, computer-readable storage medium
JP5740884B2 (ja) AR navigation and difference extraction system, method and program for repeated image capture
CN109474780B (zh) Method and device for image processing
CN109905604B (zh) Focusing method, device, photographing apparatus and aircraft
CN113129241B (zh) Image processing method and device, computer-readable medium, electronic device
CN113875219B (zh) Image processing method and device, electronic device, computer-readable storage medium
CN111932587A (zh) Image processing method and device, electronic device, computer-readable storage medium
CN113391644B (zh) Semi-automatic optimization method for unmanned aerial vehicle shooting distance based on image information entropy
GB2537886A (en) An image acquisition technique
CN105467741A (zh) Panoramic photographing method and terminal
KR102076635B1 (ko) Apparatus and method for generating a panoramic image using scattered fixed cameras
KR101598399B1 (ko) Automatic image composition system using coordinate information of road-view photo images
CN116456191A (zh) Image generation method, apparatus, device and computer-readable storage medium
US10721419B2 (en) Ortho-selfie distortion correction using multiple image sensors to synthesize a virtual image
Sindelar et al. Space-variant image deblurring on smartphones using inertial sensors
US11792511B2 (en) Camera system utilizing auxiliary image sensors
CN112532856B (zh) Photographing method, device and system
CN109582811B (zh) Image processing method and device, electronic device and computer-readable storage medium
CN114222059A (zh) Photographing and photograph processing method, system, device and storage medium

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 17922329

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

32PN Ep: public notification in the ep bulletin as address of the addressee cannot be established

Free format text: NOTING OF LOSS OF RIGHTS PURSUANT TO RULE 112(1) EPC (EPO FORM 1205A DATED 10.09.2020)

122 Ep: pct application non-entry in european phase

Ref document number: 17922329

Country of ref document: EP

Kind code of ref document: A1