WO2019223538A1 - Image processing method and apparatus, storage medium, and electronic device - Google Patents

Image processing method and apparatus, storage medium, and electronic device

Info

Publication number
WO2019223538A1
WO2019223538A1 (PCT/CN2019/086022; CN2019086022W)
Authority
WO
WIPO (PCT)
Prior art keywords
scene
scene detection
category
initial
detection
Prior art date
Application number
PCT/CN2019/086022
Other languages
English (en)
French (fr)
Inventor
陈岩
Original Assignee
Oppo广东移动通信有限公司
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Oppo广东移动通信有限公司 filed Critical Oppo广东移动通信有限公司
Publication of WO2019223538A1 publication Critical patent/WO2019223538A1/zh

Links

Images

Classifications

    • H — ELECTRICITY
    • H04 — ELECTRIC COMMUNICATION TECHNIQUE
    • H04N — PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00 — Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/80 — Camera processing pipelines; Components thereof
    • H — ELECTRICITY
    • H04 — ELECTRIC COMMUNICATION TECHNIQUE
    • H04N — PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N5/00 — Details of television systems
    • H04N5/14 — Picture signal circuitry for video frequency region
    • H04N5/144 — Movement detection

Definitions

  • the present application relates to the field of computer technology, and in particular, to an image processing method and device, a storage medium, and an electronic device.
  • the camera function has become one of the commonly used applications of mobile terminals, and is among the applications users use most frequently.
  • during or after taking a picture, the function of performing scene detection on the image may be used.
  • traditional scene detection techniques still have some errors in scene detection.
  • the embodiments of the present application provide an image processing method and device, a storage medium, and an electronic device, which can improve the accuracy of scene detection.
  • An image processing method includes:
  • the initial result of scene detection is corrected according to the position information, and the final result of scene detection after correction is obtained.
  • An image processing device includes:
  • a detection module configured to perform scene detection on an image and obtain an initial result of scene detection
  • a position determining module configured to obtain position information when the image is taken
  • the correction module is configured to correct the initial result of scene detection according to the position information, and obtain a final result of scene detection after correction.
  • a computer-readable storage medium has stored thereon a computer program that, when executed by a processor, implements the operations of the image processing method described above.
  • An electronic device includes a memory, a processor, and a computer program stored on the memory and operable on the processor.
  • the processor executes the computer program, the operations of the image processing method described above are performed.
  • the foregoing image processing method and device, storage medium, and electronic device perform scene detection on an image to obtain an initial result of scene detection, obtain position information at the time of image capture, and correct the initial result according to the position information to obtain the final result of scene detection after correction.
  • FIG. 1 is an internal structural diagram of an electronic device in an embodiment
  • FIG. 2 is a flowchart of an image processing method according to an embodiment
  • FIG. 3 is a flowchart of a method for acquiring position information when an image is taken in FIG. 2;
  • FIG. 4 is a flowchart of a method for correcting an initial result of scene detection according to position information in FIG. 2 to obtain a final result of scene detection after correction;
  • FIG. 5 is a flowchart of a method for calculating a confidence level in FIG. 4;
  • FIG. 6 is a schematic structural diagram of an image processing apparatus according to an embodiment
  • FIG. 7 is a schematic structural diagram of an image processing apparatus in another embodiment
  • FIG. 8 is a schematic structural diagram of a correction module in FIG. 6;
  • FIG. 1 is a schematic diagram of an internal structure of an electronic device in an embodiment.
  • the electronic device includes a processor, a memory, and a network interface connected through a system bus.
  • the processor is used to provide computing and control capabilities to support the operation of the entire electronic device.
  • the memory is used to store data, programs, and the like. At least one computer program is stored on the memory, and the computer program can be executed by a processor to implement the scene detection method applicable to the electronic device provided in the embodiments of the present application.
  • the memory may include a non-volatile storage medium, such as a magnetic disk, an optical disc, or a read-only memory (ROM), or a random-access memory (RAM).
  • the memory includes a non-volatile storage medium and an internal memory.
  • the non-volatile storage medium stores an operating system and a computer program.
  • the computer program can be executed by a processor to implement an image processing method provided by each of the following embodiments.
  • the internal memory provides a high-speed cached runtime environment for the operating system and computer programs in the non-volatile storage medium.
  • the network interface may be an Ethernet card or a wireless network card, and is used to communicate with external electronic devices.
  • the electronic device may be a mobile phone, a tablet computer, a personal digital assistant, a wearable device, or the like.
  • an image processing method is provided.
  • the method is applied to the electronic device in FIG. 1 as an example, and includes:
  • Operation 220 Perform scene detection on the image to obtain an initial result of scene detection.
  • the user uses an electronic device (with a photographing function) to take a picture, obtain an image after the picture is taken, and perform scene detection on the image.
  • a conventional scene detection algorithm is used to perform scene detection on an image to detect which scene is included in the image.
  • the deep neural network model mainly used by scene detection algorithms is the convolutional neural network (CNN).
  • the scene category can be scenery, beach, blue sky, green grass, snow, fireworks, spotlight, text, portrait, baby, cat, dog, food, etc.
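The detection step described above can be sketched as follows. This is a minimal illustration only: the multi-label sigmoid formulation, the stub score input, and the helper names are assumptions for this sketch, not the model specified in the application.

```python
import math

# A subset of the scene categories named in the text, for illustration.
CATEGORIES = ["scenery", "beach", "blue sky", "green grass",
              "snow", "fireworks", "portrait", "food"]

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def initial_scene_result(scores, keep_above=0.5):
    """Turn raw per-category scores from a (hypothetical) CNN into the
    initial scene detection result: (category, confidence) pairs for
    every scene the detector believes is present in the image."""
    result = []
    for category, score in zip(CATEGORIES, scores):
        confidence = sigmoid(score)   # independent confidence per scene
        if confidence >= keep_above:
            result.append((category, confidence))
    return result
```

Each scene gets its own confidence (as in the 70%/80% examples below), rather than a single label, which is why a per-category sigmoid is assumed here instead of a softmax.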
  • Operation 240 Acquire position information during image shooting.
  • the electronic device records the location of each picture, generally using GPS (Global Positioning System). For example, if a user takes a picture in Lianhuashan Park in Shenzhen, the address recorded for the image can be "Lianhuashan Park, Shenzhen".
  • with the address being Lianhuashan Park in Shenzhen, the corresponding image has a higher probability of containing blue sky, green grass, portraits, and scenery, and a lower probability of containing, for example, beach or snow.
  • Operation 260 Correct the initial scene detection result according to the position information to obtain the final scene detection result after the correction.
  • from the address information, the probability of certain scenes appearing in the image is obtained, and the initial result of scene detection is corrected in combination with it.
  • for example, if the initial result of scene detection is blue sky, green grass, and beach, then after correction the beach, which clearly has the smallest appearance probability, should not appear in the image.
  • blue sky and green grass remain, and are output as the final result of scene detection.
  • scene detection is performed on an image, initial results of scene detection are obtained, position information at the time of image capture is obtained, initial results of scene detection are corrected according to the position information, and final scene detection results after correction are obtained.
  • this method combines the analysis of the position information at the time of image capture, because each position information will correspond to some scenes, thereby further optimizing the rationality of the final result of scene detection and improving the accuracy of scene detection.
  • acquiring position information when an image is taken includes:
  • Operation 242 Obtain address information during image shooting.
  • the electronic device records the location of each picture taken, generally using GPS (Global Positioning System).
  • Operation 244 Acquire position information of the image according to the address information, where the position information includes a scene category corresponding to the address information and a weight corresponding to the scene category.
  • the location information of the image is obtained according to the address information.
  • for example, when the address information reads "grassland", the scenes corresponding to that address are "green grass" with a weight of 9, "snow" with a weight of 7, "landscape" with a weight of 4, "blue sky" with a weight of 6, and "beach" with a weight of -8, the weights taking values in [-10, 10].
  • the larger the weight value the greater the probability of the scene appearing in the image, and the smaller the weight value, the smaller the probability of the scene appearing in the image.
  • the obtained location information includes scene categories corresponding to the address information and weights corresponding to the scene categories.
  • a scene category corresponding to the address information and a weight value corresponding to the scene category are obtained.
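The address-to-weights matching described above can be sketched as a simple table lookup. The values are the example weights from the text; the table and helper names (`SCENE_WEIGHTS`, `weights_for_address`) are hypothetical.

```python
# Pre-matched address -> scene-weight table, as described above.
# Weights lie in [-10, 10]: larger means the scene is more likely
# to appear in an image taken at that address.
SCENE_WEIGHTS = {
    "grassland": {"green grass": 9, "snow": 7, "landscape": 4,
                  "blue sky": 6, "beach": -8},
}

def weights_for_address(address):
    """Return the scene-weight map whose keyword appears in the
    recorded shooting address, or an empty map if none matches."""
    for keyword, weights in SCENE_WEIGHTS.items():
        if keyword in address:
            return weights
    return {}
```

In the application this table is built in advance from statistical analysis of a large number of image materials and stored in a database; a plain dictionary stands in for that database here.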
  • the scene category of the image can also be obtained through the shooting address information of the image, so that the scene category of the image obtained through the shooting address information of the image can be used to calibrate the initial result of scene detection. This ultimately improves the accuracy of scene detection.
  • the method further includes: matching the corresponding scene category and the weight corresponding to the scene category for different address information in advance.
  • the corresponding scene category and the weight of that category are matched in advance for different address information, and these data are stored in a database so they can be called at any time. Specifically, the matching may follow the results of a statistical analysis of a large number of image materials, with corresponding scene categories and weights matched for different address information accordingly.
  • the corresponding scene category and the corresponding weight of the scene category are matched for different address information in advance, and the result is obtained by performing statistical analysis on a large number of image materials.
  • this result is obtained after statistical analysis of a large number of image materials, and has high universality and accuracy.
  • predicting and calibrating the scene of the image can ultimately improve the accuracy of scene detection.
  • operation 260 corrects an initial result of scene detection according to the position information to obtain a final result of scene detection after correction, including:
  • Operation 262 Calculate the confidence of the initial result of scene detection according to the scene category corresponding to the address information and the weight value corresponding to the scene category.
  • a conventional scene detection algorithm is used to detect which scenes the image contains, yielding the initial result of scene detection.
  • the initial result of scene detection includes the initial categories of scene detection and the confidence corresponding to each initial category. For example, scene detection on the captured image may find "green grass" with a confidence of 70%, "blue sky" with a confidence of 80%, "snow" with a confidence of 70%, and "beach" with a confidence of 70%.
  • if the shooting address information of the image is "Xinjiang Ili Prairie", the scene categories corresponding to "grassland" and their weights are obtained from the database.
  • the scenes corresponding to the address "grassland" are "green grass" with a weight of 9, "snow" with a weight of 7, "scenery" with a weight of 4, "blue sky" with a weight of 6, and "beach" with a weight of -8, the weights taking values in [-10, 10].
  • according to the weights of the scenes corresponding to the position information, the confidences in the initial result of scene detection are enhanced or weakened.
  • for example, the weight of the scene category "beach" obtained from the address information is -8, and the confidence of "beach" in the initial result of scene detection is 70%, so the corrected confidence is 70% × (1 − 8%) = 0.644. That is, after "beach" in the initial result is corrected according to the address information, its confidence is weakened; the recalculated confidence of "beach" is 0.644.
  • the above calculation is performed on each scene category in the initial result of scene detection in order to obtain the confidence level of the recalculated scene category.
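The recalculation above follows a simple rule: a weight of w shifts the confidence by w percent. A one-line sketch, assuming the multiplicative form implied by the worked examples (70% × (1 − 8%) = 0.644 and 70% × (1 + 9%) = 0.763):

```python
def corrected_confidence(confidence, weight):
    """Each unit of weight shifts the confidence by one percent:
    positive weights enhance it, negative weights weaken it."""
    return confidence * (1 + weight / 100)

# "beach":      weight -8, initial confidence 70%  -> ~0.644 (weakened)
# "green grass": weight 9, initial confidence 70%  -> ~0.763 (enhanced)
```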
  • an initial result of scene detection whose confidence exceeds a preset threshold is used as a final result of scene detection.
  • the preset threshold is set according to the initial result of scene detection; for example, it can be set to 0.7, or to another reasonable value chosen according to the initial result.
  • the initial result of scene detection with the recalculated confidence exceeding a preset threshold is taken as the final result of scene detection.
  • the "beach" will be eliminated, so as to achieve the effect of correcting the initial result of scene detection, and the final result of scene detection is "green grass", "blue sky", and "snow scene”.
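The elimination step can be sketched by combining the recalculation with the threshold check. The data below reproduce the worked example from the text (threshold 0.7); the function name is hypothetical.

```python
def final_scene_result(initial, weights, threshold=0.7):
    """Recompute each confidence with its address weight and keep only
    the categories whose corrected confidence exceeds the threshold."""
    final = []
    for category, confidence in initial:
        corrected = confidence * (1 + weights.get(category, 0) / 100)
        if corrected > threshold:
            final.append(category)
    return final

# Worked example from the text: "beach" falls to ~0.644 and is removed,
# while "green grass", "blue sky", and "snow" are enhanced past 0.7.
initial = [("green grass", 0.70), ("blue sky", 0.80),
           ("snow", 0.70), ("beach", 0.70)]
weights = {"green grass": 9, "snow": 7, "landscape": 4,
           "blue sky": 6, "beach": -8}
```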
  • the confidence of the initial result of scene detection is calculated according to the scene category corresponding to the address information and the weight value corresponding to the scene category.
  • the initial result of scene detection with confidence exceeding a preset threshold is taken as the final result of scene detection. Because the confidence of the scene category in the image is recalculated, the scene of the image is predicted and calibrated based on the position information of the image, which can ultimately improve the accuracy of scene detection.
  • operation 262 calculating the confidence of the initial result of scene detection according to the scene category corresponding to the address information and the weight corresponding to the scene category, includes:
  • Operation 262a Obtain a scene category that is the same as the initial category of scene detection from the scene category corresponding to the address information;
  • Operation 262b obtain weights corresponding to the same scene category
  • Operation 262c Calculate the confidence after the correction according to the percentage corresponding to the weight and the confidence corresponding to the initial category of scene detection.
  • for example, the weight of the scene category "green grass" obtained from the address information is 9, and the confidence of "green grass" in the initial result of scene detection is 70%, so the corrected confidence is 70% × (1 + 9%) = 0.763; the confidence of "green grass" is enhanced.
  • likewise, the weight of "beach" is -8 and its initial confidence is 70%, so the confidence of "beach" is weakened.
  • the recalculated confidence of "beach" is 70% × (1 − 8%) = 0.644.
  • the above calculation is performed on each scene category in the initial result of scene detection in order to obtain the confidence level of the recalculated scene category.
  • the process of calculating the confidence of the initial result of scene detection according to the scene category corresponding to the address information and the weight value corresponding to the scene category is described in detail.
  • confidences of higher accuracy can be obtained for the scene categories, making it possible to screen the more accurate results out of the initial result of scene recognition and output them as the final result of scene recognition.
  • taking the initial result of scene detection whose confidence exceeds a preset threshold as the final result of scene detection includes: determining whether the confidence after correction exceeds a preset threshold;
  • when it does, the initial category of scene detection corresponding to the corrected confidence is taken as the final result of scene detection.
  • the preset threshold of the corrected confidence is set according to the initial result of scene detection.
  • generally, the lowest confidence among the top 3 scene categories is taken as the preset threshold of the corrected confidence.
  • when the initial result of scene detection contains too many scene categories, for example 10, the lowest confidence among the top 5 scene categories is correspondingly taken as the preset threshold of the corrected confidence.
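The adaptive choice of the preset threshold described above can be sketched as follows. The cut-over at 10 detected categories is taken from the example in the text; generalizing it into a single rule this way is an assumption.

```python
def preset_threshold(corrected_confidences, many=10):
    """Lowest corrected confidence among the top-3 categories, or among
    the top-5 when at least `many` categories were detected."""
    ranked = sorted(corrected_confidences, reverse=True)
    k = 5 if len(ranked) >= many else 3
    k = min(k, len(ranked))          # guard against very short lists
    return ranked[k - 1]
```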
  • the initial category of scene detection corresponding to the confidence level after correction is taken as the final result of scene detection. Because when the confidence level after correction exceeds a preset threshold, it means that after the initial result of scene detection is corrected according to the address information, the confidence level is enhanced. Under such double verification, the initial category of scene detection obtained with a confidence level exceeding a preset threshold can be used as the final result of scene detection.
  • an image processing method is provided.
  • the method is applied to the electronic device in FIG. 1 as an example, and includes:
  • Operation one: the user uses an electronic device (with a photographing function) to take a picture and obtains the captured image.
  • the traditional scene detection algorithm is used to detect the scene of the image and detect which scene category is included in the image.
  • Scene categories can be scenery, beach, blue sky, green grass, snow, fireworks, spotlights, text, portraits, babies, cats, dogs, food, etc.
  • the detected scene category and the confidence level corresponding to the initial category are used as the initial results of scene detection;
  • Operation two: the electronic device records the location of each picture, generally using GPS (Global Positioning System) to record the address information. The address information recorded by the electronic device is obtained.
  • Operation three: obtain the scene categories matched to the address information, and the weights corresponding to those categories, from the database.
  • the database stores corresponding scene categories and weights corresponding to the scene categories that are matched for different address information in advance;
  • Operation four: obtain, from the scene categories corresponding to the address information, the same scene categories as the initial categories of scene detection; obtain the weights corresponding to those categories; and calculate the corrected confidence according to the percentage corresponding to each weight and the confidence of the corresponding initial category;
  • Operation five: determine whether the corrected confidence exceeds a preset threshold; when it does, use the initial category of scene detection corresponding to the corrected confidence as the final result of scene detection.
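Operations one to five can be strung together into one sketch, under the same assumptions as the snippets above (hypothetical detector output and weight table, and hypothetical names):

```python
def process_image(initial_result, address, scene_db, threshold=0.7):
    """initial_result: (category, confidence) pairs from the detector
    (operation one); address: the recorded GPS address string (operation
    two); scene_db: keyword -> scene-weight map standing in for the
    database (operation three). Returns the final result after the
    correction and threshold check (operations four and five)."""
    weights = next((w for kw, w in scene_db.items() if kw in address), {})
    final = []
    for category, confidence in initial_result:
        corrected = confidence * (1 + weights.get(category, 0) / 100)
        if corrected > threshold:
            final.append((category, corrected))
    return final
```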
  • the confidence of the initial result of scene detection is calculated according to the scene category corresponding to the address information and the weight value corresponding to the scene category.
  • the initial result of scene detection with confidence exceeding a preset threshold is taken as the final result of scene detection. Because the confidence of the scene category in the image is recalculated, the scene of the image is predicted and calibrated based on the position information of the image, which can ultimately improve the accuracy of scene detection.
  • an image processing apparatus 600 includes a detection module 620, a position determination module 640, and a correction module 660, wherein:
  • a detection module 620 configured to perform scene detection on an image and obtain an initial result of scene detection
  • a position determining module 640 configured to obtain position information when an image is taken
  • the correction module 660 is configured to correct an initial result of scene detection according to the position information, and obtain a final result of scene detection after correction.
  • the position determining module 640 is further configured to obtain address information when an image is taken; and obtain position information of the image according to the address information, and the position information includes a scene category corresponding to the address information and a weight corresponding to the scene category.
  • an image processing apparatus 600 is provided.
  • the apparatus further includes: a presetting module 610 configured to match corresponding scene categories and scene-category weights for different address information in advance.
  • the correction module 660 further includes:
  • a confidence calculation module 662 configured to calculate the confidence of the initial result of scene detection according to the scene categories corresponding to the address information and their weights;
  • a final scene detection result determining module 664 configured to take the initial result of scene detection whose confidence exceeds a preset threshold as the final result of scene detection.
  • the confidence calculation module 662 is further configured to obtain, from the scene categories corresponding to the address information, the same scene categories as the initial categories of scene detection; obtain the weights corresponding to those categories; and calculate the corrected confidence according to the percentage corresponding to each weight and the confidence of the corresponding initial category.
  • the final scene detection result determining module 664 is further configured to determine whether the corrected confidence exceeds a preset threshold;
  • when it does, the initial category of scene detection corresponding to the corrected confidence is taken as the final result of scene detection.
  • each module in the image processing apparatus may be implemented in whole or in part by software, by hardware, or by a combination thereof.
  • the network interface may be an Ethernet card or a wireless network card, among others.
  • the above modules may be embedded in, or independent of, the processor in the server in hardware form, or stored in the memory of the server in software form, so that the processor can call them to perform the operations corresponding to each module.
  • a computer-readable storage medium is provided on which a computer program is stored.
  • when the computer program is executed by a processor, the operations of the image processing methods provided by the foregoing embodiments are implemented.
  • an electronic device includes a memory, a processor, and a computer program stored on the memory and executable on the processor.
  • when the processor executes the computer program, the operations of the image processing method provided by the foregoing embodiments are implemented.
  • An embodiment of the present application further provides a computer program product, which when executed on a computer, causes the computer to perform operations of the image processing methods provided by the foregoing embodiments.
  • Non-volatile memory may include read-only memory (ROM), programmable ROM (PROM), electrically programmable ROM (EPROM), electrically erasable programmable ROM (EEPROM), or flash memory.
  • Volatile memory can include random access memory (RAM), which is used as external cache memory.
  • RAM is available in various forms, such as static RAM (SRAM), dynamic RAM (DRAM), synchronous DRAM (SDRAM), double data rate SDRAM (DDR SDRAM), enhanced SDRAM (ESDRAM), synchronous link (Synchlink) DRAM (SLDRAM), memory-bus (Rambus) direct RAM (RDRAM), direct memory-bus dynamic RAM (DRDRAM), and memory-bus dynamic RAM (RDRAM).

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Image Analysis (AREA)

Abstract

This application relates to an image processing method and apparatus, an electronic device, and a computer-readable storage medium. Scene detection is performed on an image to obtain an initial scene detection result; position information at the time the image is taken is obtained; and the initial scene detection result is corrected according to the position information to obtain a corrected final scene detection result.

Description

Image processing method and apparatus, storage medium, and electronic device
Cross-Reference to Related Applications
This application claims priority to the Chinese patent application No. 201810489122.X, entitled "Image processing method and apparatus, storage medium, and electronic device" and filed with the Chinese Patent Office on May 21, 2018, the entire contents of which are incorporated herein by reference.
Technical Field
This application relates to the field of computer technology, and in particular to an image processing method and apparatus, a storage medium, and an electronic device.
Background
With the popularization of mobile terminals and the rapid development of the mobile Internet, the number of mobile terminal users keeps growing. The camera function has become one of the common applications of mobile terminals and is among the applications users use most frequently. Scene detection may be performed on an image during or after taking a picture, but traditional scene detection techniques still have some errors in detecting scenes.
Summary
Embodiments of the present application provide an image processing method and apparatus, a storage medium, and an electronic device, which can improve the accuracy of scene detection.
An image processing method includes:
performing scene detection on an image to obtain an initial scene detection result;
obtaining position information at the time the image is taken; and
correcting the initial scene detection result according to the position information to obtain a corrected final scene detection result.
An image processing apparatus includes:
a detection module configured to perform scene detection on an image to obtain an initial scene detection result;
a position determining module configured to obtain position information at the time the image is taken; and
a correction module configured to correct the initial scene detection result according to the position information to obtain a corrected final scene detection result.
A computer-readable storage medium has a computer program stored thereon which, when executed by a processor, implements the operations of the image processing method described above.
An electronic device includes a memory, a processor, and a computer program stored in the memory and executable on the processor; when the processor executes the computer program, the operations of the image processing method described above are performed.
According to the image processing method and apparatus, storage medium, and electronic device above, scene detection is performed on an image to obtain an initial scene detection result, position information at the time the image is taken is obtained, and the initial scene detection result is corrected according to the position information to obtain a corrected final scene detection result.
Brief Description of the Drawings
In order to describe the technical solutions in the embodiments of the present application or in the prior art more clearly, the accompanying drawings needed in the description of the embodiments or the prior art are briefly introduced below. Obviously, the drawings in the following description are only some embodiments of the present application; for those of ordinary skill in the art, other drawings can be obtained from these drawings without creative effort.
FIG. 1 is a diagram of the internal structure of an electronic device in an embodiment;
FIG. 2 is a flowchart of an image processing method in an embodiment;
FIG. 3 is a flowchart of the method in FIG. 2 for obtaining position information at the time an image is taken;
FIG. 4 is a flowchart of the method in FIG. 2 for correcting the initial scene detection result according to the position information to obtain a corrected final scene detection result;
FIG. 5 is a flowchart of the confidence calculation method in FIG. 4;
FIG. 6 is a schematic structural diagram of an image processing apparatus in an embodiment;
FIG. 7 is a schematic structural diagram of an image processing apparatus in another embodiment;
FIG. 8 is a schematic structural diagram of the correction module in FIG. 6.
Detailed Description
To make the objectives, technical solutions, and advantages of the present application clearer, the present application is described in further detail below with reference to the accompanying drawings and embodiments. It should be understood that the specific embodiments described here are only intended to explain the present application, not to limit it.
FIG. 1 is a schematic diagram of the internal structure of an electronic device in an embodiment. As shown in FIG. 1, the electronic device includes a processor, a memory, and a network interface connected through a system bus. The processor provides computing and control capabilities and supports the operation of the entire electronic device. The memory stores data, programs, and the like; at least one computer program is stored in the memory and can be executed by the processor to implement the scene detection method applicable to electronic devices provided in the embodiments of the present application. The memory may include a non-volatile storage medium, such as a magnetic disk, an optical disc, or a read-only memory (ROM), or a random-access memory (RAM). For example, in one embodiment, the memory includes a non-volatile storage medium and an internal memory. The non-volatile storage medium stores an operating system and a computer program, which can be executed by the processor to implement the image processing method provided in each of the following embodiments. The internal memory provides a high-speed cached runtime environment for the operating system and computer program in the non-volatile storage medium. The network interface may be an Ethernet card or a wireless network card, among others, and is used to communicate with external electronic devices. The electronic device may be a mobile phone, a tablet computer, a personal digital assistant, a wearable device, or the like.
In one embodiment, as shown in FIG. 2, an image processing method is provided. Taking the method as applied to the electronic device in FIG. 1 as an example, it includes:
Operation 220: perform scene detection on the image to obtain an initial scene detection result.
A user takes a picture with an electronic device (having a camera function), obtains the captured image, and performs scene detection on it. Specifically, a traditional scene detection algorithm is used to detect which scenes the image contains. The deep neural network model mainly used by scene detection algorithms is the convolutional neural network (CNN). For example, the scene category may be scenery, beach, blue sky, green grass, snow, fireworks, spotlight, text, portrait, baby, cat, dog, food, and so on. After scene detection is performed on the image, an initial scene detection result is obtained.
Operation 240: obtain position information at the time the image is taken.
Generally, the electronic device records the location of each shot, typically using GPS (Global Positioning System). For example, if a user takes a picture in Lianhuashan Park in Shenzhen, the address recorded for the captured image can be "Lianhuashan Park, Shenzhen". Given that address, the captured image has a relatively high probability of containing blue sky, green grass, portraits, and scenery, and a relatively low probability of containing, for example, beach or snow.
Operation 260: correct the initial scene detection result according to the position information to obtain a corrected final scene detection result.
From the address information, the probability of certain scenes appearing in the image is obtained, and the initial scene detection result is corrected in combination with it. For example, if the initial scene detection result is blue sky, green grass, and beach, then after correction the beach, which clearly has the smallest probability of appearing, should not be in the image; scene detection of the image thus yields blue sky and green grass, which are output as the final scene detection result.
In the embodiments of the present application, scene detection is performed on an image to obtain an initial scene detection result, position information at the time the image is taken is obtained, and the initial result is corrected according to the position information to obtain a corrected final scene detection result. On top of the scene detection method, this method combines an analysis of the position information at the time the image is taken; because each piece of position information corresponds to a fixed set of scenes, the rationality of the final scene detection result is further optimized and the accuracy of scene detection is improved.
In one embodiment, as shown in FIG. 3, obtaining the position information at the time the image is taken includes:
Operation 242: obtain address information at the time the image is taken.
Generally, the electronic device records the location of each shot, typically using GPS (Global Positioning System) to record the address information. The address information recorded by the electronic device is obtained.
Operation 244: obtain position information of the image according to the address information, the position information including the scene categories corresponding to the address information and the weights corresponding to those scene categories.
After the address information recorded by the electronic device is obtained, the position information of the image is obtained according to it. Corresponding scene categories and scene-category weights are matched for different address information in advance. Specifically, the matching may follow the results of a statistical analysis of a large number of image materials. For example, such analysis may show that when the address information reads "grassland", the scenes corresponding to that address are "green grass" with a weight of 9, "snow" with a weight of 7, "scenery" with a weight of 4, "blue sky" with a weight of 6, and "beach" with a weight of -8, the weights taking values in [-10, 10]. The larger the weight, the greater the probability that the scene appears in the image; the smaller the weight, the smaller that probability. The obtained position information includes the scene categories corresponding to the address information and the weights corresponding to those categories.
In the embodiments of the present application, the shooting address information of the image is obtained, and then the scene categories corresponding to that address information and their weights are obtained according to it. In this way the scene categories of the image can also be obtained through its shooting address information, so that they can be used to calibrate the initial scene detection result, ultimately improving the accuracy of scene detection.
In one embodiment, the method further includes: matching corresponding scene categories and scene-category weights for different address information in advance.
Corresponding scene categories and weights are matched for different address information in advance, and these data are stored in a database so they can be called at any time. Specifically, the matching may follow the results of a statistical analysis of a large number of image materials. For example, such analysis may show that when the address information reads "grassland", the corresponding scenes are "green grass" with a weight of 9, "snow" with a weight of 7, "scenery" with a weight of 4, "blue sky" with a weight of 6, and "beach" with a weight of -8, the weights taking values in [-10, 10]. The larger the weight, the greater the probability that the scene appears in the image, and vice versa. Each increase of 1 in the weight from 0 increases the confidence of the corresponding scene by 1%; likewise, each decrease of 1 from 0 decreases it by 1%.
In the embodiments of the present application, corresponding scene categories and weights are matched for different address information in advance according to the results of a statistical analysis of a large number of image materials. First, being derived from such an analysis, these results have high universality and accuracy. Second, predicting and calibrating the scene of the image according to them can ultimately improve the accuracy of scene detection.
In one embodiment, as shown in FIG. 4, operation 260 of correcting the initial scene detection result according to the position information to obtain a corrected final scene detection result includes:
Operation 262: calculate the confidence of the initial scene detection result according to the scene categories corresponding to the address information and the weights corresponding to those categories.
A traditional scene detection algorithm is used to detect which scenes the image contains, yielding the initial scene detection result. Specifically, the initial result includes the initial categories of scene detection and the confidence corresponding to each initial category. For example, scene detection on the captured image may find that the image contains "green grass" with a confidence of 70%, "blue sky" with a confidence of 80%, "snow" with a confidence of 70%, and "beach" with a confidence of 70%.
If the shooting address information of the image is "Xinjiang Ili Prairie", the scene categories corresponding to "grassland" and their weights are then obtained from the database: "green grass" with a weight of 9, "snow" with a weight of 7, "scenery" with a weight of 4, "blue sky" with a weight of 6, and "beach" with a weight of -8, the weights taking values in [-10, 10]. According to the weights of the scenes corresponding to the position information, the confidences in the initial scene detection result are enhanced or weakened.
The confidence of the initial scene detection result is calculated according to the scene categories corresponding to the address information and their weights. Specifically, the weight of the scene category "green grass" obtained from the address information is 9, so the 70% confidence of "green grass" in the initial result becomes 70% × (1 + 9%) = 0.763; that is, after "green grass" is corrected according to the address information, its confidence is enhanced, and the recalculated confidence is 0.763. Similarly, the weight of "beach" obtained from the address information is -8, so the 70% confidence of "beach" becomes 70% × (1 − 8%) = 0.644; the confidence of "beach" is weakened, and the recalculated confidence is 0.644. The same calculation is performed in turn on every scene category in the initial result to obtain the recalculated confidences.
Operation 264: take the initial scene detection results whose confidence exceeds a preset threshold as the final scene detection result.
The preset threshold is set according to the initial scene detection result; for example, in this embodiment it can be set to 0.7, though other reasonable values may also be set according to the initial result. The initial results whose recalculated confidence exceeds the preset threshold are taken as the final result. In this embodiment "beach" is therefore eliminated, achieving the effect of correcting the initial scene detection result; the final result is "green grass", "blue sky", and "snow".
In the embodiments of the present application, the confidence of the initial scene detection result is calculated according to the scene categories corresponding to the address information and their weights, and the initial results whose confidence exceeds a preset threshold are taken as the final result. Because the confidences of the scene categories in the image are recalculated, the scene of the image is predicted and calibrated through the position information of the image, which can ultimately improve the accuracy of scene detection.
In one embodiment, as shown in FIG. 5, operation 262 of calculating the confidence of the initial scene detection result according to the scene categories corresponding to the address information and their weights includes:
Operation 262a: from the scene categories corresponding to the address information, obtain the scene categories that are the same as the initial categories of scene detection;
Operation 262b: obtain the weights corresponding to those same scene categories;
Operation 262c: calculate the corrected confidence according to the percentage corresponding to each weight and the confidence corresponding to the initial category of scene detection.
Specifically, the weight of the scene category "green grass" obtained from the address information is 9, so the 70% confidence of "green grass" in the initial result becomes 70% × (1 + 9%) = 0.763; the confidence of "green grass" is enhanced, and the recalculated value is 0.763. Similarly, the weight of "beach" is -8, so the 70% confidence of "beach" becomes 70% × (1 − 8%) = 0.644; the confidence of "beach" is weakened to 0.644. The same calculation is performed in turn on every scene category in the initial result to obtain the recalculated confidences.
In the embodiments of the present application, the process of calculating the confidence of the initial scene detection result according to the scene categories corresponding to the address information and their weights is described in detail. Through this recalculation, confidences of higher accuracy can be obtained, so that the more accurate results can be screened out of the initial scene recognition result and output as the final scene recognition result.
In one embodiment, taking the initial scene detection results whose confidence exceeds a preset threshold as the final scene detection result includes:
determining whether the corrected confidence exceeds a preset threshold; and
when the determination is yes, taking the initial category of scene detection corresponding to the corrected confidence as the final scene detection result.
In the embodiments of the present application, the preset threshold of the corrected confidence is set according to the initial scene detection result; generally, the lowest confidence among the top 3 scene categories is taken as the preset threshold. When the initial result contains too many scene categories, for example 10, the lowest confidence among the top 5 scene categories is correspondingly taken as the preset threshold of the corrected confidence.
After the new confidences are obtained by recalculation, whether each corrected confidence exceeds the preset threshold is determined; when it does, the corresponding initial category of scene detection is taken as the final result. A corrected confidence exceeding the preset threshold means that correcting the initial result according to the address information enhanced the confidence. Under this double verification, the initial categories of scene detection whose confidence exceeds the preset threshold can serve as the final scene detection result.
在一个具体的实施例中,提供了一种图像处理方法,以该方法应用于图1中的电子设备为例进行说明,包括:
操作一:用户使用电子设备(具有拍照功能)进行拍照,获取拍照之后的图像。采用传统的场景检测算法对图像进行场景检测,检测出图像中包含哪种场景类别。场景类别可以是风景、海滩、蓝天、绿草、雪景、烟火、聚光灯、文本、人像、婴儿、猫、狗、美食等。所检测出的场景类别及初始类别对应的置信度作为场景检测初始结果;
操作二:电子设备会对每次拍照的地点进行记录,一般采用GPS(Global Positioning System,全球定位系统)来进行记录地址信息。获取电子设备所记录的地址信息;
操作三:从数据库中获取与该地址信息所匹配的场景类别及场景类别对应的权值。该数据库中存储了预先为不同的地址信息匹配的对应的场景类别及场景类别对应的权值;
操作四:从地址信息对应的场景类别中获取与场景检测的初始类别相同的场景类别;获取相同的场景类别对应的权值;根据权值对应的百分比及场景检测的初始类别对应的置信度来进行计算校正之后的置信度;
操作五:判断校正之后的置信度是否超过预设阈值;当判断结果为是,则将校正之后的置信度对应的场景检测的初始类别作为场景检测最终结果。
本申请实施例中,根据地址信息对应的场景类别及场景类别对应的权值,对场景检测初始结果进行计算置信度。将置信度超过预设阈值的场景检测初始结果,作为场景检测最终结果。因为重新计算了图像中场景类别的置信度,所以就实现了通过图像的位置信息对图像的场景进行预测和校准,能够最终提高了场景检测的准确度。
It should be understood that although the operations in the above flowcharts are displayed in sequence as indicated by the arrows, these operations are not necessarily executed in that order. Unless explicitly stated herein, there is no strict ordering constraint on the execution of these operations, and they may be executed in other orders. Moreover, at least some of the operations in the above figures may include multiple sub-operations or stages, which are not necessarily completed at the same moment but may be executed at different moments; their execution order is also not necessarily sequential, and they may be executed in turn or alternately with other operations or with at least some of the sub-operations or stages of other operations.
In one embodiment, as shown in FIG. 6, an image processing apparatus 600 is provided. The apparatus includes a detection module 620, a position determination module 640, and a correction module 660, wherein:

the detection module 620 is configured to perform scene detection on an image to obtain an initial scene detection result;

the position determination module 640 is configured to obtain position information of the image at the time of shooting; and

the correction module 660 is configured to correct the initial scene detection result according to the position information to obtain a corrected final scene detection result.
In one embodiment, the position determination module 640 is further configured to obtain address information of the image at the time of shooting, and to obtain the position information of the image according to the address information, the position information including scene categories corresponding to the address information and weights corresponding to the scene categories.

In one embodiment, as shown in FIG. 7, an image processing apparatus 600 is provided. The apparatus further includes a presetting module 610, configured to match, in advance, corresponding scene categories and weights of the scene categories for different address information.
In one embodiment, as shown in FIG. 8, the correction module 660 further includes:

a confidence calculation module 662, configured to calculate confidences for the initial scene detection result according to the scene categories corresponding to the address information and the weights corresponding to the scene categories; and

a final scene detection result determination module 664, configured to take the initial scene detection results whose confidence exceeds a preset threshold as the final scene detection results.

In one embodiment, the confidence calculation module 662 is further configured to obtain, from the scene categories corresponding to the address information, a scene category that is the same as an initial category of the scene detection; obtain the weight corresponding to the same scene category; and calculate a corrected confidence according to the percentage corresponding to the weight and the confidence corresponding to the initial category of the scene detection.

In one embodiment, the final scene detection result determination module 664 is further configured to determine whether the corrected confidence exceeds the preset threshold and, when the determination result is yes, to take the initial category of the scene detection corresponding to the corrected confidence as the final scene detection result.
Each of the modules in the above image processing apparatus may be implemented in whole or in part by software, hardware, or a combination thereof. The network interface may be an Ethernet card, a wireless network card, or the like. The above modules may be embedded, in hardware form, in or independent of a processor in a server, or stored, in software form, in a memory in the server, so that the processor can invoke and execute the operations corresponding to each of the above modules.
In one embodiment, a computer-readable storage medium is provided, on which a computer program is stored. When the computer program is executed by a processor, the operations of the image processing method provided in the above embodiments are implemented.

In one embodiment, an electronic device is provided, including a memory, a processor, and a computer program stored on the memory and executable on the processor. When the processor executes the computer program, the operations of the image processing method provided in the above embodiments are implemented.

An embodiment of the present application further provides a computer program product which, when run on a computer, causes the computer to execute the operations of the image processing method provided in the above embodiments.
Any reference to memory, storage, a database, or another medium used in the present application may include non-volatile and/or volatile memory. Suitable non-volatile memory may include read-only memory (ROM), programmable ROM (PROM), electrically programmable ROM (EPROM), electrically erasable programmable ROM (EEPROM), or flash memory. Volatile memory may include random access memory (RAM), which serves as external cache memory. By way of illustration and not limitation, RAM is available in many forms, such as static RAM (SRAM), dynamic RAM (DRAM), synchronous DRAM (SDRAM), double data rate SDRAM (DDR SDRAM), enhanced SDRAM (ESDRAM), synchlink DRAM (SLDRAM), Rambus direct RAM (RDRAM), direct Rambus dynamic RAM (DRDRAM), and Rambus dynamic RAM (RDRAM).
The above embodiments express only several implementations of the present application, and their descriptions are relatively specific and detailed, but they should not therefore be understood as limiting the scope of the patent of the present application. It should be noted that, for those of ordinary skill in the art, several variations and improvements may be made without departing from the concept of the present application, all of which fall within the protection scope of the present application. Therefore, the protection scope of the patent of the present application shall be subject to the appended claims.

Claims (18)

  1. An image processing method, comprising:
    performing scene detection on an image to obtain an initial scene detection result;
    obtaining position information of the image at the time of shooting; and
    correcting the initial scene detection result according to the position information to obtain a corrected final scene detection result.
  2. The method according to claim 1, wherein the obtaining position information of the image at the time of shooting comprises:
    obtaining address information of the image at the time of shooting; and
    obtaining the position information of the image according to the address information, the position information comprising a scene category corresponding to the address information and a weight corresponding to the scene category.
  3. The method according to claim 2, further comprising:
    matching, in advance, corresponding scene categories and weights of the scene categories for different address information.
  4. The method according to claim 2, wherein the correcting the initial scene detection result according to the position information to obtain a corrected final scene detection result comprises:
    calculating a confidence for the initial scene detection result according to the scene category corresponding to the address information and the weight corresponding to the scene category; and
    taking an initial scene detection result whose confidence exceeds a preset threshold as the final scene detection result.
  5. The method according to claim 4, wherein the initial scene detection result comprises an initial category of the scene detection and a confidence corresponding to the initial category of the scene detection.
  6. The method according to claim 5, wherein the calculating a confidence for the initial scene detection result according to the scene category corresponding to the address information and the weight corresponding to the scene category comprises:
    obtaining, from the scene category corresponding to the address information, a scene category that is the same as the initial category of the scene detection;
    obtaining a weight corresponding to the same scene category; and
    calculating a corrected confidence according to a percentage corresponding to the weight and the confidence corresponding to the initial category of the scene detection.
  7. The method according to claim 6, wherein the taking an initial scene detection result whose confidence exceeds a preset threshold as the final scene detection result comprises:
    determining whether the corrected confidence exceeds the preset threshold; and
    when the determination result is yes, taking the initial category of the scene detection corresponding to the corrected confidence as the final scene detection result.
  8. The method according to claim 6, wherein the preset threshold is a threshold set according to the initial scene detection result.
  9. An image processing apparatus, comprising:
    a detection module, configured to perform scene detection on an image to obtain an initial scene detection result;
    a position determination module, configured to obtain position information of the image at the time of shooting; and
    a correction module, configured to correct the initial scene detection result according to the position information to obtain a corrected final scene detection result.
  10. A computer-readable storage medium on which a computer program is stored, wherein, when the computer program is executed by a processor, the operations of the image processing method according to any one of claims 1 to 8 are implemented.
  11. An electronic device, comprising a memory, a processor, and a computer program stored on the memory and executable on the processor, wherein the processor, when executing the computer program, implements the following operations:
    performing scene detection on an image to obtain an initial scene detection result;
    obtaining position information of the image at the time of shooting; and
    correcting the initial scene detection result according to the position information to obtain a corrected final scene detection result.
  12. The electronic device according to claim 11, wherein the processor, when executing the computer program, implements the following operations, the obtaining position information of the image at the time of shooting comprising:
    obtaining address information of the image at the time of shooting; and
    obtaining the position information of the image according to the address information, the position information comprising a scene category corresponding to the address information and a weight corresponding to the scene category.
  13. The electronic device according to claim 12, wherein the processor, when executing the computer program, implements the following operation:
    matching, in advance, corresponding scene categories and weights of the scene categories for different address information.
  14. The electronic device according to claim 12, wherein the processor, when executing the computer program, implements the following operations, the correcting the initial scene detection result according to the position information to obtain a corrected final scene detection result comprising:
    calculating a confidence for the initial scene detection result according to the scene category corresponding to the address information and the weight corresponding to the scene category; and
    taking an initial scene detection result whose confidence exceeds a preset threshold as the final scene detection result.
  15. The electronic device according to claim 14, wherein the processor, when executing the computer program, implements the following operation:
    the initial scene detection result comprises an initial category of the scene detection and a confidence corresponding to the initial category of the scene detection.
  16. The electronic device according to claim 15, wherein the processor, when executing the computer program, implements the following operations, the calculating a confidence for the initial scene detection result according to the scene category corresponding to the address information and the weight corresponding to the scene category comprising:
    obtaining, from the scene category corresponding to the address information, a scene category that is the same as the initial category of the scene detection;
    obtaining a weight corresponding to the same scene category; and
    calculating a corrected confidence according to a percentage corresponding to the weight and the confidence corresponding to the initial category of the scene detection.
  17. The electronic device according to claim 16, wherein the processor, when executing the computer program, implements the following operations, the taking an initial scene detection result whose confidence exceeds a preset threshold as the final scene detection result comprising:
    determining whether the corrected confidence exceeds the preset threshold; and
    when the determination result is yes, taking the initial category of the scene detection corresponding to the corrected confidence as the final scene detection result.
  18. The electronic device according to claim 17, wherein the processor, when executing the computer program, implements the following operation:
    the preset threshold is a threshold set according to the initial scene detection result.
PCT/CN2019/086022 2018-05-21 2019-05-08 Image processing method and apparatus, storage medium, and electronic device WO2019223538A1 (zh)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN201810489122.XA CN108600634B (zh) 2018-05-21 2018-05-21 Image processing method and apparatus, storage medium, and electronic device
CN201810489122.X 2018-05-21

Publications (1)

Publication Number Publication Date
WO2019223538A1 true WO2019223538A1 (zh) 2019-11-28

Family

ID=63632605

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2019/086022 WO2019223538A1 (zh) 2018-05-21 2019-05-08 Image processing method and apparatus, storage medium, and electronic device

Country Status (2)

Country Link
CN (1) CN108600634B (zh)
WO (1) WO2019223538A1 (zh)

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108600634B (zh) * 2018-05-21 2020-07-21 Oppo广东移动通信有限公司 Image processing method and apparatus, storage medium, and electronic device
CN113409041B (zh) * 2020-03-17 2023-08-04 华为技术有限公司 Electronic card selection method, apparatus, terminal, and storage medium

Citations (6)

Publication number Priority date Publication date Assignee Title
CN103632141A (zh) * 2013-11-28 2014-03-12 小米科技有限责任公司 Person recognition method, apparatus, and terminal device
CN106095800A (zh) * 2016-05-27 2016-11-09 珠海市魅族科技有限公司 Information recommendation method and terminal
CN107122189A (zh) * 2017-04-27 2017-09-01 北京小米移动软件有限公司 Image display method and apparatus
CN107734251A (zh) * 2017-09-29 2018-02-23 维沃移动通信有限公司 Photographing method and mobile terminal
CN107888823A (zh) * 2017-10-30 2018-04-06 维沃移动通信有限公司 Shooting processing method, apparatus, and system
CN108600634A (zh) * 2018-05-21 2018-09-28 Oppo广东移动通信有限公司 Image processing method and apparatus, storage medium, and electronic device

Family Cites Families (7)

Publication number Priority date Publication date Assignee Title
JP2007295338A (ja) * 2006-04-26 2007-11-08 Seiko Epson Corp Shooting date/time estimation device, shooting date/time correction device, image capturing device, shooting date/time correction method, shooting date/time correction program, and recording medium recording the program
US8665340B2 (en) * 2010-04-29 2014-03-04 Intellectual Ventures Fund 83 Llc Indoor/outdoor scene detection using GPS
CN102054166B (zh) * 2010-10-25 2016-04-27 北京理工大学 Scene recognition method for outdoor augmented reality systems
CN102694826B (zh) * 2011-03-22 2018-09-07 百度在线网络技术(北京)有限公司 Device and method for obtaining shared objects related to a real scene
WO2012165088A1 (ja) * 2011-05-31 2012-12-06 富士フイルム株式会社 Imaging device and program
CN104301613B (zh) * 2014-10-16 2016-03-02 深圳市中兴移动通信有限公司 Mobile terminal and shooting method thereof
CN107835364A (zh) * 2017-10-30 2018-03-23 维沃移动通信有限公司 Photographing assistance method and mobile terminal

Also Published As

Publication number Publication date
CN108600634A (zh) 2018-09-28
CN108600634B (zh) 2020-07-21

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application — Ref document number: 19806693; Country of ref document: EP; Kind code of ref document: A1
NENP Non-entry into the national phase — Ref country code: DE
122 Ep: pct application non-entry in european phase — Ref document number: 19806693; Country of ref document: EP; Kind code of ref document: A1