WO2020001219A1 - Image processing method and apparatus, storage medium, and electronic device - Google Patents

Image processing method and apparatus, storage medium, and electronic device

Info

Publication number
WO2020001219A1
WO2020001219A1 (PCT/CN2019/088963)
Authority
WO
WIPO (PCT)
Prior art keywords
image
detected
target detection
focus
detection result
Prior art date
Application number
PCT/CN2019/088963
Other languages
English (en)
Chinese (zh)
Inventor
陈岩
Original Assignee
Oppo广东移动通信有限公司
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Oppo广东移动通信有限公司
Publication of WO2020001219A1

Links

Images

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 10/00 Arrangements for image or video recognition or understanding
    • G06V 10/20 Image preprocessing
    • G06V 10/25 Determination of region of interest [ROI] or a volume of interest [VOI]
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 10/00 Arrangements for image or video recognition or understanding
    • G06V 10/20 Image preprocessing
    • G06V 10/26 Segmentation of patterns in the image field; Cutting or merging of image elements to establish the pattern region, e.g. clustering-based techniques; Detection of occlusion
    • G06V 10/267 Segmentation of patterns in the image field; Cutting or merging of image elements to establish the pattern region, e.g. clustering-based techniques; Detection of occlusion by performing operations on regions, e.g. growing, shrinking or watersheds
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 2201/00 Indexing scheme relating to image or video recognition or understanding
    • G06V 2201/07 Target detection

Definitions

  • the present application relates to the field of computer technology, and in particular, to an image processing method and device, a storage medium, and an electronic device.
  • Target detection, also called target extraction, is a form of image segmentation based on the geometric and statistical characteristics of the target; it combines target segmentation and recognition into a single operation. Its accuracy and real-time performance are important capabilities of the entire system.
  • The goal of target detection is generally to determine the position and size of the target.
  • Object detection algorithms based on deep learning tend to be slow and difficult to run in real time. Therefore, how to perform target detection on an image in real time has become an urgent problem to be solved.
  • The embodiments of the present application provide an image processing method and device, a storage medium, and an electronic device, which can improve the efficiency of target detection on an image while ensuring its accuracy.
  • An image processing method includes: acquiring an image to be detected; determining a focus area of the image to be detected; and performing target detection on the focus area of the image to be detected to obtain a first target detection result of the image to be detected.
  • An image processing device includes:
  • a to-be-detected image acquisition module configured to obtain an image to be detected;
  • a focus area determining module configured to determine a focus area of the image to be detected
  • the first target detection module is configured to perform target detection on a focus area of the image to be detected to obtain a first target detection result of the image to be detected.
  • a computer-readable storage medium has stored thereon a computer program that, when executed by a processor, implements the operations of the image processing method described above.
  • An electronic device includes a memory and a processor.
  • the memory stores a computer program capable of running on the processor, and when the processor executes the computer program, the operations of the image processing method described above are performed.
  • the image processing method and device, storage medium, and electronic device obtain an image to be detected, determine a focus area of the image to be detected, perform target detection on the focus area of the image to be detected, and obtain a first target detection result of the image to be detected.
  • FIG. 1 is an internal structural diagram of an electronic device in an embodiment
  • FIG. 2 is a flowchart of an image processing method according to an embodiment
  • FIG. 3 is a flowchart of a method for determining a focus area of an image to be detected in FIG. 2;
  • FIG. 4 is a schematic diagram of a method for obtaining pixels within a preset range from the focus on the image to be detected
  • FIG. 6 is a schematic structural diagram of an image processing apparatus according to an embodiment
  • FIG. 7 is a schematic structural diagram of a focus area determination module in FIG. 6;
  • FIG. 8 is a schematic structural diagram of an image processing apparatus in another embodiment.
  • FIG. 1 is a schematic diagram of an internal structure of an electronic device in an embodiment.
  • the electronic device includes a processor, a memory, and a network interface connected through a system bus.
  • the processor is used to provide computing and control capabilities to support the operation of the entire electronic device.
  • The memory is used to store data, programs, and the like. At least one computer program is stored in the memory, and the computer program can be executed by the processor to implement the image processing method applicable to the electronic device provided in the embodiments of the present application.
  • The memory may include a non-volatile storage medium, such as a magnetic disk, an optical disc, or a read-only memory (ROM), and may also include a random-access memory (RAM).
  • the memory includes a non-volatile storage medium and an internal memory.
  • the non-volatile storage medium stores an operating system and a computer program.
  • the computer program can be executed by a processor to implement an image processing method provided by each of the following embodiments.
  • The internal memory provides a cached operating environment for the operating system and the computer programs in the non-volatile storage medium.
  • the network interface may be an Ethernet card or a wireless network card, and is used to communicate with external electronic devices.
  • The electronic device may be a mobile phone, a tablet computer, a personal digital assistant, or a wearable device.
  • an image processing method is provided.
  • the method is applied to the electronic device in FIG. 1 as an example, and includes:
  • Operation 220 Acquire an image to be detected.
  • the user uses an electronic device (with a photographing function) to take a picture and obtain an image to be detected.
  • the image to be detected may be a photo preview screen, or a photo saved to an electronic device after the photo is taken.
  • the image to be detected may also be an image obtained from a video captured by a user using an electronic device.
  • the image to be detected refers to the image that needs to be detected.
  • The targets included in the image to be detected may be portraits, babies, cats, dogs, food, and so on; this list is not exhaustive, and many other targets are possible.
  • Operation 240 Determine a focus area of the image to be detected.
  • The shooting focus generally refers to the visual center of interest that reflects the subject of the picture, such as a running deer or a child standing in a crowd; this is called the visual center focus of the picture.
  • A picture may also contain multiple shooting focal points, which contrast with one another while differing in emphasis.
  • the focus area of the image to be detected can be obtained according to the shooting focus.
  • the focus area is a closed area composed of pixels around the focus.
  • the focus area may be an area included in a portrait recognition frame surrounding the focus.
  • the focus area may also be an area composed of pixels within a preset range surrounding the focus.
  • Target detection is performed on the focus area of the image to be detected to obtain a first target detection result of the image to be detected.
  • target detection is performed on the focus area, and the first target detection result obtained may be a portrait, a baby, a cat, a dog, a food, and the like.
  • In this embodiment, an image to be detected is acquired, a focus area of the image to be detected is determined, and target detection is performed on the focus area to obtain a target detection result of the image to be detected. When a user takes a picture, the user generally focuses on the location of the target to be captured. Therefore, when performing target detection on the image to be detected, the focus area is determined first, and target detection is then performed directly on the focus area; there is no need to perform target detection on the entire image, which improves target detection efficiency while ensuring target detection accuracy.
  • Target detection algorithms based on deep learning, such as SSD (Single Shot MultiBox Detector), often have to sample an image only once every few frames for detection, and cannot perform target detection on each frame of a video in real time.
  • In this embodiment, target detection is performed only on the focus area of the image to be detected; that is, only the focus area is input for target detection. Compared with inputting a whole image, this greatly reduces the amount of input information, saving resources and improving efficiency.
  • As a result, real-time target detection can be achieved for each frame of the video while the accuracy of target detection is ensured.
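The saving described above comes from handing the detector only the focus-area crop instead of the full frame. A minimal sketch of that cropping step, using a plain list-of-rows image and hypothetical names (nothing here is code from the patent itself):

```python
def crop_focus_area(image, box):
    """Return only the focus-area pixels (x0, y0, x1, y1) of an image
    stored as a list of rows; a detector would then run on this crop
    instead of on the whole frame."""
    x0, y0, x1, y1 = box
    return [row[x0:x1] for row in image[y0:y1]]

# A 4x6 "image" whose pixel values encode their own coordinates,
# so the crop is easy to verify by eye.
image = [[(y, x) for x in range(6)] for y in range(4)]
crop = crop_focus_area(image, (2, 1, 4, 3))  # columns 2-3 of rows 1-2
```

The crop, not the full image, would then be fed to whatever detector (SSD or otherwise) the system uses, which is where the reduction in input size comes from.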
  • Operation 240, determining a focus area of the image to be detected, includes:
  • Operation 242 Obtain a focus when the image to be detected is captured.
  • When a user uses an electronic device (with a photographing function) to take a picture, the camera generally focuses automatically, or the user can focus manually (for example, by touching the display of the electronic device).
  • The focus generated in either of these two ways is obtained from the image to be detected.
  • the image to be detected may also be an image obtained from a video captured by a user using an electronic device.
  • Operation 244 Obtain a pixel within a preset range from the focus on the image to be detected.
  • a focus area is formed by pixels within a preset range.
  • the preset range may be a preset closed geometry range.
  • For example, the preset range can be a preset closed geometric range centered on the focus, such as a triangle, a quadrilateral or other polygon, a circle, or a sector.
  • Alternatively, all pixels whose values lie within a certain range near the focus can be collected, and a rectangular box can then be drawn to enclose these pixels.
  • the focus area is determined from the focus when the image to be detected is taken, and pixels within a preset range are obtained with the focus at the center, and the focus area is formed by pixels within the preset range.
  • The focus area generally contains the shooting target of the entire image, and the focus area obtained through the above operation is more accurate, so that, to a certain extent, subsequent target detection can be performed only in the focus area of the image to be detected and the resulting first target detection result can serve as the target detection result of the entire image without loss of accuracy.
  • obtaining pixels within a preset range from the focus on the image to be detected includes:
  • The preset range may be a circular range formed by a preset radius centered on the focus. For example, if the width of the image to be detected is W and its height is H, the focus a is taken as the center of a circle with radius b to form a circular area.
  • The preset radius can be set according to historical data. For example, by performing target detection and analysis on a large number of sample images, the range in which targets typically appear can be obtained; this range may be a circle centered on the focus with a radius of W/3 or H/3. Referring to this historical data, the preset radius can be set to W/3 or H/3, and the pixels within the preset radius of the focus are then obtained. In other words, the preset radius can be obtained by analyzing historical data.
  • When the focus lies near the image border, the circle of preset radius centered on the focus may partially exceed the range of the image to be detected; in that case, the border of the image to be detected prevails.
  • the preset radius is obtained by performing target detection and analysis on a large number of sample images, so it is universal to a certain extent. Then, the pixels within the preset radius are obtained from the focus on the image to be detected. Thus, the focus area is formed by pixels within a preset range.
  • The focus area obtained through the above operations is more accurate, so that, to a certain extent, subsequent target detection can be performed only in the focus area of the image to be detected and the resulting first target detection result can serve as the target detection result of the entire image without loss of accuracy.
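Operation 244 with a circular preset range can be sketched as follows. The pure-Python image representation and the W/3 radius are illustrative assumptions drawn from the example above, not an implementation from the patent:

```python
def pixels_in_radius(width, height, focus, radius):
    """Collect the coordinates within `radius` of the focus; positions
    of the circle that would fall outside the image are simply skipped,
    so the image border prevails, as described above."""
    fx, fy = focus
    return [(x, y)
            for y in range(height)
            for x in range(width)
            if (x - fx) ** 2 + (y - fy) ** 2 <= radius ** 2]

# A 9x9 image with the focus near a corner: the circle is clipped
# by the image border rather than extending past it.
width, height = 9, 9
focus_area = pixels_in_radius(width, height, (1, 1), width // 3)
```

The focus area is then the set of returned pixels, and only that region is handed to the detector.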
  • obtaining pixels within a preset range from the focus on the image to be detected includes:
  • the pixels in the preset rectangular range are obtained from the focus on the image to be detected.
  • the preset rectangle is centered on the focus, the width of the preset rectangle is a preset multiple of the width of the image to be detected, the height of the preset rectangle is a preset multiple of the height of the image to be detected, and the preset multiple is less than 1.
  • The preset rectangular range is the range of the target detection rectangular frame. The preset rectangle is centered on the focus a and generally contains the target; its width r is a preset multiple of the width of the image to be detected, and its height d is a preset multiple of the height of the image to be detected.
  • The preset multiple is less than 1, because a preset multiple greater than or equal to 1 would make the preset rectangle completely cover, or exceed, the image to be detected. Limiting the preset multiple to less than 1 ensures that the preset rectangle covers only part of the image to be detected, so that target detection is performed on the preset rectangular area rather than on the entire image, improving detection efficiency and saving resources.
  • The preset multiple can be set based on historical data. For example, by performing target detection analysis on a large number of sample images, the range in which targets typically appear can be obtained; this range may be a rectangle centered on the focus whose width is 1/2 of the width of the image to be detected and whose height is 1/2 of its height. Referring to this historical data, the preset multiple can be set to 1/2. In other words, the preset multiple can be obtained by analyzing historical data.
  • When the focus lies near the image border, the preset rectangle may partially exceed the range of the image to be detected; in that case, the edge of the preset rectangle is determined by the edge of the image to be detected.
  • the width of the preset rectangle is a preset multiple of the width of the image to be detected
  • the height of the preset rectangle is a preset multiple of the height of the image to be detected.
  • the preset multiple is obtained by performing target detection analysis on a large number of sample images, so it is universal to a certain extent. Then, the pixels within the preset rectangular range are obtained from the focus on the image to be detected. Thus, the focus area is formed by the pixels in the preset rectangle.
  • The focus area obtained through the above operations is more accurate, so that, to a certain extent, subsequent target detection can be performed only in the focus area of the image to be detected and the resulting first target detection result can serve as the target detection result of the entire image without loss of accuracy.
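One way to realize the preset-rectangle variant is sketched below, with the preset multiple of 1/2 taken from the example above; the clamping to the image edge implements the border rule just described (the function name and representation are illustrative):

```python
def preset_rectangle(width, height, focus, multiple=0.5):
    """Rectangle centered on the focus whose width and height are a
    preset multiple (< 1) of the image width and height; edges that
    would fall outside the image are clamped to the image border."""
    fx, fy = focus
    rw, rh = int(width * multiple), int(height * multiple)
    x0 = max(0, fx - rw // 2)
    y0 = max(0, fy - rh // 2)
    x1 = min(width, fx + rw - rw // 2)
    y1 = min(height, fy + rh - rh // 2)
    return x0, y0, x1, y1

centered = preset_rectangle(100, 80, (50, 40))  # focus in the middle
clipped = preset_rectangle(100, 80, (10, 40))   # focus near the left edge
```

In the clipped case the rectangle simply loses the part that would lie outside the image, matching the rule that the image edge determines the rectangle edge.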
  • performing target detection on a focus area of an image to be detected, and obtaining a first target detection result of the image to be detected includes:
  • Operation 270 When the image to be detected is an image obtained from a video, use the background difference method to perform target detection on the image to be detected to obtain a second target detection result of the image to be detected.
  • the background difference method is a universal method for motion segmentation of still scenes. It performs a difference operation between the currently acquired image frame and the background image to obtain a grayscale image of the target motion area, and thresholds the grayscale image to extract the motion area. In addition, to avoid the influence of changes in ambient lighting, the background image is updated according to the currently acquired image frame.
  • the background difference method is used to perform target detection on the current frame image, that is, the image to be detected, to obtain a second target detection result of the image to be detected.
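The background-difference operation just described can be sketched as below for grayscale frames stored as lists of rows. The threshold value is an arbitrary assumption, and a real implementation would also update the background model over time, as the text notes:

```python
def background_difference(frame, background, threshold=30):
    """Subtract the background image from the current frame pixel by
    pixel and threshold the absolute difference: 1 marks the motion
    (foreground) region, 0 the still background."""
    h, w = len(frame), len(frame[0])
    return [[1 if abs(frame[y][x] - background[y][x]) > threshold else 0
             for x in range(w)]
            for y in range(h)]

background = [[10] * 5 for _ in range(4)]      # static 4x5 scene
frame = [row[:] for row in background]
frame[1][2], frame[2][2] = 200, 220            # a small moving object
mask = background_difference(frame, background)
```

The thresholded mask is the extracted motion area, from which the second target detection result is derived.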
  • Operation 280 Calibrate the first target detection result of the image to be detected based on the second target detection result of the image to be detected to obtain the target detection result of the image to be detected.
  • The background difference method is mainly used for extracting the foreground against a stationary background; its main principle is to subtract the background image from the current frame.
  • The algorithm used in the background difference method is relatively simple, and it detects targets from a different angle than the focus-area detection of the embodiments of the present application, which yields the first target detection result.
  • Therefore, the second target detection result of the to-be-detected image obtained by the background difference method is used to calibrate the first target detection result, producing the target detection result of the to-be-detected image.
  • For example, if the first target detection result of the image to be detected is a portrait, and the second target detection result obtained by the background difference method is also a portrait, it can be directly concluded that the target detection result of the image to be detected is a portrait.
  • Because the algorithm used by the background difference method is relatively simple and detects targets from a different angle than the focus-area detection that produces the first target detection result, calibrating the first target detection result with the second target detection result obtained by the background difference method improves the accuracy of the final target detection result.
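The calibration of operation 280 admits a very simple form when both detectors return a label. The agreement flag and the keep-first policy on disagreement are assumptions, since the text only spells out the agreeing case:

```python
def calibrate(first_result, second_result):
    """Cross-check the focus-area result against the motion-based one.
    Agreement confirms the label; on disagreement the focus-area result
    is kept but flagged as unconfirmed (one possible policy -- the
    disagreeing case is not fixed by the text)."""
    confirmed = first_result == second_result
    return first_result, confirmed

label, ok = calibrate("portrait", "portrait")   # both methods agree
```

When the two results agree, as in the portrait example above, the shared label is taken directly as the target detection result of the image.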
  • performing target detection on the focus area of the image to be detected and obtaining the first target detection result of the image to be detected includes:
  • the to-be-detected image is an image obtained from a video
  • the to-be-detected image is subjected to target detection using an inter-frame difference method to obtain a second target detection result of the to-be-detected image
  • The inter-frame difference method subtracts the corresponding pixel values of two adjacent frames, or of three frames, in an image sequence, and then thresholds the difference image to extract the motion area in the image.
  • The inter-frame difference method detects targets from a different angle than the focus-area detection of the embodiments of the present application, which yields the first target detection result. Therefore, the second target detection result of the to-be-detected image obtained by the inter-frame difference method is used to calibrate the first target detection result, producing the target detection result of the to-be-detected image.
  • For example, if the first target detection result of the to-be-detected image is a portrait, and the second target detection result obtained by the inter-frame difference method is also a portrait, the target detection result of the image to be detected is a portrait.
  • Because the algorithm adopted by the inter-frame difference method is relatively simple and detects targets from a different angle than the focus-area detection that produces the first target detection result, calibrating the first target detection result with the second target detection result obtained by the inter-frame difference method improves the accuracy of the final target detection result.
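A three-frame variant of the inter-frame difference method can be sketched as follows (a two-frame variant simply drops one of the two comparisons); the frame layout and threshold are illustrative assumptions:

```python
def three_frame_difference(prev_f, curr_f, next_f, threshold=25):
    """Mark a pixel as motion only if it differs from BOTH the previous
    and the next frame; requiring both differences suppresses the
    ghosting that a plain two-frame difference leaves behind."""
    h, w = len(curr_f), len(curr_f[0])
    mask = [[0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            d_prev = abs(curr_f[y][x] - prev_f[y][x]) > threshold
            d_next = abs(next_f[y][x] - curr_f[y][x]) > threshold
            mask[y][x] = 1 if (d_prev and d_next) else 0
    return mask

blank = [[0] * 4 for _ in range(3)]
middle = [row[:] for row in blank]
middle[1][1] = 255      # object visible only in the middle frame
mask = three_frame_difference(blank, middle, blank)
```

The resulting mask is the extracted motion area used as the second target detection result in this embodiment.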
  • an image processing method is provided.
  • the method is applied to the electronic device in FIG. 1 as an example, and includes:
  • Operation 1: Obtain the focus when the image to be detected is captured.
  • Operation 2: Obtain the pixels within a preset rectangular range from the focus on the image to be detected. The preset rectangle is centered on the focus; its width is 1/2 of the width of the image to be detected, and its height is 1/2 of the height of the image to be detected.
  • Operation 3: Form the focus area from the pixels within the preset rectangular range.
  • Operation 4: Perform target detection on the focus area of the image to be detected to obtain a first target detection result of the image to be detected.
  • Operation 5: When the image to be detected is an image obtained from a video, perform target detection on the image to be detected using the background difference method to obtain a second target detection result of the image to be detected.
  • Operation 6: Calibrate the first target detection result of the image to be detected based on the second target detection result to obtain the target detection result of the image to be detected.
  • In this method, target detection is performed only on the focus area of the image to be detected; that is, only the focus area is input for target detection.
  • Compared with inputting the whole image, this greatly reduces the amount of input information, saving resources and improving efficiency.
  • Real-time target detection can thus be achieved for each frame of the video while the accuracy of target detection is ensured.
  • In addition, the second target detection result of the to-be-detected image obtained by the background difference method is used to calibrate the first target detection result, which improves the accuracy of the final target detection result of the to-be-detected image.
  • In one embodiment, an image processing apparatus 600 is provided, including: a to-be-detected image acquisition module 620, a focus area determination module 640, and a first target detection module 660. Among them,
  • the to-be-detected image acquisition module 620 is configured to obtain an image to be detected;
  • a focus area determination module 640 configured to determine a focus area of an image to be detected
  • the first target detection module 660 is configured to perform target detection on a focus area of an image to be detected, and obtain a first target detection result of the image to be detected.
  • the focus area determination module 640 includes:
  • a focus acquisition module 642 configured to acquire a focus when an image to be detected is taken
  • a preset range acquisition module 644 configured to acquire pixels within a preset range from the focus on the image to be detected
  • the focus area forming module 646 is configured to form a focus area from pixels within a preset range.
  • the preset range obtaining module 644 is further configured to obtain pixels within a preset radius from the focus on the image to be detected.
  • the preset range acquisition module 644 is further configured to acquire pixels within a preset rectangular range from the focus on the image to be detected.
  • In one embodiment, the image processing apparatus 600 further includes:
  • a second target detection module 670 configured to: when the image to be detected is an image obtained from a video, use the background difference method to perform target detection on the image to be detected to obtain a second target detection result of the image to be detected;
  • the calibration module 680 is configured to calibrate the first target detection result of the image to be detected based on the second target detection result of the image to be detected to obtain the target detection result of the image to be detected.
  • the second target detection module 670 is further configured to perform target detection using the inter-frame difference method when the image to be detected is an image obtained from a video, to obtain a second target detection result of the image to be detected.
  • each module in the above image processing apparatus is for illustration only. In other embodiments, the image processing apparatus may be divided into different modules as needed to complete all or part of the functions of the above image processing apparatus.
  • Each module in the image processing apparatus may be implemented in whole or in part by software, hardware, and a combination thereof.
  • The above modules may be embedded in hardware in, or independent of, the processor of the server, or may be stored as software in the memory of the server, so that the processor can call them to perform the operations corresponding to the above modules.
  • a computer-readable storage medium on which a computer program is stored.
  • the computer program is executed by a processor, the operations of the image processing methods provided by the foregoing embodiments are implemented.
  • an electronic device including a memory, a processor, and a computer program stored on the memory and executable on the processor.
  • When the processor executes the computer program, the operations of the image processing methods provided by the foregoing embodiments are implemented.
  • An embodiment of the present application further provides a computer program product, which when executed on a computer, causes the computer to perform operations of the image processing methods provided by the foregoing embodiments.
  • Non-volatile memory may include read-only memory (ROM), programmable ROM (PROM), electrically programmable ROM (EPROM), electrically erasable programmable ROM (EEPROM), or flash memory.
  • Volatile memory can include random access memory (RAM), which is used as external cache memory.
  • RAM is available in various forms, such as static RAM (SRAM), dynamic RAM (DRAM), synchronous DRAM (SDRAM), double data rate SDRAM (DDR SDRAM), enhanced SDRAM (ESDRAM), Synchlink DRAM (SLDRAM), Rambus direct RAM (RDRAM), direct Rambus dynamic RAM (DRDRAM), and Rambus dynamic RAM (RDRAM).
  • the storage medium may be a magnetic disk, an optical disk, a read-only memory (ROM), or the like.

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Multimedia (AREA)
  • Theoretical Computer Science (AREA)
  • Studio Devices (AREA)
  • Image Analysis (AREA)

Abstract

The present application relates to an image processing method and apparatus, an electronic device, and a computer-readable storage medium. The method comprises: acquiring an image to be detected; determining a focus area of the image to be detected; and performing target detection on the focus area of the image to be detected, so as to obtain a first target detection result of the image to be detected.
PCT/CN2019/088963 2018-06-28 2019-05-29 Image processing method and apparatus, storage medium and electronic device WO2020001219A1 (fr)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN201810685766.6A CN109086761B (zh) 2018-06-28 2018-06-28 图像处理方法和装置、存储介质、电子设备
CN201810685766.6 2018-06-28

Publications (1)

Publication Number Publication Date
WO2020001219A1 (fr) 2020-01-02

Family

ID=64839995

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2019/088963 WO2020001219A1 (fr) 2018-06-28 2019-05-29 Image processing method and apparatus, storage medium and electronic device

Country Status (2)

Country Link
CN (1) CN109086761B (fr)
WO (1) WO2020001219A1 (fr)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111968126A (zh) * 2020-06-30 2020-11-20 上海艾策通讯科技股份有限公司 页面焦点识别方法、装置、计算机设备和存储介质
CN113607742A (zh) * 2021-08-03 2021-11-05 广东利元亨智能装备股份有限公司 电芯极耳检测方法、装置、电子设备和存储介质

Families Citing this family (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109086761B (zh) * 2018-06-28 2020-12-01 Oppo广东移动通信有限公司 图像处理方法和装置、存储介质、电子设备
CN109727193B (zh) * 2019-01-10 2023-07-21 北京旷视科技有限公司 图像虚化方法、装置及电子设备
CN110675369B (zh) * 2019-04-26 2022-01-14 深圳市豪视智能科技有限公司 联轴器失配检测方法及相关设备
CN112935576B (zh) * 2021-01-25 2023-09-01 深圳市大族半导体装备科技有限公司 一种激光加工调焦系统及其调焦方法

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103871051A (zh) * 2014-02-19 2014-06-18 小米科技有限责任公司 图像处理方法、装置和电子设备
CN105933612A (zh) * 2016-06-29 2016-09-07 联想(北京)有限公司 一种图像处理方法及电子设备
CN109086761A (zh) * 2018-06-28 2018-12-25 Oppo广东移动通信有限公司 图像处理方法和装置、存储介质、电子设备

Family Cites Families (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN100531374C (zh) * 2007-07-12 2009-08-19 上海交通大学 结合色度偏差和亮度偏差的视频运动目标检测方法
JP5593990B2 (ja) * 2010-09-08 2014-09-24 リコーイメージング株式会社 撮像システムおよび画素信号読出し方法
CN105404894B (zh) * 2015-11-03 2018-10-23 湖南优象科技有限公司 无人机用目标追踪方法及其装置
CN107578380A (zh) * 2017-08-07 2018-01-12 北京金山安全软件有限公司 一种图像处理方法、装置、电子设备及存储介质
CN108154118B (zh) * 2017-12-25 2018-12-18 北京航空航天大学 一种基于自适应组合滤波与多级检测的目标探测系统及方法

Patent Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103871051A (zh) * 2014-02-19 2014-06-18 小米科技有限责任公司 图像处理方法、装置和电子设备
CN105933612A (zh) * 2016-06-29 2016-09-07 联想(北京)有限公司 一种图像处理方法及电子设备
CN109086761A (zh) * 2018-06-28 2018-12-25 Oppo广东移动通信有限公司 图像处理方法和装置、存储介质、电子设备

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111968126A (zh) * 2020-06-30 2020-11-20 上海艾策通讯科技股份有限公司 页面焦点识别方法、装置、计算机设备和存储介质
CN111968126B (zh) * 2020-06-30 2023-10-17 上海艾策通讯科技股份有限公司 页面焦点识别方法、装置、计算机设备和存储介质
CN113607742A (zh) * 2021-08-03 2021-11-05 广东利元亨智能装备股份有限公司 电芯极耳检测方法、装置、电子设备和存储介质
CN113607742B (zh) * 2021-08-03 2022-11-15 广东利元亨智能装备股份有限公司 电芯极耳检测方法、装置、电子设备和存储介质

Also Published As

Publication number Publication date
CN109086761B (zh) 2020-12-01
CN109086761A (zh) 2018-12-25

Similar Documents

Publication Publication Date Title
WO2020001219A1 (fr) Procédé et appareil de traitement d'image, support de stockage et dispositif électronique
CN110428366B (zh) 图像处理方法和装置、电子设备、计算机可读存储介质
CN110689037B (zh) 用于使用深度网络的自动对象注释的方法和系统
US11457138B2 (en) Method and device for image processing, method for training object detection model
CN108898567B (zh) 图像降噪方法、装置及系统
US10896518B2 (en) Image processing method, image processing apparatus and computer readable storage medium
WO2019233341A1 (fr) Procédé et appareil de traitement d'images, support d'informations lisible par ordinateur, et dispositif électronique
WO2019237887A1 (fr) Procédé de traitement d'images, dispositif électronique et support d'informations lisible par ordinateur
US10284789B2 (en) Dynamic generation of image of a scene based on removal of undesired object present in the scene
WO2021022983A1 (fr) Appareil et procédé de traitement d'images, dispositif électronique et support d'enregistrement lisible par ordinateur
JP6961797B2 (ja) プレビュー写真をぼかすための方法および装置ならびにストレージ媒体
WO2018058934A1 (fr) Procédé de photographie, dispositif de photographie, et support de stockage
US10277806B2 (en) Automatic image composition
WO2019042404A1 (fr) Procédé de traitement d'image, terminal, et support d'informations
WO2021082883A1 (fr) Procédé et appareil de détection de corps principal, dispositif électronique et support de stockage lisible par ordinateur
WO2019056527A1 (fr) Procédé et dispositif de capture
US10122912B2 (en) Device and method for detecting regions in an image
CN111753882B (zh) 图像识别网络的训练方法和装置、电子设备
WO2022160857A1 (fr) Procédé et appareil de traitement d'images, support de stockage lisible par ordinateur et dispositif électronique
WO2023098045A1 (fr) Procédé et appareil d'alignement d'image, et dispositif informatique et support de stockage
CN109447022B (zh) 一种镜头类型识别方法及装置
WO2022206680A1 (fr) Procédé et appareil de traitement d'image, dispositif informatique et support d'enregistrement
CN111932462B (zh) 图像降质模型的训练方法、装置和电子设备、存储介质
WO2023083171A1 (fr) Procédé et appareil de traitement de flux de données d'image, et dispositif électronique
CN112418243A (zh) 特征提取方法、装置及电子设备

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 19825987

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 19825987

Country of ref document: EP

Kind code of ref document: A1