WO2022001648A1 - Image processing method and apparatus, and device and medium - Google Patents
- Publication number
- WO2022001648A1 (PCT/CN2021/100019)
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- area
- image
- target object
- change trend
- contrast value
- Prior art date
Classifications
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N23/00—Cameras or camera modules comprising electronic image sensors; Control thereof
- H04N23/80—Camera processing pipelines; Components thereof
- H04N5/00—Details of television systems
- H04N5/222—Studio circuitry; Studio devices; Studio equipment
- H04N5/262—Studio circuits, e.g. for mixing, switching-over, change of character of image, other special effects; Cameras specially adapted for the electronic generation of special effects
Definitions
- the present application belongs to the technical field of image processing, and in particular relates to an image processing method, apparatus, device and medium.
- Matting is one of the most common operations in image processing; it separates a part of an image from the rest of the image.
- In the related art, a lasso tool, a marquee tool, a magic wand tool, a pen tool, or the like is usually used to cut out the image manually.
- the purpose of the embodiments of the present application is to provide an image processing method, apparatus, device, and medium, which can solve the problem of inaccurate matting.
- an embodiment of the present application provides an image processing method, including: acquiring N images, wherein the N images are images captured by an image acquisition component at different focusing distances; dividing each of the N images into M regions; acquiring the pixel contrast value of each region of each image; determining the coordinate information of a target object in a target image according to the pixel contrast value, wherein the target image is an image in the N images; and acquiring the target object in the target image according to the coordinate information.
- an image processing apparatus including:
- a first acquisition module configured to acquire N images, wherein the N images are images captured by the image acquisition component at different focusing distances;
- a dividing module configured to divide each of the N images into M regions;
- a second acquisition module configured to acquire the pixel contrast value of each region of each image;
- a determining module configured to determine the coordinate information of the target object in the target image according to the pixel contrast value, wherein the target image is an image in the N images;
- a third acquisition module configured to acquire the target object in the target image according to the coordinate information.
- embodiments of the present application provide an electronic device, the electronic device including a processor, a memory, and a program or instruction stored in the memory and executable on the processor, wherein the program or instruction, when executed by the processor, implements the steps of the method according to the first aspect.
- an embodiment of the present application provides a computer-readable storage medium, where a program or an instruction is stored on the computer-readable storage medium, and when the program or instruction is executed by a processor, the steps of the method according to the first aspect are implemented.
- an embodiment of the present application provides a chip, the chip including a processor and a communication interface, wherein the communication interface is coupled to the processor, and the processor is configured to run a program or an instruction to implement the steps of the method according to the first aspect.
- In the embodiments of the present application, each of the N images is divided into M regions, the coordinate information of the target object in the target image is determined according to the pixel contrast value of each region of each image, and the target object in the target image is acquired according to the coordinate information, thereby realizing the matting of the target object.
- Because the target object is cut out automatically, the embodiments of the present application can improve the accuracy of the matting.
- FIG. 1 is a schematic flowchart of an image processing method provided by an embodiment of the present application.
- FIG. 2 is a schematic diagram of a target shooting scene provided by an embodiment of the present application.
- FIG. 3 is a schematic diagram of N images provided by an embodiment of the present application.
- FIG. 4 is a schematic diagram of region division provided by an embodiment of the present application.
- FIG. 5 is a schematic diagram of a region classification result provided by an embodiment of the present application.
- FIG. 6 is a schematic diagram of sub-regional display of objects provided by an embodiment of the present application.
- FIG. 7 is a schematic diagram of a processed target image provided by an embodiment of the present application.
- FIG. 8 is a schematic structural diagram of an image processing apparatus provided by an embodiment of the present application.
- FIG. 9 is a schematic diagram of a hardware structure of an electronic device implementing an embodiment of the present application.
- FIG. 1 is a schematic flowchart of an image processing method provided by an embodiment of the present application. As shown in FIG. 1, the image processing method may include:
- S101 Acquire N images, where the N images are images captured by the image acquisition component at different focusing distances;
- S102 Divide each of the N images into M regions;
- S103 Acquire the pixel contrast value of each region of each image;
- S104 determine the coordinate information of the target object in the target image according to the pixel contrast value, wherein the target image is the image in the N images;
- S105 Acquire the target object in the target image according to the coordinate information.
- The execution body of the method may be an image processing apparatus, or a control module in the image processing apparatus for executing the image processing method.
- In the embodiments of the present application, an image processing method performed by an image processing apparatus is taken as an example to describe the method provided by the embodiments of the present application.
- The image processing apparatus acquires N images; divides each of the N images into M regions; acquires the pixel contrast value of each region of each image; determines the coordinate information of the target object in the target image according to the pixel contrast values; and acquires the target object in the target image according to the coordinate information.
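Purely as an illustrative sketch (not the claimed implementation), the flow above can be exercised end to end, assuming the N images are aligned grayscale NumPy arrays and using the summed absolute difference between adjacent pixels as the pixel contrast value of a region:

```python
import numpy as np

def contrast_grid(img, m):
    """Per-region pixel contrast: for an m x m grid, sum the absolute
    differences between horizontally and vertically adjacent pixels."""
    img = img.astype(float)
    d = np.zeros_like(img)
    d[:, 1:] += np.abs(img[:, 1:] - img[:, :-1])   # horizontal neighbours
    d[1:, :] += np.abs(img[1:, :] - img[:-1, :])   # vertical neighbours
    rh, rw = img.shape[0] // m, img.shape[1] // m
    return np.array([[d[i*rh:(i+1)*rh, j*rw:(j+1)*rw].sum()
                      for j in range(m)] for i in range(m)])

def subject_regions(images, m):
    """Return (row, col) grid coordinates of regions whose contrast peaks
    at an interior image of the stack -- i.e. regions that come into focus
    and then go out of focus as the focusing distance changes."""
    grids = np.stack([contrast_grid(im, m) for im in images])  # (N, m, m)
    peak = grids.argmax(axis=0)                                # (m, m)
    interior = (peak > 0) & (peak < len(images) - 1)
    return [tuple(rc) for rc in np.argwhere(interior)]
```

The returned grid coordinates play the role of the "coordinate information" used to cut out the target object.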
- In the embodiments of the present application, each of the N images is divided into M regions, the coordinate information of the target object in the target image is determined according to the pixel contrast value of each region of each image, and the target object in the target image is acquired according to the coordinate information, thereby realizing the matting of the target object.
- Because the target object is cut out automatically, the embodiments of the present application can improve the accuracy of the matting.
- The image acquisition component may be a single image acquisition component.
- The single image acquisition component collects N images of the target shooting scene while the focusing distance is gradually changed, and each of the N images is divided into M regions.
- The coordinate information of the target object in the target image is determined according to the pixel contrast value of each region, and the target object in the target image is acquired according to the coordinate information. The matting of the target object is thereby realized, and the accuracy of the matting can be improved.
- The change trend of the focusing distance may be a gradual increase from small to large, or a gradual decrease from large to small.
- The focusing distance refers to the object-to-image distance, that is, the sum of the distance from the lens to the object and the distance from the lens to the photosensitive element.
- FIG. 2 is a schematic diagram of a target shooting scene provided by an embodiment of the present application.
- N images of the target shooting scene shown in FIG. 2 are collected by the image acquisition component while the focusing distance is gradually changed, as shown in FIG. 3.
- FIG. 3 is a schematic diagram of N images provided by an embodiment of the present application, in which four example images are shown.
- FIG. 4 is a schematic diagram of region division provided by an embodiment of the present application, in which each square represents one region.
- each of the multiple regions obtained by dividing in S102 may include multiple pixels.
- the pixel contrast value of the area may be the sum of the pixel contrast values of each pixel in the multiple pixels included in the area.
- the pixel contrast value refers to the color difference between adjacent pixels.
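A minimal sketch of this definition, assuming grayscale NumPy images and using the absolute difference between adjacent pixel values as the per-pixel contrast:

```python
import numpy as np

def region_contrast_values(img, m):
    """Divide a grayscale image into an m x m grid and return, for each
    region, the sum of absolute differences between horizontally and
    vertically adjacent pixels (a simple proxy for the region's pixel
    contrast value)."""
    img = img.astype(float)
    diff = np.zeros_like(img)
    diff[:, 1:] += np.abs(img[:, 1:] - img[:, :-1])  # horizontal neighbours
    diff[1:, :] += np.abs(img[1:, :] - img[:-1, :])  # vertical neighbours
    h, w = img.shape
    rh, rw = h // m, w // m
    return np.array([[diff[i*rh:(i+1)*rh, j*rw:(j+1)*rw].sum()
                      for j in range(m)] for i in range(m)])
```

A uniform region yields zero contrast; a region containing an edge yields a large value, which is why contrast rises as the region comes into focus.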
- each of the multiple regions obtained by dividing in S102 may include only one pixel.
- the pixel contrast value of the area may be the pixel contrast value of one pixel included in the area.
- When each region includes only one pixel, the image is divided at pixel granularity, so that the coordinates of the target object in the target image are more accurate, and the accuracy of the matting is thus improved.
- S104 may include: classifying the M regions according to the change trend of the pixel contrast value and the change trend of the focusing distance to obtain a classification result; and determining the coordinate information of the target object in the target image according to the classification result.
- The change trend of the pixel contrast value includes, but is not limited to: from large to small; from small to large; from large to small and then from small to large; and from small to large and then from large to small.
- The change trend of the focusing distance includes, but is not limited to: from large to small, and from small to large.
- Classifying the M regions according to the change trend of the pixel contrast value and the change trend of the focusing distance to obtain the classification result may include: classifying a first region in the M regions as a background region, where the change trend of the pixel contrast value of the first region is the same as the change trend of the focusing distance; classifying a second region in the M regions as a foreground region, where the change trend of the pixel contrast value of the second region is opposite to the change trend of the focusing distance; and classifying a third region in the M regions as a subject region, where the change trend of the pixel contrast value of the third region is first the same as and then opposite to the change trend of the focusing distance, or is first opposite to and then the same as the change trend of the focusing distance.
- the target object includes: at least one of a background area, a foreground area and a subject area.
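The trend-based rule might be sketched as follows, assuming the stack is ordered by monotonically increasing focusing distance and reading the "trend" off the sign of successive differences (a simplification of the embodiment's description, in which any non-monotonic series is treated as the subject):

```python
def classify_by_trend(contrast_series):
    """Classify one region from its pixel contrast values across a stack of
    images taken at monotonically increasing focusing distance.

    - strictly increasing -> 'background' (same trend as the distance)
    - strictly decreasing -> 'foreground' (opposite trend)
    - rises then falls, or falls then rises -> 'subject'
    """
    diffs = [b - a for a, b in zip(contrast_series, contrast_series[1:])]
    rising = [d > 0 for d in diffs]
    if all(rising):
        return "background"
    if not any(rising):
        return "foreground"
    return "subject"
```

For example, a region whose contrast goes 1 → 3 → 2 → 1 comes into focus and then out of focus, so it is classified as the subject.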
- FIG. 5 is a schematic diagram of a result of region classification provided by an embodiment of the present application.
- the rate of change of the pixel contrast values of the background area, the foreground area, and the main body area may be different.
- the M regions may also be classified according to the change rate of the pixel contrast value and the change trend of the focus distance to obtain a classification result.
- In one implementation, a region in the M regions whose pixel contrast value change rate is greater than a first rate is classified as the foreground region; a region whose pixel contrast value change rate is smaller than a second rate is classified as the background region; and a region whose pixel contrast value change rate is between the second rate and the first rate is classified as the subject region, where the first rate is greater than the second rate.
- In another implementation, a region in the M regions whose pixel contrast value change rate is greater than a third rate is classified as the background region; a region whose pixel contrast value change rate is smaller than a fourth rate is classified as the foreground region; and a region whose pixel contrast value change rate is between the fourth rate and the third rate is classified as the subject region, where the third rate is greater than the fourth rate.
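A sketch of the first threshold scheme described above; the concrete rate values and thresholds are hypothetical placeholders, not values given by the embodiments:

```python
def classify_by_rate(change_rate, first_rate, second_rate):
    """Classify one region from the change rate of its pixel contrast
    value, per the first implementation above:
      rate > first_rate   -> foreground
      rate < second_rate  -> background
      otherwise           -> subject
    Requires first_rate > second_rate."""
    assert first_rate > second_rate
    if change_rate > first_rate:
        return "foreground"
    if change_rate < second_rate:
        return "background"
    return "subject"
```

The second implementation is the same shape with the roles of foreground and background swapped and the third/fourth rates as thresholds.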
- The target image may be an image, among the N images, in which the target object is in focus; that is, the target image may be an image in the N images that takes the target object as the focus.
- When the target object is the subject region, the pixel contrast value of the target object first increases and then decreases. Therefore, the image corresponding to the maximum pixel contrast value of the target object can be determined as the target image. It can be understood that when the pixel contrast value of the target object is at its maximum, the image acquisition component is focused exactly on the target object, that is, the image acquisition component takes the target object as the focus.
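A sketch of this selection step, assuming the per-region contrast values of all N images are already available as an (N, m, m) array and the subject regions are marked by a boolean grid (both names are illustrative):

```python
import numpy as np

def select_target_index(region_contrast_stack, subject_mask):
    """region_contrast_stack: (N, m, m) per-region contrast values for each
    of the N images; subject_mask: boolean (m, m) grid marking the regions
    classified as the subject. Returns the index of the image in which the
    subject's total contrast peaks, i.e. the image focused on the subject."""
    totals = (region_contrast_stack * subject_mask).sum(axis=(1, 2))
    return int(np.argmax(totals))
```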
- The image processing method provided by the embodiments of the present application may further include: performing first processing on the target object in the target image, and/or performing second processing on the objects in the target image other than the target object.
- The embodiments of the present application do not limit the specific manners of the first processing and the second processing; any available processing manner can be applied to the embodiments of the present application.
- For example, the processing may be color burn processing, color gradient processing, soft light processing, sharpening processing, oil painting processing, color pencil processing, and so on.
- the first processing and the second processing may be set according to actual requirements.
- the first processing may be a beauty processing.
- the second processing may be blurring processing.
- When the target object is the subject region, the target object can be made clearer by performing blurring processing on the regions of the target image other than the subject region.
- The image processing method provided by the embodiments of the present application may further include: displaying the target object in a fourth area of the screen; and displaying, in a fifth area of the screen, the objects in the target image other than the target object. That is, the target object is separated from the other objects in the target image. Then, the first processing may be performed on the target object displayed in the fourth area, and the second processing may be performed on the other objects displayed in the fifth area.
- FIG. 6 is a schematic diagram of displaying objects in sub-regions provided by an embodiment of the present application.
- For example, in FIG. 6, the target object is the subject region, and the other objects are the background region and the foreground region.
- The first processing may be performed on the target object displayed in the fourth area of the screen, and the second processing may be performed on the other objects displayed in the fifth area of the screen. Processing the separated objects avoids mistakenly processing the target object, or the other objects, as could happen if both kinds of processing were performed directly on the target image.
- The image processing method provided by the embodiments of the present application may further include: merging the target object, displayed in the fourth area, that has undergone the first processing with the other objects, displayed in the fifth area, that have undergone the second processing, to obtain the processed target image.
- The target object and the other objects in the target image are displayed in separate areas, as shown in FIG. 6.
- The first processing may be performed on the target object displayed in the fourth area, and/or the second processing may be performed on the other objects displayed in the fifth area.
- The two processed objects are merged to obtain the processed target image, as shown in FIG. 7.
- FIG. 7 is a schematic diagram of a processed target image provided by an embodiment of the present application.
- The objects in the target image, i.e., the subject region, the background region, and the foreground region, may also be displayed in three separate areas. Then at least one of the displayed objects is processed, and after the processing is completed, the three objects can be merged again to obtain the processed target image.
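One hedged way to mimic this split-process-merge flow is with a pixel mask; the mask itself would come from the region classification, and the function names here are illustrative:

```python
import numpy as np

def split_by_mask(image, mask):
    """Separate an image into the target object and the remaining objects
    using a boolean mask. Pixels outside each part are zeroed, mimicking
    the two display areas."""
    target = np.where(mask, image, 0)
    others = np.where(mask, 0, image)
    return target, others

def merge(target, others, mask):
    """Recombine the (independently processed) parts into one image."""
    return np.where(mask, target, others)
```

Because the two parts are disjoint, any processing applied to one part cannot disturb the other, which is the point of displaying and processing them separately.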
- FIG. 8 is a schematic structural diagram of an image processing apparatus provided by an embodiment of the present application. As shown in FIG. 8 , the image processing apparatus 800 may include:
- a first acquisition module 801, configured to acquire N images, wherein the N images are images captured by the image acquisition component at different focusing distances;
- a dividing module 802, configured to divide each of the N images into M regions;
- a second acquisition module 803, configured to acquire the pixel contrast value of each region of each image;
- a determining module 804, configured to determine the coordinate information of the target object in the target image according to the pixel contrast value, wherein the target image is an image in the N images;
- a third acquisition module 805, configured to acquire the target object in the target image according to the coordinate information.
- In the embodiments of the present application, each of the N images is divided into M regions, the coordinate information of the target object in the target image is determined according to the pixel contrast value of each region of each image, and the target object in the target image is acquired according to the coordinate information, thereby realizing the matting of the target object.
- Because the target object is cut out automatically, the embodiments of the present application can improve the accuracy of the matting.
- the determining module 804 may include:
- the classification sub-module is used to classify the M areas according to the change trend of the pixel contrast value and the change trend of the focus distance, and obtain the classification result;
- the determining sub-module is used for determining the coordinate information of the target object in the target image according to the classification result.
- the classification submodule may be specifically used for:
- the first area in the M areas is classified as a background area, wherein the change trend of the pixel contrast value of the first area is the same as the change trend of the focus distance;
- the second region in the M regions is classified as a foreground region, wherein the change trend of the pixel contrast value of the second region is opposite to the change trend of the focusing distance;
- the third region in the M regions is classified as the subject region, wherein the change trend of the pixel contrast value of the third region is first the same as and then opposite to the change trend of the focusing distance, or first opposite to and then the same as the change trend of the focusing distance;
- the target object includes: at least one of a background area, a foreground area and a subject area.
- the image processing apparatus 800 may further include:
- a display module, configured to display the target object in the fourth area of the screen, and to display, in the fifth area of the screen, the objects in the target image other than the target object.
- the target image may be an image with the target object as the focus among the N images.
- the image processing apparatus in this embodiment of the present application may be an apparatus, or may be a component, an integrated circuit, or a chip in a terminal.
- the apparatus may be a mobile electronic device or a non-mobile electronic device.
- The mobile electronic device may be a mobile phone, a tablet computer, a notebook computer, a palmtop computer, an in-vehicle electronic device, a wearable device, an ultra-mobile personal computer (UMPC), a netbook, or a personal digital assistant (PDA).
- The non-mobile electronic device may be a server, a network attached storage (NAS), a personal computer (PC), a television (TV), a teller machine, a self-service machine, or the like, which is not specifically limited in the embodiments of the present application.
- the image processing apparatus in this embodiment of the present application may be an apparatus having an operating system.
- The operating system may be an Android operating system, an iOS operating system, or another possible operating system, which is not specifically limited in the embodiments of the present application.
- the image processing apparatus provided in the embodiments of the present application can implement each process in the image processing method embodiments shown in FIG. 1 to FIG. 7 , and in order to avoid repetition, details are not repeated here.
- FIG. 9 is a schematic diagram of a hardware structure of an electronic device implementing an embodiment of the present application.
- The electronic device 900 includes, but is not limited to: a radio frequency unit 901, a network module 902, an audio output unit 903, an input unit 904, a sensor 905, a display unit 906, a user input unit 907, an interface unit 908, a memory 909, and a processor 910, among other components.
- the input unit 904 may include a graphics processor 9041 and a microphone 9042 .
- the display unit 906 may include a display panel 9061 .
- the user input unit 907 includes a touch panel 9071 and other input devices 9072 .
- the memory 909 may be used to store software programs as well as various data.
- the memory 909 may mainly include a stored program area and a stored data area, wherein the stored program area may store an operating system, an application program required for at least one function (such as a sound playback function, an image playback function, etc.), and the like.
- The electronic device 900 may also include a power supply (such as a battery) for supplying power to the various components. The power supply may be logically connected to the processor 910 through a power management system, so that functions such as charging, discharging, and power consumption management are implemented through the power management system.
- The structure of the electronic device shown in FIG. 9 does not constitute a limitation on the electronic device; the electronic device may include more or fewer components than shown, or combine some components, or arrange the components differently, which will not be repeated here.
- The processor 910 is configured to: acquire N images, where the N images are images captured by the image acquisition component at different focusing distances; divide each of the N images into M regions; acquire the pixel contrast value of each region of each image; determine, according to the pixel contrast value, the coordinate information of the target object in the target image, where the target image is an image in the N images; and acquire the target object in the target image according to the coordinate information.
- In the embodiments of the present application, each of the N images is divided into M regions, the coordinate information of the target object in the target image is determined according to the pixel contrast value of each region of each image, and the target object in the target image is acquired according to the coordinate information, thereby realizing the matting of the target object.
- Because the target object is cut out automatically, the embodiments of the present application can improve the accuracy of the matting.
- the processor 910 may be specifically configured to:
- classify the M regions according to the change trend of the pixel contrast value and the change trend of the focusing distance, to obtain a classification result;
- determine the coordinate information of the target object in the target image according to the classification result.
- the processor 910 may be specifically configured to:
- the first area in the M areas is classified as a background area, wherein the change trend of the pixel contrast value of the first area is the same as the change trend of the focus distance;
- the second region in the M regions is classified as a foreground region, wherein the change trend of the pixel contrast value of the second region is opposite to the change trend of the focusing distance;
- the third region in the M regions is classified as the subject region, wherein the change trend of the pixel contrast value of the third region is first the same as and then opposite to the change trend of the focusing distance, or first opposite to and then the same as the change trend of the focusing distance;
- the target object includes: at least one of a background area, a foreground area and a subject area.
- the display unit 906 may be used for:
- the target object is displayed in the fourth area of the screen, and other objects in the target image except the target object are displayed in the fifth area of the screen.
- An embodiment of the present application further provides an electronic device, including a processor 910, a memory 909, and a program or instruction stored in the memory 909 and executable on the processor 910. When the program or instruction is executed by the processor 910, each process of the above image processing method embodiment is implemented, and the same technical effect can be achieved; to avoid repetition, details are not described here.
- the electronic devices in the embodiments of the present application include the above-mentioned mobile electronic devices and non-mobile electronic devices.
- Embodiments of the present application further provide an electronic device configured to execute each process of the above image processing method embodiments, which can achieve the same technical effect; to avoid repetition, details are not repeated here.
- Embodiments of the present application further provide a computer-readable storage medium, where a program or an instruction is stored on the computer-readable storage medium. When the program or instruction is executed by a processor, each process of the above image processing method embodiment is implemented, and the same technical effect can be achieved; to avoid repetition, details are not repeated here.
- the processor is the processor in the electronic device described in the foregoing embodiments.
- Examples of the computer-readable storage medium include non-transitory computer-readable storage media, such as a read-only memory (ROM), a random access memory (RAM), a magnetic disk, an optical disk, and the like.
- An embodiment of the present application further provides a chip, where the chip includes a processor and a communication interface, the communication interface is coupled to the processor, and the processor is configured to run a program or an instruction to implement each process of the above image processing method embodiments.
- An embodiment of the present application further provides a computer program product. The program product can be executed by a processor to implement each process of the above image processing method embodiments and achieve the same technical effect; to avoid repetition, details are not repeated here.
- The chip mentioned in the embodiments of the present application may also be referred to as a system-level chip, a system chip, a chip system, a system-on-chip, or the like.
Claims (15)
- 一种图像处理方法,包括:An image processing method, comprising:获取N个图像,其中,所述N个图像为图像采集组件在不同对焦距离下拍摄的图像;acquiring N images, wherein the N images are images captured by the image acquisition component at different focusing distances;将所述N个图像中的每个图像划分为M个区域;dividing each of the N images into M regions;获取每个图像的每个区域的像素反差值;Get the pixel contrast value of each area of each image;根据所述像素反差值,确定目标对象在目标图像中的坐标信息,其中,所述目标图像为所述N个图像中的图像;Determine the coordinate information of the target object in the target image according to the pixel contrast value, wherein the target image is an image in the N images;根据所述坐标信息,获取所述目标图像中的所述目标对象。Acquire the target object in the target image according to the coordinate information.
- 根据权利要求1所述的方法,其中,所述根据所述像素反差值,确定目标对象在目标图像中的坐标信息,包括:The method according to claim 1, wherein the determining the coordinate information of the target object in the target image according to the pixel contrast value comprises:根据所述像素反差值的变化趋势和所述对焦距离的变化趋势,对所述M个区域进行分类,得到分类结果;According to the change trend of the pixel contrast value and the change trend of the focus distance, classify the M areas to obtain a classification result;根据所述分类结果,确定所述目标对象在所述目标图像中的坐标信息。According to the classification result, the coordinate information of the target object in the target image is determined.
- 根据权利要求2所述的方法,其中,所述根据所述像素反差值的变化趋势和所述对焦距离的变化趋势,对所述M个区域进行分类,得到分类结果,包括:The method according to claim 2, wherein the classification of the M regions according to the change trend of the pixel contrast value and the change trend of the focus distance to obtain a classification result, comprising:将所述M个区域中的第一区域,归类为背景区域,其中,所述第一区域的像素反差值的变化趋势和所述对焦距离的变化趋势相同;Classifying the first area in the M areas as a background area, wherein the change trend of the pixel contrast value of the first area is the same as the change trend of the focus distance;将所述M个区域中的第二区域,归类为前景区域,其中,所述第二区域的像素反差值的变化趋势和所述对焦距离的变化趋势相反;Classifying the second area in the M areas as a foreground area, wherein the change trend of the pixel contrast value of the second area is opposite to the change trend of the focus distance;将所述M个区域中的第三区域,归类为主体区域,其中,所述第三区域的像素反差值的变化趋势和所述对焦距离的变化趋势先相同再相反,或,所述第三区域的像素反差值的变化趋势和所述对焦距离的变化趋势先相反再相同;The third area in the M areas is classified as the main area, wherein the change trend of the pixel contrast value of the third area and the change trend of the focus distance are the same first and then opposite, or, the first The change trend of the pixel contrast value of the three regions and the change trend of the focus distance are opposite first and then the same;其中,所述目标对象包括:所述背景区域、所述前景区域和所述主体区域中至少一个。Wherein, the target object includes: at least one of the background area, the foreground area and the main body area.
- 根据权利要求1所述的方法,其中,在所述根据所述坐标信息,获取所述目标图像中的所述目标对象之后,所述方法还包括:The method according to claim 1, wherein after acquiring the target object in the target image according to the coordinate information, the method further comprises:在屏幕的第四区域显示所述目标对象;display the target object in the fourth area of the screen;在所述屏幕的第五区域显示所述目标图像中除所述目标对象之外的其他对象。Objects other than the target object in the target image are displayed in the fifth area of the screen.
- The method according to claim 1, wherein the target image is the image among the N images in which the target object is in focus.
- An image processing apparatus, comprising: a first acquisition module configured to acquire N images, wherein the N images are captured by an image acquisition component at different focus distances; a dividing module configured to divide each of the N images into M areas; a second acquisition module configured to acquire a pixel contrast value of each area of each image; a determining module configured to determine coordinate information of a target object in a target image according to the pixel contrast values, wherein the target image is one of the N images; and a third acquisition module configured to acquire the target object in the target image according to the coordinate information.
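The dividing and second-acquisition modules above split each image into M areas and attach a pixel contrast value to each. A minimal sketch of that step for one grayscale image, assuming a hypothetical rows-by-columns grid and using max-minus-min intensity as a stand-in metric (the claims do not define how the pixel contrast value is computed):

```python
import numpy as np

def region_contrasts(image, grid=(4, 4)):
    """Divide one grayscale image into M = rows*cols areas and return a
    per-area pixel contrast value (here: max intensity minus min intensity)."""
    rows, cols = grid
    h, w = image.shape
    vals = np.empty(grid, dtype=float)
    for r in range(rows):
        for c in range(cols):
            block = image[r * h // rows:(r + 1) * h // rows,
                          c * w // cols:(c + 1) * w // cols]
            vals[r, c] = float(block.max() - block.min())
    return vals
```

Running this over all N captures yields, for each of the M areas, the contrast series whose trend the determining module compares against the focus-distance trend.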
- The apparatus according to claim 6, wherein the determining module comprises: a classification submodule configured to classify the M areas according to the change trend of the pixel contrast values and the change trend of the focus distance, to obtain a classification result; and a determination submodule configured to determine the coordinate information of the target object in the target image according to the classification result.
- The apparatus according to claim 7, wherein the classification submodule is specifically configured to: classify a first area of the M areas as a background area, wherein the change trend of the pixel contrast value of the first area is the same as the change trend of the focus distance; classify a second area of the M areas as a foreground area, wherein the change trend of the pixel contrast value of the second area is opposite to the change trend of the focus distance; and classify a third area of the M areas as a main body area, wherein the change trend of the pixel contrast value of the third area is first the same as, and then opposite to, the change trend of the focus distance, or is first opposite to, and then the same as, the change trend of the focus distance; wherein the target object comprises at least one of the background area, the foreground area, and the main body area.
- The apparatus according to claim 6, further comprising: a display module configured to display the target object in a fourth area of a screen, and to display objects other than the target object in the target image in a fifth area of the screen.
- The apparatus according to claim 6, wherein the target image is the image among the N images in which the target object is in focus.
- An electronic device, comprising a processor, a memory, and a program or instructions stored in the memory and executable on the processor, wherein the program or instructions, when executed by the processor, implement the steps of the image processing method according to any one of claims 1 to 5.
- An electronic device configured to perform the steps of the image processing method according to any one of claims 1 to 5.
- A computer-readable storage medium storing a program or instructions which, when executed by a processor, implement the steps of the image processing method according to any one of claims 1 to 5.
- A computer program product executable by a processor to implement the steps of the image processing method according to any one of claims 1 to 5.
- A chip, comprising a processor and a communication interface coupled to the processor, wherein the processor is configured to run a program or instructions to implement the steps of the image processing method according to any one of claims 1 to 5.
Applications Claiming Priority (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202010611475.X | 2020-06-30 | ||
CN202010611475.XA CN111866378A (en) | 2020-06-30 | 2020-06-30 | Image processing method, apparatus, device and medium |
Publications (1)
Publication Number | Publication Date |
---|---|
WO2022001648A1 true WO2022001648A1 (en) | 2022-01-06 |
Family
ID=72989934
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
PCT/CN2021/100019 WO2022001648A1 (en) | 2020-06-30 | 2021-06-15 | Image processing method and apparatus, and device and medium |
Country Status (2)
Country | Link |
---|---|
CN (1) | CN111866378A (en) |
WO (1) | WO2022001648A1 (en) |
Cited By (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN115002356A (en) * | 2022-07-19 | 2022-09-02 | 深圳市安科讯实业有限公司 | Night vision method based on digital video photography |
Families Citing this family (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN112362164B (en) * | 2020-11-10 | 2022-01-18 | 广东电网有限责任公司 | Temperature monitoring method and device of equipment, electronic equipment and storage medium |
CN113055603A (en) * | 2021-03-31 | 2021-06-29 | 联想(北京)有限公司 | Image processing method and electronic equipment |
Citations (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP2008294785A (en) * | 2007-05-25 | 2008-12-04 | Sanyo Electric Co Ltd | Image processor, imaging apparatus, image file, and image processing method |
US20120320239A1 (en) * | 2011-06-14 | 2012-12-20 | Pentax Ricoh Imaging Company, Ltd. | Image processing device and image processing method |
CN102843510A (en) * | 2011-06-14 | 2012-12-26 | 宾得理光映像有限公司 | Imaging device and distance information detecting method |
CN110189339A (en) * | 2019-06-03 | 2019-08-30 | 重庆大学 | The active profile of depth map auxiliary scratches drawing method and system |
CN111246106A (en) * | 2020-01-22 | 2020-06-05 | 维沃移动通信有限公司 | Image processing method, electronic device, and computer-readable storage medium |
Family Cites Families (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN101930533B (en) * | 2009-06-19 | 2013-11-13 | 株式会社理光 | Device and method for performing sky detection in image collecting device |
KR20110020519A (en) * | 2009-08-24 | 2011-03-03 | 삼성전자주식회사 | Digital photographing apparatus, controlling method of the same, and recording medium storing program to implement the method |
CN102338972A (en) * | 2010-07-21 | 2012-02-01 | 华晶科技股份有限公司 | Assistant focusing method using multiple face blocks |
CN105629631B (en) * | 2016-02-29 | 2020-01-10 | Oppo广东移动通信有限公司 | Control method, control device and electronic device |
CN108305215A (en) * | 2018-01-23 | 2018-07-20 | 北京易智能科技有限公司 | A kind of image processing method and system based on intelligent mobile terminal |
CN110336951A (en) * | 2019-08-26 | 2019-10-15 | 厦门美图之家科技有限公司 | Contrast formula focusing method, device and electronic equipment |
- 2020-06-30: CN CN202010611475.XA — patent CN111866378A (active, Pending)
- 2021-06-15: WO PCT/CN2021/100019 — patent WO2022001648A1 (active, Application Filing)
Also Published As
Publication number | Publication date |
---|---|
CN111866378A (en) | 2020-10-30 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
WO2022001648A1 (en) | Image processing method and apparatus, and device and medium | |
RU2596580C2 (en) | Method and device for image segmentation | |
CN108737739B (en) | Preview picture acquisition method, preview picture acquisition device and electronic equipment | |
US10317777B2 (en) | Automatic zooming method and apparatus | |
CN112714255B (en) | Shooting method and device, electronic equipment and readable storage medium | |
CN106446223B (en) | Map data processing method and device | |
CN108898082B (en) | Picture processing method, picture processing device and terminal equipment | |
WO2022042679A1 (en) | Picture processing method and apparatus, device, and storage medium | |
CN108961267B (en) | Picture processing method, picture processing device and terminal equipment | |
WO2018166069A1 (en) | Photographing preview method, graphical user interface, and terminal | |
CN112714253B (en) | Video recording method and device, electronic equipment and readable storage medium | |
TWI777098B (en) | Method, apparatus and electronic device for image processing and storage medium thereof | |
CN103824252A (en) | Picture processing method and system | |
JP7210089B2 (en) | RESOURCE DISPLAY METHOD, APPARATUS, DEVICE AND COMPUTER PROGRAM | |
CN111290684B (en) | Image display method, image display device and terminal equipment | |
WO2022161340A1 (en) | Image display method and apparatus, and electronic device | |
CN110463177A (en) | The bearing calibration of file and picture and device | |
CN112269522A (en) | Image processing method, image processing device, electronic equipment and readable storage medium | |
CN110689565B (en) | Depth map determination method and device and electronic equipment | |
US20220182554A1 (en) | Image display method, mobile terminal, and computer-readable storage medium | |
WO2019090734A1 (en) | Photographing method and apparatus, mobile terminal, and computer readable storage medium | |
CN111325220A (en) | Image generation method, device, equipment and storage medium | |
CN112734661A (en) | Image processing method and device | |
CN111638844A (en) | Screen capturing method and device and electronic equipment | |
CN112383708B (en) | Shooting method and device, electronic equipment and readable storage medium |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
121 | Ep: the epo has been informed by wipo that ep was designated in this application |
Ref document number: 21833084 Country of ref document: EP Kind code of ref document: A1 |
|
NENP | Non-entry into the national phase |
Ref country code: DE |
|
122 | Ep: pct application non-entry in european phase |
Ref document number: 21833084 Country of ref document: EP Kind code of ref document: A1 |
|
32PN | Ep: public notification in the ep bulletin as address of the adressee cannot be established |
Free format text: NOTING OF LOSS OF RIGHTS PURSUANT TO RULE 112(1) EPC (EPO FORM 1205A DATED 26-06-2023) |
|