CN109410169B - Image background interference degree identification method and device - Google Patents


Info

Publication number
CN109410169B
Authority
CN
China
Prior art keywords
main body
image
subject
selected image
category
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201811056912.5A
Other languages
Chinese (zh)
Other versions
CN109410169A (en)
Inventor
邓立邦
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Guangdong Intellvision Technology Co ltd
Original Assignee
Guangdong Intellvision Technology Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Guangdong Intellvision Technology Co ltd filed Critical Guangdong Intellvision Technology Co ltd
Priority to CN201811056912.5A priority Critical patent/CN109410169B/en
Publication of CN109410169A publication Critical patent/CN109410169A/en
Application granted granted Critical
Publication of CN109410169B publication Critical patent/CN109410169B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00 Image analysis
    • G06T 7/0002 Inspection of images, e.g. flaw detection
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 18/00 Pattern recognition
    • G06F 18/20 Analysing
    • G06F 18/24 Classification techniques
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00 Image analysis
    • G06T 7/10 Segmentation; Edge detection
    • G06T 7/13 Edge detection
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00 Image analysis
    • G06T 7/10 Segmentation; Edge detection
    • G06T 7/136 Segmentation; Edge detection involving thresholding
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00 Image analysis
    • G06T 7/60 Analysis of geometric attributes
    • G06T 7/62 Analysis of geometric attributes of area, perimeter, diameter or volume
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00 Image analysis
    • G06T 7/90 Determination of colour characteristics
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 2207/00 Indexing scheme for image analysis or image enhancement
    • G06T 2207/10 Image acquisition modality
    • G06T 2207/10024 Color image
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 2207/00 Indexing scheme for image analysis or image enhancement
    • G06T 2207/20 Special algorithmic details
    • G06T 2207/20081 Training; Learning
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 2207/00 Indexing scheme for image analysis or image enhancement
    • G06T 2207/20 Special algorithmic details
    • G06T 2207/20084 Artificial neural networks [ANN]

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • General Physics & Mathematics (AREA)
  • Data Mining & Analysis (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Geometry (AREA)
  • Quality & Reliability (AREA)
  • Artificial Intelligence (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Evolutionary Biology (AREA)
  • Evolutionary Computation (AREA)
  • General Engineering & Computer Science (AREA)
  • Image Analysis (AREA)

Abstract

The invention discloses a method and a device for identifying the image background interference degree. The method comprises: obtaining a selected image and extracting its feature vector to obtain a first convolution feature map; comparing the first convolution feature map with second convolution feature maps in a classification library to determine the subject categories to which the subject images in the selected image belong; determining first subject categories based on the ratio of the subject image area to the total area of the selected image; extracting the subject category with the highest priority from the first subject categories as a second subject category; extracting the edge of each subject image corresponding to the second subject category and establishing a connected region; computing the mean hue of the pixels inside the connected region and of the pixels outside it, and taking the difference; if the absolute value of the difference is smaller than a second preset threshold, the background interference degree is determined to be high; otherwise, it is determined to be low. By implementing the embodiments of the invention, the degree to which a background pattern interferes with the image subject can be identified automatically, and the accuracy of image identification is improved.

Description

Image background interference degree identification method and device
Technical Field
The invention relates to the field of computers, in particular to a method and a device for identifying image background interference degree.
Background
An image, as the visual basis of human perception of the world, is a carrier of all visual information and an important means by which humans acquire, express, and transmit information. Many technical fields are built on the processing of collected images, such as face recognition, video analysis, intelligent driving, and industrial vision inspection. Collected images often contain significant interference factors, such as background interference, so they need to be preprocessed before image processing in order to screen out images with a low background interference degree.
In the prior art, people select images according to their own visual perception, discarding images whose background interference degree is high and keeping those whose interference degree is low. However, such manual selection is inefficient and highly subjective, and because different people apply different judgment criteria, its accuracy is low.
Disclosure of Invention
The embodiment of the invention provides an image background interference degree identification method, which can identify the background interference degree of an image, improve the accuracy of image identification and provide a high-quality image for subsequent image processing.
The first embodiment of the present invention provides a method for identifying an image background interference degree, including:
obtaining a selected image, and extracting a feature vector of the selected image through a CNN convolution layer to obtain a first convolution feature map;
comparing the first convolution feature map with a second convolution feature map in a pre-trained classification library to determine subject categories to which a plurality of subject images existing in the selected image belong; wherein each subject category comprises at least one subject image;
determining, according to the areas of the plurality of subject images, the ratio of the total area of the subject images corresponding to each subject category to the area of the selected image, and taking every subject category whose area ratio is greater than or equal to a first preset threshold as a first subject category of the selected image;
according to a preset category priority, extracting the subject category with the highest priority from the first subject categories as a second subject category of the selected image;
extracting the edge of each subject image corresponding to the second subject category, and establishing a connected region from the extracted edges;
taking the mean hue of all pixels inside the connected region as a first hue mean, and the mean hue of all pixels outside the connected region as a second hue mean;
if the absolute value of the difference between the first hue mean and the second hue mean is smaller than a second preset threshold, determining that the background interference degree of the selected image is high; otherwise, determining that the background interference degree of the selected image is low.
Further, the feature vector of the selected image includes shape, color, texture, material, or any combination thereof.
Further, the subject categories include: people, things, and landscapes.
Further, the first preset threshold is 0.15.
Further, the preset category priority is: person has a higher priority than physical object, and physical object has a higher priority than landscape.
A second embodiment is correspondingly provided on the basis of the first embodiment of the present invention.
A second embodiment of the present invention provides an apparatus for identifying the image background interference degree, comprising: an image processing module, an image classification module, a first subject identification module, a second subject identification module, a subject edge extraction module, a hue averaging module, and an interference degree identification module;
the image processing module is used for acquiring a selected image, and extracting a feature vector of the selected image through a CNN convolution layer to obtain a first convolution feature map;
the image classification module is configured to compare the first convolution feature map with a second convolution feature map in a pre-trained classification library to determine the subject categories to which a plurality of subject images existing in the selected image belong; wherein each subject category comprises at least one subject image;
the first subject identification module is configured to determine, according to the areas of the plurality of subject images, the ratio of the total area of the subject images corresponding to each subject category to the area of the selected image, and to take every subject category whose area ratio is greater than or equal to a first preset threshold as a first subject category of the selected image;
the second subject identification module is configured to extract, according to a preset category priority, the subject category with the highest priority from the first subject categories as a second subject category of the selected image;
the subject edge extraction module is configured to extract the edge of each subject image corresponding to the second subject category and to establish a connected region from the extracted edges;
the hue averaging module is configured to take the mean hue of all pixels inside the connected region as a first hue mean, and the mean hue of all pixels outside the connected region as a second hue mean;
the interference degree identification module is configured to determine that the background interference degree of the selected image is high if the absolute value of the difference between the first hue mean and the second hue mean is smaller than a second preset threshold, and otherwise to determine that the background interference degree of the selected image is low.
Further, the feature vector of the selected image comprises any one or more of the following combinations: shape, color, texture or material.
Further, the subject categories include: people, things, and landscapes.
Further, the first preset threshold is 0.15.
Further, the preset category priority is: person has a higher priority than physical object, and physical object has a higher priority than landscape.
By implementing the embodiment of the invention, the following beneficial effects are achieved:
the embodiment of the invention provides an image background interference degree identification method and device, wherein a first convolution feature map of a feature vector of a selected image is extracted through a CNN convolution layer, the first convolution feature map is compared with a trained second convolution feature map, a main body category to which a main body image in the selected image belongs is determined, the area ratio of all main body images corresponding to all main body categories to the selected image is calculated, the main body category with the area ratio exceeding a first preset threshold or equal to the first preset threshold is taken as a first main body category, a second main body category is determined from the first main body category according to a preset priority, the edges of all main body images in the second main body category are further extracted, a communication area is established according to the extracted main body edges, and the tone average value of all pixel points inside the communication area is taken as a first color leveling average value; taking the average value of the hues of all the pixel points outside the connected region as a second hue leveling average value; and if the absolute value of the difference between the first color leveling mean value and the second color leveling mean value is smaller than a second preset threshold value, determining that the background interference degree is high, otherwise, determining that the background interference degree is low. Compared with the prior art that the background interference degree is manually identified, the method can automatically identify the background interference degree of the image, improve the accuracy of image identification and provide high-quality images for subsequent image processing.
Drawings
Fig. 1 is a schematic flowchart of an image background interference degree identification method according to a first embodiment of the present invention;
fig. 2 is a schematic structural diagram of an image background interference degree recognition apparatus according to a second embodiment of the present invention;
description of reference numerals: 101. an image processing module; 102. an image classification module; 103. a first subject identification module; 104. a second subject identification module; 105. a body edge extraction module; 106. a pigment value averaging module; 107. and an interference degree identification module.
Detailed Description
The technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are only a part of the embodiments of the present invention, and not all of the embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
As shown in fig. 1, the image background interference degree identification method according to the first embodiment of the present invention includes the following steps:
s101, obtaining a selected image, and extracting a feature vector of the selected image through a CNN convolution layer to obtain a first convolution feature map;
s102, comparing the first convolution feature map with a second convolution feature map in a pre-trained classification library, and determining a main body class to which a plurality of main body images exist in the selected image; wherein each subject category comprises at least one subject image;
s103, determining the area ratio of all the main body images corresponding to each main body type to the selected image according to the areas of the plurality of main body images, and taking all the main body types with the area ratios exceeding or equal to a first preset threshold value as the first main body type of the selected image;
s104, according to the preset class priority, extracting a main body class with the highest priority from the first main body class to serve as a second main body class of the selected image;
and S105, extracting the edge of each main body image corresponding to the second main body category, and establishing a connected region according to the extracted edge.
S106, taking the average value of the hues of all the pixel points inside the communicated area as a first average value of the hues; and taking the average value of the hues of all the pixel points outside the connected region as the average value of the second color leveling.
S107, if the absolute value of the difference between the first color leveling mean value and the second color leveling mean value is smaller than a second preset threshold value, determining that the background interference degree of the selected image is high; otherwise, determining that the background interference degree of the selected image is low.
For step S101, specifically, an image is selected from network media or captured on site as the selected image; the selected image is input into a trained CNN, and its feature vector is extracted through the convolution layers of the CNN to obtain the first convolution feature map. It should be noted that the extracted feature vector may be, but is not limited to, any one or combination of: shape, color, texture, or material. It should be added that the invention first extracts feature vectors for the main categories of people, animals, objects, and landscapes, according to their shape, color, texture, and material features or any combination thereof, and inputs them into the CNN for training to obtain the trained CNN model. The extracted feature vectors can be adjusted according to the actual situation, and the trained categories can likewise be any combination of people, animals, objects, and landscapes, adjustable at any time according to actual needs.
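The convolution step of S101 can be illustrated with a minimal sketch. The patent uses a trained CNN with learned kernels; the single hand-written 3x3 edge kernel and the toy 8x8 image below are purely hypothetical stand-ins used to show how a convolution layer turns an image into a feature map.

```python
import numpy as np

def conv2d_valid(image, kernel):
    """Minimal 2D valid convolution, standing in for one CNN convolution layer."""
    kh, kw = kernel.shape
    h, w = image.shape
    out = np.zeros((h - kh + 1, w - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(image[i:i + kh, j:j + kw] * kernel)
    return out

# Hypothetical edge-detecting kernel; a trained model would learn many such kernels.
kernel = np.array([[-1, -1, -1],
                   [-1,  8, -1],
                   [-1, -1, -1]], dtype=float)
image = np.zeros((8, 8))
image[2:6, 2:6] = 1.0                      # a bright square "subject" on a dark background
feature_map = conv2d_valid(image, kernel)  # plays the role of the first convolution feature map
print(feature_map.shape)                   # (6, 6)
```

The feature map responds at the square's boundary and is zero in its flat interior, which is the kind of spatial feature a real CNN layer would feed into the classification comparison of S102.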
For step S102, the subject categories may be, but are not limited to, people, things, and landscapes. It will be appreciated that there may be multiple subject categories in the selected image, and there may be multiple subjects in each subject category.
For step S103, the first preset threshold may preferably be, but is not limited to, 0.15 or 0.2; the specific value can be set according to the actual situation. Specifically, after the categories to which the subjects in the selected image belong are determined, the total area of the subjects belonging to each subject category is calculated and compared with the total area of the selected image, and every subject category whose subject area accounts for at least 15% (or 20%) of the total area of the selected image is taken as a first subject category of the selected image.
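The area-ratio filter of S103 reduces to a few lines. The areas and total image size below are hypothetical values chosen only to exercise the 0.15 threshold named in the patent.

```python
# Areas (in pixels) of detected subject images, grouped by category (hypothetical values).
subject_areas = {"person": [12000, 8000], "object": [3000], "landscape": [500]}
total_area = 100000          # total pixel area of the selected image
first_threshold = 0.15       # the patent's first preset threshold

# Ratio of each category's combined subject area to the whole image.
ratios = {cat: sum(areas) / total_area for cat, areas in subject_areas.items()}
first_subject_categories = [cat for cat, r in ratios.items() if r >= first_threshold]
print(first_subject_categories)  # ['person'] -- 20% of the image, above the 15% cutoff
```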
For step S104, the preset category priority is preferably: person has a higher priority than physical object, and physical object has a higher priority than landscape. Specifically, if the first subject categories include person, physical object, and landscape, person is selected as the second subject category of the selected image.
For step S105, specifically, edge detection is performed on each subject image corresponding to the second subject category, the edge of the subject image is extracted, and then a connected region is established according to the edge of the subject image.
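Step S105 can be sketched without an image library. The patent does not fix a particular edge detector, so the sketch below assumes a binary subject mask is already available: edges are foreground pixels with a background 4-neighbour, and the connected region is recovered by a BFS flood fill from a seed inside the subject. A production system would more likely use a detector such as Canny plus contour filling.

```python
from collections import deque
import numpy as np

def edge_pixels(mask):
    """Edge = foreground pixel with at least one background (or out-of-image) 4-neighbour."""
    h, w = mask.shape
    edges = np.zeros_like(mask)
    for i in range(h):
        for j in range(w):
            if mask[i, j]:
                nbrs = [mask[x, y] for x, y in ((i-1, j), (i+1, j), (i, j-1), (i, j+1))
                        if 0 <= x < h and 0 <= y < w]
                if any(v == 0 for v in nbrs) or len(nbrs) < 4:
                    edges[i, j] = 1
    return edges

def connected_region(mask, seed):
    """BFS flood fill from a seed pixel: the connected region covering the subject."""
    h, w = mask.shape
    region = np.zeros_like(mask)
    queue = deque([seed])
    while queue:
        i, j = queue.popleft()
        if 0 <= i < h and 0 <= j < w and mask[i, j] and not region[i, j]:
            region[i, j] = 1
            queue.extend([(i-1, j), (i+1, j), (i, j-1), (i, j+1)])
    return region

mask = np.zeros((6, 6), dtype=int)
mask[1:5, 1:5] = 1                           # a 4x4 square subject mask
print(edge_pixels(mask).sum())               # 12 boundary pixels
print(connected_region(mask, (2, 2)).sum())  # 16: the full subject region
```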
For step S106, specifically, the hue values (the H component of the HSV color space) of all pixels inside the connected region are extracted and averaged to obtain the first hue mean, and the hue values of all pixels outside the connected region are extracted and averaged to obtain the second hue mean.
For step S107, specifically, the difference between the first hue mean and the second hue mean is computed and its absolute value taken. If the absolute value is smaller than the second preset threshold, the hue of the image subject differs little from that of the background pattern, i.e., the background pattern interferes strongly with the image subject, and the background interference degree of the selected image is determined to be high; otherwise, the background interference degree is determined to be low.
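Steps S106 and S107 together can be sketched as follows. The patent does not disclose a value for the second preset threshold, so `second_threshold=0.1` is an assumption, as are the toy 4x4 image and region mask; hue is taken from the stdlib `colorsys` RGB-to-HSV conversion, normalized to [0, 1).

```python
import colorsys
import numpy as np

def hue(rgb):
    """Hue in [0, 1) via the standard RGB->HSV conversion (stdlib colorsys)."""
    return colorsys.rgb_to_hsv(*rgb)[0]

def background_interference(image, region, second_threshold=0.1):
    """Compare mean hue inside vs. outside the connected region.
    Returns 'high' when subject and background hues are close (hard to tell apart)."""
    inside = [hue(image[i, j]) for i, j in zip(*np.nonzero(region))]
    outside = [hue(image[i, j]) for i, j in zip(*np.nonzero(region == 0))]
    first_mean, second_mean = np.mean(inside), np.mean(outside)
    return "high" if abs(first_mean - second_mean) < second_threshold else "low"

# A red subject on a blue background: hues differ strongly, so interference is low.
image = np.zeros((4, 4, 3))
image[:, :] = (0.0, 0.0, 1.0)      # blue background, hue ~ 0.667
region = np.zeros((4, 4), dtype=int)
region[1:3, 1:3] = 1
image[1:3, 1:3] = (1.0, 0.0, 0.0)  # red subject, hue = 0.0
print(background_interference(image, region))  # low
```

One caveat of this naive averaging is that hue is circular (0 and 1 are the same red), so a production implementation might prefer circular statistics for the means.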
On the basis of the first embodiment of the present invention, a second embodiment is correspondingly provided.
As shown in fig. 2, a second embodiment of the present invention provides an apparatus for identifying the image background interference degree, comprising: an image processing module 101, an image classification module 102, a first subject identification module 103, a second subject identification module 104, a subject edge extraction module 105, a hue averaging module 106, and an interference degree identification module 107;
the image processing module 101 is configured to obtain a selected image, and extract a feature vector of the selected image through a CNN convolutional layer to obtain a first convolutional feature map;
the image classification module 102 is configured to compare the first convolution feature map with a second convolution feature map in a pre-trained classification library to determine the subject categories to which a plurality of subject images existing in the selected image belong; wherein each subject category comprises at least one subject image;
the first subject identification module 103 is configured to determine, according to the areas of the plurality of subject images, the ratio of the total area of the subject images corresponding to each subject category to the area of the selected image, and to take every subject category whose area ratio is greater than or equal to a first preset threshold as a first subject category of the selected image;
the second subject identification module 104 is configured to extract, according to a preset category priority, the subject category with the highest priority from the first subject categories as a second subject category of the selected image;
the subject edge extraction module 105 is configured to extract the edge of each subject image corresponding to the second subject category and to establish a connected region from the extracted edges;
the hue averaging module 106 is configured to take the mean hue of all pixels inside the connected region as a first hue mean, and the mean hue of all pixels outside the connected region as a second hue mean;
the interference degree identification module 107 is configured to determine that the background interference degree of the selected image is high if the absolute value of the difference between the first hue mean and the second hue mean is smaller than a second preset threshold, and otherwise to determine that the background interference degree of the selected image is low.
Further, the feature vector of the selected image includes any one or more of the following combinations: shape, color, texture or material.
Further, the subject categories include: people, things, and landscapes.
Further, the first preset threshold is 0.15.
Further, the preset category priority is: person has a higher priority than physical object, and physical object has a higher priority than landscape.
By implementing the embodiment of the invention, the following beneficial effects are achieved:
the embodiment of the invention provides an identification method and device based on image background interference degree, wherein a first convolution feature map of a feature vector of a selected image is extracted through a CNN convolution layer, the first convolution feature map is compared with a trained second convolution feature map, a main body category to which a main body image in the selected image belongs is determined, the area ratio of all main body images corresponding to all main body categories to the selected image is calculated, the main body category with the area ratio exceeding a first preset threshold or being equal to the first preset threshold is taken as a first main body category, a second main body category is determined from the first main body category according to a preset priority, the edges of all main body images in the second main body category are further extracted, a communication area is established according to the extracted main body edges, and the tone average value of all pixel points inside the communication area is taken as a first color leveling average value; taking the average value of the hues of all the pixel points outside the connected region as a second hue leveling average value; if the absolute value of the difference between the first color leveling mean value and the second color leveling mean value is smaller than a second preset threshold value, and if the absolute value of the difference between the first color leveling mean value and the second color leveling mean value is smaller than the second preset threshold value, determining that the interference degree of the main body in the selected image by the background pattern is high; otherwise, determining that the background interference degree of the selected image is low. 
Therefore, the background interference degree of the image is automatically identified, the accuracy of image identification is improved, and a high-quality image is provided for subsequent image processing.
It should be noted that the above-described device embodiments are merely illustrative, wherein modules described as separate parts may or may not be physically separate, and parts shown as modules may or may not be physical modules, may be located in one place, or may be distributed on a plurality of network modules. Some or all of the modules may be selected according to actual needs to achieve the purpose of the solution of the present embodiment. In addition, in the drawings of the embodiment of the apparatus provided by the present invention, the connection relationship between the modules indicates that there is a communication connection between them, and may be specifically implemented as one or more communication buses or signal lines. One of ordinary skill in the art can understand and implement it without inventive effort.
The foregoing is a preferred embodiment of the present invention, and it should be noted that it would be apparent to those skilled in the art that various modifications and enhancements can be made without departing from the principles of the invention, and such modifications and enhancements are also considered to be within the scope of the invention.

Claims (10)

1. A method for identifying image background interference degree is characterized by comprising the following steps:
obtaining a selected image, and extracting a feature vector of the selected image through a CNN convolution layer to obtain a first convolution feature map;
comparing the first convolution feature map with a second convolution feature map in a pre-trained classification library to determine subject categories to which a plurality of subject images existing in the selected image belong; wherein each subject category comprises at least one subject image;
determining, according to the areas of the plurality of subject images, the ratio of the total area of the subject images corresponding to each subject category to the area of the selected image, and taking every subject category whose area ratio is greater than or equal to a first preset threshold as a first subject category of the selected image;
according to a preset category priority, extracting the subject category with the highest priority from the first subject categories as a second subject category of the selected image;
extracting the edge of each subject image corresponding to the second subject category, and establishing a connected region from the extracted edges;
taking the mean hue of all pixels inside the connected region as a first hue mean, and the mean hue of all pixels outside the connected region as a second hue mean;
if the absolute value of the difference between the first hue mean and the second hue mean is smaller than a second preset threshold, determining that the background interference degree of the selected image is high; otherwise, determining that the background interference degree of the selected image is low.
2. The method for identifying the image background interference degree according to claim 1, wherein the feature vector of the selected image comprises shape, color, texture, material, or any combination thereof.
3. The method for identifying the image background interference degree according to claim 1, wherein the subject categories comprise: people, things, and landscapes.
4. The method for identifying the image background interference degree according to claim 1, wherein the first preset threshold is 0.15.
5. The method for identifying the image background interference degree according to claim 1, wherein the preset category priority is: person has a higher priority than physical object, and physical object has a higher priority than landscape.
6. An apparatus for identifying the image background interference degree, comprising: an image processing module, an image classification module, a first subject identification module, a second subject identification module, a subject edge extraction module, a hue averaging module, and an interference degree identification module;
the image processing module is used for acquiring a selected image, and extracting a feature vector of the selected image through a CNN convolution layer to obtain a first convolution feature map;
the image classification module is configured to compare the first convolution feature map with a second convolution feature map in a pre-trained classification library to determine the subject categories to which a plurality of subject images existing in the selected image belong; wherein each subject category comprises at least one subject image;
the first subject identification module is configured to determine, according to the areas of the plurality of subject images, the ratio of the total area of the subject images corresponding to each subject category to the area of the selected image, and to take every subject category whose area ratio is greater than or equal to a first preset threshold as a first subject category of the selected image;
the second subject identification module is configured to extract, according to a preset category priority, the subject category with the highest priority from the first subject categories as a second subject category of the selected image;
the subject edge extraction module is configured to extract the edge of each subject image corresponding to the second subject category and to establish a connected region from the extracted edges;
the hue averaging module is configured to take the mean hue of all pixels inside the connected region as a first hue mean, and the mean hue of all pixels outside the connected region as a second hue mean;
the interference degree identification module is configured to determine that the background interference degree of the selected image is high if the absolute value of the difference between the first hue mean and the second hue mean is smaller than a second preset threshold, and otherwise to determine that the background interference degree of the selected image is low.
7. The apparatus for identifying the image background interference degree according to claim 6, wherein the feature vectors of the selected image include any one or a combination of: shape, color, texture, and material.
8. The apparatus for identifying the image background interference degree according to claim 6, wherein the subject categories comprise: person, object, and landscape.
9. The apparatus for identifying the image background interference degree according to claim 6, wherein the first preset threshold is 0.15.
10. The apparatus for identifying the image background interference degree according to claim 6, wherein the preset category priority is: person has a higher priority than object, and object has a higher priority than landscape.
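The decision pipeline recited in the claims can be sketched as follows. This is a minimal illustration, not the patented implementation: the function names and toy data are invented for the example, and `SECOND_THRESHOLD` is an assumed value, since the claims fix only the first threshold (0.15) and the person > object > landscape priority.

```python
# Minimal sketch of the claimed pipeline: subject-category selection by
# area ratio and priority, then background interference from hue means.
# Only the 0.15 threshold and the category priority come from the claims;
# everything else here is an illustrative assumption.

FIRST_THRESHOLD = 0.15                                 # claim 4
PRIORITY = {"person": 0, "object": 1, "landscape": 2}  # claim 5
SECOND_THRESHOLD = 20.0                                # assumed, not in claims

def second_subject_category(subject_areas, image_area):
    """Keep categories covering >= 15% of the image, then pick the
    highest-priority one (the claims' first/second subject categories)."""
    firsts = [cat for cat, area in subject_areas.items()
              if area / image_area >= FIRST_THRESHOLD]
    return min(firsts, key=PRIORITY.get) if firsts else None

def interference_degree(hues, subject_mask):
    """Compare the mean hue inside the connected subject region with the
    mean hue outside it; similar hues mean the background blends with the
    subject, i.e. high interference."""
    inside = [h for row, mrow in zip(hues, subject_mask)
              for h, m in zip(row, mrow) if m]
    outside = [h for row, mrow in zip(hues, subject_mask)
               for h, m in zip(row, mrow) if not m]
    first_mean = sum(inside) / len(inside)     # first hue mean
    second_mean = sum(outside) / len(outside)  # second hue mean
    return "high" if abs(first_mean - second_mean) < SECOND_THRESHOLD else "low"

# A person covering 20% of the frame outranks a landscape covering 60%:
print(second_subject_category({"landscape": 60.0, "person": 20.0}, 100.0))
# Subject hues near 100, background near 30: well separated, so "low":
hues = [[100, 100, 30], [100, 100, 30]]
mask = [[True, True, False], [True, True, False]]
print(interference_degree(hues, mask))
```

In a real implementation the mask would come from the claimed edge extraction and connected-region step, and the hue values from an RGB-to-HSV conversion of the selected image.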
CN201811056912.5A 2018-09-11 2018-09-11 Image background interference degree identification method and device Active CN109410169B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201811056912.5A CN109410169B (en) 2018-09-11 2018-09-11 Image background interference degree identification method and device


Publications (2)

Publication Number Publication Date
CN109410169A (en) 2019-03-01
CN109410169B (en) 2020-06-05

Family

ID=65463998

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201811056912.5A Active CN109410169B (en) 2018-09-11 2018-09-11 Image background interference degree identification method and device

Country Status (1)

Country Link
CN (1) CN109410169B (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110442521B (en) * 2019-08-02 2023-06-27 Tencent Technology (Shenzhen) Co., Ltd. Control unit detection method and device

Citations (6)

Publication number Priority date Publication date Assignee Title
CN103020947A (en) * 2011-09-23 2013-04-03 Alibaba Group Holding Ltd. Image quality analysis method and device
CN105574857A (en) * 2015-12-11 2016-05-11 Xiaomi Technology Co., Ltd. Image analysis method and device
CN105631455A (en) * 2014-10-27 2016-06-01 Alibaba Group Holding Ltd. Image main body extraction method and system
CN105809704A (en) * 2016-03-30 2016-07-27 Beijing Xiaomi Mobile Software Co., Ltd. Method and device for identifying image definition
CN105825511A (en) * 2016-03-18 2016-08-03 Nanjing University of Posts and Telecommunications Image background definition detection method based on deep learning
CN107341805A (en) * 2016-08-19 2017-11-10 Beijing SenseTime Technology Development Co., Ltd. Image foreground/background segmentation, network model training, and image processing method and device

Family Cites Families (1)

Publication number Priority date Publication date Assignee Title
US9965588B2 (en) * 2014-03-06 2018-05-08 Ricoh Co., Ltd. Film to DICOM conversion


Also Published As

Publication number Publication date
CN109410169A (en) 2019-03-01

Similar Documents

Publication Publication Date Title
CN108206917B (en) Image processing method and device, storage medium and electronic device
CN106682601B Driver illegal phone-call detection method based on multidimensional information feature fusion
CN103971126B Traffic sign recognition method and device
CA2249140C (en) Method and apparatus for object detection and background removal
CN100393106C (en) Method and apparatus for detecting and/or tracking image or color area of image sequence
CN109918971B (en) Method and device for detecting number of people in monitoring video
CN101599175B (en) Detection method for determining alteration of shooting background and image processing device
CN106682665B (en) Seven-segment type digital display instrument number identification method based on computer vision
KR100253203B1 Object extraction method using motion pictures
CN107767390B Shadow detection method and system for surveillance video images, and shadow removal method
CN108563979B (en) Method for judging rice blast disease conditions based on aerial farmland images
CN109544583B (en) Method, device and equipment for extracting interested area of leather image
CN112101260B (en) Method, device, equipment and storage medium for identifying safety belt of operator
CN102306307B (en) Positioning method of fixed point noise in color microscopic image sequence
CN111160194B (en) Static gesture image recognition method based on multi-feature fusion
CN111369529B (en) Article loss and leave-behind detection method and system
CN111815528A (en) Bad weather image classification enhancement method based on convolution model and feature fusion
CN108805838A Image processing method, mobile terminal and computer-readable storage medium
CN106920266B Background generation method and device for verification codes
CN104299234B Method and system for rain removal in video data
CN115100240A (en) Method and device for tracking object in video, electronic equipment and storage medium
CN107491714B (en) Intelligent robot and target object identification method and device thereof
CN109410169B (en) Image background interference degree identification method and device
CN101739678B (en) Method for detecting shadow of object
Garg et al. Color based segmentation using K-mean clustering and watershed segmentation

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant