CN111046748A - Method and device for enhancing and identifying large-head photo scene - Google Patents
- Publication number
- CN111046748A CN111046748A CN201911161737.0A CN201911161737A CN111046748A CN 111046748 A CN111046748 A CN 111046748A CN 201911161737 A CN201911161737 A CN 201911161737A CN 111046748 A CN111046748 A CN 111046748A
- Authority
- CN
- China
- Prior art keywords
- background image
- portrait
- image
- scene
- photo
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Granted
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T5/00—Image enhancement or restoration
- G06T5/50—Image enhancement or restoration using two or more images, e.g. averaging or subtraction
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/10—Segmentation; Edge detection
- G06T7/11—Region-based segmentation
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/10—Segmentation; Edge detection
- G06T7/136—Segmentation; Edge detection involving thresholding
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/10—Segmentation; Edge detection
- G06T7/194—Segmentation; Edge detection involving foreground-background segmentation
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V40/00—Recognition of biometric, human-related or animal-related patterns in image or video data
- G06V40/10—Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
- G06V40/16—Human faces, e.g. facial parts, sketches or expressions
- G06V40/161—Detection; Localisation; Normalisation
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/20—Special algorithmic details
- G06T2207/20112—Image segmentation details
- G06T2207/20152—Watershed segmentation
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/30—Subject of image; Context of image processing
- G06T2207/30196—Human being; Person
- G06T2207/30201—Face
-
- Y—GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
- Y02—TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
- Y02D—CLIMATE CHANGE MITIGATION TECHNOLOGIES IN INFORMATION AND COMMUNICATION TECHNOLOGIES [ICT], I.E. INFORMATION AND COMMUNICATION TECHNOLOGIES AIMING AT THE REDUCTION OF THEIR OWN ENERGY USE
- Y02D10/00—Energy efficient computing, e.g. low power processors, power management or thermal management
Abstract
The invention discloses a method and a device for enhancing and identifying a large-head photo scene. It belongs to the technical fields of computer vision, pattern recognition and image processing, and solves the problem that the small amount of identifiable scene information in a large-head photo makes subsequent scene identification inaccurate or even impossible. The method comprises: preparing the large-head photo image required for scene identification; separating the portrait from the background in the large-head photo image to obtain a portrait mask and a background image, wherein the portrait mask is a binary image in which the background part has the value 0 and the portrait part the value 1, and the portrait part of the background image is white; copying the background image and reducing it in equal length-width proportion to obtain a reduced background image; fusing the reduced background image into the unreduced background image so as to stretch the edges of the unreduced background image, obtaining an enhanced background image; and finally using the enhanced background image for scene recognition. A device corresponding to the method is also set forth. The method is used for completing and enhancing the information of a large-head photo scene.
Description
Technical Field
A method and a device for enhancing and identifying a large-head photo scene, used for completing the scene information in a large-head photo, belonging to the technical fields of computer vision, pattern recognition and image processing.
Background
As computer image processing technology matures and abundant image data become available, data derived from images are increasingly widely applied, and in particular the large-head photos (self-photographed head shots) taken by users have become a rich data resource for enterprises. A self-photograph mainly contains portrait and scene information, but because the portrait covers a large area, the amount of information identifiable from the background is small. This makes subsequent scene identification inaccurate or even impossible (only the portrait information can be identified), so the scene information in portrait head shots cannot be used effectively in data analysis and modeling work. No effective solution to the inaccurate or insufficient identification of the scene (background) in a self-photographed large-head photo has been found to date; the closest existing approaches are:
1: adding a scene recognition hardware module to specific hardware and using the information returned by that module for subsequent work;
2: abandoning the scene information in the self-photograph and extracting only the useful information in the portrait, or requiring the user to submit a photo with richer scene information that is easier to judge and identify later.
the disadvantages of using the above method are as follows:
1: specific hardware must be provided to the user, and a fixed hardware setup usually fixes the scene, so the information revealed by the scene in the photo cannot be used in actual modeling; moreover, it increases usage cost and degrades the user experience.
2: the photographed scene contains rich personal attribute information, for example, whether the photographed scene is in a car, whether the photographed scene is in a high-grade consumption place or not, and whether the photographed scene is in an office environment are indirect reflections of a plurality of personal financial attributes and consumption habits, the information is particularly important in links of characteristic risk management, marketing and the like, and the correct use of the information is helpful for mining more user information and market depth. For example, in the credit field, the design of an incorporated credit contract is further carried out according to the reference of the scene information of the photo provided by the user for credit risk assessment; in the broker-dealer service, the scene information can be used for considering the asset scale and the risk bearing capacity of the user and accurately identifying the clear asset passenger group; in the field of e-commerce and platform services, relevant commodity recommendation and advertisement display are given according to large-head photo scene information of a user; in the security field, scene information which often appears in the large head of the accident crowd is analyzed, and early warning is given in advance. In addition, requiring users to provide rich-scene self-photographs reduces portrait information content, increases user photographing difficulty, and reduces user experience.
Disclosure of Invention
In view of the above problems, the invention aims to provide a method and a device for enhancing and identifying a large-head photo scene, solving the following problems of the prior art: (1) the small amount of identifiable information in a large-head photo scene makes subsequent scene identification inaccurate or even impossible; (2) when specific hardware is provided, the large-head photo scene cannot be used for actual modeling, usage cost rises and user experience degrades; (3) requiring the user to provide a self-photograph with a rich scene reduces the portrait information content and increases the difficulty of photographing.
In order to achieve the purpose, the invention adopts the following technical scheme:
a method for enhancing and identifying a large-head photo scene comprises the following steps:
S1, preparing the large-head photo image needed for scene recognition;
s2, separating the portrait from the background in the large-head photo image to obtain a portrait mask and a background image, wherein the portrait mask is a binary image, the value of the background part is 0, the value of the portrait part is 1, and the portrait part in the background image is white;
s3, copying a background image to perform length-width equal-proportion reduction processing to obtain a reduced background image;
s4, fusing the reduced background image into an unreduced background image to perform edge stretching on the unreduced background image to obtain an enhanced background image;
and S5, carrying out scene recognition on the enhanced background image.
Further, in step S2, the algorithm for separating the portrait from the background in the large-head photo image is either the GrabCut algorithm or the watershed algorithm.
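The separation itself would come from OpenCV's GrabCut (cv2.grabCut) or a watershed implementation, as named above; producing the mask is outside this sketch. Assuming a binary portrait mask is already available, the whitened background image of step S2 can be formed as follows (function and variable names are illustrative, not from the patent):

```python
import numpy as np

def whiten_portrait(image, portrait_mask):
    """Return the background image of step S2: a copy of the input
    H x W x 3 image in which every portrait pixel (mask value 1) is
    painted white, leaving only the background content visible."""
    background = image.copy()
    background[portrait_mask == 1] = 255  # portrait part becomes white
    return background
```

The mask follows the patent's convention: 0 for the background part, 1 for the portrait part.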
Further, in step S3, the reduction ratio k of the background image lies in the open interval (1-h1/h, 1), where h1 denotes the height of the highest point of the portrait and h the height of the large-head photo image.
Further, the specific step of step S4 is:
S4.1, according to the portrait mask, loop over and compare the coordinates of every portrait pixel to obtain the highest-point coordinate (x1, y1) of the portrait, and record the lowest-point coordinates of the left and right sides of the portrait as (x2, y2) and (x3, y3); then calculate the fusion focus coordinate (x0, y0), i.e. the center of the portrait area, where x0 = (x1 + x2 + x3)/3 and y0 = (y1 + y2 + y3)/3;
S4.2, taking the upper-left corners of the reduced and unreduced background images as the common coordinate origin, add the translation amounts (1-k)·y0 and (1-k)·x0 to the height and width coordinates of the reduced background image, respectively, to obtain the translated reduced background image; when the unreduced and reduced background images are fused, the two images thus coincide at the point (x0, y0) of the unreduced background image;
s4.3, extracting coordinate points of the portrait position in the portrait mask, namely coordinate points with the gray value of 1 in the binary image;
and S4.4, based on the translated reduced background image, extract the BGR values at the coordinate points of the portrait position and assign them to the corresponding coordinate points in the unreduced background image, obtaining the enhanced background image.
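Steps S4.1 to S4.4 can be sketched as follows. This is a minimal NumPy reading, not the patent's implementation: nearest-neighbour indexing stands in for the unspecified resize method, and the "lowest points of the left and right sides" are read as the bottom-most mask pixels in the left and right halves of the portrait region; all names are illustrative:

```python
import numpy as np

def fuse(background, mask, k):
    """background: H x W x 3 image with the portrait area white;
    mask: H x W binary portrait mask; k: reduction ratio in (1-h1/h, 1).
    Returns the enhanced background image of step S4."""
    h, w = mask.shape
    ys, xs = np.nonzero(mask)

    # S4.1: fusion focus = mean of the highest point and the lowest
    # points of the left and right halves of the portrait region.
    top = np.argmin(ys)
    x1, y1 = xs[top], ys[top]
    mid = (xs.min() + xs.max()) // 2
    left = xs <= mid
    x2, y2 = xs[left][np.argmax(ys[left])], ys[left].max()
    x3, y3 = xs[~left][np.argmax(ys[~left])], ys[~left].max()
    x0, y0 = (x1 + x2 + x3) // 3, (y1 + y2 + y3) // 3

    # S3: length-width equal-proportion reduction (nearest neighbour).
    hk, wk = int(h * k), int(w * k)
    rows = (np.arange(hk) / k).astype(int)
    cols = (np.arange(wk) / k).astype(int)
    small = background[rows][:, cols]

    # S4.2: translate the reduced image by ((1-k)*y0, (1-k)*x0) so the
    # two images coincide at the fusion focus (x0, y0).
    dy, dx = int((1 - k) * y0), int((1 - k) * x0)
    canvas = np.zeros_like(background)
    canvas[dy:dy + hk, dx:dx + wk] = small[:h - dy, :w - dx]

    # S4.3 + S4.4: copy the translated, reduced pixels into the
    # portrait area of the unreduced background.
    enhanced = background.copy()
    enhanced[mask == 1] = canvas[mask == 1]
    return enhanced
```

The sketch assumes the portrait region spans both halves of its bounding box; a degenerate one-column mask would need a guard.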
A large-head-shot scene enhancement recognition device, comprising:
an acquisition module: used for acquiring the large-head photo image required for scene identification;
a separation module: used for separating the portrait from the background in the large-head photo image to obtain a portrait mask and a background image, wherein the portrait mask is a binary image in which the background part has the value 0 and the portrait part the value 1, and the portrait part of the background image is white;
a fusion module: used for copying the background image and reducing it in equal length-width proportion to obtain a reduced background image, and for fusing the reduced background image into the unreduced background image with edge stretching to obtain an enhanced background image;
an identification module: for using the enhanced background image for scene recognition.
Further, the device also comprises a storage module for storing the large-head photo image returned by the user through a mobile phone or camera, for retrieval by the acquisition module.

Compared with the prior art, the invention has the following beneficial effects:
the method has the advantages that the background image after portrait matting is reduced in length and width in equal proportion and then is fused with the background image which is not subjected to reduction processing, and the scene information shielded by partial portrait is restored in an edge stretching mode, so that the accuracy of subsequent scene information identification is greatly improved, the method is not limited by portrait outline and background complexity, and the application range is wide;
secondly, the scene information obtained by the method can be used for fusion modeling;
thirdly, the self-photographing large-head scene enhancement process only depends on self-photographing images owned by enterprises, and other data are not introduced, so that the introduction of noise is reduced, and the enterprise cost is reduced;
fourthly, the self-photographing image is obtained without additionally adding a hardware device, and the self-photographing image is returned through the photographing function of the common smart phone, so that the cost is further reduced;
fifthly, the scene enhancement can be rapidly carried out without a complex model and a high-cost training sample for image completion;
the method is easy to calculate, and compared with a complex graph calculation scheme which can be completed only by adopting a cloud server or a local GPU server, the design of image fusion hardly puts higher requirements on the existing calculation resources.
Drawings
FIG. 1 is a schematic flow diagram of the present invention;
FIG. 2 is a diagram of a large-head photo required for scene recognition and the resulting enhanced background image in the present invention.
Detailed Description
The invention will be further described with reference to the accompanying drawings and specific embodiments.
In the credit field, in brokerage business, in e-commerce and platform services, or in the security field, the key information contained in a scene must be extracted by analyzing the scene information that frequently appears in large-head photos. A large-head photo is characterized by a very large proportion of head and a very small proportion of scene. The prior art generally completes scene information in images with richer environmental information, such as whole-body images, rather than in large-head photos, so its completion effect in the large-head setting is poor. To improve the completion of scene information in large-head photos, the following technical scheme is provided; it can complete missing parts of the scene information in a large-head photo and thereby improve subsequent scene identification:
a method for enhancing and identifying a large-head photo scene comprises the following steps:
S1, preparing the large-head photo image needed for scene recognition;
S2, separating the portrait from the background in the large-head photo image to obtain a portrait mask and a background image, wherein the portrait mask is a binary image in which the background part has the value 0 and the portrait part the value 1, and the portrait part of the background image is white; the algorithm for separating the portrait from the background is the GrabCut algorithm or the watershed algorithm.
S3, copying the background image and reducing it in equal length-width proportion to obtain a reduced background image; the reduction ratio k of the background image lies in the range (1-h1/h, 1), i.e. greater than 1-h1/h and less than 1, where h1 denotes the height of the highest point of the portrait and h the height of the large-head photo image.
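The admissible range of k can be computed directly from the portrait mask. A minimal sketch follows, assuming h1 (the "height of the highest point of the portrait") is measured from the bottom edge of the image; the text does not fix the reference edge, so this reading is an assumption:

```python
import numpy as np

def reduction_ratio_bounds(portrait_mask):
    """Return the open interval (1 - h1/h, 1) for the reduction ratio
    k, where h is the image height and h1 the height of the portrait's
    highest point, read here as its distance from the bottom edge."""
    h = portrait_mask.shape[0]
    ys, _ = np.nonzero(portrait_mask)
    h1 = h - ys.min()  # highest portrait pixel, from the bottom edge
    return 1 - h1 / h, 1.0
```

Any k chosen strictly inside these bounds shrinks the background enough to expose occluded content without collapsing it.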
S4, fusing the reduced background image into an unreduced background image to perform edge stretching on the unreduced background image to obtain an enhanced background image;
the method comprises the following specific steps:
S4.1, according to the portrait mask, loop over and compare the coordinates of every portrait pixel to obtain the highest-point coordinate (x1, y1) of the portrait, and record the lowest-point coordinates of the left and right sides of the portrait as (x2, y2) and (x3, y3); then calculate the fusion focus coordinate (x0, y0), i.e. the center of the portrait area, where x0 = (x1 + x2 + x3)/3 and y0 = (y1 + y2 + y3)/3;
S4.2, taking the upper-left corners of the reduced and unreduced background images as the common coordinate origin, add the translation amounts (1-k)·y0 and (1-k)·x0 to the height and width coordinates of the reduced background image, respectively, to obtain the translated reduced background image; when the unreduced and reduced background images are fused, the two images thus coincide at the point (x0, y0) of the unreduced background image;
s4.3, extracting coordinate points of the portrait position in the portrait mask, namely coordinate points with the gray value of 1 in the binary image;
and S4.4, based on the translated reduced background image, extract the BGR values at the coordinate points of the portrait position and assign them to the corresponding coordinate points in the unreduced background image, obtaining the enhanced background image.
S5, the enhanced background image is input to a deep learning model for scene recognition; other models may also be used for scene recognition.
A large-head-shot scene enhancement recognition device, comprising:
an acquisition module: used for acquiring the large-head photo image required for scene identification;
a separation module: used for separating the portrait from the background in the large-head photo image to obtain a portrait mask and a background image, wherein the portrait mask is a binary image in which the background part has the value 0 and the portrait part the value 1, and the portrait part of the background image is white;
a fusion module: used for copying the background image and reducing it in equal length-width proportion to obtain a reduced background image, and for fusing the reduced background image into the unreduced background image with edge stretching to obtain an enhanced background image;
an identification module: for using the enhanced background image for scene recognition.
Further, the device also comprises a storage module for storing the large-head photo image returned by the user through a mobile phone or camera, for retrieval by the acquisition module.
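The four modules above can be sketched as a thin pipeline. The class and parameter names are illustrative, not from the patent, and each module is injected as a callable so any concrete acquirer, segmenter, fusion routine or recognizer can be plugged in:

```python
class BigHeadSceneEnhancer:
    """Minimal sketch of the device: acquisition, separation, fusion
    and identification modules composed in order."""

    def __init__(self, acquire, separate, fuse, recognize):
        self.acquire = acquire      # acquisition module
        self.separate = separate    # separation module -> (mask, background)
        self.fuse = fuse            # fusion module -> enhanced background
        self.recognize = recognize  # identification module -> scene labels

    def run(self, source):
        image = self.acquire(source)
        mask, background = self.separate(image)
        enhanced = self.fuse(background, mask)
        return self.recognize(enhanced)
```

A storage module, if present, would simply sit in front of `acquire` and hand it the stored image.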
Examples
In the credit risk field, the scene in large-head photos needs to be acquired, for example from photos taken in a car, at home or in an office. As shown in FIG. 2, the images measure 1437 × 1079 pixels; since the portraits in FIG. 2 come from real test cases, the face part of each portrait has been mosaicked.
Before the method was adopted, recognition of part (a) of FIG. 2 yielded "person close-up", "glasses" and "beauty" as the scene information of the original large-head photo; after applying the method of the invention, scene recognition yielded "cabinet", "indoor corner" and "chair". The method therefore identifies the scene in the large-head photo more accurately and obtains more related information.
For part (b), the original recognition results were "man", "boy" and "elderly person", while the results after applying the method were "cabin", "car" and "electric sunroof".
For part (c), the original recognition results were "person close-up", "boy" and "beauty", while the results after applying the method were "car interior", "rear seat" and "car armrest box".
For part (d), the original recognition results were "beauty", "person close-up" and "girl", while the results after applying the method were "ceiling lamp", "restaurant lamp" and "indoor corner".
In summary, the background image after portrait matting is reduced in equal length-width proportion and fused with the unreduced background image, and the scene information occluded by part of the portrait is repaired by edge stretching, without limitation by the shape of the portrait outline or the complexity of the background. The scene information of the photo can serve as an important input variable of user characteristics: in the credit field, it can be used to predict a user's financial attributes and guide credit-granting decisions of a financial broker; in marketing, to predict consumption tendencies from the self-photographed scene and mine market potential. In addition, the method can be combined with scene recognition technology for modeling; compared with pattern recognition on the original picture, using the scene-enhanced image data as input improves the recognition precision and breadth of artificial-intelligence products such as general object and scene recognition.
The above are merely representative examples of the many specific applications of the present invention and do not limit its scope in any way. All technical solutions formed by transformation or equivalent substitution fall within the protection scope of the present invention.
Claims (6)
1. A method for enhancing and identifying a large-head-shot scene is characterized by comprising the following steps:
S1, preparing the large-head photo image needed for scene recognition;
s2, separating the portrait from the background in the large-head photo image to obtain a portrait mask and a background image, wherein the portrait mask is a binary image, the value of the background part is 0, the value of the portrait part is 1, and the portrait part in the background image is white;
s3, copying a background image to perform length-width equal-proportion reduction processing to obtain a reduced background image;
s4, fusing the reduced background image into an unreduced background image to perform edge stretching on the unreduced background image to obtain an enhanced background image;
and S5, carrying out scene recognition on the enhanced background image.
2. The method for enhancing and identifying a large-head photo scene as claimed in claim 1, wherein in said step S2, the algorithm for separating the portrait and the background in the large-head photo image is one of the GrabCut algorithm and the watershed algorithm.
3. The method for enhancing and identifying a large-head photo scene according to claim 1 or 2, wherein in said step S3, the reduction ratio k of the background image lies in the range (1-h1/h, 1), where h1 represents the height of the highest point of the portrait and h represents the height of the large-head photo image.
4. The method for enhancing and identifying a large-head photo scene as claimed in claim 3, wherein the specific steps of said step S4 are as follows:
S4.1, according to the portrait mask, loop over and compare the coordinates of every portrait pixel to obtain the highest-point coordinate (x1, y1) of the portrait, and record the lowest-point coordinates of the left and right sides of the portrait as (x2, y2) and (x3, y3); then calculate the fusion focus coordinate (x0, y0), i.e. the center of the portrait area, where x0 = (x1 + x2 + x3)/3 and y0 = (y1 + y2 + y3)/3;
S4.2, taking the upper-left corners of the reduced and unreduced background images as the common coordinate origin, add the translation amounts (1-k)·y0 and (1-k)·x0 to the height and width coordinates of the reduced background image, respectively, to obtain the translated reduced background image; when the unreduced and reduced background images are fused, the two images thus coincide at the point (x0, y0) of the unreduced background image;
s4.3, extracting coordinate points of the portrait position in the portrait mask, namely coordinate points with the gray value of 1 in the binary image;
and S4.4, based on the translated reduced background image, extract the BGR values at the coordinate points of the portrait position and assign them to the corresponding coordinate points in the unreduced background image, obtaining the enhanced background image.
5. A device for enhancing and identifying a large-head-shot scene, comprising:
an acquisition module: used for acquiring the large-head photo image required for scene identification;
a separation module: used for separating the portrait from the background in the large-head photo image to obtain a portrait mask and a background image, wherein the portrait mask is a binary image in which the background part has the value 0 and the portrait part the value 1, and the portrait part of the background image is white;
a fusion module: used for copying the background image and reducing it in equal length-width proportion to obtain a reduced background image, and for fusing the reduced background image into the unreduced background image with edge stretching to obtain an enhanced background image;
an identification module: for using the enhanced background image for scene recognition.
6. The device for enhancing and identifying a large-head photo scene according to claim 5, characterized in that it further comprises a storage module for storing the large-head photo image returned by the user through a mobile phone or camera, for retrieval by the acquisition module.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201911161737.0A CN111046748B (en) | 2019-11-22 | 2019-11-22 | Method and device for enhancing and identifying big head scene |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201911161737.0A CN111046748B (en) | 2019-11-22 | 2019-11-22 | Method and device for enhancing and identifying big head scene |
Publications (2)
Publication Number | Publication Date |
---|---|
CN111046748A (en) | 2020-04-21
CN111046748B (en) | 2023-06-09
Family
ID=70233236
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201911161737.0A Active CN111046748B (en) | 2019-11-22 | 2019-11-22 | Method and device for enhancing and identifying big head scene |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN111046748B (en) |
Citations (14)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20070040832A1 (en) * | 2003-07-31 | 2007-02-22 | Tan Tiow S | Trapezoidal shadow maps |
US20080246759A1 (en) * | 2005-02-23 | 2008-10-09 | Craig Summers | Automatic Scene Modeling for the 3D Camera and 3D Video |
CN102542275A (en) * | 2011-12-15 | 2012-07-04 | 广州商景网络科技有限公司 | Automatic identification method for identification photo background and system thereof |
CN103839223A (en) * | 2012-11-21 | 2014-06-04 | 华为技术有限公司 | Image processing method and image processing device |
US9478039B1 (en) * | 2015-07-07 | 2016-10-25 | Nanjing Huajie Imi Technology Co., Ltd | Background modeling and foreground extraction method based on depth image |
CN107529020A (en) * | 2017-09-11 | 2017-12-29 | 广东欧珀移动通信有限公司 | Image processing method and device, electronic installation and computer-readable recording medium |
CN107592491A (en) * | 2017-09-11 | 2018-01-16 | 广东欧珀移动通信有限公司 | Video communication background display methods and device |
CN107767355A (en) * | 2016-08-18 | 2018-03-06 | 深圳市劲嘉数媒科技有限公司 | Method and apparatus for image augmented reality |
KR101841993B1 (en) * | 2016-11-15 | 2018-03-26 | (주) 아이오티솔루션 | Indoor-type selfie-support camera system based on Internet of Things |
CN109345531A (en) * | 2018-10-10 | 2019-02-15 | 四川新网银行股份有限公司 | A kind of method and system based on picture recognition user's shooting distance |
US20190082118A1 (en) * | 2017-09-08 | 2019-03-14 | Apple Inc. | Augmented reality self-portraits |
CN109697703A (en) * | 2018-11-22 | 2019-04-30 | 深圳艺达文化传媒有限公司 | The background stacking method and Related product of video |
US20190156122A1 (en) * | 2017-11-17 | 2019-05-23 | Adobe Inc. | Intelligent digital image scene detection |
US20190206117A1 (en) * | 2017-12-29 | 2019-07-04 | UBTECH Robotics Corp. | Image processing method, intelligent terminal, and storage device |
Non-Patent Citations (4)
Title |
---|
RAO, G. Anantha: "Deep convolutional neural networks for sign language recognition", 2018 Conference on Signal Processing and Communication Engineering Systems (SPACES) * |
QIN Aimei et al.: "Design of a specific-scene recognition system based on artificial-intelligence vision", Modern Electronics Technique * |
LUO Xiaolan et al.: "Dynamic background modeling algorithm based on Parzen kernel estimation", Microcomputer Information * |
ZHENG Huan: "Research on digital image inpainting algorithms", China Masters' Theses Full-text Database * |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||