CN111046748B - Method and device for enhancing and identifying big head scene


Info

Publication number: CN111046748B
Authority: CN (China)
Prior art keywords: background image; image; portrait; scene; big head
Legal status: Active (granted)
Application number: CN201911161737.0A
Other languages: Chinese (zh)
Other versions: CN111046748A
Inventors: 韩晗, 胡俊
Assignee (current and original): Sichuan XW Bank Co Ltd
Priority date / Filing date: 2019-11-22
Publication of CN111046748A: 2020-04-21
Publication of CN111046748B (grant): 2023-06-09


Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 5/50: Image enhancement or restoration by the use of more than one image, e.g. averaging, subtraction
    • G06T 7/11: Image analysis; region-based segmentation
    • G06T 7/136: Segmentation; edge detection involving thresholding
    • G06T 7/194: Segmentation involving foreground-background segmentation
    • G06V 40/161: Human faces; detection; localisation; normalisation
    • G06T 2207/20152: Watershed segmentation
    • G06T 2207/30201: Subject of image: human face
    • Y02D 10/00: Energy efficient computing, e.g. low power processors, power management or thermal management

Abstract

The invention discloses a method and a device for enhancing and identifying a big head scene, belongs to the technical fields of computer vision, pattern recognition and image processing, and solves the problem that the small amount of identifiable information in a big head photo scene makes subsequent scene recognition inaccurate or even impossible. The method prepares the big head photo image required for scene recognition; separates the portrait in the big head photo image from the background to obtain a portrait mask and a background image, wherein the portrait mask is a binary image in which the background part has value 0 and the portrait part has value 1, and the portrait part in the background image is white; copies the background image and shrinks it proportionally in length and width to obtain a reduced background image; fuses the reduced background image into the unreduced background image and stretches the edges of the unreduced background image to obtain an enhanced background image; and finally uses the enhanced background image for scene recognition. A corresponding apparatus is also set forth. The method is used to complete and enhance big head photo scene information.

Description

Method and device for enhancing and identifying big head scene
Technical Field
A method and a device for enhancing and identifying a big head scene, used to complete the scene information in big head photos, belong to the technical fields of computer vision, pattern recognition and image processing.
Background
As computer image processing technology matures and image data become abundant, more and more image-derived data are widely applied in practice; in particular, users' self-portrait headshots (selfies) have become a rich data resource for enterprises. A selfie mainly carries two kinds of information: the portrait and the scene. Because the portrait covers a large area, little identifiable background information remains, so subsequent scene recognition is inaccurate or even impossible (only portrait information can be recognized), and scene information cannot be effectively exploited when big head photos are used in data analysis and modeling work. For the problem of inaccurate or insufficient recognition of selfie headshot scene (headshot background) information, no effective solution has been found to date; the closest existing approaches are as follows:
1: a hardware module for scene recognition is added in specific hardware, and the information returned by the hardware module for scene recognition in the equipment is utilized to carry out subsequent work;
2: giving up the scene information in the self-shooting, only extracting useful information in the self-shooting portrait, or requiring the user to submit a photo which is richer in scene information and easy to judge and recognize in later period;
the disadvantages of using the above method are as follows:
1: specific hardware needs to be provided for a user, setting of a hardware device usually causes fixation of a scene, and information revealed by the scene in a photo cannot be used in actual modeling; second, the cost of use is increased and the user experience is reduced.
2: the photographed scene has rich personal attribute information, such as whether the photographed scene is in a car, in a high-grade consumption place and in an office environment, is an indirect reflection of many financial attributes and consumption habits of individuals, and is particularly important in links such as feature risk management and sales, and the correct use of the photographed scene is helpful for mining more user information and market depth. For example, in the credit field, the design of an inclusion credit contract is further performed according to the reference that the user provides the scene information of the headshot for credit risk assessment; in the dealer broker business, the scene information can be used for considering the user asset scale and risk bearing capacity, and accurately identifying the equity asset guest group; in the field of e-commerce and platform business, according to the user big head scene information, related commodity recommendation and advertisement display are given out; in the security field, scene information frequently appearing in the big head photo of the accident crowd is analyzed to early warn in advance. In addition, requiring the user to provide self-photographing with rich scenes reduces the content of portrait information, increases the photographing difficulty of the user and reduces the user experience.
Disclosure of Invention
In view of the above problems, the invention aims to provide a method and a device for enhancing and identifying a big head scene, solving the following problems in the prior art: (1) the small amount of identifiable information in the big head photo scene makes subsequent scene recognition inaccurate or even impossible; (2) when specific hardware is provided, the big head photo scene cannot be used for actual modeling, the cost of use increases, and the user experience degrades; (3) requiring the user to provide selfies with rich scenes reduces the portrait information content and increases the user's photographing difficulty.
In order to achieve the above purpose, the invention adopts the following technical scheme:
a method for enhancing and identifying a big head scene comprises the following steps:
s1, preparing a big head photo image required for scene recognition;
s2, separating a human image and a background in the big head photo image to obtain a human image mask and a background image, wherein the human image mask is a binary image, the value of the background part is 0, the value of the human image part is 1, and the human image part in the background image is white;
s3, copying a background image to perform length-width equal-proportion reduction processing to obtain a reduced background image;
s4, fusing the reduced background image into an undegraded background image, and carrying out edge stretching on the undegraded background image to obtain an enhanced background image;
s5, performing scene recognition on the enhanced background image.
Further, in step S2, the algorithm for separating the portrait from the background in the big head photo image is either the GrabCut algorithm or the watershed algorithm.
Further, in step S3, the reduction scale k of the background image lies in the range (1 - h1/h, 1), where h1 denotes the height of the portrait's highest point and h denotes the height of the big head photo image.
Further, step S4 specifically comprises the following steps:
S4.1, according to the portrait mask, loop over and compare the coordinates of every pixel of the portrait, record the portrait's highest point coordinate (x1, y1) and the portrait's left and right lowest point coordinates (x2, y2) and (x3, y3), and compute the fusion focus coordinate (x0, y0), i.e. the center of the portrait region, where x0 = (x1 + x2 + x3)/3 and y0 = (y1 + y2 + y3)/3;
S4.2, taking the upper-left corner of both the reduced background image and the unreduced background image as the origin of coordinates, add the translation amounts (1-k)·y0 and (1-k)·x0 to the height and width coordinates of the reduced background image respectively to obtain a translated reduced background image, so that when the unreduced and reduced background images are fused they coincide at the point (x0, y0) of the unreduced background image;
S4.3, extract the coordinate points of the portrait position in the portrait mask, i.e. the coordinate points whose value in the binary image is 1;
S4.4, read the BGR values at the portrait-position coordinate points from the translated reduced background image and assign them to the corresponding portrait-position coordinate points of the unreduced background image, thereby obtaining the enhanced background image.
A device for enhancing and identifying a big head scene, comprising:
an acquisition module: used to acquire the big head photo image required for scene recognition;
a separation module: used to separate the portrait in the big head photo image from the background to obtain a portrait mask and a background image, wherein the portrait mask is a binary image in which the background part has value 0 and the portrait part has value 1, and the portrait part in the background image is white;
a fusion module: used to copy the background image and shrink it proportionally in length and width to obtain a reduced background image, then fuse the reduced background image into the unreduced background image and stretch the edges of the unreduced background image to obtain an enhanced background image;
an identification module: used to apply the enhanced background image to scene recognition.
Further, the device comprises a storage module for storing the big head photo image returned by the user through a mobile phone or camera, from which the acquisition module acquires the image.
Compared with the prior art, the invention has the following beneficial effects:
1. the invention shrinks the background image, after the portrait has been matted out, proportionally in length and width, fuses it with the unreduced background image, and repairs the scene information occluded by part of the portrait by edge stretching, which greatly improves the accuracy of subsequent scene information recognition; the method is not limited by the portrait contour or the complexity of the background, and therefore has a wide range of application;
2. the scene information obtained by the method can be used for fusion modeling;
3. the selfie big head scene enhancement process relies only on the selfie images an enterprise already owns and introduces no other data, which reduces the introduction of noise as well as enterprise cost;
4. the invention requires no additional hardware device to acquire the selfie image, which is returned through the photographing function of an ordinary smartphone, further reducing cost;
5. the image completion in the invention can enhance the scene quickly, without complex models or costly training samples;
6. the method is computationally light; compared with complex graph computation schemes that require a cloud server or a local GPU server, it places almost no additional demands on existing computing resources.
Drawings
FIG. 1 is a schematic flow chart of the present invention;
FIG. 2 shows big head photo images requiring scene recognition in the present invention together with the resulting enhanced background images.
Detailed Description
The invention will be further described with reference to the drawings and detailed description.
In the credit field, the securities brokerage business, the e-commerce and platform business field or the security field, it is necessary to extract the key information required from a scene by analyzing the scene information that appears frequently in the big head photos of the relevant crowd. A big head photo is characterized by a very large head area and a very small scene area. The prior art generally fills in scene information for images with richer environmental information, such as whole-body images, rather than for big head photos, so its completion effect on big head photos is poor. To improve the completion of big head photo scene information, the following technical scheme is provided; it can fill in parts of a big head photo where scene information is missing, thereby improving subsequent scene information recognition:
a method for enhancing and identifying a big head scene comprises the following steps:
S1, preparing the big head photo image required for scene recognition;
S2, separating the portrait in the big head photo image from the background to obtain a portrait mask and a background image, wherein the portrait mask is a binary image in which the background part has value 0 and the portrait part has value 1, and the portrait part in the background image is white; the algorithm for separating the portrait from the background in the big head photo image is either the GrabCut algorithm or the watershed algorithm.
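As a concrete illustration of step S2, a minimal sketch using OpenCV's GrabCut (one of the two algorithms named above) follows; the rectangular initialization rect is an assumption of ours, since the patent does not specify how the segmentation is seeded.

    import cv2
    import numpy as np

    def separate_portrait(image):
        """Step S2 (sketch): split a big head photo into a portrait mask and a
        background image whose portrait region is painted white."""
        h, w = image.shape[:2]
        mask = np.zeros((h, w), np.uint8)
        bgd_model = np.zeros((1, 65), np.float64)
        fgd_model = np.zeros((1, 65), np.float64)
        rect = (w // 8, h // 8, 3 * w // 4, 7 * h // 8)  # assumed initial portrait ROI
        cv2.grabCut(image, mask, rect, bgd_model, fgd_model, 5, cv2.GC_INIT_WITH_RECT)
        # Portrait mask: 1 where GrabCut marks (probable) foreground, 0 elsewhere
        portrait_mask = np.where(
            (mask == cv2.GC_FGD) | (mask == cv2.GC_PR_FGD), 1, 0).astype(np.uint8)
        # Background image: original pixels with the portrait region set to white
        background = image.copy()
        background[portrait_mask == 1] = (255, 255, 255)
        return portrait_mask, background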
S3, copying the background image and shrinking it proportionally in length and width to obtain a reduced background image; the reduction scale k of the background image lies in the range (1 - h1/h, 1), i.e. k is greater than 1 - h1/h and less than 1, where h1 denotes the height of the portrait's highest point and h denotes the height of the big head photo image.
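A short sketch of step S3 under this constraint follows. The patent only bounds k; taking k at the midpoint of the admissible interval, and reading h1 as the elevation of the portrait's topmost pixel above the bottom edge of the image, are assumptions of ours.

    import cv2
    import numpy as np

    def shrink_background(background, portrait_mask):
        """Step S3 (sketch): proportional length-width reduction of the background."""
        h, w = background.shape[:2]
        ys, _ = np.nonzero(portrait_mask)
        h1 = h - ys.min()            # assumed reading of the portrait highest-point height
        k = (1 - h1 / h + 1) / 2     # assumed: midpoint of the range (1 - h1/h, 1)
        reduced = cv2.resize(background, (int(w * k), int(h * k)),
                             interpolation=cv2.INTER_AREA)
        return reduced, k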
S4, fusing the reduced background image into the unreduced background image, and stretching the edges of the unreduced background image to obtain an enhanced background image;
Step S4 specifically comprises the following steps:
S4.1, according to the portrait mask, loop over and compare the coordinates of every pixel of the portrait, record the portrait's highest point coordinate (x1, y1) and the portrait's left and right lowest point coordinates (x2, y2) and (x3, y3), and compute the fusion focus coordinate (x0, y0), i.e. the center of the portrait region, where x0 = (x1 + x2 + x3)/3 and y0 = (y1 + y2 + y3)/3;
S4.2, taking the upper-left corner of both the reduced background image and the unreduced background image as the origin of coordinates, add the translation amounts (1-k)·y0 and (1-k)·x0 to the height and width coordinates of the reduced background image respectively to obtain a translated reduced background image, so that when the unreduced and reduced background images are fused they coincide at the point (x0, y0) of the unreduced background image;
S4.3, extract the coordinate points of the portrait position in the portrait mask, i.e. the coordinate points whose value in the binary image is 1;
S4.4, read the BGR values at the portrait-position coordinate points from the translated reduced background image and assign them to the corresponding portrait-position coordinate points of the unreduced background image, thereby obtaining the enhanced background image.
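Putting S4.1-S4.4 together, a sketch of the fusion follows. Identifying the "left and right lowest points" with the leftmost and rightmost portrait pixels on the bottom row of the mask is our reading, and all names are illustrative.

    import numpy as np

    def fuse_backgrounds(background, reduced, portrait_mask, k):
        """Steps S4.1-S4.4 (sketch): fuse the translated reduced background into
        the unreduced background at the portrait positions."""
        ys, xs = np.nonzero(portrait_mask)
        # S4.1: fusion focus = mean of the highest point and the left/right
        # lowest points of the portrait
        x1, y1 = xs[ys.argmin()], ys.min()
        bottom = ys == ys.max()
        x2, y2 = xs[bottom].min(), ys.max()
        x3, y3 = xs[bottom].max(), ys.max()
        x0, y0 = (x1 + x2 + x3) // 3, (y1 + y2 + y3) // 3
        # S4.2: translate the reduced image by ((1-k)*x0, (1-k)*y0) so the two
        # images coincide at (x0, y0) of the unreduced image
        dx, dy = int((1 - k) * x0), int((1 - k) * y0)
        translated = np.full_like(background, 255)
        rh, rw = reduced.shape[:2]
        translated[dy:dy + rh, dx:dx + rw] = reduced
        # S4.3 + S4.4: copy the BGR values at the portrait positions from the
        # translated reduced image into the unreduced background
        enhanced = background.copy()
        enhanced[portrait_mask == 1] = translated[portrait_mask == 1]
        return enhanced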
S5, taking the enhanced background image as input, feed it into a deep learning model for scene recognition; other scene recognition models may also be used.
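As a sketch of step S5: the patent does not name the classifier, so a torchvision ResNet-18 with ImageNet weights (requires torchvision >= 0.13) is used here purely as a stand-in; in practice a scene-centric model, such as one trained on Places365, would be the more natural choice.

    import cv2
    import torch
    from torchvision import models, transforms

    def recognize_scene(enhanced_bgr, topk=3):
        """Step S5 (sketch): classify the enhanced background with a pretrained CNN."""
        rgb = cv2.cvtColor(enhanced_bgr, cv2.COLOR_BGR2RGB)
        preprocess = transforms.Compose([
            transforms.ToPILImage(),
            transforms.Resize(256),
            transforms.CenterCrop(224),
            transforms.ToTensor(),
            transforms.Normalize(mean=[0.485, 0.456, 0.406],
                                 std=[0.229, 0.224, 0.225]),
        ])
        model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT).eval()
        with torch.no_grad():
            probs = model(preprocess(rgb).unsqueeze(0)).softmax(dim=1)
        return probs.topk(topk)  # top-k class probabilities and indices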
A device for enhancing and identifying a big head scene, comprising:
an acquisition module: used to acquire the big head photo image required for scene recognition;
a separation module: used to separate the portrait in the big head photo image from the background to obtain a portrait mask and a background image, wherein the portrait mask is a binary image in which the background part has value 0 and the portrait part has value 1, and the portrait part in the background image is white;
a fusion module: used to copy the background image and shrink it proportionally in length and width to obtain a reduced background image, then fuse the reduced background image into the unreduced background image and stretch the edges of the unreduced background image to obtain an enhanced background image;
an identification module: used to apply the enhanced background image to scene recognition.
Further, the device comprises a storage module for storing the big head photo image returned by the user through a mobile phone or camera, from which the acquisition module acquires the image.
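Under the same assumptions as the sketches above, the modules compose into a hypothetical end-to-end pipeline as follows; the file name and printing are illustrative only.

    import cv2

    # Hypothetical driver chaining the sketches above
    image = cv2.imread("big_head_photo.jpg")                   # acquisition module
    mask, background = separate_portrait(image)                # separation module
    reduced, k = shrink_background(background, mask)           # fusion module, step S3
    enhanced = fuse_backgrounds(background, reduced, mask, k)  # fusion module, step S4
    print(recognize_scene(enhanced))                           # identification module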
Examples
In the credit risk field, the scene in a big head photo image needs to be acquired, for example from big head photos taken in a vehicle, at home or in an office. The images shown in FIG. 2 are 1437×1079 pixels; because they are drawn from real measured samples, the face part of each portrait in FIG. 2 has been pixelated.
Before the invention is adopted, recognition according to FIG. 2(a) gives the recognition result (i.e. scene information) of the original big head photo as "figure feature", "glasses", "beauty"; after the method of the invention is adopted, scene recognition gives "cabinet", "indoor corner", "chair". The invention therefore recognizes the scene in the big head photo more accurately and obtains more relevant information;
before the invention is adopted, recognition according to FIG. 2(b) gives the recognition result of the original big head photo as "man", "boy", "old person"; after the invention is adopted, the recognition result is "car cabin", "car", "electric sunroof", so the invention recognizes the scene in the big head photo more accurately and obtains more relevant information;
before the invention is adopted, recognition according to FIG. 2(c) gives the recognition result of the original big head photo as "personage close-up", "boy", "beauty"; after the method of the invention is adopted, the recognition result is "automobile interior trim", "rear-row seat", "automobile armrest box", so the invention recognizes the scene in the big head photo more accurately and obtains more relevant information;
before the invention is adopted, recognition according to FIG. 2(d) gives the recognition result of the original big head photo as "beauty", "character close-up", "girl"; after the method of the invention is adopted, the recognition result is "ceiling lamp", "restaurant lamp", "indoor corner", so the invention recognizes the scene in the big head photo more accurately and obtains more relevant information.
In summary, the background image after the portrait has been matted out is reduced proportionally in length and width, fused with the background image that has not been reduced, and the scene information occluded by part of the portrait is restored by edge stretching, so the method is not limited by the shape of the portrait contour or the complexity of the background. Big head photo scene information can serve as an important input variable for user features: for example, in the credit field, big head photo scene information is used to predict a user's financial attributes and guide the credit decisions of financial institutions; in the marketing field, users' consumption tendencies are predicted from selfie scenes to mine market potential; and so on. In addition, the method can be combined with scene recognition technology in fusion modeling; compared with pattern recognition on the original picture, using scene-enhanced image data as optional input improves the recognition precision and breadth of artificial intelligence products such as general object and scene recognition.
The above are merely representative examples of the numerous specific applications of the present invention and should not be construed as limiting the scope of the invention in any way. All technical schemes formed by transformation or equivalent substitution fall within the protection scope of the invention.

Claims (5)

1. A method for enhancing and identifying a big head scene, characterized by comprising the following steps:
S1, preparing the big head photo image required for scene recognition;
S2, separating the portrait in the big head photo image from the background to obtain a portrait mask and a background image, wherein the portrait mask is a binary image in which the background part has value 0 and the portrait part has value 1, and the portrait part in the background image is white;
S3, copying the background image and shrinking it proportionally in length and width to obtain a reduced background image;
S4, fusing the reduced background image into the unreduced background image, and stretching the edges of the unreduced background image to obtain an enhanced background image;
step S4 specifically comprising the following steps:
S4.1, according to the portrait mask, loop over and compare the coordinates of every pixel of the portrait, record the portrait's highest point coordinate (x1, y1) and the portrait's left and right lowest point coordinates (x2, y2) and (x3, y3), and compute the fusion focus coordinate (x0, y0), i.e. the center of the portrait region, where x0 = (x1 + x2 + x3)/3 and y0 = (y1 + y2 + y3)/3;
S4.2, taking the upper-left corner of both the reduced background image and the unreduced background image as the origin of coordinates, add the translation amounts (1-k)·y0 and (1-k)·x0 to the height and width coordinates of the reduced background image respectively to obtain a translated reduced background image, so that when the unreduced and reduced background images are fused they coincide at the point (x0, y0) of the unreduced background image;
S4.3, extract the coordinate points of the portrait position in the portrait mask, i.e. the coordinate points whose value in the binary image is 1;
S4.4, read the BGR values at the portrait-position coordinate points from the translated reduced background image and assign them to the corresponding portrait-position coordinate points of the unreduced background image to obtain the enhanced background image;
S5, performing scene recognition on the enhanced background image.
2. The method according to claim 1, wherein in step S2, the algorithm for separating the portrait from the background in the big head photo image is either the GrabCut algorithm or the watershed algorithm.
3. The method according to claim 1 or 2, wherein in step S3, the reduction scale k of the background image lies in the range (1 - h1/h, 1), where h1 denotes the height of the portrait's highest point and h denotes the height of the big head photo image.
4. A device for enhancing and identifying a big head scene, characterized by comprising:
an acquisition module: used to acquire the big head photo image required for scene recognition;
a separation module: used to separate the portrait in the big head photo image from the background to obtain a portrait mask and a background image, wherein the portrait mask is a binary image in which the background part has value 0 and the portrait part has value 1, and the portrait part in the background image is white;
a fusion module: used to copy the background image and shrink it proportionally in length and width to obtain a reduced background image, then fuse the reduced background image into the unreduced background image and stretch the edges of the unreduced background image to obtain an enhanced background image;
an identification module: used to apply the enhanced background image to scene recognition.
5. The device for enhancing and identifying a big head scene according to claim 4, characterized by further comprising a storage module for storing the big head photo image returned by the user through a mobile phone or camera, for acquisition by the acquisition module.
Priority Applications (1)

CN201911161737.0A, priority date 2019-11-22, filing date 2019-11-22: Method and device for enhancing and identifying big head scene (Active)

Publications (2)

CN111046748A (en), published 2020-04-21
CN111046748B (en), granted 2023-06-09

Family ID: 70233236. Country: CN (China)


Family Cites Families (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20070040832A1 (en) * 2003-07-31 2007-02-22 Tan Tiow S Trapezoidal shadow maps
WO2006089417A1 (en) * 2005-02-23 2006-08-31 Craig Summers Automatic scene modeling for the 3d camera and 3d video
US11394898B2 (en) * 2017-09-08 2022-07-19 Apple Inc. Augmented reality self-portraits
US10515275B2 (en) * 2017-11-17 2019-12-24 Adobe Inc. Intelligent digital image scene detection
CN109993824B (en) * 2017-12-29 2023-08-04 深圳市优必选科技有限公司 Image processing method, intelligent terminal and device with storage function

Patent Citations (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102542275A (en) * 2011-12-15 2012-07-04 广州商景网络科技有限公司 Automatic identification method for identification photo background and system thereof
CN103839223A (en) * 2012-11-21 2014-06-04 华为技术有限公司 Image processing method and image processing device
US9478039B1 (en) * 2015-07-07 2016-10-25 Nanjing Huajie Imi Technology Co., Ltd Background modeling and foreground extraction method based on depth image
CN107767355A (en) * 2016-08-18 2018-03-06 深圳市劲嘉数媒科技有限公司 The method and apparatus of image enhaucament reality
KR101841993B1 (en) * 2016-11-15 2018-03-26 (주) 아이오티솔루션 Indoor-type selfie support Camera System Baseon Internet Of Thing
CN107529020A (en) * 2017-09-11 2017-12-29 广东欧珀移动通信有限公司 Image processing method and device, electronic installation and computer-readable recording medium
CN107592491A (en) * 2017-09-11 2018-01-16 广东欧珀移动通信有限公司 Video communication background display methods and device
CN109345531A (en) * 2018-10-10 2019-02-15 四川新网银行股份有限公司 A kind of method and system based on picture recognition user's shooting distance
CN109697703A (en) * 2018-11-22 2019-04-30 深圳艺达文化传媒有限公司 The background stacking method and Related product of video

Non-Patent Citations (4)

* Cited by examiner, † Cited by third party
Title
Rao, G. Anantha: "Deep convolutional neural networks for sign language recognition"; 2018 Conference on Signal Processing and Communication Engineering Systems (SPACES); full text *
罗小兰 et al.: "Dynamic background modeling algorithm based on Parzen kernel estimation"; Microcomputer Information, No. 27; full text *
秦爱梅 et al.: "Design of a specific scene recognition system based on artificial intelligence vision"; Modern Electronics Technique, No. 10; full text *
郑欢: "Research on digital image inpainting algorithms"; China Masters' Theses Full-text Database; full text *

Also Published As

Publication number Publication date
CN111046748A (en) 2020-04-21


Legal Events

Code: Description
PB01: Publication
SE01: Entry into force of request for substantive examination
GR01: Patent grant