CN109145831A - Face detection method and device for video fusion - Google Patents

Face detection method and device for video fusion

Info

Publication number
CN109145831A
Authority
CN
China
Prior art keywords
video
frame
fused
face
obtains
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Withdrawn
Application number
CN201810975217.2A
Other languages
Chinese (zh)
Inventor
王志纯
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Hefei Jingzhang Technology Co Ltd
Original Assignee
Hefei Jingzhang Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Hefei Jingzhang Technology Co Ltd
Priority to CN201810975217.2A
Publication of CN109145831A
Legal status: Withdrawn (current)


Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00 Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10 Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16 Human faces, e.g. facial parts, sketches or expressions
    • G06V40/161 Detection; Localisation; Normalisation
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 Arrangements for image or video recognition or understanding
    • G06V10/40 Extraction of image or video features
    • G06V10/44 Local feature extraction by analysis of parts of the pattern, e.g. by detecting edges, contours, loops, corners, strokes or intersections; Connectivity analysis, e.g. of connected components
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00 Scenes; Scene-specific elements
    • G06V20/40 Scenes; Scene-specific elements in video content
    • G06V20/46 Extracting features or characteristics from the video content, e.g. video fingerprints, representative shots or key frames

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Multimedia (AREA)
  • Theoretical Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Health & Medical Sciences (AREA)
  • General Health & Medical Sciences (AREA)
  • Oral & Maxillofacial Surgery (AREA)
  • Human Computer Interaction (AREA)
  • Image Analysis (AREA)
  • Television Signal Processing For Recording (AREA)

Abstract

The invention discloses a face detection method and device for video fusion. The face detection method includes: in response to a video fusion instruction, obtaining at least two videos to be fused; for any one of the videos to be fused, obtaining image frames from that video; performing face detection on each of the image frames to obtain the face image data of each frame; for each video to be fused, fusing the image frames according to the timeline and the face image data of that video to obtain target video image frames; and generating a target video from the target video image frames. By performing face detection on each frame of the videos to be fused and obtaining the corresponding face image data, then fusing the image frames according to the timeline and face image data to obtain target video image frames, and finally generating the target video, video fusion technology is applied to usage scenarios involving portrait video.

Description

Face detection method and device for video fusion
Technical field
The present invention relates to the field of artificial intelligence, and in particular to a face detection method and device for video fusion.
Background art
Video fusion technology is a branch of virtual reality technology and can also be regarded as a stage in the development of virtual reality. Video fusion refers to merging one or more videos, acquired by video capture devices as image sequences of a certain scene or model, with an associated virtual scene, so as to generate a new virtual scene or model of that scene.
Because video is added to this model, the result is a combination of information that neither the virtual scene nor the video itself could provide on its own. The purpose of video fusion is to increase the interactivity between the virtual scene and reality, reduce the uncertainty of the information in the computer model, and increase the information relevance of the virtual model; it builds a bridge between the real and the virtual and broadens the application field of virtual reality technology.
However, existing video fusion technology in the prior art has not yet been widely applied to usage scenarios involving portrait video.
Summary of the invention
The technical problem to be solved by the present invention is to provide a face detection method and device for video fusion.
In order to solve the above-mentioned technical problem, the technical solution of the present invention is as follows:
A face detection method for video fusion, characterized by comprising:
obtaining at least two videos to be fused in response to a video fusion instruction;
for any one of the videos to be fused, obtaining image frames from that video to be fused;
performing face detection on each of the image frames to obtain the face image data of each frame;
for each video to be fused, fusing the image frames according to the timeline and the face image data of that video, to obtain target video image frames;
generating a target video from the target video image frames.
On the basis of the above embodiment, before performing face detection on each of the image frames and obtaining the face image data of each frame, the method further includes:
performing format conversion and/or order-reduction processing on each of the image frames.
On the basis of the above embodiment, performing face detection on each of the image frames to obtain the face image data of each frame comprises:
capturing the face region in each of the image frames;
performing region segmentation on the face region using the 'three sections, five eyes' segmentation method;
screening out a reference region from the segmented regions.
Based on the same idea, the present invention also provides a face detection device for video fusion, which specifically comprises:
a video acquisition module, configured to obtain at least two videos to be fused in response to a video fusion instruction;
an image frame acquisition module, configured to, for any one of the videos to be fused, obtain image frames from that video to be fused;
a detection module, configured to perform face detection on each of the image frames and obtain the face image data of each frame;
a fusion module, configured to, for each video to be fused, fuse the image frames according to the timeline and the face image data of that video, to obtain target video image frames;
a video generation module, configured to generate a target video from the target video image frames.
Further, the device further includes a preprocessing module, configured to perform format conversion and/or order-reduction processing on each of the image frames.
Further, the detection module includes:
an image capture unit, configured to capture the face region in each of the image frames;
a region segmentation unit, configured to perform region segmentation on the face region using the 'three sections, five eyes' segmentation method;
a screening unit, configured to screen out a reference region from the segmented regions.
By adopting the above technical solution, face detection is performed on each frame of the videos to be fused and the corresponding face image data is obtained; the image frames are then fused according to the timeline and the face image data of the videos to be fused to obtain target video image frames, and a target video is finally generated, so that video fusion technology is applied to usage scenarios involving portrait video.
Brief description of the drawings
Fig. 1 is a flowchart of a face detection method for video fusion provided by Embodiment 1 of the present invention;
Fig. 2 is a structural schematic diagram of a video fusion device provided by Embodiment 2 of the present invention.
Specific embodiments
Specific embodiments of the present invention are further described below with reference to the accompanying drawings. It should be noted that the description of these embodiments is intended to help understand the present invention and does not constitute a limitation of the invention. In addition, the technical features involved in the embodiments of the present invention described below can be combined with each other as long as they do not conflict with each other.
Embodiment one
Fig. 1 is a flowchart of a face detection method for video fusion provided by Embodiment 1 of the present invention. The method may be executed by a face detection device, which may be implemented in software and/or hardware and integrated in a smart device. Specifically, the face detection method for video fusion includes:
S110: in response to a video fusion instruction, obtain at least two videos to be fused.
The face detection method described in this embodiment is usually executed on a server. The video fusion instruction is issued by a user through a terminal (including a PC or a mobile terminal), and the at least two videos to be fused may be sent from the terminal to the server together with the video fusion instruction, so as to improve the execution efficiency of the video fusion method. The videos to be fused may include a self-shot video pre-stored on the user terminal, a video the user is interested in, or a video of a star the user likes.
For example, if the videos to be fused include a self-shot video of the user and a video of a star the user likes, the target video obtained by fusing them will present scenes in which the user interacts with the star.
S120: for any one of the videos to be fused, obtain image frames from that video.
The video to be fused consists of image frames, which include video key frames and ordinary frames.
Optionally, the image frame types include I-frames (intra-coded frames), P-frames (predicted frames) and B-frames (bidirectionally predicted frames).
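For illustration only, a minimal sketch of step S120 in Python with OpenCV follows; the patent does not prescribe a particular library, and the function name extract_frames and the per-frame timestamps are assumptions:

```python
import cv2

def extract_frames(video_path):
    """Read every decoded frame of a video together with its timestamp.

    Returns a list of (timestamp_seconds, frame) pairs. I-, P- and B-frames
    are all delivered already decoded by OpenCV, so no per-frame-type
    handling is needed at this stage.
    """
    capture = cv2.VideoCapture(video_path)
    fps = capture.get(cv2.CAP_PROP_FPS) or 25.0  # fall back to 25 fps if unknown
    frames = []
    index = 0
    while True:
        ok, frame = capture.read()
        if not ok:
            break
        frames.append((index / fps, frame))
        index += 1
    capture.release()
    return frames
```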
S130: perform face detection on each of the image frames to obtain the face image data of each frame.
The face image data is data indicating the facial features in the image frames. The facial features include histogram features, color features, template features, structural features and Haar (Haar-like) features; specifically, the Haar features include edge features, line features, center-surround features, diagonal features and so on. A Haar feature value reflects the gray-level variation of the image. For example, some features of a face can be described simply by rectangular features: the eyes are darker than the cheeks, the two sides of the nose bridge are darker than the nose bridge itself, and the mouth is darker than its surroundings. However, rectangular features are only sensitive to simple structures such as edges and line segments, so they can only describe structures in particular orientations (horizontal, vertical, diagonal).
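As an illustrative sketch of step S130 only: the embodiment mentions Haar features but does not fix a concrete detector, so the use of OpenCV's pre-trained frontal-face Haar cascade and the function name detect_faces below are assumptions.

```python
import cv2

# Pre-trained frontal-face Haar cascade shipped with opencv-python.
FACE_CASCADE = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml"
)

def detect_faces(frame):
    """Return face bounding boxes (x, y, w, h) found in one BGR frame.

    Haar-like features compare summed gray levels of adjacent rectangles
    (edge, line, center-surround and diagonal patterns), so the detector
    runs on the grayscale image.
    """
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    faces = FACE_CASCADE.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
    return [tuple(box) for box in faces]
```

For example, calling detect_faces on each frame returned by extract_frames yields per-frame bounding boxes, one possible concrete form of the face image data used later in the fusion step.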
S140: for each video to be fused, fuse the image frames according to the timeline and the face image data of that video, to obtain target video image frames.
The timeline is used to arrange all the image frames of a video. In the specific execution of this embodiment, image frames located at the same point on the timeline can be fused together.
S150: generate a target video from the target video image frames.
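A minimal sketch of steps S140 and S150, assuming that frames sharing the same position on the timeline are blended pairwise and that the result is written out with OpenCV; the 50/50 alpha blend and the helper names are illustrative assumptions, not the claimed fusion rule:

```python
import cv2

def fuse_and_export(frames_a, frames_b, out_path, fps=25.0):
    """Fuse two frame sequences point by point on the timeline and write the result.

    frames_a / frames_b are lists of (timestamp, frame) pairs; frames at the
    same list position are treated as sharing the same point on the timeline.
    """
    height, width = frames_a[0][1].shape[:2]
    writer = cv2.VideoWriter(out_path,
                             cv2.VideoWriter_fourcc(*"mp4v"),
                             fps, (width, height))
    for (_, frame_a), (_, frame_b) in zip(frames_a, frames_b):
        # Bring the second frame to the same size before blending.
        frame_b = cv2.resize(frame_b, (width, height))
        fused = cv2.addWeighted(frame_a, 0.5, frame_b, 0.5, 0)  # one target video image frame
        writer.write(fused)
    writer.release()
```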
Embodiment two
On the basis of Embodiment 1, this embodiment adds a preprocessing step for the image frames. Specifically, the face detection method comprises:
S210: in response to a video fusion instruction, obtain at least two videos to be fused.
S220: for any one of the videos to be fused, obtain image frames from that video.
S230: perform format conversion and/or order-reduction processing on each of the image frames (a sketch of this step follows the step list).
S240: perform face detection on each of the image frames to obtain the face image data of each frame.
S250: for each video to be fused, fuse the image frames according to the timeline and the face image data of that video, to obtain target video image frames.
S260: generate a target video from the target video image frames.
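A minimal sketch of the preprocessing in S230, assuming that 'format conversion' means bringing every frame to a common color format and that 'order reduction' means spatial down-scaling; both readings, and the function name preprocess_frame, are assumptions, since the patent does not define these terms further:

```python
import cv2

def preprocess_frame(frame, scale=0.5):
    """Convert a frame to a common format and reduce its resolution.

    Down-scaling before face detection reduces the amount of data that
    every later step has to process.
    """
    # Format conversion: ensure a 3-channel BGR frame.
    if frame.ndim == 2:
        frame = cv2.cvtColor(frame, cv2.COLOR_GRAY2BGR)
    # Order reduction: shrink the frame by the given factor.
    return cv2.resize(frame, None, fx=scale, fy=scale,
                      interpolation=cv2.INTER_AREA)
```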
Embodiment three
Fig. 2 is a structural schematic diagram of a face detection device for video fusion provided by Embodiment 3 of the present invention. The device specifically includes: a video acquisition module 310, an image frame acquisition module 320, a detection module 330, a fusion module 340 and a video generation module 350.
The video acquisition module 310 is configured to obtain at least two videos to be fused in response to a video fusion instruction;
the image frame acquisition module 320 is configured to, for any one of the videos to be fused, obtain image frames from that video;
the detection module 330 is configured to perform face detection on each of the image frames and obtain the face image data of each frame;
the fusion module 340 is configured to, for each video to be fused, fuse the image frames according to the timeline and the face image data of that video, to obtain target video image frames;
the video generation module 350 is configured to generate a target video from the target video image frames.
On the basis of the above embodiment, the face detection device further includes:
a preprocessing module, configured to perform format conversion and/or order-reduction processing on each of the image frames.
On the basis of the above embodiments, the detection module includes:
an image capture unit, configured to capture the face region in each of the image frames;
a region segmentation unit, configured to perform region segmentation on the face region using the 'three sections, five eyes' segmentation method (a sketch of this segmentation follows the unit list);
a screening unit, configured to screen out a reference region from the segmented regions.
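As referenced above, here is a sketch of the region segmentation under the classical 'three sections, five eyes' facial-proportion rule, read here as splitting the face bounding box into three horizontal bands and five vertical columns; this grid reading of the rule, the screening criterion and the function names are assumptions:

```python
def segment_face_region(face_box):
    """Split a face bounding box (x, y, w, h) according to the 'three
    sections, five eyes' proportion rule: three equal horizontal bands
    and five equal vertical columns.

    Returns a list of 15 sub-regions as (x, y, w, h) tuples, row by row.
    """
    x, y, w, h = face_box
    band_h, col_w = h / 3.0, w / 5.0
    return [(int(x + col * col_w), int(y + row * band_h), int(col_w), int(band_h))
            for row in range(3) for col in range(5)]

def screen_reference_regions(grid):
    """Keep only the middle band (roughly the eyes and nose) as reference
    regions; this screening rule is one possibility, used purely for
    illustration."""
    return grid[5:10]
```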
The embodiments of the present invention have been described in detail above with reference to the accompanying drawings, but the present invention is not limited to the described embodiments. For a person skilled in the art, various changes, modifications, replacements and variations made to these embodiments without departing from the principle and spirit of the present invention still fall within the protection scope of the present invention.

Claims (6)

1. A face detection method for video fusion, characterized by comprising:
obtaining at least two videos to be fused in response to a video fusion instruction;
for any one of the videos to be fused, obtaining image frames from that video to be fused;
performing face detection on each of the image frames to obtain the face image data of each frame;
for each video to be fused, fusing the image frames according to the timeline and the face image data of that video, to obtain target video image frames;
generating a target video from the target video image frames.
2. The face detection method according to claim 1, characterized in that before performing face detection on each of the image frames and obtaining the face image data of each frame, the method further comprises:
performing format conversion and/or order-reduction processing on each of the image frames.
3. The method according to claim 1, characterized in that performing face detection on each of the image frames to obtain the face image data of each frame comprises:
capturing the face region in each of the image frames;
performing region segmentation on the face region using the 'three sections, five eyes' segmentation method;
screening out a reference region from the segmented regions.
4. A face detection device for video fusion, characterized by comprising:
a video acquisition module, configured to obtain at least two videos to be fused in response to a video fusion instruction;
an image frame acquisition module, configured to, for any one of the videos to be fused, obtain image frames from that video to be fused;
a detection module, configured to perform face detection on each of the image frames and obtain the face image data of each frame;
a fusion module, configured to, for each video to be fused, fuse the image frames according to the timeline and the face image data of that video, to obtain target video image frames;
a video generation module, configured to generate a target video from the target video image frames.
5. The device according to claim 4, characterized by further comprising:
a preprocessing module, configured to perform format conversion and/or order-reduction processing on each of the image frames.
6. The device according to claim 4, characterized in that the detection module comprises:
an image capture unit, configured to capture the face region in each of the image frames;
a region segmentation unit, configured to perform region segmentation on the face region using the 'three sections, five eyes' segmentation method;
a screening unit, configured to screen out a reference region from the segmented regions.
CN201810975217.2A 2018-08-24 2018-08-24 Face detection method and device for video fusion Withdrawn CN109145831A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201810975217.2A CN109145831A (en) 2018-08-24 2018-08-24 Face detection method and device for video fusion

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201810975217.2A CN109145831A (en) 2018-08-24 2018-08-24 Face detection method and device for video fusion

Publications (1)

Publication Number Publication Date
CN109145831A true CN109145831A (en) 2019-01-04

Family

ID=64828027

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201810975217.2A Withdrawn CN109145831A (en) 2018-08-24 2018-08-24 A kind of method for detecting human face and device in video fusion

Country Status (1)

Country Link
CN (1) CN109145831A (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115396644A (en) * 2022-07-21 2022-11-25 贝壳找房(北京)科技有限公司 Video fusion method and device based on multi-segment external parameter data
CN115396644B (en) * 2022-07-21 2023-09-15 贝壳找房(北京)科技有限公司 Video fusion method and device based on multi-section external reference data


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
WW01 Invention patent application withdrawn after publication

Application publication date: 20190104