CN110798634A - Image self-adaptive synthesis method and device and computer readable storage medium - Google Patents

Image self-adaptive synthesis method and device and computer readable storage medium Download PDF

Info

Publication number
CN110798634A
CN110798634A (application CN201911186190.XA; granted as CN110798634B)
Authority
CN
China
Prior art keywords
foreground
background
coordinate system
image
affine transformation
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN201911186190.XA
Other languages
Chinese (zh)
Other versions
CN110798634B (en)
Inventor
王斌 (Wang Bin)
杨晓春 (Yang Xiaochun)
刘一 (Liu Yi)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shuangzhibo Shenyang Electromechanical Equipment Manufacturing Co ltd
Original Assignee
Northeastern University China
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Northeastern University China filed Critical Northeastern University China
Priority to CN201911186190.XA priority Critical patent/CN110798634B/en
Publication of CN110798634A publication Critical patent/CN110798634A/en
Application granted granted Critical
Publication of CN110798634B publication Critical patent/CN110798634B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Links

Images

Classifications

    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N5/00Details of television systems
    • H04N5/222Studio circuitry; Studio devices; Studio equipment
    • H04N5/262Studio circuits, e.g. for mixing, switching-over, change of character of image, other special effects ; Cameras specially adapted for the electronic generation of special effects
    • H04N5/272Means for inserting a foreground image in a background image, i.e. inlay, outlay

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Image Processing (AREA)
  • Studio Circuits (AREA)

Abstract

The invention discloses an image self-adaptive synthesis method and device and a computer-readable storage medium, belonging to the technical field of image processing. The image self-adaptive synthesis method comprises the following steps: first, determining the camera parameters of a foreground and a background; then, respectively obtaining a matching point of the corresponding synthesis track in the foreground and in the background, and determining the coordinates of the two matching points in the camera coordinate system and in the pixel coordinate system; and then comparing the depths of the two matching points in the camera coordinate system, performing affine transformation on the foreground or the background accordingly, and carrying out image synthesis through OpenCV (Open Source Computer Vision Library) to obtain a composite image. Because the affine transformation is applied to the foreground or the background before the OpenCV synthesis, the composite image can be output in real time during shooting without manually adjusting control parameters, even when the positions and parameters of the foreground camera and the background camera differ.

Description

Image self-adaptive synthesis method and device and computer readable storage medium
Technical Field
The invention relates to the technical field of image processing, in particular to an image self-adaptive synthesis method, an image self-adaptive synthesis device and a computer readable storage medium.
Background
The virtual studio is a distinctive multimedia production technology developed in recent years and is widely used in television program production, film and television shooting, teaching, live broadcasting and other fields. Virtual studio technology replaces the video background by chroma-key matting: the blue-box or green-box background removed by the chroma key is replaced with a two-dimensional or three-dimensional scene shot in advance or produced by computer; computer three-dimensional graphics and video synthesis techniques keep the perspective relationship of the background consistent with the foreground according to parameters such as the position and focal length of the foreground camera; and the chroma-key compositing places the characters and props of the foreground fully into the fused background, creating a vivid, three-dimensional studio effect.
To keep the perspective relationship between the foreground and the replaced background consistent, the matting and synthesis technology currently used by virtual studio equipment is generally implemented in one of two ways. The first is to directly build a blue-box or green-box foreground recording scene consistent with the background space, adjust the position and parameters of the foreground camera before shooting so that they roughly match those of the camera that shot the background space, and then shoot, matte and composite. The advantage of this method is that the composite image can be output in real time; the disadvantages are that if the background scene is large, a great deal of money and time must be invested to build a foreground scene of the same scale, and the camera parameters for shooting the foreground and the background must be consistent. The second is to matte the shot foreground video and then manually adjust control parameters such as the position and scale of the figures in the foreground before compositing them into the background video. The advantage of this method is that a satisfactory result can be obtained without investing a large amount of time and money; the disadvantages are that real-time synthesis cannot be guaranteed, so the method cannot be applied to fields such as live broadcasting that require high real-time performance.
Disclosure of Invention
The present invention is directed to an image adaptive synthesis method, an image adaptive synthesis apparatus, and a computer-readable storage medium, to solve the problems in the background art.
In order to achieve the above purpose, the embodiments of the present invention provide the following technical solutions:
an image adaptive synthesis method comprises the following steps:
acquiring two shooting scenes which are respectively used as a foreground and a background;
determining camera parameters of the foreground and the background;
respectively acquiring any matching point of the corresponding synthetic track in the foreground and the background, and determining the coordinates of the two matching points under a camera coordinate system and the coordinates under a pixel coordinate system;
judging the depth of the two matching points in a camera coordinate system to obtain a judgment result;
carrying out affine transformation processing on the foreground or the background according to the judgment result, the camera parameters of the foreground and the camera parameters of the background;
performing mask processing on the foreground or the foreground after affine transformation processing to obtain a mask foreground;
and synthesizing the mask foreground and the background or the background after affine transformation to obtain a synthesized image.
In a preferred embodiment of the present invention, in this step, the camera parameters of the foreground and the background are determined by the Zhang Zhengyou calibration method.
In another preferred scheme provided by the embodiment of the present invention, in the step, if the depth of the foreground matching point in the camera coordinate system is greater than the depth of the background matching point in the camera coordinate system, performing affine transformation on the foreground first, then performing masking processing on the foreground after the affine transformation processing to obtain a mask foreground, and then synthesizing the mask foreground and the background to obtain a synthesized image; if the depth of the foreground matching point in the camera coordinate system is not larger than the depth of the background matching point in the camera coordinate system, firstly carrying out affine transformation processing on the background, then carrying out mask processing on the foreground to obtain a mask foreground, and then synthesizing the mask foreground and the background subjected to affine transformation processing to obtain a synthetic image.
In another preferred embodiment of the present invention, the mask processing method comprises the following steps: first, creating a gray-scale image of the foreground or of the affine-transformed foreground; then, using the Open Source Computer Vision Library (OpenCV) to extract a mask image from the gray-scale image and process the mask image; and then synthesizing the processed mask image with the foreground or the affine-transformed foreground to obtain the mask foreground.
In another preferred embodiment of the present invention, the calculation formula of the affine transformation processing is as follows:
(u1, v1, 1)^T = (1/Z2) · K2 · (X2, Y2, Z2)^T, where K2 = [fx2, 0, u02; 0, fy2, v02; 0, 0, 1]
(u2, v2, 1)^T = (1/Z1) · K1 · (X1, Y1, Z1)^T, where K1 = [fx1, 0, u01; 0, fy1, v01; 0, 0, 1]
(u2, v2, 1)^T = Affine · (u1, v1, 1)^T
in the formulas, Affine is the affine transformation matrix;
if the depth of the foreground matching point in the camera coordinate system is greater than the depth of the background matching point in the camera coordinate system, then in the formulas (u1, v1) are the coordinates of the foreground matching point in the pixel coordinate system before the affine transformation, (u2, v2) are the coordinates of the foreground matching point in the pixel coordinate system after the affine transformation, (X1, Y1, Z1) are the coordinates of the background matching point in the camera coordinate system, (X2, Y2, Z2) are the coordinates of the foreground matching point in the camera coordinate system, fx1, fy1, u01 and v01 are all camera parameters of the background, and fx2, fy2, u02 and v02 are all camera parameters of the foreground;
if the depth of the foreground matching point in the camera coordinate system is not greater than the depth of the background matching point in the camera coordinate system, then in the formulas (u1, v1) are the coordinates of the background matching point in the pixel coordinate system before the affine transformation, (u2, v2) are the coordinates of the background matching point in the pixel coordinate system after the affine transformation, (X1, Y1, Z1) are the coordinates of the foreground matching point in the camera coordinate system, (X2, Y2, Z2) are the coordinates of the background matching point in the camera coordinate system, fx1, fy1, u01 and v01 are all camera parameters of the foreground, and fx2, fy2, u02 and v02 are all camera parameters of the background.
An embodiment of the present invention further provides an image adaptive synthesis apparatus, which includes:
the acquisition module is used for acquiring two shooting scenes which are respectively used as a foreground and a background;
a parameter determination module for determining camera parameters of the foreground and the background;
the coordinate determination module is used for respectively acquiring any matching point of the corresponding synthetic track in the foreground and the background, and determining the coordinates of the two matching points under a camera coordinate system and the coordinates under a pixel coordinate system;
the judging module is used for judging the depth of the two matching points in the camera coordinate system to obtain a judging result;
the self-adaptive affine transformation module is used for carrying out affine transformation processing on the foreground or the background according to the judgment result, the camera parameters of the foreground and the camera parameters of the background;
the foreground processing module is used for carrying out mask processing on the foreground or the foreground after affine transformation processing to obtain a mask foreground;
and the synthesis module is used for synthesizing the mask foreground and the background or the background after affine transformation processing to obtain a synthesized image.
In another preferred embodiment of the present invention, the parameter determination module determines the camera parameters of the foreground and the background by the Zhang Zhengyou calibration method.
The embodiment of the invention also provides a computer readable storage medium, on which a computer program is stored, and the computer program is executed by a computer to implement the image adaptive synthesis method.
Compared with the prior art, the technical scheme provided by the embodiment of the invention has the following technical effects:
according to the image self-adaptive synthesis method provided by the embodiment of the invention, the affine transformation processing is firstly carried out on the foreground or the background, and then the image synthesis is carried out through the OpenCV, so that the purposes of no need of manually adjusting control parameters and real-time output of the synthesized image in the shooting process can be realized under the condition that the positions and parameters of the corresponding foreground camera and the background camera are different, and the effect of the synthesized image and the efficiency of the image synthesis can be greatly improved.
Drawings
Fig. 1 is a schematic structural diagram of an image adaptive synthesis apparatus provided in embodiment 3.
Detailed Description
The following specific examples describe the technical solutions of the present application specifically and clearly.
Example 1
The embodiment provides an image adaptive synthesis method, which comprises the following steps:
(1) Acquiring two shooting scenes to be synthesized, which are used as the foreground and the background respectively.
(2) Determining the camera parameters of the foreground and the background by the Zhang Zhengyou calibration method; the camera parameters include fx, fy, u0 and v0.
(3) Respectively acquiring a matching point of the corresponding synthesis track in the foreground and in the background, and determining the coordinates (X, Y, Z) of the two matching points in the camera coordinate system and their coordinates (u, v) in the pixel coordinate system; the coordinates of the matching points in the camera coordinate system can be obtained by sensors or by manual measurement.
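For step (3), the relation between camera-coordinate and pixel-coordinate positions is the standard pinhole projection. A minimal Python sketch (the intrinsics and the sample point are hypothetical illustration values, not taken from the patent):

```python
def project_to_pixel(point_cam, fx, fy, u0, v0):
    # Standard pinhole model: pixel = focal length * (X/Z, Y/Z) + principal point.
    X, Y, Z = point_cam
    u = fx * X / Z + u0
    v = fy * Y / Z + v0
    return u, v

# Hypothetical intrinsics and a matching point 2 m in front of the camera.
u, v = project_to_pixel((0.1, -0.05, 2.0), fx=800.0, fy=800.0, u0=320.0, v0=240.0)
print(u, v)  # 360.0 220.0
```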
(4) Comparing the depths of the two matching points in the camera coordinate system (namely, their Z values) to obtain a judgment result.
(5) Performing the affine transformation on the foreground or the background according to the judgment result, the camera parameters of the foreground and the camera parameters of the background. Specifically, the foreground and the background are first scaled in equal proportion to the target size; then, if the depth of the foreground matching point in the camera coordinate system is greater than the depth of the background matching point in the camera coordinate system, the affine transformation is applied to the foreground; if the depth of the foreground matching point is not greater than that of the background matching point, the affine transformation is applied to the background.
In addition, the calculation formula of the affine transformation processing is as follows:
(u1, v1, 1)^T = (1/Z2) · K2 · (X2, Y2, Z2)^T and (u2, v2, 1)^T = (1/Z1) · K1 · (X1, Y1, Z1)^T, where Ki = [fxi, 0, u0i; 0, fyi, v0i; 0, 0, 1]
(u2, v2, 1)^T = Affine · (u1, v1, 1)^T
in the formulas, Affine is the affine transformation matrix;
if the depth of the foreground matching point in the camera coordinate system is greater than the depth of the background matching point in the camera coordinate system, then in the formulas (u1, v1) are the coordinates of the foreground matching point in the pixel coordinate system before the affine transformation, (u2, v2) are the coordinates of the foreground matching point in the pixel coordinate system after the affine transformation, (X1, Y1, Z1) are the coordinates of the background matching point in the camera coordinate system, (X2, Y2, Z2) are the coordinates of the foreground matching point in the camera coordinate system, fx1, fy1, u01 and v01 are all camera parameters of the background, and fx2, fy2, u02 and v02 are all camera parameters of the foreground;
if the depth of the foreground matching point in the camera coordinate system is not greater than the depth of the background matching point in the camera coordinate system, then in the formulas (u1, v1) are the coordinates of the background matching point in the pixel coordinate system before the affine transformation, (u2, v2) are the coordinates of the background matching point in the pixel coordinate system after the affine transformation, (X1, Y1, Z1) are the coordinates of the foreground matching point in the camera coordinate system, (X2, Y2, Z2) are the coordinates of the background matching point in the camera coordinate system, fx1, fy1, u01 and v01 are all camera parameters of the foreground, and fx2, fy2, u02 and v02 are all camera parameters of the background.
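One way such an adaptive affine matrix can be assembled is to undo the source camera's pinhole projection and redo the destination camera's, assuming the matching points describe the same physical geometry (so there are no rotation or shear terms). A numpy sketch under that assumption; the function name, parameters and sample numbers are illustrative, not from the patent:

```python
import numpy as np

def build_affine(fx_s, fy_s, u0_s, v0_s, Z_s, fx_d, fy_d, u0_d, v0_d, Z_d):
    # Map a pixel seen by the source camera (intrinsics *_s, point depth Z_s)
    # to where the destination camera (intrinsics *_d, depth Z_d) would see
    # the same physical point: per-axis scale, then re-centre the principal point.
    su = (fx_d * Z_s) / (fx_s * Z_d)
    sv = (fy_d * Z_s) / (fy_s * Z_d)
    # 2x3 matrix in the cv2.warpAffine convention: [[su, 0, tu], [0, sv, tv]]
    return np.array([[su, 0.0, u0_d - su * u0_s],
                     [0.0, sv, v0_d - sv * v0_s]])

# Hypothetical cameras: source with 800 px focal length and 4 m point depth,
# destination with 1000 px focal length and 2 m point depth.
A = build_affine(800.0, 800.0, 320.0, 240.0, 4.0,
                 1000.0, 1000.0, 640.0, 360.0, 2.0)
u1, v1 = 420.0, 300.0
u2 = A[0, 0] * u1 + A[0, 2]
v2 = A[1, 1] * v1 + A[1, 2]
```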
(6) According to the processing result, performing mask processing on the foreground or on the affine-transformed foreground to obtain the mask foreground. The mask processing comprises the following steps: first, creating a gray-scale image of the foreground or of the affine-transformed foreground; then, using OpenCV to extract a mask image from the gray-scale image and process it; and then synthesizing the processed mask image with the foreground or the affine-transformed foreground to obtain the mask foreground. Specifically, the mask image is extracted from the gray-scale image through OpenCV binarization, and the mask image is then processed in turn by the morphologyEx function and the GaussianBlur function in OpenCV.
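The binarization part of step (6) can be sketched with plain numpy in place of the OpenCV calls. This is a simplified stand-in: the threshold value is an assumption, and the morphologyEx/GaussianBlur clean-up described above is omitted:

```python
import numpy as np

def extract_mask(gray, threshold=10):
    # Numpy stand-in for OpenCV binary thresholding: pixels brighter than
    # the threshold become 255 (foreground), all others become 0.
    return (gray > threshold).astype(np.uint8) * 255

# Toy 2x3 gray-scale "foreground" image.
gray = np.array([[0, 5, 200],
                 [0, 150, 255]], dtype=np.uint8)
mask = extract_mask(gray)
```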
(7) According to the processing result, synthesizing the mask foreground with the background or with the affine-transformed background through the synthesis functions in OpenCV to obtain the composite image.
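Step (7) amounts to keeping foreground pixels wherever the mask is set and background pixels elsewhere. A numpy stand-in for the OpenCV compositing calls, on toy 2x2 frames (illustrative only):

```python
import numpy as np

def composite(foreground, background, mask):
    # Keep foreground pixels where the mask is non-zero, background elsewhere.
    m = (mask > 0)[..., None]  # broadcast the 2-D mask over colour channels
    return np.where(m, foreground, background)

fg = np.full((2, 2, 3), 200, dtype=np.uint8)   # toy foreground frame
bg = np.full((2, 2, 3), 50, dtype=np.uint8)    # toy background frame
mask = np.array([[255, 0],
                 [0, 255]], dtype=np.uint8)
out = composite(fg, bg, mask)
```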
Example 2
The embodiment provides a specific implementation scheme of the above image adaptive synthesis method in video synthesis, wherein the running environment of the above image adaptive synthesis method is Windows 10+ OpenCV 3.4.3+ Kinect for Windows SDK 2.0+ VS2017, and the method specifically comprises the following steps:
(1) Recording a video with an iPhone 6s Plus camera as the background, and marking the coordinates of the background matching point in the camera coordinate system; acquiring a video in real time with a Kinect 2.0 camera as the foreground, and marking the coordinates of the foreground matching point in the camera coordinate system.
(2) The camera parameters of the iPhone 6s Plus camera and the Kinect 2.0 camera were calibrated using GML Camera Calibration.
(3) Loading the image data streams of the foreground and the background, and advancing both streams to the point in time at which synthesis should start.
(4) Synchronously acquiring a pair of frames from the foreground and the background, and synthesizing them by the image adaptive synthesis method provided above to obtain a result frame.
(5) The result frame can be displayed in real time, or recorded to the local computer, or both operations can be carried out simultaneously.
(6) If the image data streams of the foreground and the background are not closed and synthesis is to continue, repeating steps (4) to (5).
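The per-frame loop of steps (4) to (6) can be sketched schematically; the toy frames and the toy synthesis function below are placeholders for the real data streams and the synthesis method above:

```python
def synthesize_stream(fg_frames, bg_frames, synth):
    # Steps (4)-(6) as a generator: pull paired frames until either stream
    # is exhausted, yielding one result frame per pair.
    for fg, bg in zip(fg_frames, bg_frames):
        yield synth(fg, bg)

# Toy stand-in frames and a toy synthesis function.
frames = list(synthesize_stream([1, 2, 3], [10, 20, 30], lambda f, b: f + b))
```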
Example 3
Referring to fig. 1, this embodiment provides an image adaptive synthesis apparatus comprising: an acquisition module for acquiring two shooting scenes, which are used as the foreground and the background respectively; a parameter determination module for determining the camera parameters of the foreground and the background by the Zhang Zhengyou calibration method; a coordinate determination module for respectively acquiring a matching point of the corresponding synthesis track in the foreground and in the background, and determining the coordinates of the two matching points in the camera coordinate system and in the pixel coordinate system; a judging module for comparing the depths of the two matching points in the camera coordinate system to obtain a judgment result; an adaptive affine transformation module for performing the affine transformation on the foreground or the background according to the judgment result, the camera parameters of the foreground and the camera parameters of the background; a foreground processing module for performing mask processing on the foreground or the affine-transformed foreground to obtain a mask foreground; and a synthesis module for synthesizing the mask foreground with the background or the affine-transformed background to obtain a composite image.
It should be noted that the image adaptive synthesis method implemented by the image adaptive synthesis apparatus provided in this embodiment is the same as the image adaptive synthesis method provided in embodiment 1, and details thereof are not described here.
Example 4
This embodiment provides a computer-readable storage medium on which a computer program is stored, wherein the computer program, when executed by a computer, implements the image adaptive synthesis method described above.
It should be noted that the above embodiments are only specific and clear descriptions of technical solutions and technical features of the present application. However, to those skilled in the art, aspects or features that are part of the prior art or common general knowledge are not described in detail in the above embodiments.
Of course, the technical solutions of the present application are not limited to the above-mentioned embodiments, and those skilled in the art should take the description as a whole, and the technical solutions in the embodiments may also be appropriately combined, so that other embodiments that may be understood by those skilled in the art may be formed.

Claims (8)

1. An image self-adaptive synthesis method, characterized by comprising the following steps:
acquiring two shooting scenes which are respectively used as a foreground and a background;
determining camera parameters of the foreground and the background;
respectively acquiring any matching point of the corresponding synthetic track in the foreground and the background, and determining the coordinates of the two matching points under a camera coordinate system and the coordinates under a pixel coordinate system;
judging the depth of the two matching points in a camera coordinate system to obtain a judgment result;
carrying out affine transformation processing on the foreground or the background according to the judgment result, the camera parameters of the foreground and the camera parameters of the background;
performing mask processing on the foreground or the foreground after affine transformation processing to obtain a mask foreground;
and synthesizing the mask foreground and the background or the background after affine transformation to obtain a synthesized image.
2. The image self-adaptive synthesis method according to claim 1, wherein in the step the camera parameters of the foreground and the background are determined by the Zhang Zhengyou calibration method.
3. The adaptive image synthesis method according to claim 1, wherein in the step, if the depth of the foreground matching point in the camera coordinate system is greater than the depth of the background matching point in the camera coordinate system, affine transformation processing is performed on the foreground first, then mask processing is performed on the foreground after the affine transformation processing to obtain a mask foreground, and then the mask foreground and the background are synthesized to obtain a synthesized image; if the depth of the foreground matching point in the camera coordinate system is not larger than the depth of the background matching point in the camera coordinate system, firstly carrying out affine transformation processing on the background, then carrying out mask processing on the foreground to obtain a mask foreground, and then synthesizing the mask foreground and the background subjected to affine transformation processing to obtain a synthetic image.
4. The image adaptive synthesis method according to claim 3, wherein the masking processing method comprises the steps of: firstly, creating a gray-scale image of the foreground or the foreground after affine transformation processing; then, utilizing OpenCV to extract a mask image from the gray level image and process the mask image; and then, synthesizing the processed mask image with the foreground or the foreground after affine transformation processing to obtain the mask foreground.
5. The image adaptive synthesis method according to claim 3, wherein the affine transformation process is calculated by the following formula:
(u1, v1, 1)^T = (1/Z2) · K2 · (X2, Y2, Z2)^T, where K2 = [fx2, 0, u02; 0, fy2, v02; 0, 0, 1]
(u2, v2, 1)^T = (1/Z1) · K1 · (X1, Y1, Z1)^T, where K1 = [fx1, 0, u01; 0, fy1, v01; 0, 0, 1]
(u2, v2, 1)^T = Affine · (u1, v1, 1)^T
in the formulas, Affine is the affine transformation matrix;
if the depth of the foreground matching point in the camera coordinate system is greater than the depth of the background matching point in the camera coordinate system, then in the formulas (u1, v1) are the coordinates of the foreground matching point in the pixel coordinate system before the affine transformation, (u2, v2) are the coordinates of the foreground matching point in the pixel coordinate system after the affine transformation, (X1, Y1, Z1) are the coordinates of the background matching point in the camera coordinate system, (X2, Y2, Z2) are the coordinates of the foreground matching point in the camera coordinate system, fx1, fy1, u01 and v01 are all camera parameters of the background, and fx2, fy2, u02 and v02 are all camera parameters of the foreground;
if the depth of the foreground matching point in the camera coordinate system is not greater than the depth of the background matching point in the camera coordinate system, then in the formulas (u1, v1) are the coordinates of the background matching point in the pixel coordinate system before the affine transformation, (u2, v2) are the coordinates of the background matching point in the pixel coordinate system after the affine transformation, (X1, Y1, Z1) are the coordinates of the foreground matching point in the camera coordinate system, (X2, Y2, Z2) are the coordinates of the background matching point in the camera coordinate system, fx1, fy1, u01 and v01 are all camera parameters of the foreground, and fx2, fy2, u02 and v02 are all camera parameters of the background.
6. An image adaptive synthesis apparatus, comprising:
the acquisition module is used for acquiring two shooting scenes which are respectively used as a foreground and a background;
a parameter determination module for determining camera parameters of the foreground and the background;
the coordinate determination module is used for respectively acquiring any matching point of the corresponding synthetic track in the foreground and the background, and determining the coordinates of the two matching points under a camera coordinate system and the coordinates under a pixel coordinate system;
the judging module is used for judging the depth of the two matching points in the camera coordinate system to obtain a judging result;
the self-adaptive affine transformation module is used for carrying out affine transformation processing on the foreground or the background according to the judgment result, the camera parameters of the foreground and the camera parameters of the background;
the foreground processing module is used for carrying out mask processing on the foreground or the foreground after affine transformation processing to obtain a mask foreground;
and the synthesis module is used for synthesizing the mask foreground and the background or the background after affine transformation processing to obtain a synthesized image.
7. The image adaptive synthesis apparatus according to claim 6, wherein the parameter determination module determines the camera parameters of the foreground and the background by the Zhang Zhengyou calibration method.
8. Computer-readable storage medium, on which a computer program is stored, which, when being executed by a computer, carries out the image adaptive synthesis method according to any one of claims 1 to 5.
CN201911186190.XA 2019-11-28 2019-11-28 Image self-adaptive synthesis method and device and computer readable storage medium Active CN110798634B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201911186190.XA CN110798634B (en) 2019-11-28 2019-11-28 Image self-adaptive synthesis method and device and computer readable storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201911186190.XA CN110798634B (en) 2019-11-28 2019-11-28 Image self-adaptive synthesis method and device and computer readable storage medium

Publications (2)

Publication Number Publication Date
CN110798634A true CN110798634A (en) 2020-02-14
CN110798634B CN110798634B (en) 2020-10-09

Family

ID=69446529

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201911186190.XA Active CN110798634B (en) 2019-11-28 2019-11-28 Image self-adaptive synthesis method and device and computer readable storage medium

Country Status (1)

Country Link
CN (1) CN110798634B (en)

Cited By (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112003999A (en) * 2020-09-15 2020-11-27 东北大学 Three-dimensional virtual reality synthesis algorithm based on Unity 3D
CN112367534A (en) * 2020-11-11 2021-02-12 成都威爱新经济技术研究院有限公司 Virtual-real mixed digital live broadcast platform and implementation method
CN112969007A (en) * 2021-02-02 2021-06-15 东北大学 Video post-production method oriented to virtual three-dimensional background
CN113592753A (en) * 2021-07-23 2021-11-02 深圳思谋信息科技有限公司 Image processing method and device based on industrial camera shooting and computer equipment
CN115086686A (en) * 2021-03-11 2022-09-20 北京有竹居网络技术有限公司 Video processing method and related device

Citations (13)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JPH07192147A (en) * 1993-12-27 1995-07-28 Matsushita Electric Ind Co Ltd Image display device
CN1499816A * 2002-11-07 2004-05-26 Matsushita Electric Industrial Co., Ltd. Image processing method and apparatus thereof
US7015954B1 (en) * 1999-08-09 2006-03-21 Fuji Xerox Co., Ltd. Automatic video system using multiple cameras
CN101110908A (en) * 2007-07-20 2008-01-23 西安宏源视讯设备有限责任公司 Foreground depth of field position identification device and method for virtual studio system
US20140126818A1 (en) * 2012-11-06 2014-05-08 Sony Corporation Method of occlusion-based background motion estimation
US20150062381A1 (en) * 2013-09-03 2015-03-05 Samsung Electronics Co., Ltd. Method for synthesizing images and electronic device thereof
CN105279771A (en) * 2015-10-23 2016-01-27 中国科学院自动化研究所 Method for detecting moving object on basis of online dynamic background modeling in video
CN106033614A (en) * 2015-03-20 2016-10-19 南京理工大学 Moving object detection method of mobile camera under high parallax
CN106709865A (en) * 2015-11-13 2017-05-24 杭州海康威视数字技术股份有限公司 Depth image synthetic method and device
CN107087123A (en) * 2017-04-26 2017-08-22 杭州奥点科技股份有限公司 It is a kind of that image space method is scratched based on the real-time high-definition that high in the clouds is handled
CN107730433A (en) * 2017-09-28 2018-02-23 努比亚技术有限公司 One kind shooting processing method, terminal and computer-readable recording medium
JP2019046239A (en) * 2017-09-04 2019-03-22 大日本印刷株式会社 Image processing apparatus, image processing method, program, and image data for synthesis
US20190104253A1 (en) * 2017-10-04 2019-04-04 Canon Kabushiki Kaisha Image processing apparatus, image capturing apparatus, and image processing method

Patent Citations (13)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JPH07192147A (en) * 1993-12-27 1995-07-28 Matsushita Electric Ind Co Ltd Image display device
US7015954B1 (en) * 1999-08-09 2006-03-21 Fuji Xerox Co., Ltd. Automatic video system using multiple cameras
CN1499816A * 2002-11-07 2004-05-26 Matsushita Electric Industrial Co., Ltd. Image processing method and apparatus therefor
CN101110908A (en) * 2007-07-20 2008-01-23 西安宏源视讯设备有限责任公司 Foreground depth of field position identification device and method for virtual studio system
US20140126818A1 (en) * 2012-11-06 2014-05-08 Sony Corporation Method of occlusion-based background motion estimation
US20150062381A1 (en) * 2013-09-03 2015-03-05 Samsung Electronics Co., Ltd. Method for synthesizing images and electronic device thereof
CN106033614A (en) * 2015-03-20 2016-10-19 南京理工大学 Moving object detection method of mobile camera under high parallax
CN105279771A (en) * 2015-10-23 2016-01-27 中国科学院自动化研究所 Method for detecting moving object on basis of online dynamic background modeling in video
CN106709865A * 2015-11-13 2017-05-24 杭州海康威视数字技术股份有限公司 Depth image synthesis method and device
CN107087123A * 2017-04-26 2017-08-22 杭州奥点科技股份有限公司 Real-time high-definition image matting method based on cloud processing
JP2019046239A (en) * 2017-09-04 2019-03-22 大日本印刷株式会社 Image processing apparatus, image processing method, program, and image data for synthesis
CN107730433A * 2017-09-28 2018-02-23 努比亚技术有限公司 Shooting processing method, terminal, and computer-readable storage medium
US20190104253A1 (en) * 2017-10-04 2019-04-04 Canon Kabushiki Kaisha Image processing apparatus, image capturing apparatus, and image processing method

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
Heechan Park, Graham R. Martin, Abhir Bhalerao: "Local Affine Image Matching and Synthesis Based on Structural Patterns", IEEE Transactions on Image Processing *
Yi Xiaobin, Chen Ying: "Frontal Face Synthesis Based on Poisson Fusion under Piecewise Affine Transformation", Computer Engineering and Applications *

Cited By (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112003999A (en) * 2020-09-15 2020-11-27 东北大学 Three-dimensional virtual reality synthesis algorithm based on Unity 3D
CN112367534A (en) * 2020-11-11 2021-02-12 成都威爱新经济技术研究院有限公司 Virtual-real mixed digital live broadcast platform and implementation method
CN112367534B (en) * 2020-11-11 2023-04-11 成都威爱新经济技术研究院有限公司 Virtual-real mixed digital live broadcast platform and implementation method
CN112969007A (en) * 2021-02-02 2021-06-15 东北大学 Video post-production method oriented to virtual three-dimensional background
CN112969007B (en) * 2021-02-02 2022-04-12 东北大学 Video post-production method oriented to virtual three-dimensional background
CN115086686A (en) * 2021-03-11 2022-09-20 北京有竹居网络技术有限公司 Video processing method and related device
CN113592753A (en) * 2021-07-23 2021-11-02 深圳思谋信息科技有限公司 Image processing method and device based on industrial camera shooting and computer equipment
CN113592753B (en) * 2021-07-23 2024-05-07 深圳思谋信息科技有限公司 Method and device for processing image shot by industrial camera and computer equipment

Also Published As

Publication number Publication date
CN110798634B (en) 2020-10-09

Similar Documents

Publication Publication Date Title
CN110798634B (en) Image self-adaptive synthesis method and device and computer readable storage medium
CN106251396B (en) Real-time control method and system for three-dimensional model
CN106375748B Stereoscopic virtual reality panorama stitching method, device, and electronic device
CN109685913B (en) Augmented reality implementation method based on computer vision positioning
CN104408701B Large-scene video image stitching method
CN108932735B (en) Method for generating deep learning sample
CN104519340B Panoramic video stitching method based on multiple depth image transformation matrices
US20030034977A1 (en) Method and apparatus for varying focus in a scene
CN109361880A Method and system for displaying dynamic pictures corresponding to static images or videos
CN111724317A (en) Method for constructing Raw domain video denoising supervision data set
CN108053373A Fisheye image correction method based on a deep learning model
CN111986296B (en) CG animation synthesis method for bullet time
CN106998430B (en) Multi-camera-based 360-degree video playback method
CN105704398A (en) Video processing method
CN110458964B (en) Real-time calculation method for dynamic illumination of real environment
CN111080776A (en) Processing method and system for human body action three-dimensional data acquisition and reproduction
CN108090877A RGB-D camera depth image restoration method based on image sequences
CN112003999A (en) Three-dimensional virtual reality synthesis algorithm based on Unity 3D
CN107833266B (en) Holographic image acquisition method based on color block matching and affine correction
CN117501313A (en) Hair rendering system based on deep neural network
CN108053376A Deep learning fisheye image correction method guided by semantic segmentation information
Rajan et al. A realistic video avatar system for networked virtual environments
Leung et al. Realistic video avatar
CN109218602A Image capture device, image processing method, and electronic device
CN111105484B (en) Paperless 2D serial frame optimization method

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant
TR01 Transfer of patent right

Effective date of registration: 20220120

Address after: 110000 No. 168-2, Harbin Road, Shenhe District, Shenyang City, Liaoning Province (1-18-2)

Patentee after: Shuangzhibo (Shenyang) Electromechanical Equipment Manufacturing Co.,Ltd.

Address before: 110819 Northeast University, No. 195, Chuangxin Road, Hunnan New Area, Shenyang City, Liaoning Province

Patentee before: Northeastern University
