CN114327341A - Remote interactive virtual display system - Google Patents
- Publication number: CN114327341A
- Application number: CN202111656082.1A
- Authority
- CN
- China
- Prior art keywords
- video stream
- background
- contour
- recognition module
- identification module
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Abstract
The invention belongs to the technical field of virtual interactive display, and in particular provides a remote interactive virtual display system comprising a camera, a video capture card, a host, and a matting model arranged in the host. A GPU computing platform is loaded in the host, and the matting model is based on a motion recognition module, an infrared recognition module and a color difference recognition module: the motion recognition module selects the display target, the infrared recognition module performs contour recognition on the display target, and the color difference recognition module further refines the contour of the display target. The camera starts working and reads the video stream, which is input into the GPU computing platform; after the background is established with a specific key, the video stream input thereafter is treated as the source video stream, and the background and the source video stream are input into the model for real-time background removal. No large space, dedicated venue, or background wall arranged in the space is needed, saving users time and space and offering convenience and economy.
Description
Technical Field
The invention relates to the technical field of virtual interactive display, in particular to a remote interactive virtual display system.
Background
A virtual display system is a recent high technology in the field of graphics and imaging, also called smart technology or artificial environment. It uses computer simulation to generate a three-dimensional virtual world and places the user within it, providing simulated visual, auditory, tactile and other sensory input, so that the user can observe objects in the three-dimensional space freely and in real time, as if physically present, while information about the user and the environment is displayed for virtual interaction.
Companies now live-stream industrial products by grafting onto the industrial internet, broadcasting digital-twin industrial product lines. Offline exhibition and display require people to interact with the products and are limited by the venue. The artificial-intelligence green-screen-free real-time matting technology of this invention frees broadcasters from site constraints, so that, particularly during offline exhibition and trade services, technical personnel can conveniently give virtual explanations and live broadcasts over the network.
Disclosure of Invention
The present invention is directed to solving one of the technical problems of the prior art or the related art.
Therefore, the technical scheme adopted by the invention is as follows:
a remote interactive virtual display system comprising:
the system comprises a camera, a video acquisition card, a host and a scratching model arranged in the host, wherein a GPU operation platform is loaded in the host, the scratching model is based on a motion recognition module, an infrared recognition module and a chromatic aberration recognition module, the motion recognition module is used for selecting a display target, the infrared recognition module is used for carrying out contour recognition on the display target, and the chromatic aberration recognition module is used for further refining the contour of the display target;
after the camera reads the video stream, the stream is input into the GPU computing platform in the host; after a background is established through a specific key, the video stream input thereafter is used as the source video stream; the background and the source video stream are input into the matting model for real-time matting, the matting recognition module identifies the image outside the display target contour, and the image with the background removed is output.
By adopting this technical scheme, one-key background subtraction is achieved efficiently and conveniently by means of model computation. No special hardware or site configuration is needed: only a camera and a computer are required, and the system is ready for use once the camera is connected to the computer. The camera starts working and reads the video stream, which is input into the GPU computing platform; after the background is established through a specific key, subsequent input is treated as the source video stream, and the background and the source video stream are input into the model for real-time background matting. The system can be used anytime and anywhere without a large space, a dedicated venue, or a background wall arranged in the space, saving users time and space with evident convenience and economy.
The present invention in a preferred example may be further configured to: the host is also provided with a voice input device, and the voice input device is a microphone.
By adopting the technical scheme, the sound and video synchronization is facilitated, and the remote interaction delay is reduced.
The present invention in a preferred example may be further configured to: the camera is an infrared sensing camera and is used for human body identification.
By adopting this technical scheme, target tracking can be assisted, which is a reasonable arrangement.
The present invention in a preferred example may be further configured to: the motion recognition module adopts a three-frame difference method with the following algorithm flow: first, three consecutive frames $I_{k-1}$, $I_k$ and $I_{k+1}$ are read and converted to grayscale; the difference images $D_{(k-1,k)}$ and $D_{(k,k+1)}$ of each pair of consecutive frames are computed, a threshold $T$ is set for binarization, and the moving object is extracted, the calculation formula being:

$$D_{(k-1,k)}(x,y)=\begin{cases}1, & \left|I_k(x,y)-I_{k-1}(x,y)\right|\ge T\\ 0, & \text{otherwise}\end{cases}$$

and likewise for $D_{(k,k+1)}$. Finally, the moving target is obtained through a logical AND operation, the calculation formula being:

$$M_k(x,y)=D_{(k-1,k)}(x,y)\wedge D_{(k,k+1)}(x,y)$$
By adopting this technical scheme, contour recognition of the display target is supported.
The present invention in a preferred example may be further configured to: after detection with the three-frame difference method, the motion recognition module eliminates the 'hole' phenomenon and extracts the target foreground using the following Laplacian-of-Gaussian operator formula:

$$LoG(x,y)=-\frac{1}{\pi\sigma^{4}}\left[1-\frac{x^{2}+y^{2}}{2\sigma^{2}}\right]e^{-\frac{x^{2}+y^{2}}{2\sigma^{2}}}$$

where $\sigma$ is the standard deviation: the smaller $\sigma$, the better the detail enhancement; the larger $\sigma$, the better the smoothing. Here $\sigma$ takes the value 0.5.
By adopting this technical scheme, the display target can be tracked and selected, facilitating comprehensive virtual display.
The present invention in a preferred example may be further configured to: the infrared recognition module adopts a Kinect sensor, using its depth measurement technology to recognize the human body, body parts and joints, and outputs a human body contour image.
By adopting this technical scheme: the Laplacian-of-Gaussian operator is the LoG edge detection operator, which combines a Gaussian smoothing filter with a Laplacian sharpening filter; the noisy original image is first smoothed and then sharpened to enhance edges and details, giving a better edge detection result.
The present invention in a preferred example may be further configured to: the color difference recognition module comprises the following recognition steps:
S1: determine the position and size of the background area of interest;
S2: transform to the required color space and count the color components of each pixel in the background area;
S3: draw a histogram of each color component in the background area;
S4: calculate the expectation of the background area and estimate a threshold;
S5: detect the background area in the video in real time, use the Euclidean distance and weighted Euclidean distance formulas to extract the pixels outside the threshold, which are retained, and replace the pixels within the threshold with the new background color.
By adopting this technical scheme, the contour of the display target is further refined and the matting range is delineated, in coordination with range-based matting and background removal.
The present invention in a preferred example may be further configured to: the calculation formula of the color difference recognition module is as follows: the background has $n$ pixels with color mean $E=(r_e,g_e,b_e)$, and each pixel $p_i$ on the image has color $C_i=(r_i,g_i,b_i)$ with $0\le r_i,g_i,b_i\le 255$.

In this model, point $E$ is taken as the center of a sphere and the colors $C_i$ are distributed at various points in space; the squared Euclidean distance from $C_i$ to $E$ is:

$$d^{2}(C_i,E)=(r_i-r_e)^{2}+(g_i-g_e)^{2}+(b_i-b_e)^{2}$$

Let $R$ be the threshold radius: when $d^{2}(C_i,E)>R^{2}$, the pixel is extracted as a foreground point according to the contour; when $d^{2}(C_i,E)\le R^{2}$, the point is filtered out of the source video stream according to the contour.
By adopting this technical scheme, the contour of the display target is further refined and the matting range is delineated, in coordination with range-based matting and background removal.
The technical scheme of the invention has the following beneficial technical effects:
1. The invention uses model computation to realize one-key background subtraction in an efficient and convenient manner. No special hardware or site configuration is needed: only a camera and a computer are required, and the system is ready for use once the camera is connected to the computer. The camera starts working and reads the video stream, which is input into the GPU computing platform; after the background is established through a specific key, subsequent input is treated as the source video stream, and the background and the source video stream are input into the model for real-time background matting. The system can be used anytime and anywhere without a large space, a dedicated venue, or a background wall arranged in the space, saving users time and space with evident convenience and economy.
2. The invention also addresses the current situation in which more and more people, supported by video post-production technology, create special-effects videos and live broadcasts that require virtual-real compositing, whereas traditional virtual-real compositing technology can subtract the background only by building a dedicated green-screen or blue-screen studio.
Drawings
FIG. 1 is a flow chart of a three-frame differencing method according to an embodiment of the invention.
Detailed Description
In order to make the objects, technical solutions and advantages of the present invention more apparent, the present invention will be described in further detail with reference to the accompanying drawings in conjunction with the following detailed description. It should be noted that the embodiments of the present invention and features of the embodiments may be combined with each other without conflict.
It is to be understood that this description is made only by way of example and not as a limitation on the scope of the invention.
A remote interactive virtual display system provided by some embodiments of the present invention will be described below with reference to the accompanying drawings.
Referring to fig. 1, the remote interactive virtual display system provided by the present invention includes:
the system comprises a camera, a video acquisition card, a host and a scratching model arranged in the host, wherein a GPU operation platform is loaded in the host, the scratching model is based on a motion recognition module, an infrared recognition module and a chromatic aberration recognition module, the motion recognition module is used for selecting a display target, the infrared recognition module is used for carrying out contour recognition on the display target, and the chromatic aberration recognition module is used for further refining the contour of the display target.
Specifically, the host is further provided with a voice input device, namely a microphone, and the camera is an infrared sensing camera used for human body identification. In addition, the image output hardware can be replaced by XR equipment to realize live broadcasting and applications such as VR/AR/MR effects, holography and naked-eye 3D.
After the camera reads the video stream, the stream is input into the GPU computing platform in the host; after a background is established through a specific key, the video stream input thereafter is used as the source video stream; the background and the source video stream are input into the matting model for real-time matting, the matting recognition module identifies the image outside the display target contour, and the image with the background removed is output.
In this embodiment, the motion recognition module adopts a three-frame difference method with the following algorithm flow: first, three consecutive frames $I_{k-1}$, $I_k$ and $I_{k+1}$ are read and converted to grayscale; the difference images $D_{(k-1,k)}$ and $D_{(k,k+1)}$ of each pair of consecutive frames are computed, a threshold $T$ is set for binarization, and the moving object is extracted, the calculation formula being:

$$D_{(k-1,k)}(x,y)=\begin{cases}1, & \left|I_k(x,y)-I_{k-1}(x,y)\right|\ge T\\ 0, & \text{otherwise}\end{cases}$$

and likewise for $D_{(k,k+1)}$. Finally, the moving target is obtained through a logical AND operation, the calculation formula being:

$$M_k(x,y)=D_{(k-1,k)}(x,y)\wedge D_{(k,k+1)}(x,y)$$
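For illustration only, the following is a minimal sketch of this three-frame difference flow, assuming OpenCV and NumPy are available; the function name and the threshold value are illustrative assumptions, not part of the patent.

```python
import cv2

def three_frame_difference(prev_frame, cur_frame, next_frame, thresh=25):
    """Moving-target mask from three consecutive frames, as described above.

    Grayscales I_{k-1}, I_k, I_{k+1}, computes the two difference images,
    binarizes each with threshold T (`thresh`, tuned per scene), and ANDs
    the results so only pixels that moved in both intervals survive.
    """
    g_prev = cv2.cvtColor(prev_frame, cv2.COLOR_BGR2GRAY)
    g_cur = cv2.cvtColor(cur_frame, cv2.COLOR_BGR2GRAY)
    g_next = cv2.cvtColor(next_frame, cv2.COLOR_BGR2GRAY)

    # D(k-1, k) and D(k, k+1): absolute inter-frame differences
    d1 = cv2.absdiff(g_cur, g_prev)
    d2 = cv2.absdiff(g_next, g_cur)

    # Binarize each difference image with threshold T
    _, b1 = cv2.threshold(d1, thresh, 255, cv2.THRESH_BINARY)
    _, b2 = cv2.threshold(d2, thresh, 255, cv2.THRESH_BINARY)

    # Logical AND yields the moving-target mask M_k
    return cv2.bitwise_and(b1, b2)
```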
Further, after detection with the three-frame difference method, the motion recognition module eliminates the 'hole' phenomenon and extracts the target foreground using the following Laplacian-of-Gaussian operator formula:

$$LoG(x,y)=-\frac{1}{\pi\sigma^{4}}\left[1-\frac{x^{2}+y^{2}}{2\sigma^{2}}\right]e^{-\frac{x^{2}+y^{2}}{2\sigma^{2}}}$$

where $\sigma$ is the standard deviation: the smaller $\sigma$, the better the detail enhancement; the larger $\sigma$, the better the smoothing. Here $\sigma$ takes the value 0.5.
It should be noted that the three-frame difference method is a moving target detection method that obtains the moving target contour by computing the difference between corresponding pixels of adjacent frames; the algorithm is simple to implement and can detect moving targets quickly.
Furthermore, the Laplacian-of-Gaussian operator is the LoG edge detection operator, which combines a Gaussian smoothing filter with a Laplacian sharpening filter; the noisy original image is first smoothed and then sharpened to enhance edges and details, giving a better edge detection result. Here σ is the standard deviation: the smaller σ, the better the detail enhancement; the larger σ, the better the smoothing; σ takes the value 0.5. After the edge image is obtained, connected domains are filled and closed regions are filled with white, eliminating the 'hole' phenomenon and extracting the target foreground.
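A sketch of this smoothing-then-sharpening pass and the subsequent hole filling is given below for illustration. Splitting LoG into a Gaussian blur followed by a Laplacian is the standard decomposition; the kernel sizes and the Otsu binarization are assumptions of this sketch, not prescribed by the patent.

```python
import cv2
import numpy as np

def log_foreground(mask, sigma=0.5):
    """Refine a motion mask: LoG edge pass, then fill closed regions white."""
    # Gaussian smoothing to suppress noise, then Laplacian to enhance edges
    smoothed = cv2.GaussianBlur(mask, (5, 5), sigma)
    edges = cv2.Laplacian(smoothed, cv2.CV_16S, ksize=3)
    edges = cv2.convertScaleAbs(edges)
    _, edges = cv2.threshold(edges, 0, 255,
                             cv2.THRESH_BINARY | cv2.THRESH_OTSU)

    # Fill each closed contour white to eliminate 'holes' in the foreground
    contours, _ = cv2.findContours(edges, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)
    filled = np.zeros_like(mask)
    cv2.drawContours(filled, contours, -1, 255, thickness=cv2.FILLED)
    return filled
```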
Specifically, the infrared recognition module adopts a Kinect sensor, using its depth measurement technology to recognize the human body, body parts and joints, and outputs a human body contour image.
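For illustration, a depth frame can be reduced to a body contour mask as sketched below. How the frame is acquired from the Kinect sensor is outside this sketch; `depth_mm` is assumed to be a uint16 depth image in millimetres, and the `near`/`far` distance band is an illustrative assumption about where the presenter stands.

```python
import cv2
import numpy as np

def body_contour_from_depth(depth_mm, near=500, far=2500):
    """Largest connected region within the expected depth band -> body mask."""
    # Keep only pixels within the assumed body distance band (in mm)
    band = cv2.inRange(depth_mm, near, far)

    # Take the largest connected region in the band as the human body
    contours, _ = cv2.findContours(band, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)
    if not contours:
        return np.zeros_like(band)
    body = max(contours, key=cv2.contourArea)
    mask = np.zeros_like(band)
    cv2.drawContours(mask, [body], -1, 255, thickness=cv2.FILLED)
    return mask
```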
In the above embodiment, the color difference recognition module comprises the following recognition steps:
S1: determine the position and size of the background area of interest;
S2: transform to the required color space and count the color components of each pixel in the background area;
S3: draw a histogram of each color component in the background area;
S4: calculate the expectation of the background area and estimate a threshold;
S5: detect the background area in the video in real time, use the Euclidean distance and weighted Euclidean distance formulas to extract the pixels outside the threshold, which are retained, and replace the pixels within the threshold with the new background color.
Specifically, the calculation formula of the color difference recognition module is as follows: the background has $n$ pixels with color mean $E=(r_e,g_e,b_e)$, and each pixel $p_i$ on the image has color $C_i=(r_i,g_i,b_i)$ with $0\le r_i,g_i,b_i\le 255$.

In this model, point $E$ is taken as the center of a sphere and the colors $C_i$ are distributed at various points in space; the squared Euclidean distance from $C_i$ to $E$ is:

$$d^{2}(C_i,E)=(r_i-r_e)^{2}+(g_i-g_e)^{2}+(b_i-b_e)^{2}$$

Let $R$ be the threshold radius: when $d^{2}(C_i,E)>R^{2}$, the pixel is extracted as a foreground point according to the contour; when $d^{2}(C_i,E)\le R^{2}$, the point is filtered out of the source video stream according to the contour.
Specifically, the weighted Euclidean distance formula is:

$$d_w^{2}(C_i,E)=\omega_r(r_i-r_e)^{2}+\omega_g(g_i-g_e)^{2}+\omega_b(b_i-b_e)^{2}$$

where $\omega_r$, $\omega_g$ and $\omega_b$ are the weights of the $r_i$, $g_i$ and $b_i$ components respectively.
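A minimal sketch of steps S1–S5 with the weighted distance model is shown below, assuming NumPy; the weights, threshold radius and replacement color are illustrative assumptions to be tuned per scene, not values fixed by the patent.

```python
import numpy as np

def chroma_key(frame, bg_region, weights=(1.0, 1.0, 1.0),
               radius=40.0, new_bg=(0, 255, 0)):
    """Color-difference matting per the model above.

    `bg_region` is a slice of the frame known to show only background (S1);
    its per-channel mean is the expectation E (S4). Pixels whose weighted
    Euclidean distance to E is within the threshold radius R are replaced
    with the new background color; pixels outside it are retained (S5).
    """
    rgb = frame.astype(np.float32)
    # E(r_e, g_e, b_e): per-channel mean of the background region
    e = bg_region.reshape(-1, 3).astype(np.float32).mean(axis=0)

    # Weighted squared Euclidean distance of every pixel color C_i to E
    w = np.asarray(weights, dtype=np.float32)
    d2 = ((rgb - e) ** 2 * w).sum(axis=2)

    out = frame.copy()
    out[d2 <= radius ** 2] = new_bg  # inside the sphere: background, replace
    return out                       # outside the sphere: foreground, retained
```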
The working principle and usage of the invention are as follows: after the camera reads the video stream, the stream is input into the GPU computing platform in the host; after the background is established through a specific key, the video stream input thereafter is treated as the source video stream, and the background and the source video stream are input into the matting model for real-time background matting. The motion recognition module selects the display target, the infrared recognition module performs contour recognition on the display target, the color difference recognition module of the matting model identifies the image outside the display target contour, and the image with the background removed is output.
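For illustration, this working flow maps onto a simple capture loop as sketched below, assuming OpenCV. `matting_model` stands in for the combined motion/infrared/color-difference model; its interface, the key bindings and the device index are assumptions of this sketch, not the patent's actual API.

```python
import cv2

def run_virtual_display(matting_model, key='b'):
    """Read the video stream, establish the background on a key press,
    then treat subsequent frames as the source stream and matte them."""
    cap = cv2.VideoCapture(0)  # camera via the video capture card
    background = None
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        pressed = cv2.waitKey(1) & 0xFF
        if pressed == ord(key):
            background = frame.copy()   # establish the background
        elif pressed == ord('q'):
            break
        if background is not None:
            # real-time matting of the source video stream
            frame = matting_model(background, frame)
        cv2.imshow('remote virtual display', frame)
    cap.release()
    cv2.destroyAllWindows()
```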
In the present invention, the term "plurality" means two or more unless explicitly defined otherwise. As used herein, the term "and/or" includes any and all combinations of one or more of the associated listed items. The terms "mounted," "connected," "fixed," and the like are to be construed broadly, and for example, "connected" may be a fixed connection, a removable connection, or an integral connection; "coupled" may be direct or indirect through an intermediary. The specific meanings of the above terms in the present invention can be understood by those skilled in the art according to specific situations.
It will be understood that when an element is referred to as being "mounted to," "secured to" or "disposed on" another element, it can be directly on the other element or intervening elements may also be present. When an element is referred to as being "connected" to another element, it can be directly connected to the other element or intervening elements may also be present. The terms "vertical," "horizontal," "upper," "lower," "left," "right," and the like as used herein are for illustrative purposes only and do not denote a unique embodiment.
In the description herein, the description of the terms "one embodiment," "some embodiments," "specific embodiments," etc., means that a particular feature, structure, material, or characteristic described in connection with the embodiment or example is included in at least one embodiment or example of the invention. In this specification, the schematic representations of the terms used above do not necessarily refer to the same embodiment or example. Furthermore, the particular features, structures, materials, or characteristics described may be combined in any suitable manner in any one or more embodiments or examples.
Although embodiments of the present invention have been shown and described, it will be appreciated by those skilled in the art that changes, modifications, substitutions and alterations can be made in these embodiments without departing from the principles and spirit of the invention, the scope of which is defined in the claims and their equivalents.
Claims (8)
1. A remote interactive virtual display system, comprising:
the system comprises a camera, a video acquisition card, a host and a scratching model arranged in the host, wherein a GPU operation platform is loaded in the host, the scratching model is based on a motion recognition module, an infrared recognition module and a chromatic aberration recognition module, the motion recognition module is used for selecting a display target, the infrared recognition module is used for carrying out contour recognition on the display target, and the chromatic aberration recognition module is used for further refining the contour of the display target;
after the camera reads the video stream, the stream is input into the GPU computing platform in the host; after a background is established through a specific key, the video stream input thereafter is used as the source video stream; the background and the source video stream are input into the matting model for real-time matting, the matting recognition module identifies the image outside the display target contour, and the image with the background removed is output.
2. The remote interactive virtual display system of claim 1, wherein a voice input device is further provided on the host, and the voice input device is a microphone.
3. The remote interactive virtual display system of claim 1, wherein the camera is an infrared sensing camera for human body identification.
4. The remote interactive virtual display system of claim 1, wherein the motion recognition module employs a three-frame difference method comprising the following algorithm flow: first, three consecutive frames $I_{k-1}$, $I_k$ and $I_{k+1}$ are read and converted to grayscale; the difference images $D_{(k-1,k)}$ and $D_{(k,k+1)}$ of each pair of consecutive frames are computed, a threshold $T$ is set for binarization, and the moving object is extracted, the calculation formula being:

$$D_{(k-1,k)}(x,y)=\begin{cases}1, & \left|I_k(x,y)-I_{k-1}(x,y)\right|\ge T\\ 0, & \text{otherwise}\end{cases}$$

and likewise for $D_{(k,k+1)}$; finally, the moving target is obtained through a logical AND operation, the calculation formula being:

$$M_k(x,y)=D_{(k-1,k)}(x,y)\wedge D_{(k,k+1)}(x,y)$$
5. The remote interactive virtual display system of claim 4, wherein after detection with the three-frame difference method, the motion recognition module eliminates the 'hole' phenomenon and extracts the target foreground using the following Laplacian-of-Gaussian operator formula:

$$LoG(x,y)=-\frac{1}{\pi\sigma^{4}}\left[1-\frac{x^{2}+y^{2}}{2\sigma^{2}}\right]e^{-\frac{x^{2}+y^{2}}{2\sigma^{2}}}$$

where $\sigma$ is the standard deviation: the smaller $\sigma$, the better the detail enhancement; the larger $\sigma$, the better the smoothing; here $\sigma$ takes the value 0.5.
6. The remote interactive virtual display system of claim 1, wherein the infrared recognition module employs a Kinect sensor and uses the depth measurement technology of the Kinect sensor to recognize the human body, body parts and joints and output a human body contour image.
7. The remote interactive virtual display system of claim 1, wherein the color difference recognition module comprises the following recognition steps:
S1: determine the position and size of the background area of interest;
S2: transform to the required color space and count the color components of each pixel in the background area;
S3: draw a histogram of each color component in the background area;
S4: calculate the expectation of the background area and estimate a threshold;
S5: detect the background area in the video in real time, use the Euclidean distance and weighted Euclidean distance formulas to extract the pixels outside the threshold, which are retained, and replace the pixels within the threshold with the new background color.
8. The remote interactive virtual display system of claim 7, wherein the calculation formula of the color difference recognition module is as follows: the background has $n$ pixels with color mean $E=(r_e,g_e,b_e)$, and each pixel $p_i$ on the image has color $C_i=(r_i,g_i,b_i)$ with $0\le r_i,g_i,b_i\le 255$; in this model, point $E$ is taken as the center of a sphere and the colors $C_i$ are distributed at various points in space, the squared Euclidean distance from $C_i$ to $E$ being:

$$d^{2}(C_i,E)=(r_i-r_e)^{2}+(g_i-g_e)^{2}+(b_i-b_e)^{2}$$
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202111656082.1A | 2021-12-31 | 2021-12-31 | Remote interactive virtual display system
Publications (1)
Publication Number | Publication Date |
---|---|
CN114327341A | 2022-04-12
Family
ID=81018855
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202111656082.1A (Pending) | Remote interactive virtual display system | 2021-12-31 | 2021-12-31
Country Status (1)
Country | Link |
---|---|
CN (1) | CN114327341A (en) |
Patent Citations (11)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN1588431A (en) * | 2004-07-02 | 2005-03-02 | 清华大学 | Character extracting method from complecate background color image based on run-length adjacent map |
US20120321198A1 (en) * | 2011-06-15 | 2012-12-20 | Fujitsu Limited | Image processing method and image processing apparatus |
CN102855758A (en) * | 2012-08-27 | 2013-01-02 | 无锡北邮感知技术产业研究院有限公司 | Detection method for vehicle in breach of traffic rules |
CN106997598A (en) * | 2017-01-06 | 2017-08-01 | 陕西科技大学 | The moving target detecting method merged based on RPCA with three-frame difference |
CN107833242A (en) * | 2017-10-30 | 2018-03-23 | 南京理工大学 | One kind is based on marginal information and improves VIBE moving target detecting methods |
CN108549891A (en) * | 2018-03-23 | 2018-09-18 | 河海大学 | Multi-scale diffusion well-marked target detection method based on background Yu target priori |
CN109544694A (en) * | 2018-11-16 | 2019-03-29 | 重庆邮电大学 | A kind of augmented reality system actual situation hybrid modeling method based on deep learning |
CN109976519A (en) * | 2019-03-14 | 2019-07-05 | 浙江工业大学 | A kind of interactive display unit and its interactive display method based on augmented reality |
WO2021023106A1 (en) * | 2019-08-02 | 2021-02-11 | 杭州海康威视数字技术股份有限公司 | Target recognition method and apparatus, and camera |
CN110956681A (en) * | 2019-11-08 | 2020-04-03 | 浙江工业大学 | Portrait background automatic replacement method combining convolutional network and neighborhood similarity |
CN112101370A (en) * | 2020-11-11 | 2020-12-18 | 广州卓腾科技有限公司 | Automatic pure-color background image matting algorithm, computer-readable storage medium and equipment |
Non-Patent Citations (2)
Title |
---|
LI Shuai: "Research on Moving Target Recognition and Tracking Technology Based on Machine Learning", China Master's Theses Full-text Database *
LI Yangmei: "Design of Matting Algorithm", Journal of Xiangfan Vocational and Technical College *
Legal Events
Date | Code | Title | Description
---|---|---|---
| PB01 | Publication |
| SE01 | Entry into force of request for substantive examination |
| RJ01 | Rejection of invention patent application after publication | Application publication date: 20220412